Prompt Engineering with ChatGPT: Understanding Best Practices for Success
Prompt engineering is a vital aspect of user experience design that has gained significant attention in recent years. It involves designing and implementing prompts that guide users toward desired actions or behaviors. Prompt engineering is crucial because well-designed prompts can significantly improve user engagement and conversion rates.
To be effective, prompt engineering requires a deep understanding of user behavior and psychology and careful consideration of context and timing. By leveraging the principles of prompt engineering, businesses can create more intuitive and user-friendly products, leading to increased customer satisfaction and loyalty.
Prompt engineering is not just about creating pop-ups or notifications.
Instead, it involves designing prompts tailored to users’ needs and preferences.
This requires in-depth user data analysis, including browsing history, search queries, and other relevant metrics.
One critical aspect of prompt engineering is timing. The right prompt at the wrong time can be just as ineffective as no prompt at all. Therefore, businesses must carefully consider when to present user prompts based on their behavior patterns.
Another essential element of prompt engineering is context. Prompts must be designed with the overall context in mind so they fit seamlessly into the user’s experience. This includes considering device type, screen size, location, and other relevant variables.
Understanding Prompt Engineering: Principles and Best Practices
Principles of Effective Prompt Engineering
Effective prompt engineering drives user engagement and achieves desired outcomes. To ensure that prompts are effective, it is essential to follow certain principles: understanding user behavior, designing relevant and timely prompts, and providing clear instructions.
Understanding User Behavior
The first principle of effective prompt engineering is to understand user behavior. This involves analyzing how users interact with your product or service and identifying the actions or behaviors you want them to take. With that understanding, you can design prompts that are more likely to drive the desired actions.
Designing Relevant and Timely Prompts
The second principle of effective prompt engineering is to design relevant and timely prompts. This means creating prompts tailored to specific user segments and triggered at the right time. For example, if you have an e-commerce website, you might use a prompt to remind users about items they left in their shopping cart. By doing so, you can increase the likelihood of them making a purchase.
Providing Clear Instructions
The third principle of effective prompt engineering is providing clear instructions. Users should immediately understand what action to take when presented with a prompt. This means using simple language, avoiding jargon or technical terms, and providing visual cues where necessary.
Best Practices for Prompt Engineering
In addition to following these principles, several best practices for prompt engineering can help improve the effectiveness of your prompts.
Testing and Iterating on Prompts
One best practice is testing and iterating on prompts. This involves creating multiple prompt versions and testing them with user segments to see which performs best. You can create more effective prompts over time by iterating on your prompts based on data-driven insights.
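The testing-and-iterating idea can be sketched in a few lines. The variant wording and the impression/conversion counts below are invented for illustration; in practice they would come from your analytics pipeline:

```python
# Hypothetical A/B test: two prompt variants with made-up impression and
# conversion counts; the wording and numbers are illustrative only.
results = {
    "A: Complete your purchase now": {"shown": 1000, "converted": 48},
    "B: Your cart is waiting - finish checkout": {"shown": 1000, "converted": 63},
}

def conversion_rate(stats):
    """Fraction of users who converted after seeing the prompt."""
    return stats["converted"] / stats["shown"]

# Keep the variant with the highest conversion rate for the next iteration.
best = max(results, key=lambda name: conversion_rate(results[name]))
print(best)
```

Each iteration, the losing variant is replaced with a new candidate and the test repeats, so the surviving prompt improves over time.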
Using Data to Inform Prompt Design
Another best practice is using data to inform prompt design. This means analyzing user behavior data, such as click-through or conversion rates, to identify the most effective prompts. By using this data to inform prompt design, you can create prompts that are more likely to drive the desired user behavior.
Avoiding Overuse of Prompts
A third best practice is avoiding the overuse of prompts. While prompts can effectively drive user behavior, they become annoying if used too frequently. It is essential to strike a balance between providing helpful prompts and not overwhelming users with too many notifications.
Examples of Successful Prompt Engineering
Many examples of successful prompt engineering can inspire your prompt design.
One example is using progress bars to encourage users to complete a task. For example, if you have an online course platform, you might use a progress bar to show users how far they have progressed through a course. This can help motivate them to continue learning and complete the course.
Another example is the use of notifications to remind users to take action. For example, if you have a fitness app, you might send users a notification reminding them to log their daily exercise or drink water throughout the day. By doing so, you can help users stay on track with their fitness goals.
Challenges in Prompt Engineering
While effective prompt engineering can drive user engagement and achieve desired outcomes, it also comes with several challenges.
User Resistance
One challenge is user resistance. Some users find prompts annoying or intrusive and choose not to engage with them. To overcome this challenge, design relevant and timely prompts and avoid overusing them.
Limited Attention Spans
Another challenge is limited attention spans. Users may quickly lose interest in a prompt if it takes too long or requires too much effort. To overcome this challenge, it is essential to keep prompts short and straightforward and provide clear instructions on what action needs to be taken.
Changing User Behavior
A third challenge is changing user behavior. Users may resist changing their behavior, even if prompted to do so. To overcome this challenge, it is essential to design prompts tailored to specific user segments and triggered at the right time.
How Do Language Models Work? Exploring Connected Business Frameworks
Language models are algorithms that use statistical methods to learn patterns in language data. These models are trained on large amounts of text to predict the likelihood of a given sequence of words, typically using neural networks, which are loosely inspired by how the human brain processes information.
Training improves the accuracy of a language model by adjusting the weights assigned to different features based on their predictive power. This process involves feeding large amounts of text into the model and letting it learn from that data over time. As it does so, the model adjusts its internal parameters to better predict the likelihood of different sequences of words.
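The core idea of predicting the likelihood of word sequences can be made concrete with a tiny bigram model. The toy corpus below is invented, and real models learn far richer statistics with neural networks, but the principle of counting and normalizing continuations is the same:

```python
from collections import Counter, defaultdict

# Toy corpus; production models train on billions of tokens.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Estimated probability distribution over the next word."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # 'cat' is the most likely continuation here
```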
One prominent example is the large language model (LLM). Applied to information retrieval, these models can improve search results by considering both the context of a query and the content of the documents being searched, providing more relevant results that are better tailored to the needs and interests of individual users.
Understanding how language models work is essential for developing connected business frameworks leveraging natural language processing (NLP) technology to improve customer experience, automate workflows, and drive business insights. NLP refers to a set of techniques that enable computers to understand and interpret human language in a way that is similar to how humans do.
By leveraging NLP technology, businesses can develop chatbots and virtual assistants that let customers interact with their products or services using natural language commands. This can improve customer satisfaction by providing faster, more convenient access to information or support.
In addition, NLP technology can also be used for automating workflows within an organization. For example, it can automatically categorize incoming emails or messages based on their content and route them appropriately within an organization’s workflow.
Finally, NLP technology can also be used to drive business insights by analyzing large amounts of text data such as customer feedback, social media posts, or product reviews. By doing so, businesses can better understand their customers’ needs and preferences and identify trends and patterns that inform decision-making.
The Power of Prompts: Guiding Language Models to Generate Useful Output
Prompts are essential in guiding language models to generate useful output. Large language models such as GPT-3 can generate text based on a given prompt, and a well-crafted prompt steers the model toward the desired output without any additional training.
Language models use natural language processing to understand the context of the prompt and learn from it. This means they analyze the words and phrases used in the prompt to determine the expected response. For example, if the prompt is “What is your favorite color?” a language model would understand that it needs to generate a response about someone’s favorite color.
Rephrasing prompts or changing inputs can lead to different text generations, showcasing how sensitive language models are to their input. Even small changes can result in vastly different outputs. For example, changing the prompt from “What is your favorite color?” to “Tell me about colors” might produce a very different response.
Tokens are the basic units of text that language models use to generate output based on the prompt. They represent individual words or groups of words that have meaning within a sentence or paragraph. Language models use these tokens to build their natural language understanding and generate coherent responses.
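A rough sketch of tokenization follows. Note the simplification: real models such as GPT-3 use subword tokenizers (byte-pair encoding), where one word may span several tokens and a token may be a word fragment, but a whitespace split conveys the basic idea:

```python
# Simplification: real tokenizers split into subword units, so one word
# can span several tokens; whitespace splitting is only an illustration.
def naive_tokenize(text):
    """Split text into word-level 'tokens'."""
    return text.lower().split()

prompt = "What is your favorite color?"
tokens = naive_tokenize(prompt)
print(tokens)       # ['what', 'is', 'your', 'favorite', 'color?']
print(len(tokens))  # 5 tokens
```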
One benefit of using prompts with large language models is that they allow for greater control over the generated output. By carefully crafting prompts, users can guide the model toward generating specific types of content or achieving certain goals. For example, if you want a language model to write an article about climate change, you could provide it with a series of prompts related to climate science and environmental policy.
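A minimal sketch of assembling that kind of guided prompt; the template wording, topic, and point list are hypothetical:

```python
def build_article_prompt(topic, points):
    """Assemble a prompt that steers the model toward specific content."""
    bullets = "\n".join(f"- {p}" for p in points)
    return (
        f"Write a short article about {topic}.\n"
        f"Cover each of the following points:\n{bullets}\n"
        "Use clear, non-technical language."
    )

prompt = build_article_prompt(
    "climate change",
    ["recent temperature trends", "environmental policy options"],
)
print(prompt)
```

The resulting string would be sent to the model as its input; constraining the topic and required points is what gives you control over the generated output.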
Another advantage of prompts is that they require no additional supervised training. Instead of being explicitly taught which responses are correct or incorrect, a language model learns by analyzing patterns in the large datasets it was trained on; given enough examples and variations on a given prompt, it can generate more accurate and relevant responses.
Despite their many benefits, prompts are not without limitations. One challenge is that they require a certain level of expertise to use effectively: writing good prompts demands an understanding of natural language processing and of the specific capabilities of different language models.
Another limitation is that language models may still generate inaccurate or inappropriate output even with carefully crafted prompts. This can be due to biases in the training data or limitations in the model’s understanding of context and nuance.
Designing Effective Prompts: Tips for Customer Refunds and Language Models
Clear and concise language in prompts is essential so customers can navigate the refund process without confusion. When designing effective prompts for customer refunds, consider the specific needs of your customers and give them clear instructions on how to proceed.
One way to do this is by including specific instructions in prompts for customers requesting refunds. For example, you may ask them to provide their order number or explain the reason for the refund. This will help customer care agents quickly understand and resolve the issue efficiently.
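A sketch of such a refund prompt that asks for the order number up front; the customer name, wording, and branching logic are hypothetical:

```python
def refund_prompt(customer_name, has_order_number):
    """Build a refund prompt that asks only for what is still missing."""
    if has_order_number:
        return (f"Thanks, {customer_name}. Briefly, what is the reason "
                "for your refund request?")
    return (f"Hi {customer_name}, to start your refund, please reply with "
            "your order number (you can find it in your confirmation email).")

print(refund_prompt("Ana", has_order_number=False))
print(refund_prompt("Ana", has_order_number=True))
```

Asking one question at a time keeps each prompt short and gives the customer care agent exactly the information needed to resolve the case.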
Another consideration when designing effective prompts is personalization. Language models can be used to personalize prompts based on a customer’s previous interactions with your company. This can help make the refund process more efficient and personalized, increasing customer satisfaction.
However, avoid technical jargon or complex language in prompts. It frustrates customers and makes the refund process harder to complete. Instead, use simple language that is easy for everyone to understand.
Testing and iterating on prompts regularly are also crucial in ensuring they effectively guide customers through the refund process while minimizing the need for customer care agent intervention. By analyzing data from previous customer interactions, you can identify areas where improvements could be made and adjust your prompts accordingly.
In addition, providing social proof, such as statistics or examples of successful refunds, can help build customer trust and increase their confidence in completing the refund process.
Designing effective prompts requires a deep understanding of your customers’ needs and preferences, along with prompt engineering best practices. By considering these factors, you can create an efficient and user-friendly refund process that meets your customers’ expectations while reducing the workload on your customer care agents.
- Keep prompts concise and specific to the customer’s needs, avoiding unnecessary information.
- Use language that is clear and easy to understand, avoiding technical jargon or complex terms.
- Personalize prompts based on customer behavior and preferences to create a more engaging experience.
- Test prompts with different customer groups to find the most effective version for each demographic.
- Use a conversational tone that encourages customers to interact and engage with the prompt.
- Be empathetic in the wording of the prompt, showing understanding of the customer’s situation and emotions.
- Provide multiple response options to the prompt, including open-ended and multiple-choice questions.
- Use positive reinforcement in the wording of the prompt to encourage customers to take action.
- Consider the context of the prompt, ensuring that it is appropriate for the situation and customer’s needs.
- Continuously monitor and refine the prompts based on customer feedback and performance data to improve effectiveness.
To summarize:
- Use clear and concise language in prompts to avoid confusion for both the customer and the customer care agent.
- Include specific instructions in prompts for customers requesting refunds.
- Consider using language models to personalize prompts.
- Avoid technical jargon or complex language in prompts.
- Test and iterate on prompts regularly to ensure they are effective.
- Provide social proof to build trust with customers.
Automatic Prompt Design: Streamlining the Process with AI
Prompt engineering is one of artificial intelligence’s (AI) most exciting applications. Automatic prompt design is a process that can be streamlined using AI tools, such as machine learning models, which can be trained to generate effective prompts based on agent input.
OpenAI is one example of an AI platform that offers API calls for prompt engineering. Through API calls, agents can input data and receive generated prompts from the AI model. This saves time and resources while still producing high-quality prompts for agents.
But how do we know that these generated prompts are effective? arXiv preprints have suggested that a majority-vote system for prompt selection can improve effectiveness: the AI model generates multiple candidate prompts, and a majority vote determines which one is used.
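The majority-vote selection step can be sketched in a few lines. In a real pipeline the candidates would come from repeated model calls; here they are canned strings for illustration:

```python
from collections import Counter

# Stand-ins for candidate prompts returned by repeated model calls.
candidates = [
    "Please share your order number so we can help.",
    "Could you provide your order number?",
    "Please share your order number so we can help.",
    "Please share your order number so we can help.",
    "Could you provide your order number?",
]

# Majority vote: keep the candidate that was generated most often.
winner, votes = Counter(candidates).most_common(1)[0]
print(winner)  # the phrasing generated three times wins
```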
This approach has successfully improved the quality of prompts generated by AI models, making them more useful for agents who need quick and accurate responses to customer inquiries or other tasks.
One advantage of using AI for automatic prompt design is that it allows companies to scale their operations quickly without sacrificing quality. Creating effective prompts would require significant time and resources with traditional manual methods, but with AI, this process becomes much faster and more efficient.
Another benefit is that these systems can learn over time, becoming better at generating effective prompts as they process more data. Companies can continually improve customer service without investing in additional staff or training programs.
However, it’s important to note that while AI has many benefits, there are also some potential downsides. For example, these systems could lead to biased or inaccurate results if not correctly designed or implemented.
Consider factors like data sources and algorithmic transparency to ensure fairness and accuracy in automatic prompt design using AI tools. Companies must also monitor their systems regularly and adjust as needed to ensure they produce the best possible results.
Tips and Extensions for Effective Prompt Engineering
Use External Tools to Streamline Prompt Engineering
As a prompt engineer, you can leverage various external tools to make your job easier. These tools allow you to create chatbots and automate customer conversations without building everything from scratch. One such tool is Dialogflow, a natural language processing platform that allows you to build conversational interfaces for websites, mobile applications, and messaging platforms.
Dialogflow has pre-built templates and workflows that you can customize to suit your needs. You can also use the platform’s machine-learning capabilities to train your chatbot on specific topics or phrases. This will help your chatbot understand user queries better and provide more accurate responses.
Another tool that you can use as a prompt engineer is Botpress. Botpress is an open-source platform that allows you to create chatbots using a visual interface. The platform has pre-built modules for everyday tasks like authentication, database integration, and natural language understanding.
Botpress also has a built-in analytics dashboard that lets you track user interactions with your chatbot in real time. This will help you identify areas where users struggle or where your prompts need improvement.
Keep in mind:
- Understand the prompt’s purpose and the desired outcome from the user’s response.
- Tailor the prompt to the user’s behavior and preferences, using data and analytics to create a personalized experience.
- Use natural language processing techniques to ensure the prompt is easy to understand and conversational.
- Provide clear and concise instructions to the user, avoiding complex terminology or jargon.
- Use visual aids, such as images or videos, to support the prompt and make it more engaging.
- Provide multiple response options to the prompt, such as open-ended responses or multiple-choice questions.
- Use positive reinforcement in the wording of the prompt to encourage users to take action.
- Continuously monitor and analyze the prompt’s effectiveness, using A/B testing to identify the most successful prompts.
- Consider using chatbots or other AI technologies to improve the speed and accuracy of prompt delivery.
- Keep up-to-date with the latest trends and advancements in prompt engineering, and continue to adapt and evolve strategies accordingly.
Keep Prompts Concise and Relevant
When creating prompts, it’s essential to keep them short and to the point. Long prompts can confuse users and make it difficult for them to understand what to do next. Instead, use simple language and focus on the user’s needs.
For example, if someone asks your chatbot for directions to a nearby restaurant, don’t give them a long list of options or ask additional questions about their preferences. Instead, provide concise directions right away.
It’s also essential that your prompts are relevant to the context of the conversation. If someone asks your chatbot about the weather in New York City, don’t respond by asking them if they want information about other cities too.
Test and Refine Your Prompts Regularly
Prompt engineering is an iterative process; testing and refining your prompts is essential. Use analytics tools to track user interactions with your chatbot and identify areas for improvement.
For example, if you notice that users frequently abandon conversations at a certain point, it might indicate that your prompts are confusing or irrelevant. Use this information to adjust your prompts accordingly.
You can also conduct user surveys or focus groups to gather feedback on your prompts. This will help you understand how users perceive your chatbot and what changes they want.
Advanced Prompting Techniques: Text-to-Image, Prefix-Tuning, and Chain-of-Thought
Text-to-image, prefix-tuning, and chain-of-thought are advanced prompting techniques that have gained popularity in deep learning for reasoning tasks. These methods offer unique ways to inject context into the prompt and guide the model through the reasoning steps.
- The text-to-image approach involves injecting images into the prompt to provide context and help the model reason better. This technique is especially useful for complex prompts that require visual understanding. By incorporating relevant images, the model can better understand the meaning of the text and make more accurate predictions. For instance, imagine a prompt asking a machine learning model to identify different types of flowers based on their characteristics. The text alone may not provide enough information for accurate identification, but by adding images of each flower type alongside their descriptions, the model can better understand what to look for.
- Prefix-tuning is another method of fine-tuning the initial instruction by adding relevant prefixes to guide the model through the reasoning steps. This technique has improved performance on tasks such as question answering and language modeling. For example, consider a prompt asking a machine learning model to generate a summary of an article about climate change. By adding prefixes such as “In this article” or “The main point is” before each sentence in the prompt, we can guide the model toward generating a more coherent summary that accurately captures key points from the original article.
- The chain-of-thought technique involves breaking down the prompt into a chain of smaller reasoning steps to help the model reason more effectively. This method has proved effective at improving performance on natural language inference and commonsense reasoning tasks. For instance, consider a prompt asking a machine learning model whether it’s safe for someone with diabetes to consume artificial sweeteners. Instead of presenting all the information in one long paragraph, we could break the prompt into smaller steps, such as “What are artificial sweeteners?” and “How do they affect blood sugar levels?” Presenting information in this more digestible format helps the model reason step by step.
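The two text-based techniques above can be sketched together: a guiding prefix plus a chain of sub-questions. One caveat: prefix-tuning proper learns continuous prefix vectors prepended to the model’s input rather than literal text; the textual version here mirrors the description above. The sub-questions are illustrative:

```python
def with_prefix(prefix, body):
    """Prepend a guiding phrase (a textual stand-in for prefix-tuning)."""
    return f"{prefix}\n{body}"

# Chain-of-thought: break one broad question into ordered sub-questions.
steps = [
    "What are artificial sweeteners?",
    "How do artificial sweeteners affect blood sugar levels?",
    "Given the above, is it safe for someone with diabetes to consume them?",
]
chain = "\n".join(f"{i}. {q}" for i, q in enumerate(steps, 1))

prompt = with_prefix("Answer each question in order, using earlier answers:", chain)
print(prompt)
```

Sending the model this single structured prompt, instead of the broad final question alone, is what nudges it to work through the intermediate reasoning.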
In addition to these three methods, other techniques can improve prompting. For instance, context-aware prompts that consider previous inputs and outputs from the model can guide it toward more accurate predictions. Similarly, incorporating feedback loops that allow the model to learn from its mistakes and adjust its predictions can also improve performance.
It’s worth noting that while these methods have shown promise in improving deep learning models’ performance on reasoning tasks, they are not without their limitations. For example, the text-to-image approach may not always be feasible or practical due to the limited availability of relevant images for specific tasks. Similarly, the chain-of-thought technique may not always be effective for all prompts.
Despite these limitations, advanced prompting techniques such as text-to-image, prefix-tuning, and chain-of-thought offer exciting new ways to inject context into prompts and guide deep learning models through reasoning steps. As research in this area continues to evolve, we can expect even more innovative approaches that push the boundaries of what is possible with deep learning-based reasoning.
Leveraging Data for Better Prompts: Strategies for Success
Utilizing a Diverse Training Dataset for Improved Accuracy
One of the critical factors in developing an effective prompt engineering system is a diverse training dataset. It improves the accuracy of prompt responses by giving the model a more comprehensive range of examples to learn from.
When creating a training dataset, it’s essential to include a variety of different types of questions and prompts. This can include both simple and complex queries and those that are more specific or nuanced. By providing this diversity, the model can better understand how to respond to different types of inputs.
Incorporating Reasoning and Rationales into Training Data
Another strategy for success in prompt engineering is incorporating reasoning and rationales into the training data. This can guide the learning process and lead to more consistent performance.
By explaining why specific answers are correct or incorrect, the training data helps the model understand how to arrive at its conclusions. This approach also aids generalization, allowing the system to apply its knowledge in new contexts where it has not seen specific examples before.
Taking Steps for Correct Answers in Training Data
Another important consideration when developing a training dataset is taking steps to ensure that correct answers are included. This can improve the ability of the model to provide accurate responses in the future by reinforcing correct associations between inputs and outputs.
One way to do this is through human validation or verification, where experts review each example in the dataset and confirm whether it is correct. Another approach is through automated checks or tests that verify whether specific outputs match expected results based on known inputs.
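The automated-check idea can be sketched as follows: compare each candidate example’s output against the expected answer and keep only the matches. The field names and data are made up:

```python
def passes_check(example):
    """Accept an example only if its output matches the expected answer."""
    return example["model_output"].strip().lower() == example["expected"].strip().lower()

# Made-up candidate training examples with known-correct answers.
dataset = [
    {"model_output": "Paris", "expected": "paris"},
    {"model_output": "Lyon", "expected": "paris"},
]

validated = [ex for ex in dataset if passes_check(ex)]
print(len(validated))  # only the matching example survives
```

Exact-match comparison is the simplest possible check; real pipelines often allow for paraphrases or use a scoring model instead.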
Continuously Updating and Refining Training Datasets
Finally, it’s important to note that prompt engineering is an ongoing process that requires continuous updating and refining of training datasets over time. As new information becomes available or feedback is received from users, these datasets should be updated accordingly.
This can involve adding new examples to the dataset, removing outdated or irrelevant ones, or adjusting the weighting of specific inputs based on their importance. By continuously refining and improving the training data, the overall performance of the prompt engineering system can be enhanced.
Examples and Social Proofs
Many examples show how utilizing a diverse training dataset and incorporating reasoning and rationales can improve performance in prompt engineering systems. For example, a recent study by Google researchers found that using a more diverse set of training data led to significant improvements in model accuracy across multiple domains.
Similarly, companies like OpenAI have demonstrated how incorporating reasoning and rationales into their models can help them achieve state-of-the-art performance on complex question-answering tasks. And organizations like Kaggle regularly host competitions where participants develop new approaches for improving prompt engineering systems based on real-world datasets.
Some tips are:
- Collect and analyze user data to understand their behavior and preferences.
- Use this data to create personalized prompts tailored to the user’s needs.
- Leverage machine learning and natural language processing techniques to identify patterns and trends in user data.
- Use A/B testing to experiment with different prompts and identify the most effective ones.
- Continuously monitor and analyze prompt performance to refine and improve effectiveness.
- Integrate data from multiple sources, such as social media and customer feedback, to comprehensively understand user needs and preferences.
- Use data to predict user behavior and anticipate their needs, delivering timely and relevant prompts.
- Identify key performance indicators (KPIs) for prompt effectiveness, such as response or conversion rates, and track these metrics over time.
- Use data to segment users into different groups based on behavior and preferences, delivering tailored prompts for each group.
- Use data to identify opportunities for innovation and differentiation, creating prompts that stand out from competitors and deliver unique value to users.
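Two of the tips above, KPI tracking and user segmentation, can be sketched together. The segment names and the event log are invented:

```python
# Invented event log: (user_segment, converted_after_prompt).
events = [
    ("new", True), ("new", False), ("new", True),
    ("returning", True), ("returning", True),
]

def conversion_rate_by_segment(events):
    """Conversion rate (a KPI) per user segment."""
    totals, hits = {}, {}
    for segment, converted in events:
        totals[segment] = totals.get(segment, 0) + 1
        hits[segment] = hits.get(segment, 0) + int(converted)
    return {s: hits[s] / totals[s] for s in totals}

rates = conversion_rate_by_segment(events)
print(rates)  # 'returning' users convert at a higher rate here
```

Tracking this metric per segment over time shows which groups respond to which prompts, which in turn drives the tailoring described above.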
ChatGPT and Beyond: Examples of Effective Prompts in Action
Examples of Effective Prompts in Action
Good prompts are essential for effective communication between humans and AI. They provide the context for the AI to understand the user’s intent and generate human-like responses to queries. ChatGPT is an AI language model that uses good prompts to achieve this goal.
ChatGPT is a state-of-the-art natural language processing system developed by OpenAI, which can generate human-like text based on a given prompt. It uses deep learning techniques to analyze large amounts of text data and learn patterns in language usage. This allows it to generate coherent and relevant responses to various queries.
Different Prompt Categories
Different prompt categories can be used depending on the type of task or query. Fact-based prompts are used when the user wants information about a specific topic or subject. Opinion-based prompts are used when the user wants to express their opinion or get feedback from others. Scenario-based prompts are used when the user wants to simulate a real-life situation or scenario.
Training examples are crucial in developing good prompts, as they help the AI learn how to respond appropriately to different types of queries. The more training examples there are, the better ChatGPT becomes at generating accurate and relevant responses.
Examples of Effective Prompts in Action
One example of effective prompts in action is ChatGPT’s ability to generate realistic and engaging conversations with users. For instance, if you ask ChatGPT, “What is your favorite movie?” it might respond with, “My favorite movie is The Matrix because I love science fiction.” This response shows that ChatGPT understands what a movie is and has an opinion about it.
Another example of effective prompts in action is OpenAI’s Codex, a model that assists programmers in writing code more efficiently. Codex uses contextual prompts such as function names, variable names, and comments to suggest code snippets that match the programmer’s intent.
Many social proofs demonstrate the effectiveness of good prompts in AI systems. For example, ChatGPT has generated text for various applications such as chatbots, customer service agents, and virtual assistants. These applications have been shown to improve customer engagement and satisfaction by providing fast and accurate responses to queries.
Another social proof is the use of code models like Codex in software development. Codex-powered tools have been reported to substantially reduce coding time by suggesting snippets that match the programmer’s intent. This saves time and can improve code quality by reducing errors and bugs.
The Importance of Effective Prompt Engineering in SEO Content Writing
Effective prompt engineering is crucial for SEO content writing. It involves identifying and targeting specific keywords or phrases that users frequently search for on search engines. By doing so, you can ensure that your content is relevant and valuable to your target audience, which can improve your search engine rankings.
The Importance of Effective Prompt Engineering
Incorporating relevant prompts into your content increases the chances of your content appearing in search engine results pages (SERPs) for specific queries. This is because search engines use complex algorithms to determine the relevance of a page to a user’s query based on various factors, including keyword usage, user intent, and other ranking signals.
Poor prompt engineering can result in irrelevant or low-quality content that fails to meet the needs of your target audience. This can negatively impact your SEO efforts by reducing traffic to your website and decreasing engagement with your brand.
Effective prompt engineering involves conducting thorough keyword research, analyzing search trends, and understanding user intent. Keyword research helps you identify the most relevant keywords and phrases for your business or industry. Analyzing search trends keeps you up to date with changes in user behavior and preferences over time.
Understanding user intent is critical for creating content that aligns with the needs of your target audience. User intent refers to the reason behind a user’s query when they type it into a search engine. For example, if someone searches for “best running shoes,” their intent may be to find product recommendations or reviews rather than general information about running shoes.
By understanding user intent, you can create content that meets their needs and provides value. This improves the relevance of your content and helps establish trust with potential customers by demonstrating expertise in their area of interest.
The Benefits of Effective Prompt Engineering
Effective prompt engineering has several benefits for SEO content writing. Firstly, it increases organic traffic to your website by improving visibility in SERPs for relevant queries. This means more people are likely to find and engage with your content, which can lead to increased brand awareness and customer acquisition.
Secondly, effective prompt engineering helps establish authority in your industry by demonstrating expertise and understanding of your target audience’s needs. This builds trust and credibility with potential customers, which translates into higher conversion rates and customer loyalty over time.
Finally, effective prompt engineering helps you achieve your business goals by driving traffic to specific pages or products on your website. By targeting keywords or phrases related to your offerings, you increase the likelihood of users finding and engaging with those pages.
Wrapping Up: Final Thoughts on Prompt Engineering
In conclusion, prompt engineering is a crucial aspect of language model design that can significantly impact the final answer generated by these models. Effective prompts can guide language models to produce more accurate and relevant output.
To achieve this, it is crucial to understand prompt engineering principles and best practices. This includes designing clear, concise, and specific prompts for the task. It also involves leveraging data to identify patterns and trends that can inform prompt design.
Furthermore, several advanced techniques can enhance prompt engineering, including text-to-image prompting, prefix-tuning, and chain-of-thought prompting. Incorporating these methods into the prompt design process can further improve the accuracy and relevance of language model output.
It is also worth noting that AI technology has made significant strides in recent years; by automating parts of prompt design, businesses can save time and resources while still achieving high-quality results.
Ultimately, effective prompt engineering is essential for businesses leveraging language models for tasks such as customer refunds or content creation for SEO purposes.
By following best practices and utilizing advanced techniques where appropriate, we can consistently ensure that our language models generate accurate and relevant output.
You can learn more about our Prompt Engineering Services here!