Optimize Your Assistant
Congratulations on successfully launching your Conversational Assistant! Now, it's time to ensure that your users experience nothing short of excellence and receive maximum value from your Assistant.
To help you achieve this, we've compiled a comprehensive guide featuring the best practices for optimizing your Conversational Assistant using the powerful tools within Studio.
The moment you launch your Assistant on your preferred channel, engagements (that is, your customers' and users' interactions with your Assistant) start being recorded on the 'Engagements' page.
The 'Engagements' page serves as a dashboard displaying all interactions with your Assistant, arranged chronologically along with their review status. You also have the option to search for specific engagements using keywords or date ranges.
Each engagement contains a valuable transcript of the user's conversation with your Assistant, covering the initial question or request, any follow-ups, and feedback. From these interactions, you can identify whether your Assistant successfully addressed the customer's query.
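If you ever export engagements for offline analysis, the same keyword and date-range search can be reproduced in a few lines. The sketch below is illustrative only: the `Engagement` shape is an assumption, not Studio's actual export schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical engagement record for this sketch; Studio's real export
# format may differ.
@dataclass
class Engagement:
    started: date
    reviewed: bool
    transcript: list[str] = field(default_factory=list)

def find_engagements(engagements, keyword=None, start=None, end=None):
    """Filter engagements by transcript keyword and/or date range,
    mirroring the search options on the Engagements page."""
    results = []
    for e in engagements:
        if start and e.started < start:
            continue
        if end and e.started > end:
            continue
        if keyword and not any(keyword.lower() in line.lower()
                               for line in e.transcript):
            continue
        results.append(e)
    return results
```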
Your Assistant provides a wide range of response types to cater to user queries. By analyzing the different response types triggered, you can pinpoint areas that require further training to enhance your Assistant's confidence and accuracy or determine the need for new questions and responses.
Short tail responses exhibit a high-confidence match to the user's question, typically a single direct answer starting with "I am confident…"
To ensure the accuracy of short tail responses, you can cross-check them with your understanding of the user's question or user feedback. If you find that a short tail response was incorrect, review the associated Q&A and eliminate any inaccuracies or irrelevant information. Train the Assistant by adding the user query as an alternative input to the correct Q&A.
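As a concrete illustration of that retraining step, here is a minimal sketch. The Q&A structure and the `add_alternative_input` helper are hypothetical stand-ins for the edit you would make on the Q&A page in Studio.

```python
# Hypothetical Q&A record: the entry that *should* have matched the user.
qna = {
    "id": "qna-password-reset",
    "question": "How do I reset my password?",
    "alternative_inputs": ["forgot my password"],
}

def add_alternative_input(qna: dict, user_query: str) -> None:
    """Attach the user's exact wording to the correct Q&A so the
    Assistant matches it with high confidence next time."""
    if user_query not in qna["alternative_inputs"]:
        qna["alternative_inputs"].append(user_query)

add_alternative_input(qna, "i can't get into my account")
```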
Long tail responses carry a slightly lower confidence level: the Assistant presents multiple options that it thinks might be correct.
To optimize long tail responses, identify whether the Assistant answered the user correctly by checking if the user entered a corresponding number from the provided options. Train the Assistant by adding the user input within the session to the matching response (Intent) identified via the long tail. Utilize the Q&A page for easy searching and editing of the responses.
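The long tail workflow can be pictured the same way: map the option number the user selected back to its intent, then add their original wording as a training input. All names below are assumptions for illustration.

```python
# Hypothetical mapping from the numbered options the Assistant offered
# to the intents they represent.
long_tail_options = {
    1: "intent-billing-refund",
    2: "intent-billing-invoice",
    3: "intent-billing-upgrade",
}

def resolve_long_tail(user_choice: int, user_query: str,
                      intents: dict[str, list[str]]) -> None:
    """Attach the original query to whichever intent the user selected."""
    intent_id = long_tail_options.get(user_choice)
    if intent_id is None:
        return  # the user didn't pick a listed option; review manually
    intents.setdefault(intent_id, []).append(user_query)

training_inputs: dict[str, list[str]] = {}
resolve_long_tail(2, "where can I see last month's bill?", training_inputs)
```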
Low confidence responses occur when the Assistant has very low confidence in its match for the user's query; in this case, it may ask the user to rephrase the question.
To address low confidence responses, search for existing Q&As within the project knowledge base that could support the user query. If a relevant Q&A already exists, add the user query from the session as an alternative input. If not, create a new user intent to cover the user's query.
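Taken together, the three response types amount to a simple triage by confidence. The sketch below summarizes them; the thresholds are invented for illustration and are not Studio's actual cut-offs.

```python
# Illustrative thresholds only; Studio does not publish exact values.
SHORT_TAIL_MIN = 0.85
LONG_TAIL_MIN = 0.50

def classify_response(confidence: float) -> str:
    """Bucket a response the way this guide describes: a confident
    direct answer, a list of likely options, or a rephrase prompt."""
    if confidence >= SHORT_TAIL_MIN:
        return "short tail"
    if confidence >= LONG_TAIL_MIN:
        return "long tail"
    return "low confidence"
```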
User feedback plays a pivotal role in improving your Assistant's learning capabilities. Each time the Assistant provides a response, it asks the user if it helped with their query, prompting a 'Yes' or 'No' answer.
Positive Feedback (Yes): No further action is required as the response was satisfactory.
Negative Feedback (No): Investigate why the response left the user dissatisfied; they may have needed more information or a more specific answer. Review the Q&A and make the necessary adjustments, or create a new Q&A to cover the user's query. Also verify the correctness of the Q&A and remove any irrelevant inputs to keep the training accurate.
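These feedback rules also reduce to a short decision procedure. The parameters below are assumptions made for this sketch, not Studio's data model.

```python
def triage_feedback(feedback: str, matched_qna: str | None) -> str:
    """Suggest the next action for a reviewed engagement, following
    the feedback guidance above."""
    if feedback == "yes":
        return "no action"         # positive feedback: response was satisfactory
    if matched_qna is None:
        return "create new Q&A"    # nothing in the knowledge base covered it
    return "review and amend Q&A"  # matched, but the answer missed the mark

print(triage_feedback("no", "qna-password-reset"))  # review and amend Q&A
```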
If you've identified knowledge gaps and require new Q&As, refer to the guide on creating Q&As to find out more.
If you need to amend existing Q&As, refer to the guide on editing Q&As to find out more.
Through the process of reviewing engagements, you may discover a growing demand for more self-serve experiences from your customers. Studio offers flow and integration builders to deliver these experiences seamlessly.
Find out more about Integrations.
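As a rough picture of what such a self-serve experience involves, here is one way a simple flow could be expressed as data. Studio's flow builder is visual, so this structure, the step types, and the `orders.lookup` integration name are all assumptions for illustration, not a real configuration format.

```python
# Purely illustrative flow definition: ask for an order number, look it
# up through an integration, and report the status back to the user.
order_status_flow = {
    "name": "order-status",
    "steps": [
        {"type": "ask", "prompt": "What's your order number?", "save_as": "order_id"},
        {"type": "integration", "call": "orders.lookup", "with": ["order_id"]},
        {"type": "say", "message": "Your order {order_id} is {status}."},
    ],
}
```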
The frequency of optimizing your Assistant depends on the volume of interactions received and the time since its launch. During the initial post-launch period in a 'live' environment, it's crucial to optimize as much as possible to accelerate your Conversational Assistant's learning curve and avoid negative user experiences.
We recommend reviewing sessions daily during this phase: if you receive fewer than 100 sessions, review them all; if you receive more, review at least 100.
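That daily target is easy to state precisely; a minimal sketch of the rule:

```python
def sessions_to_review(total_sessions: int) -> int:
    """Daily review target during the launch phase: review everything
    when volume is under 100, otherwise review at least 100."""
    return total_sessions if total_sessions < 100 else 100

assert sessions_to_review(40) == 40     # low volume: review them all
assert sessions_to_review(2500) == 100  # high volume: at least 100
```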
Beyond the launch period, the frequency and duration of session reviews and optimization can be adjusted based on the volume of interactions received.
With these best practices in mind, you'll be well-equipped to unlock the full potential of your Conversational Assistant and ensure exceptional user experiences. Happy optimizing!