Use AI quality management

Attention: At GoTo, we’re committed to using AI responsibly and according to industry best practices. To learn more about how we implement responsible AI principles, please view our AI Trust and Transparency page. For additional information on ownership of your data and how it’s handled, see the Third-Party Services Provided by Us section of our Terms and Conditions.

You’re in control of your experience with AI Quality Management. If you choose not to use AI-powered features, you can turn them off at any time. To turn off this feature, navigate to Contact Center > Quality management > Configuration > Queues and disable AI evaluations on the relevant queues as needed.

Leverage the power of AI to effectively and fairly evaluate agents on their handling of all inbound queue calls, thus freeing up your time to focus on assessing and improving customer interactions. This feature may not be available with your plan.

Note: Supported languages for this feature include English, Spanish, French, Portuguese, German, Italian, Chinese, Arabic, Hindi, Japanese, and Korean.
Our AI Quality Management feature is a fully automated call scoring tool that evaluates every inbound queue call, for all agents and queues, against curated customer service-focused questions benchmarked for accuracy. It produces fair and actionable results within minutes, filterable by agent, time, and queue.

CC Admins (or supervisors with configuration rights) can select which questions to use in the evaluations. Then the AI quality management tool will automatically receive a temporary transcript of each inbound queue call to evaluate the agent — in a yes/no format — based on the selected questions. Once the evaluation has been generated, that temporary transcript is permanently deleted. Supervisors can then review those evaluations on an overall team level, or drill down by agent and specific interactions, within minutes of the call taking place. As needed, corrections can be made using our feedback feature.
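In rough terms, the flow works like the sketch below. Everything in it (the record layout, the evaluate_call function, and the stand-in scorer) is a hypothetical illustration of the yes/no scoring flow described above, not GoTo's actual implementation or API.

    # Hypothetical illustration of the yes/no evaluation flow; not GoTo's actual implementation.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class EvaluationQuestion:
        text: str
        enabled: bool = True        # disabled questions are skipped

    @dataclass
    class Answer:
        question: str
        value: str                  # "yes", "no", or "n/a"

    def evaluate_call(
        transcript: str,
        questions: list[EvaluationQuestion],
        score_fn: Callable[[str, str], str],
    ) -> list[Answer]:
        """Score each enabled question in a yes/no/N/A format, keeping only the
        answers; per the flow above, the temporary transcript is deleted once
        the evaluation has been generated."""
        answers = [
            Answer(q.text, score_fn(transcript, q.text))
            for q in questions
            if q.enabled
        ]
        del transcript              # release the local reference to the transcript
        return answers

    # Example with a stand-in scorer (the real feature uses an AI model):
    demo = evaluate_call(
        "Agent: Thanks for calling, how can I help? ...",
        [EvaluationQuestion("Did the agent state the company name?")],
        score_fn=lambda transcript, question: "yes",
    )
    print(demo)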

Important:
  • Feedback in Quality Management is never used to train AI models with customer data. Your feedback is analyzed by GoTo product engineers to improve the performance and accuracy of the quality management feature in scoring questions.
  • Feedback is critical for differentiating agent performance issues from system performance issues.

Configure evaluation criteria

Choose your desired criteria for the evaluations from our curated list of questions.

  1. Sign in to the desktop or web app.
  2. From the left navigation menu, select Contact Center > Quality management > Configuration.

Queues

Manage which queues in your contact center have AI-powered Quality Management enabled, and view or update which evaluation forms are assigned to each queue.
Note: Only one form can be assigned to each queue.
  1. View your list of available queues.
  2. For each queue, assign a form from the available drop-down list.
  3. Enable or disable the evaluation on queues as needed.

Forms

Create, name, and manage versioned evaluation forms. Assign forms to queues as needed. Forms track which questions are used and keep a clear historical record through versioning.
  1. Select + New Form and name it. Then select Create.
  2. Build your evaluation using questions from your question bank.
    Note: The default evaluation form includes every question in the question bank. For any new forms, you’ll need to add questions manually.
  3. Optional: Configure form options:
    • N/A scoring: Choose whether responses marked as N/A (Not Applicable) count as “credit” toward the total evaluation score (see the scoring sketch at the end of this section).
    • Default form: From Status, assign the form as the default if you want it automatically applied to new queues created in AI Quality Management.

Result: The Version number updates each time the form is modified (such as adding questions, changing the N/A setting, or enabling/disabling questions).
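To make the N/A scoring option concrete, here is a small sketch of how such a setting could change a total score. The function and its scoring rule are assumptions for illustration only: “yes” earns credit, “no” does not, and an N/A response either earns credit or (in this sketch) is left out of the calculation, depending on the setting; the product’s exact math may differ.

    # Hypothetical illustration only; the product's exact scoring math may differ.
    def evaluation_score(answers: list[str], na_counts_as_credit: bool = True) -> float:
        """Hypothetical score for one interaction, given "yes"/"no"/"n/a" answers."""
        if na_counts_as_credit:
            credited = sum(a in ("yes", "n/a") for a in answers)
            total = len(answers)
        else:
            scored = [a for a in answers if a != "n/a"]
            credited = sum(a == "yes" for a in scored)
            total = len(scored)
        return round(100 * credited / total, 1) if total else 0.0

    answers = ["yes", "no", "n/a", "yes"]
    print(evaluation_score(answers, na_counts_as_credit=True))    # 75.0
    print(evaluation_score(answers, na_counts_as_credit=False))   # 66.7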

Questions

  1. You’ll find separate sections for Default questions (pre-loaded in the system) and Custom questions (created by you).
  2. Select + Add custom question to add new questions.
    • Tip: Custom questions must pass all required validations before you can add them. If you need help, review our best practices for creating custom questions.
    • Note: When you create a custom question, it will be in Preview status for 7 days so you can track its performance. Questions in preview will not affect the total score. After the preview period, you can enable or disable the question for future evaluations.
    • Tip: Create custom questions in more than 100 languages to meet your team’s unique needs.
  3. Enable or disable each question as desired and then select + Apply when finished.

Review agent evaluations

See how your team is doing overall with your selected criteria and dive into specific interactions by agent to provide coaching and assistance.

Before you begin: You must be set up as a user and be assigned the Supervisor role. If you want to listen to the call recordings for a given interaction, you will also need permission to access call recordings and call reports.
If you are part of the team you lead and take inbound queue calls, you will not see evaluations for any inbound queue calls that you personally take. Another supervisor will need to be assigned to the queue to review those evaluations.
  1. Sign in to the desktop or web app.
  2. From Contact Center > Quality management > Evaluation, use the provided filters to narrow down your results by date, duration, queue, or status, if desired.

    Result: You will see the overview metrics for your entire team within the selected parameters, including the total interactions evaluated, the average score among them, the highest- and lowest-performing questions, and the leading and trailing agents (see the sketch after this procedure).

    Troubleshooting: If you don't see an entry as expected, first make sure that the call was made to an inbound queue. If so, it's likely that the evaluation hasn't finished generating (it can take up to two minutes), the interaction was too short to generate an evaluation, or the audio was too faint for the system to detect.

  3. Optional: To review a specific agent, select their name from the list and then drill down by interaction, if desired, to review in more depth. From that interaction's evaluation page, you can select Show details to open the interaction details in the Analytics portal where you can listen to the call, see all applied tags, review notes, and more.
    Tip: When you select an agent, you will see their individual metrics, such as their average and highest scores, as well as a list of all of their interactions within the originally selected parameters. You can update those filters as desired.
What to do next: Provide praise or coaching to your agents. You can also make corrections to an agent's evaluations as needed; see "Modify and approve an evaluation".
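The overview metrics from step 2 boil down to simple aggregation over the filtered evaluations, as in the sketch below. The record layout and names are hypothetical and only show the kind of arithmetic behind the totals, the average score, and the leading and trailing agents; the highest- and lowest-performing questions can be found the same way, grouped by question instead of by agent.

    # Hypothetical illustration of the overview aggregation; not actual product data or code.
    from statistics import mean

    # Hypothetical evaluation records for the filtered period: (agent, score) pairs.
    evaluations = [
        ("Ana", 92.0), ("Ana", 88.0),
        ("Ben", 74.0), ("Ben", 81.0),
        ("Kim", 95.0),
    ]

    total_interactions = len(evaluations)
    average_score = round(mean(score for _, score in evaluations), 1)

    # Per-agent averages determine the leading and trailing agents.
    per_agent: dict[str, list[float]] = {}
    for agent, score in evaluations:
        per_agent.setdefault(agent, []).append(score)
    agent_averages = {agent: mean(scores) for agent, scores in per_agent.items()}

    leading_agent = max(agent_averages, key=agent_averages.get)
    trailing_agent = min(agent_averages, key=agent_averages.get)

    print(total_interactions, average_score, leading_agent, trailing_agent)
    # 5 86.0 Kim Ben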

Modify and approve an evaluation

Use the feedback feature to manually audit an evaluation and provide helpful context for our AI prompt engineers.

Before you begin: You must be set up as a user and be assigned the Supervisor role.
  1. Sign in to the desktop or web app.
  2. From Contact Center > Quality management > Evaluation, use the provided filters to narrow down your results by date and queue, if desired.
  3. Select the agent from the list and then select the specific interaction that you want to review.
    Tip: Remember, you can select Show details to open the interaction details in the Analytics portal where you can listen to the call to assist you in your audit.

    Troubleshooting: If you don't see an entry as expected, first make sure that the call was made to an inbound queue. If so, it's likely that the evaluation hasn't finished generating (it can take up to 2 minutes), the interaction was too short to generate an evaluation, or the audio was too faint for the system to detect.

  4. From the feedback column, select the correct reaction next to each question that needs adjusting. Choose from:
    • Accept
    • Reject
    • N/A: “Not Applicable” (N/A) scoring grants credit for N/A responses by default. If you prefer not to grant credit for N/A questions, you can disable this setting in Quality management > Configuration > Settings.
  5. Add comments in the open text field.
    Important: The open text feedback helps our prompt engineers find better ways to ask the questions to get relevant answers. For example, if your company name is “Hello Friday”, the evaluation may indicate that your agents missed the question about stating the company name. If you offer feedback that "Hello Friday" is actually your company name, then our engineers can inform AI that "Hello Friday" is more than a greeting, which allows AI to have the context it needs to better evaluate the agent's answers — without adding any of that information into the AI data model.
  6. Select Approve when you are finished.
    Tip: Quickly view who last made modifications by hovering over the reaction icons for each question. You can also use the provided filter to narrow your results by score, if desired.

    Result: The agent's score will shift based on your updates.

  7. Optional: You can Reset evaluation questions back to their original scores, if desired.
  8. Optional: If you want to delete an evaluation that should not have been scored, you can Discard that evaluation completely. Discarded evaluations can be recovered if needed.