Open Google Cloud CCAI and select your project where Conversational Insights is enabled.
Click Insights > Quality AI.
Make a scorecard
You can create different scorecards for different business units.
Click Scorecards > + Add scorecard.
Click Untitled Scorecard and add a name for your scorecard, then click Unspecified description and add a description of what the scorecard is for.
Click + Add question > add a question to evaluate agent performance and an optional tag. For each question, you can select a tag: business, customer, or compliance.
Example
Question: Did the agent understand the customer's needs by asking thoughtful questions throughout the conversation?
Tag: Customer
Add instructions to define the interpretation of each answer choice.
Example
Instructions:
Yes: The agent asked thoughtful questions. OR The agent demonstrated active reading skills.
No: The agent did not ask thoughtful questions. The customer had to repeat themselves multiple times due to the agent's lack of understanding toward the customer's needs. The agent only asked necessary questions, such as their zip code or product name.
NA: Unable to ask questions. The interaction was a transfer or was not sales related.
Select an answer type, enter the answer choices and their corresponding scores, and check the box to include N/A, if applicable.
Click + Add answer choice to include additional answer choices and their scores.
Example
Answer type: Yes/No
Answer choice and score: Yes 1, No 0
Add 'N/A' (not available) as an answer choice. If selected, the question will not be included in the total score calculation. (A sketch of this scoring rule appears after these steps.)
Click Save.
Repeat steps 3-7 for each of your questions > click Next.
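The N/A behavior described in the steps above implies a simple scoring rule: N/A answers drop out of both the numerator and the denominator. The following minimal Python sketch illustrates that rule; the function name and data shape are illustrative and not part of any Quality AI API.

```python
# Illustrative only: answered questions contribute their score, and
# N/A answers are excluded from the total score calculation entirely.
# Assumes each question's maximum score is 1, as in the Yes/No example.
def scorecard_score(answers: list[float | None]) -> float | None:
    """Return a percentage score; None represents an N/A answer."""
    scored = [answer for answer in answers if answer is not None]
    if not scored:
        return None  # every question was N/A, so there is no score
    return 100 * sum(scored) / len(scored)

# Yes = 1, No = 0, one N/A question: 2 of 3 scored questions passed.
print(scorecard_score([1, 0, None, 1]))  # 66.66...
```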
Source menu
By default, Quality AI displays information based on the scorecard you most recently created or accessed. You can also set a default scorecard for new users who haven't created or selected one, so that they see project data the first time they open a Quality AI page in the Conversational Insights console.
As an administrator, you have two options to set the default scorecard for your project.
Choose one of the following options:
Click articleScorecards > Change.
Click Settings > Scorecards.
Select a Default scorecard from the menu and click Save.
Calibrate the AI model
You calibrate models using example conversations, which consist of conversations, associated questions, and expected answers. You must upload example conversations as a CSV file. For more details on formatting your example conversations, see the Quality AI best practices page.
Prepare example conversations
Quality AI provides a template that automatically generates the alphanumeric conversation, scorecard, and question IDs for a specific scorecard. You can also filter which conversations to include. You must add the answers for each question yourself; a sketch of that step follows the numbered steps below.
Follow these steps to create your template for example conversations within the Quality AI console.
Navigate to Conversations and add filters to select specific conversations.
Click Create example conversations template.
Click Scorecard and select the name of your scorecard.
Click Cloud Storage Destination and enter the location of a file in your Cloud Storage bucket.
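As a sketch of that manual step, the following Python fills in the answer column of a downloaded template using the csv module. The column headers (conversationId, questionId, answer) and file names are hypothetical; use whatever headers and paths your generated template actually contains.

```python
import csv

# Hypothetical expected answers keyed by (conversation ID, question ID);
# supplying these is the manual step the generated template leaves to you.
expected_answers = {
    ("conv-123", "q-abc"): "Yes",
}

# "template.csv" is the generated template downloaded locally from your
# Cloud Storage bucket; all column names here are assumptions.
with open("template.csv", newline="") as src, \
        open("example_conversations.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        key = (row["conversationId"], row["questionId"])
        row["answer"] = expected_answers.get(key, row["answer"])
        writer.writerow(row)
```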
Upload example conversations
When you have a CSV file with your example conversations, upload it to Cloud Storage so that it's available for model calibration.
Within your Quality AI-enabled project, upload example conversations to your Cloud Storage bucket. (A scripted alternative is sketched after these steps.)
For a detailed walkthrough of using Cloud Storage with the Google Cloud console, see the Cloud Storage documentation.
In the Quality AI console, navigate to Scorecards > select your scorecard > click Next to select example conversations.
Click + Add example conversations and enter the path to your Cloud Storage bucket, then click Add.
Click Begin calibration to initiate model calibration, which can take from four to eight hours. (Time increases for scorecards with more questions or example conversations.)
After calibration, click Launch new version.
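Step 1 can also be scripted. This sketch uploads the CSV with the google-cloud-storage Python client; the bucket and object names are placeholders.

```python
from google.cloud import storage  # pip install google-cloud-storage

# Placeholder names; substitute your own bucket and object path.
BUCKET_NAME = "my-quality-ai-bucket"
OBJECT_NAME = "calibration/example_conversations.csv"

client = storage.Client()  # uses Application Default Credentials
blob = client.bucket(BUCKET_NAME).blob(OBJECT_NAME)

# Upload the local CSV; the resulting gs:// path is what you enter in
# the + Add example conversations dialog.
blob.upload_from_filename("example_conversations.csv")
print(f"Uploaded to gs://{BUCKET_NAME}/{OBJECT_NAME}")
```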
Edit example conversations
You can edit example conversations using the FeedbackLabel resource. The API supports creating new example conversations and updating or deleting existing ones. You can also bulk upload example conversations using the bulkUploadFeedbackLabels method. To update existing example conversations, re-upload the CSV file with the same IDs and the updated answer labels.
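As a rough sketch of the bulk-upload path, the following calls the REST method with an authenticated session. The endpoint path follows the bulkUploadFeedbackLabels reference named above, but the request body fields (gcsSource, objectUri) are assumptions based on the pattern of other Insights import methods; verify them against the current API reference before relying on this.

```python
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Placeholders; substitute your project, location, and CSV location.
PROJECT = "my-project"
LOCATION = "us-central1"
GCS_URI = "gs://my-quality-ai-bucket/calibration/example_conversations.csv"

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

# Endpoint path per the bulkUploadFeedbackLabels reference; the body
# shape below is an assumption, not confirmed by this page.
url = (
    "https://contactcenterinsights.googleapis.com/v1/"
    f"projects/{PROJECT}/locations/{LOCATION}:bulkUploadFeedbackLabels"
)
response = session.post(url, json={"gcsSource": {"objectUri": GCS_URI}})
response.raise_for_status()
print(response.json())  # long-running operation tracking the upload
```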
Edit scorecards and recalibrate the model
You can also edit an existing scorecard to change the questions associated with example conversations. Editing scorecards requires that you recalibrate the model to incorporate the changes.
In the Quality AI console, navigate to Scorecards > select your scorecard > click Edit.
Depending on whether you want to add or change a question, choose an option:
Click + Add question.
Click Edit on the question.
Fill in the form.
To initiate model recalibration, click Save > Next > Begin calibration.
Wait at least four to eight hours.
After calibration, click Launch new version.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-29 UTC."],[[["\u003cp\u003eThis page outlines the process of setting up and using Quality AI, including the necessary prerequisites such as enabling Conversational Insights and Dialogflow runtime integration.\u003c/p\u003e\n"],["\u003cp\u003eUsers can create scorecards tailored to different business units by defining questions, tags, and answer choices with corresponding scores to evaluate agent performance.\u003c/p\u003e\n"],["\u003cp\u003eCalibrating the AI model requires uploading example conversations in a CSV format, which includes conversations, questions, and expected answers, that can be prepared using a provided template within Quality AI.\u003c/p\u003e\n"],["\u003cp\u003eAfter preparing and uploading the example conversations to a Cloud Storage bucket, the model calibration can be initiated, which is a process that takes between four and eight hours before launching the new version.\u003c/p\u003e\n"],["\u003cp\u003eExample conversations can be managed through editing, bulk uploading, and updating using the FeedbackLabel resource and its associated API methods.\u003c/p\u003e\n"]]],[],null,["# Get started with Quality AI\n\nThis page walks you through the steps to set up and start using Quality AI.\n\nPrerequisites\n-------------\n\nThe following steps confirm you can access Quality AI:\n\n1. [Enable Conversational Insights](/contact-center/insights/docs/before-you-begin) for your project and verify that you have access to the Conversational Insights API.\n2. [Enable Dialogflow runtime integration](/contact-center/insights/docs/enable-dialogflow-runtime-integration).\n\nUse Quality AI\n--------------\n\n1. Open [Google Cloud CCAI](https://ccai.cloud.google.com/projects) and select your project where Conversational Insights is enabled.\n2. Click **Insights** \\\u003e insert_chart **Quality AI**.\n\n### Make a scorecard\n\nYou can create different scorecards for different business units.\n\n1. Click article **Scorecards** \\\u003e **+ Add scorecard**.\n2. Click **Untitled Scorecard edit** and add a name for your scorecard \\\u003e **Unspecified description edit**, and add a description of what the scorecard is for.\n3. Click **+ Add question** \\\u003e add a question to evaluate agent performance and an optional tag. For each question, you can select a tag: business, customer, or compliance.\n\n *Example*\n\n **Question**: Did the agent understand the customer's needs by asking thoughtful questions throughout the conversation?\n\n **Tag**: Customer\n4. Add instructions to define the interpretation of each answer choice.\n\n *Example*\n\n **Instructions**:\n\n **Yes**: The agent asked thoughtful questions. OR The agent demonstrated active reading skills.\n\n **No**: The agent did not ask thoughtful questions. The customer had to repeat themselves multiple times due to the agent's lack of understanding toward the customer's needs. The agent only asked necessary questions, such as their zip code or product name.\n\n **NA**: Unable to ask questions. The interaction was for transfer or was a non-sales related interaction.\n5. 
Select an answer type, enter the answer choices and their corresponding scores, and check the box to include `N/A`, if applicable.\n\n6. Click **+ Add answer choice** to include additional answer choices and their scores.\n\n *Example*\n\n **Answer type**: Yes/No\n\n **Answer choice and score**: Yes 1, No 0\n\n check_box **Add 'N/A' (not available) as an answer choice. If selected, the question will not be included in the total score calculation.**\n7. Click **Save**.\n\n8. Repeat steps 3-7 for each of your questions \\\u003e click **Next**.\n\n### Source menu\n\nBy default, Quality AI displays information based on the scorecard you most recently made or accessed. You can also set a default override for new users who haven't created or selected a scorecard. It lets them see data for a project when they first click a Quality AI page in the Conversational Insights console.\n\nAs an administrator, you have two options to set the default scorecard for your project.\n\n1. Choose one of the following options:\n - Click article **Scorecards** \\\u003e **Change**.\n - Click settings \\\u003e **Scorecards**.\n2. Select a **Default scorecard** from the menu and click **Save**.\n\n### Calibrate the AI model\n\nYou calibrate models using example conversations, which consist of conversations, associated questions, and expected answers. You must upload example conversations as a CSV file. For more details on formatting your example conversations, see the [Quality AI best practices page](/contact-center/insights/docs/qai-best-practices#example_conversation_format).\n\n#### Prepare example conversations\n\nQuality AI provides a template which automatically generates the alphanumeric conversation, scorecard, and question IDs for a specific scorecard. You can also filter which conversations to include. You must add the answers for each question.\n\nFollow these steps to create your template for example conversations within the Quality AI console.\n\n1. Navigate to **Conversations** and add [filters](/contact-center/insights/docs/filtering) to select specific conversations.\n2. Click **Create example conversations template**.\n3. Click **Scorecard** and select the name of your scorecard.\n4. Click **Cloud Storage Destination** and enter the location of a file in your Cloud Storage bucket.\n\n#### Upload example conversations\n\nWhen you have a CSV file with your example conversations, you must upload it to facilitate model calibration.\n\n1. Within your Quality AI-enabled project, upload example conversations to your Cloud Storage bucket.\n\n For a detailed walkthrough on how to use Google Cloud Storage with the Google Cloud console, see the [Cloud Storage documentation](/storage/docs/discover-object-storage-console).\n2. In the Quality AI console, navigate to **Scorecards** \\\u003e select your scorecard \\\u003e click **Next** to select example conversations.\n\n3. Click **+ Add example conversations** and enter the path to your Cloud Storage bucket, then click **Add**.\n\n4. Click **Begin calibration** to initiate model calibration, which can take from four to eight hours. (Time increases for scorecards with more questions or example conversations.)\n\n5. After calibration, click **Launch new version**.\n\n#### Edit example conversations\n\nYou can edit example conversations using the [FeedbackLabel](/contact-center/insights/docs/reference/rest/v1/projects.locations.conversations.feedbackLabels) resource. The API supports creating, updating and deleting existing example conversations. 
You can also bulk upload example conversations using the [bulkUploadFeedbackLabels](/contact-center/insights/docs/reference/rest/v1/projects.locations/bulkUploadFeedbackLabels) method. Example conversations can always be updated by uploading the CSV file with the same IDs and updated answer label.\n\n#### Edit scorecards and recalibrate the model\n\nYou can also edit an existing scorecard to change the questions associated with example conversations. Editing scorecards requires that you recalibrate the model to incorporate the changes.\n\n1. In the Quality AI console, navigate to **Scorecards** \\\u003e select your scorecard \\\u003e click **Edit**.\n2. Depending on whether you want to add or change a question, choose an option:\n - Click **+ Add question**.\n - Click **Edit** on the question.\n3. Fill in the form.\n4. To initiate model recalibration, click **Save** \\\u003e **Next** \\\u003e **Begin calibration**.\n5. Wait at least four to eight hours.\n6. After calibration, click **Launch new version**.\n\nWhat's next?\n------------\n\n- [Quality AI best practices](/contact-center/insights/docs/qai-best-practices)"]]