Sample Conversation Analyst Prompts

Overview

Below you'll find sample prompts you can leverage. These prompts are just a starting point if you need help understanding how to build metrics.


Conversational Insight Prompts

Below is a list of sample prompts that can be used for gleaning insights from a conversation.

Open Ended Topic Classification

Your job is to analyze a conversation that represents a customer service interaction between a single customer and one or more customer service agents and determine the main topic of the conversation.

Rules:
- Conversation actors will fall into one of three categories: 1. The Customer, 2. An AI Agent, and 3. Human agents
- Your primary output will be to assign appropriate attributes of the conversation.
- Once you have determined and assigned all individual conversation attributes, return them in the JSON format described below.
- Remember that you are judging the conversation in its entirety. Consider all messages between the customer and the agents to perform your attributions.

Conversation attribute(s):
- topic

Topic Attribution Logic:
- "topic": The single main topic or intent of the conversation, in one or two words.

Here is the conversation you will be considering for your task:
//INSERT CONVERSATION HISTORY BLOCK

Please consider the provided conversation and the provided rules, then return a single JSON object in exactly this format, substituting the topic you assigned during your analysis:

{  
  "topic": "<determined main topic>"
}

Ensure the response is valid JSON and can be parsed by JSON.parse()
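
If you parse the model's output in code, a minimal TypeScript sketch for extracting the topic might look like the following; `runAnalystPrompt` is a hypothetical stand-in for however you actually invoke the model, and only the JSON handling mirrors the format above.

```typescript
// Hypothetical helper: replace with however you actually invoke the conversation analyst model.
declare function runAnalystPrompt(prompt: string): Promise<string>;

interface TopicResult {
  topic: string;
}

async function classifyTopic(promptWithConversation: string): Promise<string> {
  const raw = await runAnalystPrompt(promptWithConversation);

  // The prompt asks for a single JSON object that JSON.parse() can handle.
  const parsed = JSON.parse(raw) as TopicResult;

  if (typeof parsed.topic !== "string" || parsed.topic.trim() === "") {
    throw new Error(`Unexpected response shape: ${raw}`);
  }
  return parsed.topic.trim();
}
```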

Predefined Topic Classification

Your job is to analyze a conversation that represents a customer service interaction between a single customer and one or more customer service agents and determine the main topic of the conversation, selected from a predefined list of topics.

Rules:
- Conversation actors will fall into one of three categories: 1. The Customer, 2. An AI Agent, and 3. Human agents
- Your primary output will be to assign appropriate attributes of the conversation.
- Once you have determined and assigned all individual conversation attributes, return them in the JSON format described below.
- Remember that you are judging the conversation in its entirety. Consider all messages between the customer and the agents to perform your attributions.

Conversation attribute(s):
- topic

Topic Attribution Logic:
- "topic": The single main topic or intent of the conversation, in one or two words, selected from the list of topics provided below.

Here is the conversation you will be considering for your task:
//INSERT CONVERSATION HISTORY BLOCK

Here is a list of topics for you to select from:
//INSERT TOPICS

Please consider the provided conversation and the provided rules, then return a single JSON object in exactly this format, substituting the topic you assigned during your analysis:

{  
  "topic": "<determined main topic>"
}

Ensure the response is valid JSON and can be parsed by JSON.parse()
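
Because this variant constrains the model to a predefined list, one option is to render the `//INSERT TOPICS` block from the same array you later validate against. A sketch under that assumption, again using a hypothetical `runAnalystPrompt` helper and an example topic list:

```typescript
// Hypothetical helper and example topic list; substitute your own.
declare function runAnalystPrompt(prompt: string): Promise<string>;

const ALLOWED_TOPICS = ["billing", "shipping", "returns", "account access"];

// Render the allowed topics into the prompt wherever //INSERT TOPICS appears.
function buildTopicsBlock(topics: string[]): string {
  return topics.map((t) => `- ${t}`).join("\n");
}

async function classifyPredefinedTopic(promptTemplate: string): Promise<string> {
  const prompt = promptTemplate.replace("//INSERT TOPICS", buildTopicsBlock(ALLOWED_TOPICS));
  const { topic } = JSON.parse(await runAnalystPrompt(prompt)) as { topic: string };

  // Guard against the model returning a topic outside the provided list.
  if (!ALLOWED_TOPICS.includes(topic)) {
    throw new Error(`Model returned a topic outside the allowed list: ${topic}`);
  }
  return topic;
}
```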

Classifying CSAT

Your job is to analyze a conversation that represents a customer service interaction between a single customer and one or more customer service agents and determine an estimated CSAT category for the conversation.

Rules:

- Conversation actors will fall into one of three categories: 1. The Customer, 2. An AI Agent, and 3. Human agents
- Your primary output will be to assign appropriate attributes of the conversation.
- Once you have determined and assigned all individual conversation attributes, return them in the JSON format described below.
- Remember that you are judging the conversation in its entirety. Consider all messages between the customer and the agents to perform your attributions.

Conversation attribute(s):

- estimated CSAT category (very-high, high, medium, low, or very-low)

CSAT Attribution Logic:

- "estimated_csat_category": This category must be one of very-high, high, medium, low, or very-low, and represents how the customer would likely have rated this conversation if given the chance.

Here is the conversation you will be considering for your task:  
//INSERT CONVERSATION HISTORY BLOCK

Please consider the provided conversation and the provided rules, then return a single JSON object in exactly this format, substituting the estimated csat category you assigned during your analysis:

{
  "estimated_csat_category": "high"
}

Ensure this response is valid JSON and can be parsed by JSON.parse()
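
A small TypeScript sketch for validating the returned category, plus an assumed 1-5 mapping if you want to average estimated CSAT across many conversations; the numeric mapping is not defined by the prompt itself.

```typescript
const CSAT_CATEGORIES = ["very-low", "low", "medium", "high", "very-high"] as const;
type CsatCategory = (typeof CSAT_CATEGORIES)[number];

function isCsatCategory(value: unknown): value is CsatCategory {
  return typeof value === "string" && (CSAT_CATEGORIES as readonly string[]).includes(value);
}

// Assumed 1-5 mapping, handy if you want to average estimated CSAT across conversations.
const CSAT_SCORE: Record<CsatCategory, number> = {
  "very-low": 1,
  "low": 2,
  "medium": 3,
  "high": 4,
  "very-high": 5,
};

function parseEstimatedCsat(raw: string): CsatCategory {
  const parsed = JSON.parse(raw) as { estimated_csat_category?: unknown };
  if (!isCsatCategory(parsed.estimated_csat_category)) {
    throw new Error(`Unexpected estimated_csat_category in: ${raw}`);
  }
  return parsed.estimated_csat_category;
}
```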

Intent Drift

Your job is to analyze a conversation that represents a customer service interaction between a single customer and one or more customer service agents and determine whether the conversation's intent drifted.

Rules:
- Conversation actors will fall into one of three categories: 1. The Customer, 2. An AI Agent, and 3. Human agents
- Your primary output will be to assign appropriate attributes of the conversation.
- Once you have determined and assigned all individual conversation attributes, return them in the JSON format described below.
- Remember that you are judging the conversation in its entirety. Consider all messages between the customer and the agents to perform your attributions.

Conversation attribute(s):
- intent drift (yes or no)

Intent Drift Attribution Logic:
- "intent_drift": A yes or no value indicating whether the original intent of the conversation drifted into one or more additional intents.


Here is the conversation you will be considering for your task:

//INSERT CONVERSATION HISTORY BLOCK

Please consider the provided conversation and the provided rules, then return a single JSON object in exactly this format, substituting the intent drift (yes or no) you assigned during your analysis:

{
  "intent_drift": "yes"
}

Ensure this response is valid JSON and can be parsed by JSON.parse()
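
Downstream code usually wants this as a boolean rather than the string "yes"/"no". A minimal parsing sketch; the same pattern works for any of the yes/no attributes in this section:

```typescript
// Convert the model's "yes"/"no" answer into a boolean, rejecting anything unexpected.
function parseYesNo(value: unknown): boolean {
  if (value === "yes") return true;
  if (value === "no") return false;
  throw new Error(`Expected "yes" or "no", got: ${JSON.stringify(value)}`);
}

function parseIntentDrift(raw: string): boolean {
  const parsed = JSON.parse(raw) as { intent_drift?: unknown };
  return parseYesNo(parsed.intent_drift);
}
```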

Analyze Unresolved Conversation

Your job is to analyze a conversation that represents a customer service interaction between a single customer and one or more customer service agents and determine whether the conversation was left unresolved.

Rules:
- Conversation actors will fall into one of three categories: 1. The Customer, 2. An AI Agent, and 3. Human agents
- Your primary output will be to assign appropriate attributes of the conversation.
- Once you have determined and assigned all individual conversation attributes, return them in the JSON format described below.
- Remember that you are judging the conversation in its entirety. Consider all messages between the customer and the agents to perform your attributions.

Conversation attribute(s):
- unresolved conversation (yes or no)

Unresolved Conversation Attribution Logic:
- "unresolved_conversation": A yes or no value; "yes" means one or more of the customer's issues were not fully resolved, and "no" means all of the customer's issues were fully resolved.


Here is the conversation you will be considering for your task:

//INSERT CONVERSATION HISTORY BLOCK


Please consider the provided conversation and the provided rules, then return a single JSON object in exactly this format, substituting the unresolved conversation (yes or no) you assigned during your analysis:

{
  "unresolved_conversation": "yes"
}

Ensure this response is valid JSON and can be parsed by JSON.parse()
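
If you are aggregating this attribute across many conversations yourself rather than relying on built-in reporting, a small sketch for computing an unresolved rate:

```typescript
// Given parsed "unresolved_conversation" values for a batch of conversations,
// compute the share that were left unresolved.
function unresolvedRate(results: Array<{ unresolved_conversation: "yes" | "no" }>): number {
  if (results.length === 0) return 0;
  const unresolved = results.filter((r) => r.unresolved_conversation === "yes").length;
  return unresolved / results.length;
}

// Example: one of three conversations unresolved -> rate of roughly 0.33.
const rate = unresolvedRate([
  { unresolved_conversation: "no" },
  { unresolved_conversation: "yes" },
  { unresolved_conversation: "no" },
]);
console.log(rate);
```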

Customer Effort Score

Your job is to analyze a conversation that represents a customer service interaction between a single customer and one or more customer service agents and determine the Customer Effort Score (CES).

Rules:
- Conversation actors will fall into one of three categories: 1. The Customer, 2. An AI Agent, and 3. Human agents
- Your primary output will be to assign a customer effort level based on how much work the customer had to do to resolve their issue
- Customer Effort Score measures the ease of the customer experience - lower effort indicates better service
- Consider the entire conversation when making your determination
- Evaluate both the customer's explicit statements about effort AND implicit signals in their behavior

Conversation attribute(s):
- customer_effort_score (very-low, low, medium, high, or very-high)

Customer Effort Score Attribution Logic:
- "customer_effort_score" This category must be either very-low, low, medium, high, or very-high and represents how much effort the customer had to expend to get their issue resolved.

Factors indicating LOWER effort (very-low to low):
- Issue resolved in first contact
- No repetition of information required
- Agent proactively provided solutions
- Clear, concise responses that directly addressed the question
- No channel switching or escalations needed
- Customer expressed satisfaction with ease of resolution
- Minimal back-and-forth messages required

Factors indicating HIGHER effort (high to very-high):
- Customer had to repeat information multiple times
- Multiple contacts or follow-ups required
- Customer was transferred between agents/departments
- Confusing or unclear responses from agents
- Customer had to seek information from other sources
- Long wait times or delayed responses
- Customer expressed frustration about the process
- Issue remained unresolved or partially resolved
- Customer had to switch channels (e.g., from chat to phone)
- Agent failed to understand the issue initially
- Customer had to explain the problem multiple times

Here is the conversation you will be considering for your task:
//INSERT CONVERSATION HISTORY BLOCK


Please consider the provided conversation and the provided rules, then return a single JSON object in exactly this format, substituting the customer effort score you assigned during your analysis:
{
  "customer_effort_score": "<very-low, low, medium, high, or very-high>"
}

Ensure the response is valid JSON and can be parsed by JSON.parse()
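
The five effort categories are naturally ordered, so a common follow-up is mapping them onto a numeric scale for trending. The 1-5 mapping below is an assumption, not something the prompt defines:

```typescript
const EFFORT_LEVELS = ["very-low", "low", "medium", "high", "very-high"] as const;
type EffortLevel = (typeof EFFORT_LEVELS)[number];

// Assumed mapping: 1 = very low effort (best experience), 5 = very high effort (worst).
function effortToNumber(level: EffortLevel): number {
  return EFFORT_LEVELS.indexOf(level) + 1;
}

function parseCustomerEffort(raw: string): EffortLevel {
  const parsed = JSON.parse(raw) as { customer_effort_score?: unknown };
  const value = parsed.customer_effort_score;
  if (typeof value !== "string" || !(EFFORT_LEVELS as readonly string[]).includes(value)) {
    throw new Error(`Unexpected customer_effort_score in: ${raw}`);
  }
  return value as EffortLevel;
}
```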

Agent Scoring

Below are some sample agent scoring prompts to help you get started.

📘

In order for your agent metrics to show up in Agent Insights, you'll need to ensure you've followed the instructions in the Tracking Human Agent Metrics section.


Professionalism Score

Review the following customer service interaction and assess each agent's professionalism.

Consider the following aspects in your assessment:
1. Professional language and tone (avoiding slang, maintaining courtesy)
2. Appropriate boundaries (not oversharing personal information)
3. Emotional regulation (staying calm under pressure)
4. Adherence to communication standards
5. Respectful treatment of the customer

Please provide a single professionalism score for each agent from the following options:
- very bad
- bad
- neutral
- good
- very good

Note that there may be multiple agents in the conversation and your job is to rate each one separately.

The transcript will be formatted like so:
$messageAuthor: $messageText

Be sure to take the agentId from the $messageAuthor, not from the $messageText.

**here is the conversation**
//INSERT CONVERSATION HISTORY BLOCK
**end of conversation**

Format your response in our proprietary JSON action declaration format, exactly like this:
{
  "actions": [{
    "action": "recordAgentRating",
    "ratingCategory": "professionalism-score",
    "rating": "good",
    "agentId": "_THE_AGENT_BEING_RATED_1_"
  },
  {
    "action": "recordAgentRating",
    "ratingCategory": "professionalism-score",
    "rating": "very good",
    "agentId": "_THE_AGENT_BEING_RATED_2_"
  }]
}

Ensure your response is valid JSON and can be parsed by JSON.parse()
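
For reference, here is a hedged TypeScript sketch of how you might render the `$messageAuthor: $messageText` transcript and read the ratings back out of the action declaration shown above; the `Message` shape is hypothetical, not a required format:

```typescript
// Types mirroring the action declaration format shown in the prompt above.
interface AgentRatingAction {
  action: "recordAgentRating";
  ratingCategory: string; // e.g. "professionalism-score"
  rating: "very bad" | "bad" | "neutral" | "good" | "very good";
  agentId: string;
}

// Hypothetical message shape; adjust to however your transcripts are stored.
interface Message {
  author: string; // agentId or a customer identifier
  text: string;
}

// Render the conversation in the "$messageAuthor: $messageText" form the prompt describes.
function buildTranscript(messages: Message[]): string {
  return messages.map((m) => `${m.author}: ${m.text}`).join("\n");
}

// Collect the rating assigned to each agent from the model's response.
function ratingsByAgent(raw: string): Map<string, AgentRatingAction["rating"]> {
  const parsed = JSON.parse(raw) as { actions?: AgentRatingAction[] };
  const byAgent = new Map<string, AgentRatingAction["rating"]>();
  for (const a of parsed.actions ?? []) {
    if (a.action === "recordAgentRating") {
      byAgent.set(a.agentId, a.rating);
    }
  }
  return byAgent;
}
```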

Gathered Unnecessary Info

Your job is to analyze a customer service conversation and determine if any agents gathered unnecessary information from the customer.

**Definition of unnecessary information:**
Information that is not required to resolve the customer's issue or complete their request. This includes:
- Asking for details already provided earlier in the conversation
- Requesting personal information not relevant to the issue
- Collecting data beyond what's needed for the stated purpose
- Repetitive questioning about the same topic

Note that there may be multiple agents in the conversation and your job is to judge each one separately.

The transcript will be formatted like so:
$messageAuthor: $messageText

Be sure to take the agentId from the $messageAuthor, not from the $messageText.


**here is the conversation**
//INSERT CONVERSATION HISTORY BLOCK
**end of conversation**

If an agent gathered unnecessary information, flag that behavior with the agentAction "gathered-unnecessary-info" and provide a brief reason.

Format your response in our proprietary JSON action declaration format, like so:
{
  "actions": [{
    "action": "recordAgentAction",
    "agentAction": "gathered-unnecessary-info",
    "agentId": "_THE_AGENT_WHO_GATHERED_UNNECESSARY_INFO_1_",
    "reason": "Asked for billing address when only processing a refund"
  },
  {
    "action": "recordAgentAction",
    "agentAction": "gathered-unnecessary-info",
    "agentId": "_THE_AGENT_WHO_GATHERED_UNNECESSARY_INFO_2_",
    "reason": "Re-requested confirmation number already provided"
  }]
}
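
A sketch for pulling the flagged behaviors and their reasons out of this response, grouped by agent; the types simply mirror the example above:

```typescript
// Mirrors the action declaration format shown above for flagged agent behaviors.
interface AgentActionFlag {
  action: "recordAgentAction";
  agentAction: string; // e.g. "gathered-unnecessary-info"
  agentId: string;
  reason: string;
}

// Group the unnecessary-info flags by agent so they can be surfaced in QA review.
function unnecessaryInfoFlags(raw: string): Record<string, string[]> {
  const parsed = JSON.parse(raw) as { actions?: AgentActionFlag[] };
  const flags: Record<string, string[]> = {};
  for (const a of parsed.actions ?? []) {
    if (a.action === "recordAgentAction" && a.agentAction === "gathered-unnecessary-info") {
      (flags[a.agentId] ??= []).push(a.reason);
    }
  }
  return flags;
}
```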

Agent Score

Review the following customer service interaction and provide an overall assessment of each human agent's performance, from very bad (unsatisfactory) to very good (excellent).


Consider the following aspects in your assessment:
1. Professionalism and tone
2. Clarity and effectiveness of communication
3. Solution quality and accuracy
4. Response completeness
5. Customer-centricity and empathy


Please provide a single overall score for each agent from the following options:
- very bad
- bad
- neutral
- good
- very good

Note that there may be multiple agents in the conversation and your job is to rate each one separately.

The transcript will be formatted like so:

$messageAuthor: $messageText

Be sure to take the agentId from the $messageAuthor, not from the $messageText.

**here is the conversation**
//INSERT CONVERSATION HISTORY BLOCK
**end of conversation**

Format your response in our proprietary JSON action declaration format, exactly like this:

{
  "actions": [{
    "action": "recordAgentRating",
    "ratingCategory": "agent-score",
    "rating": "<determined rating>",
    "agentId": "_THE_AGENT_BEING_RATED_1_"
  },
  {
    "action": "recordAgentRating",
    "ratingCategory": "agent-score",
    "rating": "<determined rating>",
    "agentId": "_THE_AGENT_BEING_RATED_2_"
  }]
}
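
If you want a numeric version of this score for averaging or trending, one option is to map the categorical ratings onto a 1-5 scale; the mapping below is an assumption rather than something the prompt defines:

```typescript
// Assumed mapping of the categorical ratings onto a 1-5 scale
// (1 = very bad / unsatisfactory, 5 = very good / excellent).
const AGENT_RATING_SCALE: Record<string, number> = {
  "very bad": 1,
  "bad": 2,
  "neutral": 3,
  "good": 4,
  "very good": 5,
};

function agentScoreToNumber(rating: string): number {
  const score = AGENT_RATING_SCALE[rating];
  if (score === undefined) {
    throw new Error(`Unexpected agent rating: ${rating}`);
  }
  return score;
}
```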