Prompt AI Behavior Examples
Overview
In this section you will find examples of using the Prompt AI Behavior in the Function Editor with various LLMs. For a full list of supported models, refer to Supported LLMs.
OpenAI ChatCompletions Examples
This example will work for any LLM that uses OpenAI's ChatCompletions API, including gpt-35-turbo-0613, gpt-35-turbo-1106, and gpt-4 variants.
Pass the last customer message straight to the LLM (no conversation history), and pass the LLM completion straight back to the customer as text. This example assumes use within a customer assistant:
```python
def build_basic(llm, context):
    content = context['derivedData']['lastCustomerMessage']
    return {
        "request": {
            "messages": [{"role": "user", "content": content}]
        }
    }

def handle_basic(llm, context, prompt, response, completion):
    return {
        "actions": [
            {
                "action": "sendMessage",
                "message": {"text": completion}
            }
        ]
    }
```
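If the assistant needs a persona or guardrails, the ChatCompletions messages list also accepts a system message ahead of the user turn. A minimal sketch, assuming the request dict is forwarded to the API unchanged; the function name and system text below are illustrative only, not part of the platform's API:

```python
def build_with_system(llm, context):
    # Prepend a system message to steer tone and scope.
    # The system text is a placeholder, not a recommended prompt.
    content = context['derivedData']['lastCustomerMessage']
    return {
        "request": {
            "messages": [
                {"role": "system", "content": "You are a concise, polite customer assistant."},
                {"role": "user", "content": content}
            ]
        }
    }
```

The same `handle_basic` shown above can be reused unchanged, since only the request construction differs.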
Claude Examples
This example will work with Claude 3 Haiku and Claude 3 Sonnet.
Pass the last customer message straight to the LLM (no conversation history) and pass the LLM completion straight back to the customer as text. This example assumes use within a customer assistant:
```python
def build_basic(llm, context):
    last_cust_message = context['derivedData']['lastCustomerMessage']
    return {
        "request": {
            "max_tokens": 256,
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": last_cust_message
                        }
                    ]
                }
            ],
            "temperature": 0.0,
        }
    }

def handle_basic(llm, context, prompt, response, completion):
    return {
        "actions": [
            {
                "action": "sendMessage",
                "message": {"text": completion}
            }
        ]
    }
```
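Unlike ChatCompletions, Anthropic's Messages API takes the system prompt as a top-level `system` field rather than a message with a system role. A minimal sketch, assuming the request dict is forwarded to the API unchanged; the function name and system text are illustrative only:

```python
def build_with_system(llm, context):
    # Claude takes the system prompt as a top-level "system" field,
    # not as an entry in the messages list. The system text below
    # is a placeholder, not a recommended prompt.
    last_cust_message = context['derivedData']['lastCustomerMessage']
    return {
        "request": {
            "max_tokens": 256,
            "system": "You are a concise, polite customer assistant.",
            "messages": [
                {
                    "role": "user",
                    "content": [{"type": "text", "text": last_cust_message}]
                }
            ],
            "temperature": 0.0,
        }
    }
```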
Llama 3 Bedrock Examples
This example will work with any Llama 3 model.
Pass the last customer message straight to the LLM (no conversation history) and pass the LLM completion straight back to the customer as text. This example assumes use within a customer assistant:
```python
def build_basic(llm, context):
    last_cust_message = context['derivedData']['lastCustomerMessage']
    # Wrap the message in Llama 3's chat template so the model sees a
    # delimited user turn followed by an assistant header to complete.
    prompt = f"""
<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
{last_cust_message}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>""".strip()
    return {
        "request": {
            "prompt": prompt,
            "temperature": 0
        }
    }

def handle_basic(llm, context, prompt, response, completion):
    return {
        "actions": [
            {
                "action": "sendMessage",
                "message": {"text": completion}
            }
        ]
    }
```
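Because Llama 3 on Bedrock takes a raw prompt string rather than a structured messages list, a system prompt is expressed as another header section in the template, before the user turn. A minimal sketch, assuming the request dict is forwarded to the model unchanged; the function name and system text are illustrative only:

```python
def build_with_system(llm, context):
    # In Llama 3's chat template, a system turn is just another
    # header-delimited section ahead of the user turn. The system
    # text below is a placeholder, not a recommended prompt.
    last_cust_message = context['derivedData']['lastCustomerMessage']
    prompt = (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n"
        "You are a concise, polite customer assistant.<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n"
        f"{last_cust_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>"
    )
    return {
        "request": {
            "prompt": prompt,
            "temperature": 0
        }
    }
```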
Gemini Examples
This example will work with any Gemini model.
Pass the last customer message straight to the LLM (no conversation history), and pass the LLM completion straight back to the customer as text. This example assumes use within a customer assistant:
```python
def build_basic(llm, context):
    last_cust_message = context['derivedData']['lastCustomerMessage']
    return {
        "request": {
            "contents": [
                {
                    "role": "user",
                    "parts": [
                        {
                            "text": last_cust_message
                        }
                    ]
                }
            ],
            "generationConfig": {
                "maxOutputTokens": 512
            }
        }
    }

def handle_basic(llm, context, prompt, response, completion):
    return {
        "actions": [
            {
                "action": "sendMessage",
                "message": {"text": completion}
            }
        ]
    }
```
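Gemini's REST API takes the system prompt as a top-level `systemInstruction` object alongside `contents`. A minimal sketch, assuming the request dict is forwarded to the API unchanged; the function name and system text are illustrative only:

```python
def build_with_system(llm, context):
    # Gemini carries the system prompt in a top-level systemInstruction
    # field rather than as a turn inside contents. The system text
    # below is a placeholder, not a recommended prompt.
    last_cust_message = context['derivedData']['lastCustomerMessage']
    return {
        "request": {
            "systemInstruction": {
                "parts": [{"text": "You are a concise, polite customer assistant."}]
            },
            "contents": [
                {"role": "user", "parts": [{"text": last_cust_message}]}
            ],
            "generationConfig": {
                "maxOutputTokens": 512
            }
        }
    }
```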