Lesson 2: Prompts & Prompt Chaining

Overview

The assistant in this lesson represents the beginnings of a knowledge assistant. Each time a new message arrives from the user, the assistant does the following:

  • Decides if the user's request is "in scope" (message-in-scope prompt).
  • Decides if the next best course of action is to engage in simple dialogue or begin a question answering flow (next-best-action prompt).
  • If the request is in scope and the next best action is simple dialogue, the assistant moves to the Generate Dialog behavior and executes the dialogue prompt to converse with the user (a rough sketch of this flow follows below).
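For intuition only, here is a minimal sketch of that per-message logic expressed as Python pseudocode. AI Studio configures this flow visually rather than in code, and every name below (call_llm, handle_user_message) is a hypothetical stand-in, not part of the product.

```python
# Hypothetical sketch of the per-message flow described above; the real flow is
# configured visually in AI Studio, and call_llm stands in for a real model call.

def call_llm(prompt: str) -> str:
    """Stub LLM call; a real implementation would invoke a model API."""
    return "acknowledge"  # canned completion so the sketch runs end to end

def handle_user_message(conversation: list[str], user_message: str) -> str | None:
    # Decision prompts: both examine the conversation plus the latest message.
    in_scope = call_llm(f"Is this in scope? <user-message>{user_message}</user-message>")
    next_action = call_llm(f"Next best action for: <user-message>{user_message}</user-message>")

    # If the message is in scope and simple dialogue is the next best action,
    # run the dialogue (generation) prompt; otherwise hand off to another behavior.
    if in_scope != "no" and next_action == "acknowledge":
        return call_llm(f"Reply conversationally. <conversation>{conversation}</conversation>")
    return None
```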

⛰️

This assistant is grounded in QuiqSilver Bikes - a fictitious retailer of Cannondale bikes.

The AI assistant on the QuiqSilver website uses flows & techniques that are similar to what's covered in the Academy lessons.

This assistant is for educational purposes only and is not representative of a fully hardened production experience.

It lacks the necessary guardrails to ensure that responses are accurate and on-brand. A more robust assistant can be seen in Lesson 4.

What You'll Learn

This lesson is designed to cover basic concepts around how prompts are used, how to use the Debug Workbench to trace events and create tests, and how to modify prompts. If you're new to AI Studio and haven't completed Lesson 1, start there first.

In this lesson you'll learn how to:

  • Understand how you can run prompts in parallel, and chain them together
  • Create a test case to ensure your assistant handles a given question going forward
  • Make changes to a prompt

In this section you'll find a video walkthrough as well as a written playbook you can follow along with. You can view walkthroughs of all the available lessons here.

📘

For help getting started and creating a template, refer to the intro guide.

Video Walkthrough

Creating the Template

Select the Lesson 2: Prompts & Prompt Chaining template to get started:

Start a Conversation

Select the Chat panel to open your assistant and begin a conversation, then type "hello" in the chat:

You should get a greeting response from the QuiqSilver Bikes Assistant. You may also notice that your assistant has more behaviors than the one in Lesson 1, and that Prompt behaviors are now linked to other Prompt behaviors.

Debug Workbench

Now let's take a look at this back and forth and how it moved through our Flow. Click the Debug Workbench icon in the Chat panel header or in the main navigation, and make sure you're looking at Studio Events:

📘

The Studio Events tab only shows events that happened within AI Studio, while Published Events shows events from real user conversations.

On the left-hand side, select the hello message you sent:

You can see your message passes through 4 behaviors, including 3 prompts, before your answer gets sent.

The Examine Customer Message behavior uses 2 prompts: message-in-scope & next-best-action. Click on each of them, taking note of the Prompt as well as the Completion:

The message-in-scope prompt is designed to determine whether or not the user's input is something the assistant should try to generate a response to.

The next-best-action prompt is designed to determine whether the best next step is to engage in simple dialogue or to begin a question answering flow.

For the "hello" utterance you entered, you can see the assistant classified your message as in-scope, and determined the next-best-action to acknowledge, it also provided an explanation as to why it believes that's the best thing to do next:

Next, select the Generate Dialog behavior and review the Prompt as well as the Completion, noting that the completion is the same answer you received from the assistant.

📘

  • In all of these prompts, take note that the prompt contains the entirety of your conversation (found within the <conversation> tags), as well as your most recent message (found within the <user-message> tags); this context is critical for determining what the assistant should do next.
  • Being able to run multiple prompts in parallel, like the message-in-scope and next-best-action prompts, and to chain prompts together, as with those prompts and the dialogue prompt, is a core capability you'll need to build great LLM-enabled experiences.

In this user turn, you prompted an LLM simultaneously to determine whether the message is in scope and what the next best step is. Then you prompted the LLM to generate the next message in your dialogue with the user.
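To make the parallel-then-chained structure concrete, here is a rough asyncio sketch using a generic complete() helper; none of these names come from AI Studio, which handles this orchestration for you.

```python
# Rough sketch of one user turn: two decision prompts run in parallel, then their
# completions gate a chained dialogue prompt. All names here are illustrative only.
import asyncio

async def complete(prompt: str) -> str:
    """Stand-in for an async LLM call that returns canned completions."""
    await asyncio.sleep(0)  # simulate network latency
    if "in scope" in prompt:
        return "yes"
    if "next best action" in prompt:
        return "acknowledge"
    return "Hi there! How can I help you with your bike today?"

async def run_turn(user_message: str) -> str | None:
    # Parallel prompts: both decisions complete in one round trip of wall-clock time.
    in_scope, next_action = await asyncio.gather(
        complete(f"Is this message in scope? {user_message}"),
        complete(f"Choose the next best action for: {user_message}"),
    )
    # Prompt chaining: the dialogue prompt runs only after, and because of,
    # the completions above.
    if in_scope == "yes" and next_action == "acknowledge":
        return await complete(f"Write a friendly reply to: {user_message}")
    return None

print(asyncio.run(run_turn("hello")))
```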

Congratulations 👏 you now know how to inspect a user turn with your AI Assistant!

Replaying a User Turn and Creating a Test Case

With "hello" still selected, locate the Event Details action menu up top and take note of the options available:

  1. Use the Replay Event button to rerun that same event and see how the response might differ. You may want to do this after changing a particular prompt and observing how the response changes.
  2. The Create Test Case button enables you to save and replay events (you'll be doing this next).
  3. Use the View Conversation button to view the entire transcript of the conversation.
  4. Use the Show Inspector button to show and hide additional information about this user turn.

Now, select the Create Test Case button and add "hello" to the Small Talk Test Set:

Next, navigate to the Tests tab in Debug Workbench and click the Run button:

This will run the same message through your assistant, and you can view the responses at each step just like you did when looking at events:

Now click Add Assertion:

and define the following rule:

Create a rule where prompts.next-best-action.completion is "acknowledge"

Next, run your user turn again and see that your assertion passes:

📘

Assertions enable you to define additional criteria you'd like to be true when running a test, for example ensuring that the assistant correctly classifies the user's topic, or correctly extracts a phone number. Learn more
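As an analogy only, the rule above behaves roughly like the following assertion over a replayed turn's results. The dictionary shape and helper function below are hypothetical; AI Studio evaluates assertions for you in the Tests tab, so you never write this code in the product.

```python
# Hypothetical analogy for the assertion rule above; the result structure is made up.
def assert_next_best_action(test_result: dict, expected: str = "acknowledge") -> None:
    # Mirrors the rule path: prompts.next-best-action.completion
    completion = test_result["prompts"]["next-best-action"]["completion"]
    assert completion == expected, f"expected {expected!r}, got {completion!r}"

# Example against a made-up replay result:
assert_next_best_action(
    {"prompts": {"next-best-action": {"completion": "acknowledge"}}}
)
```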

Congratulations 👏 you now know how to replay user turns and create test cases!

Modifying a prompt

Next, say: nice thank you. Note the response given, then open up your Debug Workbench to the message-in-scope prompt for the phrase you just entered:

You can see the Completion here determines in-scope as no. While we don't want our assistant classifying all sorts of unrelated inquiries as in-scope, this is the sort of benign response the assistant should be able to handle.

Let's go ahead and modify the message-in-scope prompt to better handle this type of response.

Either scroll up to the top of the Event Details panel and select the Examine Customer Message behavior, or use control + space to open the spotlight search and search directly for the message-in-scope prompt:

📘

You can use control + space, or the Search button in the top right of AI Studio, to search for most content within your assistant. Learn more

📘

If you find yourself with too many panels open, you can close them using the X button in the top right-hand corner; you can also resize panels by clicking on their edge and dragging:

If you navigated via the Event Details header, select the Edit Prompt button in the message-in-scope prompt:

This will take you into the Prompt Editor Panel, where you can now edit the full prompt. Pay attention to the section beginning with "Your job", which provides the main instructions used to determine if the last message from the user was in or out of scope:

📘

Note the dynamic content in the prompt, including the Conversation History block, the lastCustomerMessage, as well as a range of references to different configs - you'll learn how to use these in subsequent lessons.

Go ahead and add in the following phrase right below the "Your job" sentence:

Consider general greetings, niceties and other simple chit chat as in scope.

Once finished, your prompt should look like this:

You've now provided specific instructions around how the assistant should handle small talk.
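To picture where that sentence sits relative to the prompt's dynamic content, here is a loose, hypothetical approximation of the edited template written as a Python string. The actual wording and placeholders in AI Studio differ, and you edit the real prompt in the Prompt Editor panel, not in code.

```python
# Loose approximation of the edited message-in-scope prompt; the real template's
# wording and dynamic-content placeholders in AI Studio are different.
MESSAGE_IN_SCOPE_PROMPT = """\
Your job is to decide whether the user's latest message is something the
assistant should try to respond to (in scope) or not (out of scope).
Consider general greetings, niceties and other simple chit chat as in scope.

<conversation>
{conversation_history}
</conversation>

<user-message>
{last_customer_message}
</user-message>

Answer "yes" if the message is in scope, otherwise "no"."""

print(MESSAGE_IN_SCOPE_PROMPT.format(
    conversation_history="Assistant: Hi, welcome to QuiqSilver Bikes!",
    last_customer_message="nice thank you",
))
```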

📘

If you'd like, you can play around with different wordings and variations of the small talk instructions you've added to see how that impacts your assistant's behavior.

Next, go ahead and use the Save button to save your changes.

📘

As a reminder, you will need to Save your assistant to test any new changes. You will see the following warning if you have unsaved changes and the Chat panel active:

Now enter the same nice thank you phrase again:

You can now see your assistant generates a response, because the message-in-scope prompt now identifies your message as in-scope:

Congratulations 👏 you now know how to edit a prompt!

Key Concepts

👍

Decision Prompts & Generation Prompts

  • In AI Studio, you can leverage the exceptional reasoning power of LLMs throughout the course of your experience to make decisions, in addition to using them to generate answers.

👍

Parallel Prompts

  • In AI Studio, you have the ability to add multiple prompts that run in parallel within an individual behavior, enabling you to do multiple things simultaneously and creating a more responsive, intelligent assistant. For example, you may want to classify the topic of the user's question, determine whether it's in scope, whether it's sensitive, what you should do next, and much more.

👍

Chaining Prompts

  • In addition to running prompts in parallel, you can also chain them together, enabling you to use the completions from previous prompts in your next prompt (see the sketch below). You'll see more examples of this in the next lessons.
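As a purely hypothetical illustration, chaining can be as simple as interpolating an earlier completion into the text of the next prompt; the complete() helper below is a stand-in, not an AI Studio API.

```python
# Illustrative only: a later prompt references an earlier prompt's completion.
def complete(prompt: str) -> str:
    """Stub LLM call returning canned completions."""
    return "acknowledge" if "classify" in prompt else "Happy to help with anything bike-related!"

next_action = complete("classify the next best action for: nice thank you")
reply = complete(
    f"The chosen next action is '{next_action}'. "
    "Write a short, friendly reply consistent with that action."
)
print(reply)
```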

👍

Tests & Assertions

  • Tests are great tools for catching regressions in your assistant as you make changes, and also for quickly checking how changes to your prompts impact specific test cases. Learn more

What's next

You should now be getting a handle on how to work within AI Studio. The next lesson covers Retrieval Augmented Generation (RAG), where you'll begin to generate answers from a knowledge base!