How to enhance your chatbot so it can retrieve data from multiple data sources & orchestrate its own plan with C# Semantic Kernel, planner & Azure OpenAI – part 3 (demo app)

In the previous post, we walked through the implementation details of the demo app and how to set up Semantic Kernel with the FunctionCallingStepwisePlanner and Azure OpenAI. Now that you understand the code, let’s look at the demo application.

Here is the link to the GitHub repo.

Use case

As a reminder, we are building a customer support chatbot for our fictional outdoor sporting equipment company. We want to be able to answer common customer support questions by utilizing our internal data sources.

Example query

Will my sleeping bag work for my trip to Patagonia next month?

Fictional data sources

Demo app

The demo app lets the user make a request. That request is run through the Semantic Kernel FunctionCallingStepwisePlanner, the generated plan is executed, and the response is sent back to the user.
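To make that flow concrete, here is a rough sketch (not the repo’s exact code) of pushing a single request through the planner. The BuildKernel helper, the MaxIterations value and the console output are illustrative assumptions, and depending on your Semantic Kernel version you may also need to suppress the planner’s experimental-API warning.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Planning;

// BuildKernel() is a hypothetical helper (sketched under the next heading)
// that registers the four plugins and the Azure OpenAI chat completion service.
Kernel kernel = BuildKernel();

var planner = new FunctionCallingStepwisePlanner(
    new FunctionCallingStepwisePlannerOptions { MaxIterations = 15 });

// The planner decides which plugins to call, in what order, and with what arguments.
var result = await planner.ExecuteAsync(
    kernel, "Will my sleeping bag work for my trip to Patagonia next month?");

// result.ChatHistory holds every intermediate step; this is what the demo app
// visualizes when you click the "Thought Process" button.
foreach (var step in result.ChatHistory ?? new ChatHistory())
{
    Console.WriteLine($"{step.Role}: {step.Content}");
}

Console.WriteLine(result.FinalAnswer); // the answer sent back to the user
```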

In this example, we are explicitly showing the planner steps to demonstrate the execution plan. In the real world, you wouldn’t show the execution plan (since this reveals internal implementation details such as the data sources used, their inputs & responses, errors, etc).

Click on the “Thought Process” button to see a visualization of the ChatHistory (which includes the plan steps).

Note that we can’t guarantee this order of operations if we are solely relying on the planner. The output of the planner is non-deterministic. You may have to run the demo app several times to get output similar to this run.

We can see that the planner made a plan with numerous steps. Let’s go through the major ones in detail.

0. System

The initial step is the FunctionCallingStepwisePlanner talking to itself about how to answer the user’s question. It takes into account the plugins that have been registered, the memory available, the system prompt & the user’s question.
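For reference, the setup behind that step looks roughly like the sketch below. The plugin class names mirror the step headings that follow, and the Azure OpenAI deployment, endpoint and key are placeholders, so treat this as an illustration rather than the repo’s exact wiring.

```csharp
using Microsoft.SemanticKernel;

static Kernel BuildKernel()
{
    var builder = Kernel.CreateBuilder();

    // Azure OpenAI chat completion service (placeholder values).
    builder.AddAzureOpenAIChatCompletion(
        deploymentName: "<your-deployment>",
        endpoint: "https://<your-resource>.openai.azure.com/",
        apiKey: "<your-api-key>");

    // Each registered plugin (and its Description attributes) is part of what
    // the planner reasons over in this "talking to itself" step.
    builder.Plugins.AddFromType<LocationLookupPlugin>();
    builder.Plugins.AddFromType<HistoricalWeatherPlugin>();
    builder.Plugins.AddFromType<OrderHistoryPlugin>();
    builder.Plugins.AddFromType<ProductCatalogPlugin>();

    return builder.Build();
}
```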

1. LocationLookupPlugin.LocationLookup

The planner first looks up the GPS coordinates for “Patagonia”. The AI determined from the user’s original question that this is the location that needs to be passed to this API. Obviously, Patagonia is a huge area and a real bot would ask for a more specific location; in this example, my dummy API returns a set of hard-coded GPS coordinates.
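In spirit, the dummy implementation looks something like this sketch; the JSON shape and the coordinate values are illustrative rather than copied from the repo.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class LocationLookupPlugin
{
    [KernelFunction]
    [Description("Looks up the GPS coordinates (latitude and longitude) for a named location.")]
    public string LocationLookup(
        [Description("The name of the location, e.g. 'Patagonia'")] string location)
    {
        // Dummy API: always returns the same hard-coded coordinates.
        return """{ "latitude": -41.81, "longitude": -68.91 }""";
    }
}
```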

2. HistoricalWeatherPlugin.HistoricalWeatherLookup

Now the planner has parsed the output of the LocationLookup plugin and extracted the latitude and longitude that are needed as input parameters to this API. Notice also that it has correctly advanced the date to “next month”.

In this example, I explicitly wrote the sample HistoricalWeatherLookup API to fail if the wrong GPS coordinates or the wrong date are passed in. I wanted the system to fail unless it uses my custom LocationLookupPlugin to find the exact coordinates of the user’s specified location.
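In sketch form, that deliberate failure might look like the following; the coordinate check, the date check and the error messages are illustrative assumptions, not the repo’s code.

```csharp
using System;
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class HistoricalWeatherPlugin
{
    [KernelFunction]
    [Description("Returns historical average temperatures for a GPS coordinate and month.")]
    public string HistoricalWeatherLookup(
        [Description("Latitude of the location")] double latitude,
        [Description("Longitude of the location")] double longitude,
        [Description("Month to look up, in yyyy-MM format")] string month)
    {
        // Deliberately fail unless the coordinates came from LocationLookupPlugin
        // and the date really is "next month".
        if (Math.Abs(latitude - (-41.81)) > 0.01 || Math.Abs(longitude - (-68.91)) > 0.01)
            throw new ArgumentException("Unknown coordinates - call LocationLookupPlugin first.");

        if (month != DateTime.UtcNow.AddMonths(1).ToString("yyyy-MM"))
            throw new ArgumentException("Historical data is only stubbed for next month.");

        return """{ "averageLowCelsius": -2, "averageHighCelsius": 10 }""";
    }
}
```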

3. OrderHistoryPlugin.OrderHistoryLookup

The planner now calls the OrderHistoryLookup plugin to get the user’s purchase history. It was able to pull the username from the original system prompt and call the API with the right parameter.

Notice that the AI must parse the output and extract the correct field from the JSON data that is returned. The OrderHistoryLookup API doesn’t know anything about OpenAI or that an AI is calling it. There is nothing special about how the data is returned and I as a developer don’t have to parse this output myself.

Note that I didn’t write any code to parse the results, to define the order of operations, or to explicitly map the username stored in the context variables to the input parameter the function needed. Yes, they have the same name, but all the magic is in the Description attribute decorators, which explain (in human-understandable terms) what the functions do and how to call them.

OrderHistory Semantic Kernel native function implementation
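A stripped-down sketch of what such a native function looks like follows; the field names and JSON shape are assumptions. The Description attributes are the only documentation the planner ever sees.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class OrderHistoryPlugin
{
    [KernelFunction]
    [Description("Returns the order history (purchased products and their product IDs) for a given username.")]
    public string OrderHistoryLookup(
        [Description("The username of the customer")] string username)
    {
        // Plain JSON back to the caller - the plugin knows nothing about OpenAI.
        // The model parses this and picks out the productId it needs next.
        return """
        [
            { "orderId": 1001, "productId": "SB-200", "productName": "Summit 3-Season Sleeping Bag" }
        ]
        """;
    }
}
```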

4. ProductCatalogPlugin.ProductCatalogItemLookup

The planner now calls the ProductCatalogItemLookup plugin to get the product specifications for the specific product ID that the AI extracted from the previous API call.

The AI now needs to extract the relevant temperature information from the response to test the sleeping bag purchased against the expected weather conditions.
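A matching sketch of the catalog lookup; the temperatureRatingCelsius field is an assumption about the kind of value the AI extracts and compares against the weather data.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class ProductCatalogPlugin
{
    [KernelFunction]
    [Description("Returns the product specifications for a given product ID.")]
    public string ProductCatalogItemLookup(
        [Description("The product ID to look up")] string productId)
    {
        // The AI compares this rating against the historical weather data.
        return """{ "productId": "SB-200", "name": "Summit 3-Season Sleeping Bag", "temperatureRatingCelsius": -5 }""";
    }
}
```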

5. SendFinalAnswer

The AI finally has all of the relevant information to make an informed decision about whether or not the sleeping bag purchased will work.

Conclusion

Again, the magic is that I (as a human) know the happy path through the system: the order of operations, the inputs & outputs. However, I don’t want to hard-code this access path or write “if/else”, “switch” or “regex” statements to try to pick the right path based on the user’s question.

The Semantic Kernel FunctionCallingStepwisePlanner & Azure OpenAI managed to figure everything out on their own (based upon my descriptions of what the plugins do).

Caveat emptor!

A key consideration when building any large language model into your application is the non-deterministic nature of the responses. You cannot guarantee a particular response; all you can do is influence it through your prompts.

The same is true for the FunctionCallingStepwisePlanner and Azure OpenAI. You can’t guarantee that it will call your plugins in the right order, that it will pass in the right data, that it will parse the output correctly, or that it will glean the correct insight from the data returned.

You have to be OK with the fact that it is non-deterministic. If you need a deterministic response, you will end up hard-coding the execution path (even if you use Semantic Kernel).

In the next post, I’ll talk about some of the techniques I used to make the demo app easy to run & deploy.
