Using Function Calling
(Also known as ‘tool calling’)
Daily Bots makes it easy to build function calling into your app.
Understanding function calling
It’s helpful to start with an understanding of what function calling is. OpenAI, Anthropic, Gemini, Grok, and Llama 3.1 all have good documentation on the topic, but here’s a quick overview.
Say you want to give your bot the ability to tell your users about the current weather. That information obviously can’t be trained into the LLM, so you’ll need to get it directly from a weather API. But you can actually use the LLM to help you with this. The workflow looks like this:
- When your app starts, it provides the LLM with information about functions, or tools, the LLM can choose to use. This is typically sent with the initial system prompt, but it can be updated at any point in the bot session.
- The user asks a question that the LLM decides it needs to use a function to answer. Instead of responding with normal text, the LLM returns a function call describing how to use a function to supplement its knowledge.
- The LLM can’t actually call the function itself, so your app takes the information from the bot’s function call response, and, well, actually calls the function. Your app saves the result as a function result.
- Your app appends both the function call and function result to the LLM’s message history, and prompts the LLM to run another completion. The LLM sees the function call and result in the message history, and it realizes it now has the info it needs to answer the user’s question, so it generates a text response.
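To make this concrete, here’s a sketch of what the message history might look like after one round trip. It uses OpenAI’s chat message schema for illustration; other providers use different field names, but the shape of the exchange is the same:

```typescript
// Hypothetical message history for a weather question, OpenAI-style.
const messages = [
  { role: "user", content: "What's the weather in San Francisco?" },
  {
    // Tool Call: instead of replying with text, the LLM asks your app
    // to run the get_weather function with these arguments
    role: "assistant",
    tool_calls: [
      {
        id: "call_abc123",
        type: "function",
        function: {
          name: "get_weather",
          arguments: '{"location": "San Francisco, CA"}',
        },
      },
    ],
  },
  {
    // Tool Response: your app ran the function and appended the result
    role: "tool",
    tool_call_id: "call_abc123",
    content: '{"conditions": "foggy", "temperature": 58}',
  },
  // Re-prompted with the result in its history, the LLM answers in text
  {
    role: "assistant",
    content: "It's currently foggy and 58°F in San Francisco.",
  },
];
```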
Adding it to your bot
Daily Bots manages most of this for you. You’ll need to do a few things:
- Update your LLM’s configuration to include the function(s) you want the bot to be able to call.
- Register a handler in your app that responds to the function call and returns the function result.
- Create a route in your app that does the actual work. In this case, we will create a weather route that fetches the weather for a given location.
Updating your LLM configuration
In the previous tutorial, we specified the configuration directly in the `page.tsx` code. This time, we’ll define the configuration in a separate file, `rtvi.config.ts`, and import it into the client side.
First, make a file in the `app` folder called `rtvi.config.ts`. Define the function(s) you want the bot to be able to call.
The specific format of this object will vary depending on which LLM you’re using; Anthropic, OpenAI, Gemini, Grok, and Llama 3.1 each expect their own tool schema.
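As a minimal sketch, here’s roughly what a `get_weather` definition might look like for an OpenAI-style model, using the Daily Bots service/options config format (the model name and descriptions are placeholders):

```typescript
// app/rtvi.config.ts
// A sketch of a Daily Bots LLM service config with an OpenAI-style
// get_weather tool definition. Other providers expect their own schema
// under the "tools" option, so adjust this shape to match your LLM.
export const defaultConfig = [
  {
    service: "llm",
    options: [
      { name: "model", value: "gpt-4o" },
      {
        name: "tools",
        value: [
          {
            type: "function",
            function: {
              name: "get_weather",
              description: "Get the current weather for a given location.",
              parameters: {
                type: "object",
                properties: {
                  location: {
                    type: "string",
                    description: "The city and state, e.g. San Francisco, CA",
                  },
                },
                required: ["location"],
              },
            },
          },
        ],
      },
    ],
  },
];
```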
Next, import the `defaultConfig` into your `page.tsx` file and pass it to the `RTVIClient` instance in the configuration setup within `useEffect`. This ensures the language model settings are applied from the external configuration file, centralizing your configuration options.
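Here’s a sketch of that wiring; the exact `RTVIClient` options depend on your client library version and on the transport setup you already have from the previous tutorial:

```typescript
// app/page.tsx (sketch — only the config-related lines are shown)
import { RTVIClient } from "realtime-ai";
import { DailyTransport } from "realtime-ai-daily";
import { defaultConfig } from "./rtvi.config";

// ...inside useEffect, where the client is created:
const client = new RTVIClient({
  transport: new DailyTransport(),
  params: {
    baseUrl: "/api", // whatever endpoint your existing setup uses
    config: defaultConfig, // LLM settings now come from rtvi.config.ts
  },
});
```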
Daily Bots detects when the LLM returns a function call and passes it to your app, so you’ll need to register a handler using `handleFunctionCall`.
Before we can do that, we need to create an `LLMHelper` instance in the `page.tsx` file:
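A sketch of that, assuming the RTVI client library’s `registerHelper` and `LLMHelper` APIs and an LLM service registered under the name `"llm"`:

```typescript
// app/page.tsx — register an LLM helper on the client so we can
// intercept function calls. The "llm" name must match your LLM service.
import { LLMHelper } from "realtime-ai";

const llmHelper = client.registerHelper(
  "llm",
  new LLMHelper({ callbacks: {} })
) as LLMHelper;
```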
Now you can register a handler that processes the function call and returns the function result. The best place to do that is in `app/page.tsx`, right after the `llmHelper` is created:
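A sketch of the handler, assuming the `FunctionCallParams` type from the RTVI client library and the `/api/weather` route we create in the next section:

```typescript
// app/page.tsx — handle get_weather calls by hitting our own API route
import { FunctionCallParams } from "realtime-ai";

llmHelper.handleFunctionCall(async (fn: FunctionCallParams) => {
  const args = fn.arguments as { location?: string };
  if (fn.functionName === "get_weather" && args.location) {
    // Call our weather route, which in turn calls the OpenWeather API
    const response = await fetch(
      `/api/weather?location=${encodeURIComponent(args.location)}`
    );
    return await response.json(); // this becomes the function result
  }
  // Unknown function: returning null tells Daily Bots to ignore the call
  return null;
});
```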
If your handler returns anything other than `null`, it will be treated as the function result. Daily Bots will add the function call and result to the bot’s messages array (the “Tool Call” and “Tool Response” in the example above), then re-prompt the LLM to generate a voice response from the bot.
If you return `null` from your handler, Daily Bots will essentially ignore the function call. This can be useful for enabling the LLM to send various ‘signals’ to your app when parts of a conversation are complete, for example. More documentation on this behavior is coming soon.
Creating a route to handle the function call
Create a new folder within the `app/api` directory called `weather`, then create a file inside it called `route.ts`.
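Here’s a sketch of what that route might look like. It follows OpenWeather’s Current Weather Data API; the `OPENWEATHER_API_KEY` variable name is our own choice and must match what you put in `.env.local` below:

```typescript
// app/api/weather/route.ts — fetch current weather for a location
import { NextResponse } from "next/server";

export async function GET(request: Request) {
  const location = new URL(request.url).searchParams.get("location");
  if (!location) {
    return NextResponse.json(
      { error: "Missing 'location' query parameter" },
      { status: 400 }
    );
  }

  const url =
    "https://api.openweathermap.org/data/2.5/weather" +
    `?q=${encodeURIComponent(location)}&units=imperial` +
    `&appid=${process.env.OPENWEATHER_API_KEY}`;

  const response = await fetch(url);
  if (!response.ok) {
    return NextResponse.json(
      { error: "Weather lookup failed" },
      { status: response.status }
    );
  }

  const data = await response.json();
  // Return just the fields the bot needs to describe the weather
  return NextResponse.json({
    location: data.name,
    conditions: data.weather?.[0]?.description,
    temperature: data.main?.temp,
  });
}
```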
To make this work, you will need to create an account on OpenWeather. Once you’ve done that, add your OpenWeather API key to your `.env.local` file:
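```bash
# .env.local — the variable name just needs to match what route.ts reads
OPENWEATHER_API_KEY=your_api_key_here
```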
Give it a try
Now you can test your bot by asking it for the weather in a specific location. For example, you could say “What’s the weather in San Francisco?” The bot will recognize that you want to know about the weather in San Francisco. It will also know that it doesn’t have that information, and will try to call the `get_weather` function. Your app will then fetch the weather data from the OpenWeather API and return it to the bot, which will use that information to generate a response.