It’s helpful to start with an understanding of what function calling is. OpenAI, Anthropic, Gemini, Grok, and Llama 3.1 all have good documentation on the topic, but here’s a quick overview.
Say you want to give your bot the ability to tell your users about the current weather. That information obviously can’t be trained into the LLM, so you’ll need to get it directly from a weather API. But you can actually use the LLM to help you with this. The workflow looks like this:
1. When your app starts, it provides the LLM with information about functions, or tools, that the LLM can choose to use. This is typically sent with the initial system prompt, but it can be updated at any point in the bot session.
2. The user asks a question that the LLM decides it needs a function to answer. Instead of responding with normal text, the LLM returns a function call describing how to use one of those functions to supplement its knowledge.
3. The LLM can't actually call the function itself, so your app takes the information from the bot's function call response and, well, actually calls the function. Your app saves the result as a function result.
4. Your app appends both the function call and the function result to the LLM's message history and prompts the LLM to run another completion. The LLM sees the function call and result in the message history, realizes it now has the info it needs to answer the user's question, and generates a text response.
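To make that concrete, here's roughly what the message history looks like after a complete round trip, using OpenAI-style message shapes. (Other providers use different field names, and the tool-call ID and weather data below are made up for illustration.)

// Hypothetical message history after a completed function call round trip
// (OpenAI-style shapes; Anthropic, Gemini, etc. differ in the details)
const messages = [
  { role: "user", content: "What's the weather in San Francisco?" },
  {
    // Step 2: the LLM responds with a function call instead of text
    role: "assistant",
    tool_calls: [
      {
        id: "call_123", // made-up ID for illustration
        type: "function",
        function: {
          name: "get_weather",
          arguments: '{"location": "San Francisco", "format": "fahrenheit"}',
        },
      },
    ],
  },
  {
    // Step 3: your app calls the weather API and appends the result
    role: "tool",
    tool_call_id: "call_123",
    content: '{"conditions": "foggy", "temperature": 58}',
  },
  // Step 4: the LLM is re-prompted and now answers in text, e.g.
  // "It's 58 and foggy in San Francisco right now."
];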
In the previous tutorial, we specified the configuration directly in the page.tsx code.
This time, we’ll define the configuration in a separate file, rtvi.config.ts, and import it into the client side.
First, make a file in the app folder called rtvi.config.ts. Define the function(s) you want the bot to be able to call.
The specific format of this object will vary depending on which LLM you're using, but here are examples of a get_weather function for Anthropic, OpenAI, Gemini, Grok, and Llama 3.1:
Anthropic
rtvi.config.ts
export const defaultConfig = [
  {
    service: "llm",
    options: [
      {
        name: "initial_messages",
        value: [
          {
            role: "system",
            content: [
              {
                type: "text",
                text: "You are a TV weatherman named Wally. Your job is to present the weather to me. You can call the 'get_weather' function to get weather information. Start by asking me for my location. Then, use 'get_weather' to give me a forecast. Then, answer any questions I have about the weather. Keep your introduction and responses very brief. You don't need to tell me if you're going to call a function; just do it directly. Keep your words to a minimum. When you're delivering the forecast, you can use more words and personality.",
              },
            ],
          },
        ],
      },
      {
        name: "run_on_config",
        value: true,
      },
      {
        name: "tools",
        value: [
          {
            name: "get_weather",
            description:
              "Get the weather in a given location. This includes the conditions as well as the temperature.",
            input_schema: {
              type: "object",
              properties: {
                location: {
                  type: "string",
                  description: "The city, e.g. San Francisco",
                },
                format: {
                  type: "string",
                  enum: ["celsius", "fahrenheit"],
                  description:
                    "The temperature unit to use. Infer this from the user's location.",
                },
              },
              required: ["location", "format"],
            },
          },
        ],
      },
    ],
  },
];
This function calling format applies to all providers that follow the OpenAI spec. If you're using a Custom LLM, you can follow this approach as well.
OpenAI
rtvi.config.ts
export const defaultConfig = [
  {
    service: "llm",
    options: [
      {
        name: "initial_messages",
        value: [
          {
            role: "system",
            content:
              "You are a TV weatherman named Dallas Storms. Your job is to present the weather to me. You can call the 'get_current_weather' function to get weather information. Start by asking me for my location. Then, use 'get_current_weather' to give me a forecast. Then, answer any questions I have about the weather. Keep your introduction and responses very brief. You don't need to tell me if you're going to call a function; just do it directly. Keep your words to a minimum. When you're delivering the forecast, you can use more words and personality.",
          },
        ],
      },
      { name: "run_on_config", value: true },
      {
        name: "tools",
        value: [
          {
            type: "function",
            function: {
              name: "get_current_weather",
              description:
                "Get the current weather for a location. This includes the conditions as well as the temperature.",
              parameters: {
                type: "object",
                properties: {
                  location: {
                    type: "string",
                    description: "The city and state, e.g. San Francisco, CA",
                  },
                  format: {
                    type: "string",
                    enum: ["celsius", "fahrenheit"],
                    description:
                      "The temperature unit to use. Infer this from the user's location.",
                  },
                },
                required: ["location", "format"],
              },
            },
          },
        ],
      },
    ],
  },
];
Gemini
rtvi.config.ts
export const defaultConfig = [
  {
    service: "llm",
    options: [
      {
        name: "initial_messages",
        value: [
          {
            role: "system",
            content:
              "You are a TV weatherman named Dallas Storms. Your job is to present the weather to me. Start by asking me for my location. Then, use 'get_weather_current' to give me the current weather. Then, answer any questions I have about the weather. Keep your introduction and responses very brief. You don't need to tell me if you're going to call a function; just do it directly. Keep your words to a minimum. When you're delivering the forecast, you can use more words and personality. Your responses will be converted to audio.",
          },
        ],
      },
      {
        name: "run_on_config",
        value: true,
      },
      {
        name: "tools",
        value: {
          function_declarations: [
            {
              name: "get_weather_current",
              description:
                "Get the current weather for a location. This includes the conditions as well as the temperature.",
              parameters: {
                type: "object",
                properties: {
                  location: {
                    type: "string",
                    description:
                      "The user's location in the form 'city,state,country'. For example, if the user is in Austin, TX, use 'austin,tx,us'.",
                  },
                  format: {
                    type: "string",
                    enum: ["celsius", "fahrenheit"],
                    description:
                      "The temperature unit to use. Infer this from the user's location.",
                  },
                },
                required: ["location", "format"],
              },
            },
          ],
        },
      },
    ],
  },
];
Grok
rtvi.config.ts
export const defaultConfig = [
  {
    service: "llm",
    options: [
      {
        name: "initial_messages",
        value: [
          {
            role: "system",
            content:
              "You are a TV weatherman named Dallas Storms. Your job is to present the weather to me. Start by asking me for my location. Then, use 'get_weather_current' to give me the current weather. Then, answer any questions I have about the weather. Keep your introduction and responses very brief. You don't need to tell me if you're going to call a function; just do it directly. Keep your words to a minimum. When you're delivering the forecast, you can use more words and personality. Your responses will be converted to audio.",
          },
        ],
      },
      {
        name: "run_on_config",
        value: true,
      },
      {
        name: "tools",
        value: [
          {
            type: "function",
            function: {
              name: "get_weather_current",
              description:
                "Get the current weather for a location. This includes the conditions as well as the temperature.",
              parameters: {
                type: "object",
                properties: {
                  location: {
                    type: "string",
                    description:
                      "The user's location in the form 'city,state,country'. For example, if the user is in Austin, TX, use 'austin,tx,us'.",
                  },
                  format: {
                    type: "string",
                    enum: ["celsius", "fahrenheit"],
                    description:
                      "The temperature unit to use. Infer this from the user's location.",
                  },
                },
                required: ["location", "format"],
              },
            },
          },
        ],
      },
    ],
  },
];
Llama 3.1
This function calling format applies to Llama 3.1 models, such as those hosted by Groq.
First, define a weatherTool object, and/or any other functions you want to call:
const weatherTool = {
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  parameters: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "The city and state, e.g. San Francisco, CA",
      },
    },
    required: ["location"],
  },
};
Then, reference that weatherTool object in your system prompt:
rtvi.config.ts
export const defaultConfig = [
  {
    service: "llm",
    options: [
      {
        name: "initial_messages",
        value: [
          {
            role: "user",
            content: `You have access to the following functions:

Use the function '${weatherTool["name"]}' to '${weatherTool["description"]}':
${JSON.stringify(weatherTool)}

If you choose to call a function ONLY reply in the following format with no prefix or suffix:

<function=example_function_name>{{"example_name": "example_value"}}</function>

Reminder:
- Function calls MUST follow the specified format, start with <function= and end with </function>
- Required parameters MUST be specified
- Only call one function at a time
- Put the entire function call reply on one line
- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls

You are a TV weatherman named Dallas Storms. Your job is to present the weather to me. You can call the 'get_current_weather' function to get weather information. Start by asking me for my location. Then, use 'get_current_weather' to give me a forecast. Then, answer any questions I have about the weather. Keep your introduction and responses very brief. You don't need to tell me if you're going to call a function; just do it directly. Keep your words to a minimum. When you're delivering the forecast, you can use more words and personality.`,
          },
        ],
      },
      { name: "run_on_config", value: true },
    ],
  },
];
Next, import the defaultConfig into your page.tsx file and pass it to the RTVIClient instance in the configuration setup within useEffect. This will ensure that the language model settings are applied from the external configuration file, streamlining and centralizing your configuration options.
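Here's a minimal sketch of what that looks like, assuming the client setup from the previous tutorial. (The transport variable and the exact params shape are assumptions here; option shapes vary between RTVI client versions, and the key point is simply passing defaultConfig as the client's config.)

app/page.tsx

import { defaultConfig } from "./rtvi.config";

// Inside useEffect, where the previous tutorial created the client
const newVoiceClient = new RTVIClient({
  transport, // the transport you set up in the previous tutorial
  params: {
    baseUrl: "/api",
    config: defaultConfig, // apply the LLM settings from rtvi.config.ts
  },
});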
Daily Bots detects when the LLM returns a function call and passes that to your app. So you’ll need to register a handler using handleFunctionCall.
Before we can do that, we will need to create an llmHelper instance in the page.tsx file:
app/page.tsx
Copy
Ask AI
import { LLMHelper } from "realtime-ai"; // adjust to match your RTVI client package

// Below the RTVIClient instance you created
const llmHelper = newVoiceClient.registerHelper(
  "llm",
  new LLMHelper({
    callbacks: {},
  })
) as LLMHelper;
Now you can register a handler that receives the function call and returns the function result.
The best place to do that is in app/page.tsx, right after the llmHelper is created:
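Here's a sketch of what that handler might look like. The /api/weather route is a hypothetical endpoint in your own app that proxies the OpenWeather API (a sketch of it appears at the end of this tutorial), and the argument shape is an assumption based on the get_weather tool defined above:

app/page.tsx

import { FunctionCallParams } from "realtime-ai"; // adjust to match your RTVI client package

llmHelper.handleFunctionCall(async (fn: FunctionCallParams) => {
  const args = fn.arguments as { location?: string; format?: string };
  if (fn.functionName === "get_weather" && args.location) {
    // Hypothetical route in your app that proxies the OpenWeather API
    const response = await fetch(
      `/api/weather?location=${encodeURIComponent(args.location)}`
    );
    return await response.json(); // becomes the function result
  }
  return { error: "Couldn't fetch the weather for that location." };
});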
If your handler returns anything other than null, it will be treated as the function result from the examples above. Daily Bots will add the function call and result to the bot’s messages array (the “Tool Call” and “Tool Response” in the example above), and then re-prompt the LLM to generate a voice response from the bot.
If you return null from your handler, Daily Bots will essentially ignore the function call. This can be useful for enabling the LLM to send various ‘signals’ to your app when parts of a conversation are complete, for example. More documentation on this behavior is coming soon.
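For example, you could declare a hypothetical conversation_complete "tool" in your config and treat it as a pure signal inside the same handler:

llmHelper.handleFunctionCall(async (fn: FunctionCallParams) => {
  if (fn.functionName === "conversation_complete") {
    // Hypothetical signal function: react in your app, then return null
    // so Daily Bots ignores the call instead of re-prompting the LLM.
    console.log("Conversation complete signal received");
    return null;
  }
  // ...handle data-fetching functions like get_weather here...
  return null;
});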
Now you can test your bot by asking it for the weather in a specific location. For example, you could say, “What’s the weather in San Francisco?”
The bot will recognize that you want to know about the weather in San Francisco.
It will also know that it doesn’t have that information, and will try to call the get_weather function.
Your app will then fetch the weather data from the OpenWeather API and return it to the bot, which will then use that information to generate a response.
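If you want to follow along, here's a minimal sketch of that hypothetical /api/weather route in a Next.js app. It assumes an OPENWEATHER_API_KEY environment variable and omits error handling:

app/api/weather/route.ts

// Hypothetical proxy route: fetches current weather from OpenWeather
// and returns it as JSON for the function call handler to consume.
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const location = searchParams.get("location");
  const response = await fetch(
    `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(
      location ?? ""
    )}&units=imperial&appid=${process.env.OPENWEATHER_API_KEY}`
  );
  return Response.json(await response.json());
}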