Daily Bots makes it easy to build function calling (also known as ‘tool calling’) into your app.

Understanding function calling

It’s helpful to start with an understanding of what function calling is. OpenAI, Anthropic, Gemini, Grok, and Llama 3.1 all have good documentation on the topic, but here’s a quick overview.

Say you want to give your bot the ability to tell your users about the current weather. That information obviously can’t be trained into the LLM, so you’ll need to get it directly from a weather API. But you can actually use the LLM to help you with this. The workflow looks like this:

  1. When your app starts, it provides the LLM with information about functions, or tools, the LLM can choose to use. This is typically sent with the initial system prompt, but it can be updated at any point in the bot session.
  2. The user asks a question that the LLM decides it needs to use a function to answer. Instead of responding with normal text, the LLM returns a function call describing how to use a function to supplement its knowledge.
  3. The LLM can’t actually call the function itself, so your app takes the information from the bot’s function call response, and, well, actually calls the function. Your app saves the result as a function result.
  4. Your app appends both the function call and function result to the LLM’s message history, and prompts the LLM to run another completion. The LLM sees the function call and result in the message history, and it realizes it now has the info it needs to answer the user’s question, so it generates a text response.
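
For example, with Anthropic’s message format (other providers use slightly different shapes), the message history after steps 2 through 4 might look like the sketch below. The tool ID and weather values here are illustrative:

// Tool Call: instead of plain text, the LLM returns a tool_use block
{
  role: "assistant",
  content: [
    {
      type: "tool_use",
      id: "toolu_01",
      name: "get_weather",
      input: { location: "San Francisco", format: "fahrenheit" },
    },
  ],
},
// Tool Response: your app runs the function and appends the result
{
  role: "user",
  content: [
    {
      type: "tool_result",
      tool_use_id: "toolu_01",
      content: '{"temperature": 62, "condition": "Clouds"}',
    },
  ],
},
// Re-prompted with this history, the LLM answers in plain text
{
  role: "assistant",
  content: "It’s 62 degrees and cloudy in San Francisco right now.",
},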

Adding it to your bot

Daily Bots manages most of this for you. You’ll need to do a few things:

  1. Update your LLM’s configuration to include the function(s) you want to be able to call.
  2. Register a handler in your app to handle the function call and return the function result.
  3. Create a route in your app to handle the function call. In this case, we will create a weather route that fetches the weather for a given location.

Updating your LLM configuration

In the previous tutorial, we specified the configuration directly in the page.tsx code.

This time, we’ll define the configuration in a separate file, rtvi.config.ts, and import it into the client side.

First, make a file in the app folder called rtvi.config.ts and define the function(s) you want the bot to be able to call. The exact format of this object varies depending on which LLM you’re using; here’s an example of a get_weather function in Anthropic’s format (OpenAI, Gemini, Grok, and Llama 3.1 each expect a slightly different schema):

rtvi.config.ts
export const defaultConfig = [
  {
    service: "llm",
    options: [
      {
        name: "initial_messages",
        value: [
          {
            role: "system",
            content: [
              {
                type: "text",
                text: "You are a TV weatherman named Wally. Your job is to present the weather to me. You can call the 'get_weather' function to get weather information. Start by asking me for my location. Then, use 'get_weather' to give me a forecast. Then, answer any questions I have about the weather. Keep your introduction and responses very brief. You don't need to tell me if you're going to call a function; just do it directly. Keep your words to a minimum. When you're delivering the forecast, you can use more words and personality.",
              },
            ],
          },
        ],
      },
      {
        name: "run_on_config",
        value: true,
      },
      {
        name: "tools",
        value: [
          {
            name: "get_weather",
            description:
              "Get the weather in a given location. This includes the conditions as well as the temperature.",
            input_schema: {
              type: "object",
              properties: {
                location: {
                  type: "string",
                  description: "The city, e.g. San Francisco",
                },
                format: {
                  type: "string",
                  enum: ["celsius", "fahrenheit"],
                  description:
                    "The temperature unit to use. Infer this from the users location.",
                },
              },
              required: ["location", "format"],
            },
          },
        ],
      },
    ],
  },
];
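
For comparison, OpenAI-style models wrap the same schema in a function object and name it parameters rather than input_schema. Here’s a sketch of the equivalent tools option (check the Daily Bots reference for the exact shape your provider expects):

{
  name: "tools",
  value: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description:
          "Get the weather in a given location. This includes the conditions as well as the temperature.",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city, e.g. San Francisco",
            },
            format: {
              type: "string",
              enum: ["celsius", "fahrenheit"],
              description:
                "The temperature unit to use. Infer this from the user's location.",
            },
          },
          required: ["location", "format"],
        },
      },
    },
  ],
},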

Next, import defaultConfig into your page.tsx file and spread it into the config array passed to the RTVIClient instance inside useEffect. This keeps the language model settings in a single external file instead of scattering them through your component code.

app/page.tsx
import { defaultConfig } from "./rtvi.config";

useEffect(() => {
  if (voiceClient) {
    return;
  }

  const newVoiceClient = new RTVIClient({
    transport: new DailyTransport(),
    params: {
      baseUrl: `/api`,
      requestData: {
        services: {
          stt: "deepgram",
          tts: "cartesia",
          llm: "anthropic",
        },
      },
      endpoints: {
        connect: "/connect",
        action: "/actions",
      },
      config: [
        {
          service: "tts",
          options: [
            {
              name: "voice",
              value: "79a125e8-cd45-4c13-8a67-188112f4dd22",
            },
          ],
        },
        ...defaultConfig,
      ],
    },
  });
  // Store the client in state (the setter comes from the previous tutorial's
  // setup) so the guard above prevents re-creating it on every render
  setVoiceClient(newVoiceClient);
}, [voiceClient]);

Daily Bots detects when the LLM returns a function call and passes that to your app. So you’ll need to register a handler using handleFunctionCall.

Before we can do that, we’ll need to create an LLMHelper instance in the page.tsx file:

app/page.tsx
// Below the RTVIClient instance you created.
// LLMHelper and FunctionCallParams are imported from the same package as RTVIClient.
const llmHelper = newVoiceClient.registerHelper(
  "llm",
  new LLMHelper({
    callbacks: {},
  })
) as LLMHelper;

Now you can register a handler that performs the function call and returns the function result. The best place to do that is in app/page.tsx, right after the llmHelper is created:

app/page.tsx
llmHelper.handleFunctionCall(async (fn: FunctionCallParams) => {
  const args = fn.arguments as any;
  if (fn.functionName === "get_weather" && args.location) {
    // Forward both arguments from the tool schema to the weather route
    const response = await fetch(
      `/api/weather?location=${encodeURIComponent(args.location)}` +
        `&format=${encodeURIComponent(args.format ?? "celsius")}`
    );
    const json = await response.json();
    return json;
  } else {
    return { error: "couldn't fetch weather" };
  }
});

If your handler returns anything other than null, Daily Bots treats the return value as the function result. It appends both the function call and the result to the bot’s messages array (the “Tool Call” and “Tool Response” in the sketch above), then re-prompts the LLM to generate a voice response from the bot.

If you return null from your handler, Daily Bots will essentially ignore the function call. This can be useful for enabling the LLM to send various ‘signals’ to your app when parts of a conversation are complete, for example. More documentation on this behavior is coming soon.
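
For instance, you could extend the handler above with a hypothetical conversation_complete signal function. The function name and the setConversationDone state setter here are illustrative, not part of Daily Bots:

// Inside the handleFunctionCall callback, before the get_weather branch
if (fn.functionName === "conversation_complete") {
  setConversationDone(true); // illustrative app-state update
  return null; // Daily Bots ignores the call; the LLM is not re-prompted
}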

Creating a route to handle the function call

Create a new folder within the app/api directory called weather:

mkdir app/api/weather

Then create a file within that folder called route.ts:

app/api/weather/route.ts
import { NextResponse } from "next/server";

const API_KEY = process.env.OPENWEATHERMAP_API_KEY;
const BASE_URL = "https://api.openweathermap.org/data/2.5/weather";

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const location = searchParams.get("location");
  const format = searchParams.get("format") || "celsius";

  if (!location) {
    return NextResponse.json(
      { error: "Location is required" },
      { status: 400 }
    );
  }

  try {
    // OpenWeatherMap expects 'metric' for Celsius and 'imperial' for Fahrenheit
    const response = await fetch(
      `${BASE_URL}?q=${encodeURIComponent(location)}&appid=${API_KEY}&units=${
        format === "celsius" ? "metric" : "imperial"
      }`
    );
    const data = await response.json();

    // OpenWeatherMap reports errors via the 'cod' field in the JSON body
    if (data.cod !== 200) {
      throw new Error(data.message);
    }

    return NextResponse.json({
      location: data.name,
      temperature: data.main.temp,
      condition: data.weather[0].main,
      description: data.weather[0].description,
    });
  } catch (error) {
    console.error("Error fetching weather:", error);
    return NextResponse.json(
      { error: "Failed to fetch weather data" },
      { status: 500 }
    );
  }
}

To make this work, you’ll need an OpenWeather account and API key. Once you have one, add it to your .env.local file:

.env.local
OPENWEATHERMAP_API_KEY=your_openweather_api_key
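
If you want to sanity-check the route on its own, you can hit it directly once the dev server is running (this assumes the default port 3000; the values in the response will vary):

curl "http://localhost:3000/api/weather?location=San%20Francisco&format=fahrenheit"
# => {"location":"San Francisco","temperature":...,"condition":...,"description":...}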

Give it a try

Now you can test your bot by asking it for the weather in a specific location. For example, you could say “What’s the weather in San Francisco?” The bot will recognize that you want to know about the weather in San Francisco.

It will also know that it doesn’t have that information, and will try to call the get_weather function. Your app will then fetch the weather data from the OpenWeather API and return it to the bot, which will then use that information to generate a response.