1. Sign up to get your API Key

To get started, you’ll need to sign up for a Daily Bots account and enter your credit card information to get your API Key.

2. Set up and run the example project

As described in the Architecture guide, you need a Client Application and a Client Server to build a bot. We’ve wrapped up both of those in a single Next.js app to make it easy to get started. You can clone the repo by running git clone https://github.com/daily-demos/daily-bots-web-demo.git, or visit the repo to learn more.

Once you’ve cloned the repo, you can run npm install or yarn inside the repo’s folder to install all of the dependencies you’ll need.

Configuring your environment

Next, you’ll need to configure your environment. Find the file named env.example and rename it to .env.local. The file currently has four keys in it:

  • NEXT_PUBLIC_BASE_URL: This controls how the ‘Client Application’ part of the app finds the ‘Client Server’ part. You should leave this as-is for now. The /api path points to /app/api/route.ts, which we’ll explore later.
  • DAILY_API_KEY: Put the API key you got in step 1 here.
  • OPENAI_API_KEY: If you want to use OpenAI models, you’ll need to supply your own key. You can set it here, and the Client Server will use it when you run your bot.
  • DAILY_BOTS_URL: This tells your app where to find the hosted backend. You should set this to https://api.daily.co/v1/bots/start.

With those values set, you should be ready to run your bot. Just run yarn dev or npm run dev in a terminal. If everything works, you should be able to visit http://localhost:3000 to see your bot. Go ahead and have a quick chat to verify it works!

3. Explore the example project

There are a few important parts of the example project that you’ll want to get familiar with.

Bot configuration: /rtvi.config.ts

rtvi.config.ts
export const defaultBotProfile = "voice_2024_10";

export const defaultServices = {
  stt: "deepgram",
  llm: "anthropic",
  tts: "cartesia",
};

export const defaultServiceOptions = {
  service_options: {
    deepgram: {
      model: "nova-2-general",
      language: "en",
    },
    anthropic: {
      model: "claude-3-5-sonnet-latest",
    },
  },
};

export const defaultConfig = [
  {
    service: "tts",
    options: [{ name: "voice", value: "d46abd1d-2d02-43e8-819f-51fb652c1c61" }],
  },
  {
    service: "llm",
    options: [
      {
        name: "initial_messages",
        value: [
          {
            // anthropic: user; openai: system
            role: "system",
            content: "You are a helpful voice bot.",
          },
        ],
      },
      { name: "run_on_config", value: true },
    ],
  },
];

// These are your app's endpoints, which are used to initiate the /bots/start
// API call or initiate actions
export const defaultEndpoints = {
  connect: "/connect",
  actions: "/actions",
};

Daily Bots are extremely configurable. There’s a lot you can change about a bot while it’s running, but most of the important configuration happens when a bot session starts. It’s a good idea to manage as much of that as possible in the rtvi.config.ts file in the root of the project repo. Many of the available options are already in that file, but you can refer to the API Reference and the RTVI docs for more configuration options.
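For example, since the config is just an array of service entries, you can derive variants of it before starting a session. The helper below is a hypothetical sketch (not part of the demo) that swaps in a different system prompt, using the same config shape as rtvi.config.ts above:

```typescript
// The config shape from rtvi.config.ts: an array of per-service entries,
// each with a list of named options.
type ConfigOption = { name: string; value: unknown };
type ServiceConfig = { service: string; options: ConfigOption[] };

// Hypothetical helper: return a copy of the config with the LLM's
// initial_messages replaced by a single system prompt. The original
// config array is left untouched.
function withSystemPrompt(
  config: ServiceConfig[],
  prompt: string
): ServiceConfig[] {
  return config.map((entry) => {
    if (entry.service !== "llm") return entry;
    return {
      ...entry,
      options: entry.options.map((opt) =>
        opt.name === "initial_messages"
          ? { ...opt, value: [{ role: "system", content: prompt }] }
          : opt
      ),
    };
  });
}
```

You could then pass the derived array wherever defaultConfig is used today, keeping rtvi.config.ts as the single source of truth for defaults.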

Bot initialization and the voiceClient: /app/page.tsx

/app/page.tsx
const voiceClient = new RTVIClient({
  params: {
    baseUrl: process.env.NEXT_PUBLIC_BASE_URL || "/api",
    endpoints: defaultEndpoints,
    config: defaultConfig,
  },
  transport: new DailyTransport(),
  enableMic: true,
  enableCam: false,
  timeout: BOT_READY_TIMEOUT,
});

const llmHelper = voiceClient.registerHelper(
  "llm",
  new LLMHelper({
    callbacks: {},
  })
) as LLMHelper;

The /app/page.tsx file is the main Next.js file for the demo app. It initializes an RTVIClient object that you’ll use throughout your app. page.tsx is also where you’ll want to register any additional helpers you need.

In the session: /components/Session/index.tsx

/components/Session/index.tsx is the component that controls an active bot session, and it’s usually the place where you’ll do most of your work with the voiceClient object. In the example app, you’ll see a few things like metrics collection happening in useRTVIClientEvent hooks.

/components/Session/index.tsx
useRTVIClientEvent(
  RTVIEvent.Metrics,
  useCallback((metrics) => {
    metrics?.ttfb?.map((m: { processor: string; value: number }) => {
      stats_aggregator.addStat([m.processor, "ttfb", m.value, Date.now()]);
    });
  }, [])
);
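The stats_aggregator object itself isn’t shown in this guide. As a minimal sketch of what it might look like, the class below records the same [processor, metric, value, timestamp] tuples the hook passes to addStat and can report a per-processor average; everything except the addStat call shape is an assumption:

```typescript
// A stat tuple as passed by the Metrics hook above:
// [processor, metric name, value, timestamp]
type Stat = [processor: string, metric: string, value: number, timestamp: number];

// Hypothetical minimal aggregator: stores stats and computes averages.
class StatsAggregator {
  private stats: Stat[] = [];

  // Matches the call shape used in the useRTVIClientEvent hook.
  addStat(stat: Stat): void {
    this.stats.push(stat);
  }

  // Average value for one processor/metric pair, e.g. ("llm", "ttfb").
  average(processor: string, metric: string): number {
    const values = this.stats
      .filter(([p, m]) => p === processor && m === metric)
      .map(([, , v]) => v);
    if (values.length === 0) return 0;
    return values.reduce((a, b) => a + b, 0) / values.length;
  }
}
```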

The Client Server: /app/api/route.ts

/app/api/route.ts
const payload = {
  bot_profile: defaultBotProfile,
  service_options: defaultServiceOptions,
  services,
  api_keys: {
    together: process.env.TOGETHER_API_KEY,
    cartesia: process.env.CARTESIA_API_KEY,
  },
  config: [...config],
};

When a user wants to talk to a bot, your Client Application will make a request to a Client Server, which is a server process you control. That server then makes a POST request to the Daily Bots API to start a bot. In that request, you need to specify which Bot Profile you want to use. You can also include API keys for services that are supported by RTVI and Pipecat but not built into Daily Bots, such as OpenAI. In the example above, the Client Server is including keys for Together and Cartesia, but that isn’t strictly necessary, because both of those services have built-in support in Daily Bots.
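As a sketch, the Client Server’s POST could be assembled like this. buildStartRequest is a hypothetical helper, not part of the demo; the Bearer-token Authorization header follows the Daily REST API’s authentication scheme, and the payload shape mirrors the one in /app/api/route.ts above:

```typescript
// The shape of the request the Client Server sends to DAILY_BOTS_URL
// (https://api.daily.co/v1/bots/start).
type StartRequest = {
  method: string;
  headers: Record<string, string>;
  body: string;
};

// Hypothetical helper: wrap the payload in a POST request authenticated
// with your DAILY_API_KEY as a Bearer token.
function buildStartRequest(apiKey: string, payload: object): StartRequest {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  };
}

// Usage inside the route handler might look like:
// const res = await fetch(
//   process.env.DAILY_BOTS_URL!,
//   buildStartRequest(process.env.DAILY_API_KEY!, payload)
// );
```

Keeping the key on the Client Server is the point of this split: the browser never sees DAILY_API_KEY, only your own /api endpoint.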

4. Check out the other demos!

That should be enough for you to start finding your way around the example app. If you want to see some other interesting use cases, you can check out these hosted examples. The source code for these examples is also available in branches in the RTVI examples repo.