Get Started
Learn how to build Daily Bots by example
1. Sign up to get your API Key
To get started, you’ll need to sign up for a Daily Bots account and enter your credit card information to get your API Key.
2. Set up and run the example project
As described in the Architecture guide, you need a Client Application and a Client Server to build a bot. We’ve wrapped up both of those in a single Next.js app to make it easy to get started. You can clone the repo by running git clone https://github.com/daily-demos/daily-bots-web-demo.git, or visit the repo to learn more.
Once you’ve cloned the repo, you can run npm install or yarn inside the repo’s folder to install all of the dependencies you’ll need.
Configuring your environment
Next, you’ll need to configure your environment. Find the file named env.example and rename it to .env.local. The file currently has four keys in it:

- NEXT_PUBLIC_BASE_URL: This controls how the ‘Client Application’ part of the app finds the ‘Client Server’ part. You should leave this as-is for now. The /api path is pointing to /app/api/route.ts, which we’ll explore later.
- DAILY_API_KEY: Put the API key you got in step 1 here.
- OPENAI_API_KEY: If you want to use OpenAI models, you’ll need to supply your own key. You can set it here, and the Client Server will use it when you run your bot.
- DAILY_BOTS_URL: This tells your app where to find the hosted backend. You should set this to https://api.daily.co/v1/bots/start.
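With all four values filled in, your .env.local should look roughly like this. The key values below are placeholders, and the /api base URL assumes you haven’t moved the route handler:

```
NEXT_PUBLIC_BASE_URL=/api
DAILY_API_KEY=your-daily-bots-api-key
OPENAI_API_KEY=your-openai-api-key
DAILY_BOTS_URL=https://api.daily.co/v1/bots/start
```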
With those values set, you should be ready to run your bot. Just run yarn dev or npm run dev in a terminal. If everything works, you should be able to visit http://localhost:3000 to see your bot. Go ahead and have a quick chat to verify it works!
3. Explore the example project
There are a few important parts of the example project that you’ll want to get familiar with.
Bot configuration: /rtvi.config.ts
Daily Bots are extremely configurable. There’s a lot you can change about a bot while it’s running, but most of the important configuration happens when a bot session starts. It’s a good idea to manage as much of that as possible in the rtvi.config.ts file in the root of the project repo. Many of the available options are already in that file, but you can refer to the API Reference and RTVI docs for more available configuration.
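To give a sense of what lives there, here is a trimmed-down sketch of that kind of configuration. The export names (defaultBotProfile, defaultServices, defaultConfig), the bot profile string, and the model and voice values are illustrative placeholders; the real file in the repo and the API Reference are the source of truth.

```ts
// rtvi.config.ts (illustrative sketch; check the file in the repo for the real exports)

// Which Bot Profile to run (see the API Reference for current profile names)
export const defaultBotProfile = "voice_2024_10";

// Which service to use for each pipeline stage
export const defaultServices = {
  llm: "together",
  tts: "cartesia",
};

// Per-service options, sent when the bot session starts
export const defaultConfig = [
  {
    service: "tts",
    options: [{ name: "voice", value: "your-cartesia-voice-id" }],
  },
  {
    service: "llm",
    options: [
      { name: "model", value: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo" },
      {
        name: "initial_messages",
        value: [
          {
            role: "system",
            content: "You are a helpful assistant. Keep your answers brief.",
          },
        ],
      },
    ],
  },
];
```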
Bot initialization and the voiceClient: /app/page.tsx
The /app/page.tsx file is the main Next.js file for the demo app. It initializes an RTVIClient object that you’ll use throughout your app. page.tsx is also where you’ll want to register any additional helpers you need.
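A stripped-down sketch of that initialization looks roughly like the following. The package names (realtime-ai, realtime-ai-daily, realtime-ai-react), the @/rtvi.config import path, and the option names are assumptions based on the RTVI JavaScript client; check the demo’s actual imports, which may differ.

```tsx
// app/page.tsx (simplified sketch; the demo's real file does more)
"use client";

import { RTVIClient } from "realtime-ai";
import { DailyTransport } from "realtime-ai-daily";
import { RTVIClientProvider } from "realtime-ai-react";

import { defaultConfig, defaultServices } from "@/rtvi.config";

// Create one client for the whole app and hand it to the React provider
const voiceClient = new RTVIClient({
  transport: new DailyTransport(),
  params: {
    // NEXT_PUBLIC_BASE_URL from .env.local: where the Client Server lives
    baseUrl: process.env.NEXT_PUBLIC_BASE_URL || "/api",
    requestData: {
      services: defaultServices,
      config: defaultConfig,
    },
  },
  enableMic: true,
});

export default function Home() {
  return (
    <RTVIClientProvider client={voiceClient}>
      {/* ...app UI, including the Session component... */}
    </RTVIClientProvider>
  );
}
```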
In the session: /components/Session/index.tsx
/components/Session/index.tsx is the component that controls an active bot session, and it’s usually the place where you’ll work with the voiceClient object. In the example app, you’ll see a few things like metrics collection happening in useRTVIClientEvent hooks.
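For instance, a small component inside the session could log pipeline metrics along these lines. Treat the event name (RTVIEvent.Metrics), the component name, and the package imports as illustrative; the demo’s own hooks are the reference.

```tsx
// components/Session/MetricsLogger.tsx (illustrative; names may differ from the demo)
import { useCallback } from "react";
import { RTVIEvent } from "realtime-ai";
import { useRTVIClientEvent } from "realtime-ai-react";

export function MetricsLogger() {
  // Fires whenever the bot pipeline reports processing/TTFB metrics
  useRTVIClientEvent(
    RTVIEvent.Metrics,
    useCallback((metrics: unknown) => {
      console.log("Pipeline metrics:", metrics);
    }, [])
  );

  // Purely a listener; renders nothing
  return null;
}
```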
The Client Server: /app/api/route.ts
When a user wants to talk to a bot, your Client Application will make a request to a Client Server, which is a server process you control. That server then makes a POST request to the Daily Bots API to start a bot. In that request, you need to specify which Bot Profile you want to use. You can also include API keys for services that are supported by RTVI and Pipecat but not built into Daily Bots, such as OpenAI. In the example app, the Client Server includes keys for Together and Cartesia, but that isn’t strictly necessary, because both of those services have built-in support in Daily Bots.
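A minimal sketch of such a route handler is shown below. The request payload fields (bot_profile, max_duration, services, config, api_keys) are meant to mirror the Daily Bots start request described in the API Reference; verify them there before building on this.

```ts
// app/api/route.ts (simplified sketch of the Client Server)
import { NextRequest, NextResponse } from "next/server";

import { defaultBotProfile, defaultServices } from "@/rtvi.config";

export async function POST(request: NextRequest) {
  // Service selection and config sent up from the Client Application
  const { services, config } = await request.json();

  const response = await fetch(process.env.DAILY_BOTS_URL!, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.DAILY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      bot_profile: defaultBotProfile,
      max_duration: 600,
      services: services ?? defaultServices,
      config,
      // Keys for services that aren't built into Daily Bots (e.g. OpenAI).
      // The demo also passes Together and Cartesia keys here, though both
      // already have built-in support.
      api_keys: {
        openai: process.env.OPENAI_API_KEY,
      },
    }),
  });

  const body = await response.json();
  return NextResponse.json(body, { status: response.status });
}
```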
4. Check out the other demos!
That should be enough for you to start finding your way around the example app. If you want to see some other interesting use cases, you can check out these hosted examples. The source code for these examples is also available in branches in the RTVI examples repo.
Talk To The Bot
A great starting place showing off how to interact with a bot and turn all the possible knobs. Built using React and NextJS.
Weather Reporter
Demonstrates using LLM Tooling features to build out more complex interactions with the bot. See the cb/function-calling branch for the source code.
Vision
Demonstrates using our vision bot profile to interact and send video frames to the bot for even more context. Check out the khk/vision-for-launch branch for the code.