Google Gemini
- Supported service: llm
- Key: gemini
- Integrated: No. See BYO Keys for more details.
Service options
model
The model that will complete your prompt. See the available Gemini models here.
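For example, a bot start request might select the service and model like this. This is a minimal sketch: the service_options field shape, the bot_profile value, and the gemini-1.5-flash model name are assumptions, not taken from this page.

```typescript
// Minimal sketch of a bot start request selecting Google Gemini.
// The service_options shape and the model name are assumptions.
const startRequest = {
  bot_profile: "voice_2024_10",            // placeholder profile
  services: { llm: "gemini" },             // the service key shown above
  service_options: {
    gemini: { model: "gemini-1.5-flash" }, // hypothetical model name
  },
};
```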
Configuration options
model
The model that will complete your prompt. See the available Gemini models here.
max_tokens
The maximum number of tokens to generate before stopping. See the Gemini docs for more information.
temperature
Amount of randomness injected into the response.
Use a temperature closer to the low end of the range for analytical or multiple-choice tasks, and closer to the high end for creative and generative tasks.
Note that even with a temperature of 0.0, the results will not be fully deterministic.
See the Gemini docs for the temperature range supported by each model.
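As a rough illustration of that guidance, using the { name, value } option shape shown in the configuration sketch later in this section (the exact values are arbitrary):

```typescript
// Hypothetical temperature settings for two kinds of tasks.
const analytical = { name: "temperature", value: 0.1 }; // multiple choice, extraction
const creative = { name: "temperature", value: 0.9 };   // brainstorming, story writing
```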
top_k
Only sample from the top K options for each subsequent token. Used to remove “long tail” low-probability responses. Learn more technical details here.
Recommended for advanced use cases only. You usually only need to use temperature.
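The effect of top K can be sketched as a simple filter over the candidate-token distribution (illustrative only; the API applies this server-side):

```typescript
// Illustrative top_k filter: keep only the K most probable candidate tokens.
function topKFilter(probs: Record<string, number>, k: number): string[] {
  return Object.entries(probs)
    .sort((a, b) => b[1] - a[1]) // descending by probability
    .slice(0, k)
    .map(([token]) => token);
}
```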
top_p
Use nucleus sampling.
In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches the probability specified by top_p. You should alter either temperature or top_p, but not both. See the Gemini docs for more information.
Recommended for advanced use cases only. You usually only need to use temperature.
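The cutoff described above can be sketched in a few lines (again illustrative; the API performs this server-side):

```typescript
// Illustrative nucleus (top_p) cutoff: keep the most probable tokens whose
// cumulative probability first reaches top_p; sampling happens among these.
function nucleusFilter(probs: Record<string, number>, topP: number): string[] {
  const sorted = Object.entries(probs).sort((a, b) => b[1] - a[1]);
  const kept: string[] = [];
  let cumulative = 0;
  for (const [token, p] of sorted) {
    kept.push(token);
    cumulative += p;
    if (cumulative >= topP) break; // cut off once top_p is reached
  }
  return kept;
}
```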
extra
A dictionary that can contain any additional parameters supported by Gemini that you want to pass to the API. Refer to the Gemini reference docs for more information on these configuration options.
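Putting the options together, a Gemini configuration block might look like the sketch below. The { service, options } shape follows the config format used across Daily Bots services; the model name and the contents of extra are hypothetical.

```typescript
// Hedged sketch of a complete Gemini configuration entry.
// Option names follow this page; all values are placeholders.
const config = [
  {
    service: "llm",
    options: [
      { name: "model", value: "gemini-1.5-flash" }, // hypothetical model name
      { name: "max_tokens", value: 1024 },
      { name: "temperature", value: 0.7 },
      // Alter either temperature or top_p, not both:
      // { name: "top_p", value: 0.9 },
      { name: "top_k", value: 40 },
      // Anything Gemini supports can pass through via extra (contents hypothetical):
      { name: "extra", value: { candidateCount: 1 } },
    ],
  },
];
```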
Function Calling
Gemini’s function calling documentation is located here.
For more info on how to use function calling in Daily Bots, take a look at the tutorial page.
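As a starting point, a tool definition for Gemini might be added to the LLM config as sketched below. The function_declarations shape follows Gemini's function calling format; the tools option name and the get_weather function are assumptions for illustration.

```typescript
// Hedged sketch: registering a function with the Gemini service.
// The "tools" option name and get_weather are hypothetical.
const llmWithTools = {
  service: "llm",
  options: [
    { name: "model", value: "gemini-1.5-flash" },
    {
      name: "tools",
      value: [
        {
          function_declarations: [
            {
              name: "get_weather",
              description: "Get the current weather for a city.",
              parameters: {
                type: "object",
                properties: { city: { type: "string" } },
                required: ["city"],
              },
            },
          ],
        },
      ],
    },
  ],
};
```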