Anthropic
- Supported service: llm
- Key: anthropic
- Integrated: Yes
Service options

model
The model that will complete your prompt. Supported models are:
- claude-3-5-sonnet-20241022
- claude-3-5-sonnet-20240620
- claude-3-5-sonnet-latest
- claude-3-5-haiku-20241022
- claude-3-5-haiku-latest
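As a rough illustration, selecting the Anthropic service and one of the models above might look like the following. This is a hedged sketch of a bot configuration, not the definitive request shape; verify field names against the current Daily Bots docs.

```python
# Hypothetical sketch: select Anthropic as the LLM service and pick a
# model from the supported list above. The exact request shape may
# differ in your Daily Bots version.
bot_config = {
    "services": {"llm": "anthropic"},
    "config": [
        {
            "service": "llm",
            "options": [
                {"name": "model", "value": "claude-3-5-sonnet-20241022"},
            ],
        }
    ],
}
```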
Configuration options

model
The model that will complete your prompt. Supported models are:
- claude-3-5-sonnet-20241022
- claude-3-5-sonnet-20240620
- claude-3-5-sonnet-latest
- claude-3-5-haiku-20241022
- claude-3-5-haiku-latest

max_tokens
The maximum number of tokens to generate before stopping.

temperature
Amount of randomness injected into the response. Ranges from 0.0 to 1.0. Use a temperature closer to 0.0 for analytical / multiple-choice tasks, and closer to 1.0 for creative and generative tasks. Note that even with a temperature of 0.0, the results will not be fully deterministic.

top_k
Only sample from the top K options for each subsequent token. Used to remove "long tail" low-probability responses; see the Anthropic API documentation for technical details. Recommended for advanced use cases only. You usually only need to use temperature.

top_p
Use nucleus sampling. In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches the probability specified by top_p. You should alter either temperature or top_p, but not both. Recommended for advanced use cases only. You usually only need to use temperature.

extra
A dictionary that can contain any additional parameters supported by Anthropic that you want to pass to the API. Refer to the Anthropic docs for more information on each of these configuration options.
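The options above can be combined in a single service configuration. The sketch below is an assumption about the name/value option format (including the "extra" dictionary); check it against the Daily Bots configuration docs before relying on it.

```python
# Hypothetical sketch of Anthropic LLM configuration options as
# name/value pairs. Parameter names follow the fields described above.
llm_config = {
    "service": "llm",
    "options": [
        {"name": "model", "value": "claude-3-5-haiku-20241022"},
        {"name": "max_tokens", "value": 1024},
        # Closer to 0.0 for analytical tasks, closer to 1.0 for creative ones.
        {"name": "temperature", "value": 0.2},
        # Any additional Anthropic-supported parameters go in "extra".
        {"name": "extra", "value": {"stop_sequences": ["\n\nHuman:"]}},
    ],
}
```

Per the note above, set either temperature or top_p, not both; this sketch uses temperature only.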
Function Calling

Refer to Anthropic's function calling documentation for details on defining and using tools. For more information on how to use function calling in Daily Bots, see the tutorial page.
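For orientation, an Anthropic tool definition follows the shape below: a name, a description, and a JSON Schema for the inputs. The weather tool itself is purely illustrative.

```python
# A minimal Anthropic tool definition, following the tool format from
# Anthropic's function calling docs. "get_weather" is an illustrative
# example, not a built-in tool.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a given location.",
    "input_schema": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and state, e.g. San Francisco, CA",
            }
        },
        "required": ["location"],
    },
}
```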