Represents the parameters for a base chat model.

modelName: Model name to use.
n: Number of completions to generate for each prompt.
streaming: Whether to stream the results or not. Enabling streaming disables tokenUsage reporting.
temperature: Sampling temperature to use.
topP: Total probability mass of tokens to consider at each step.
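As a rough sketch of how these core fields fit together, the following assumes this page documents the ChatTogetherAI class from @langchain/community (the import path and model id are assumptions; substitute whatever your installed version exports):

import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";

const chat = new ChatTogetherAI({
  modelName: "mistralai/Mixtral-8x7B-Instruct-v0.1", // hypothetical model id
  temperature: 0.7, // sampling temperature
  topP: 0.9,        // total probability mass considered at each step
  n: 1,             // completions generated per prompt
  streaming: false, // enabling streaming disables tokenUsage reporting
});

const res = await chat.invoke("Say hello in one sentence.");
console.log(res.content);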
Optional cache
Optional callbackManager: Use callbacks instead. This feature is deprecated, is not recommended for use, and will be removed in the future.
Optional callbacks
Optional logprobs: Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
Optional maxConcurrency: The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.
Optional maxRetries: The maximum number of retries that can be made for a single call, with exponential backoff between each attempt. Defaults to 6.
Optional maxTokens: Maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size.
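For illustration, the three budget knobs above might be combined like this (a sketch under the same ChatTogetherAI assumption as before):

import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";

const chat = new ChatTogetherAI({
  modelName: "mistralai/Mixtral-8x7B-Instruct-v0.1", // hypothetical model id
  maxConcurrency: 5, // at most five requests in flight at once
  maxRetries: 6,     // the documented default, with exponential backoff
  maxTokens: 256,    // cap the completion; -1 would fill the remaining context
});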
Optional metadata
Optional modelKwargs: Holds any additional parameters that are valid to pass to openai.createCompletion and are not explicitly specified on this class.
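Anything the provider accepts but this class does not model as a first-class field can be forwarded through modelKwargs. A sketch; the key shown is an assumption about what the upstream completion endpoint accepts:

import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";

const chat = new ChatTogetherAI({
  modelName: "mistralai/Mixtral-8x7B-Instruct-v0.1", // hypothetical model id
  modelKwargs: {
    repetition_penalty: 1.2, // assumed upstream parameter, forwarded verbatim
  },
});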
Optional onFailedAttempt: Custom handler for failed attempts. Takes the originally thrown error object as input, and should itself throw an error if the input error is not retryable.
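A minimal handler might rethrow errors that cannot succeed on retry and let everything else fall through to the backoff logic. A sketch, with the 401 check standing in for whatever your code treats as non-retryable:

import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";

const chat = new ChatTogetherAI({
  modelName: "mistralai/Mixtral-8x7B-Instruct-v0.1", // hypothetical model id
  onFailedAttempt: (error) => {
    // Rethrowing marks the error as non-retryable and aborts the retry loop.
    if (String(error).includes("401")) {
      throw error;
    }
    // Returning normally lets the next backed-off retry proceed.
  },
});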
Optional prefixMessages: ChatGPT messages to pass as a prefix to the prompt.
Optional stop: List of stop words to use when generating.
Optional tags
Optional timeout: Timeout to use when making requests to OpenAI.
Optional togetherAIApiKey: The TogetherAI API key to use for requests. Defaults to process.env.TOGETHER_AI_API_KEY.
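The key can come from the environment or be injected explicitly, for example when credentials are managed by a secrets store. A sketch:

import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";

// Omitting the field falls back to process.env.TOGETHER_AI_API_KEY.
const fromEnv = new ChatTogetherAI({
  modelName: "mistralai/Mixtral-8x7B-Instruct-v0.1", // hypothetical model id
});

// Or pass the key directly.
const explicit = new ChatTogetherAI({
  modelName: "mistralai/Mixtral-8x7B-Instruct-v0.1",
  togetherAIApiKey: "tok-...", // placeholder value
});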
Optional topLogprobs: An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
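Because topLogprobs only takes effect when logprobs is true, the two are set together. A sketch:

import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";

const chat = new ChatTogetherAI({
  modelName: "mistralai/Mixtral-8x7B-Instruct-v0.1", // hypothetical model id
  logprobs: true, // must be true for topLogprobs to apply
  topLogprobs: 3, // return the 3 most likely tokens at each position
});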
Optional userUnique string identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
Optional verbose