pub struct Answer<'a> {
pub model: Model,
pub question: Cow<'a, str>,
pub examples: Vec<[Cow<'a, str>; 2]>,
pub examples_context: Cow<'a, str>,
pub documents: Vec<Cow<'a, str>>,
pub file: Option<Cow<'a, str>>,
pub search_model: Model,
pub max_rerank: u32,
pub temperature: f32,
pub logprobs: u32,
pub max_tokens: u32,
pub stop: Option<Vec<Cow<'a, str>>>,
pub n: u32,
pub logit_bias: HashMap<Cow<'a, str>, i32>,
pub return_metadata: bool,
pub return_prompt: bool,
pub expand: Vec<Cow<'a, str>>,
pub user: Cow<'a, str>,
}
Given a question, a set of documents, and some examples, the API generates an answer to the question based on the information in the set of documents. This is useful for question-answering applications on sources of truth, like company documentation or a knowledge base.
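As a sketch of how the request payload fits together (the helper functions and the example strings below are illustrative, not part of this crate), the question, examples, and documents fields are plain std types built from `Cow<'a, str>`, so callers can pass either borrowed or owned strings:

```rust
use std::borrow::Cow;

// Each example is a [question, answer] pair; `Cow<'a, str>` lets callers
// pass either borrowed &str or owned String values.
fn make_examples() -> Vec<[Cow<'static, str>; 2]> {
    vec![[
        Cow::Borrowed("What is human life expectancy in the United States?"),
        Cow::Borrowed("78 years."),
    ]]
}

// Free-text documents the answer should be derived from
// (mutually exclusive with `file`).
fn make_documents() -> Vec<Cow<'static, str>> {
    vec![Cow::Owned(String::from(
        "Life expectancy in the United States is 78 years.",
    ))]
}

fn main() {
    let examples = make_examples();
    let documents = make_documents();
    assert_eq!(examples.len(), 1);
    assert_eq!(examples[0][1], "78 years.");
    assert!(!documents.is_empty());
}
```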
Fields
model: Model
ID of the engine to use for completion. You can select one of ada, babbage, curie, or davinci.
question: Cow<'a, str>
The question to answer.
examples: Vec<[Cow<'a, str>; 2]>
A list of (question, answer) pairs that help steer the model towards the tone and answer format you would like.
examples_context: Cow<'a, str>
A text snippet containing the contextual information used to generate the answers for the examples you provide.
documents: Vec<Cow<'a, str>>
List of documents from which the answer for the input question should be derived. If this is an empty list, the question will be answered based on the question-answer examples. You should specify either documents or a file, but not both.
file: Option<Cow<'a, str>>
The ID of an uploaded file that contains documents to search over. See upload file for how to upload a file of the desired format and purpose. You should specify either documents or a file, but not both.
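Since documents and file are mutually exclusive, a caller may want to check that constraint before sending the request. A hypothetical pre-flight helper (not part of this crate; the file ID is a placeholder) could look like this:

```rust
use std::borrow::Cow;

/// Hypothetical pre-flight check (not provided by this crate): the API
/// expects `documents` or `file`, but not both.
fn validate_source(
    documents: &[Cow<str>],
    file: &Option<Cow<str>>,
) -> Result<(), &'static str> {
    match (documents.is_empty(), file.is_some()) {
        (false, true) => Err("specify either `documents` or `file`, not both"),
        _ => Ok(()),
    }
}

fn main() {
    let docs: Vec<Cow<str>> = vec![Cow::Borrowed("some document text")];
    // Documents alone is fine; documents plus a file ID is rejected.
    assert!(validate_source(&docs, &None).is_ok());
    assert!(validate_source(&docs, &Some(Cow::Borrowed("file-abc123"))).is_err());
    // Empty documents with no file is also allowed: the question is then
    // answered from the question-answer examples alone.
    assert!(validate_source(&[], &None).is_ok());
}
```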
search_model: Model
ID of the engine to use for Search. You can select one of ada, babbage, curie, or davinci.
max_rerank: u32
The maximum number of documents to be ranked by Search when using file. Setting it to a higher value leads to improved accuracy but with increased latency and cost.
temperature: f32
What sampling temperature to use. Higher values mean the model will take more risks; a value of 0 (argmax sampling) works better for scenarios with a well-defined answer.
logprobs: u32
Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. If you need more than this, please contact support@openai.com and describe your use case.
max_tokens: u32
The maximum number of tokens allowed for the generated answer.
stop: Option<Vec<Cow<'a, str>>>
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
n: u32
How many answers to generate for each question.
logit_bias: HashMap<Cow<'a, str>, i32>
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
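A minimal sketch of building this map with std types only (the token IDs below are placeholders, not real tokenizer lookups for any particular text):

```rust
use std::borrow::Cow;
use std::collections::HashMap;

// Token IDs (as strings) mapped to bias values in the range [-100, 100].
fn build_logit_bias() -> HashMap<Cow<'static, str>, i32> {
    let mut bias = HashMap::new();
    bias.insert(Cow::Borrowed("50256"), -100); // -100 effectively bans the token
    bias.insert(Cow::Borrowed("198"), 5); // small positive nudge
    bias
}

fn main() {
    let bias = build_logit_bias();
    // `Cow<'_, str>` implements `Borrow<str>`, so lookups take plain &str keys.
    assert_eq!(bias.get("50256"), Some(&-100));
    assert_eq!(bias.len(), 2);
}
```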
return_metadata: bool
A special boolean flag for showing metadata. If set to true, each document entry in the returned JSON will contain a “metadata” field. This flag only takes effect when file is set.
return_prompt: bool
If set to true, the returned JSON will include a “prompt” field containing the final prompt that was used to request a completion. This is mainly useful for debugging purposes.
expand: Vec<Cow<'a, str>>
If an object name is in the list, we provide the full information of the object; otherwise, we only provide the object ID. Currently we support completion and file objects for expansion.
user: Cow<'a, str>
A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse.