Davinci
A crate for easily using the Davinci model from the OpenAI API.
This library provides a function for asking questions to the OpenAI Davinci model and getting a response.
Dependencies
This library uses three dependencies:

- reqwest: for making the API call -> 144 kB
- tokio: for managing async and await -> 625 kB
- serde: for parsing the OpenAI API response -> 77.1 kB
fn davinci
davinci is the main function, and it has 4 parameters:
- api_key -> String - The OpenAI API key. It can be obtained here.
- context -> String - The context for the question.
- question -> String - The question or phrase to ask the model.
- tokens -> i32 - The maximum number of tokens to use in the response.
context and question
The context and question are the prompt for the model.
A prompt is a text string given to a model as input that gives the model a specific task to perform.
Providing strong context (for example, a few high-quality examples of the desired behavior placed before the new input) makes it easier to obtain the desired output.
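As a sketch of how the context and question together form a single prompt, the helper below is a hypothetical illustration; the crate's internal prompt formatting is not documented here and may differ.

```rust
/// Hypothetical helper: combines a context and a question into the single
/// prompt string that would be sent to the model. The exact layout is an
/// assumption for illustration, not the crate's actual formatting.
fn build_prompt(context: &str, question: &str) -> String {
    // The context (e.g. a few high-quality examples of the desired behavior)
    // comes first, followed by the new input.
    format!("{context}\n\nQ: {question}\nA:")
}

fn main() {
    let context = "Translate English to French.\nsea otter => loutre de mer";
    let prompt = build_prompt(context, "cheese");
    println!("{prompt}");
}
```

Front-loading the examples this way is what lets the model infer the task before it sees the new question.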
tokens
tokens is the maximum number of tokens (counting the prompt) to be generated by the model.
The GPT family of models processes text using tokens, which are common sequences of characters found in text. The models understand the statistical relationships between these tokens, and excel at producing the next token in a sequence of tokens.
A token generally corresponds to ~4 characters of common English text. This translates to roughly ¾ of a word (so 100 tokens ≈ 75 words).
Keep in mind that the maximum value for tokens is 2048 (4096 in newer models).
One way to count the tokens in your prompt is to use this site.
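For a quick estimate without a tokenizer, the ~4-characters-per-token rule of thumb and the model's hard limit can be sketched as follows; the helper names are illustrative, not part of the crate.

```rust
/// Rough token estimate for common English text, using the
/// ~4-characters-per-token rule of thumb (so ~100 tokens ≈ 75 words).
fn estimate_tokens(text: &str) -> i32 {
    (text.chars().count() as f64 / 4.0).ceil() as i32
}

/// Clamp a requested `tokens` value to the model's hard limit
/// (2048 for the original models, 4096 for newer ones).
fn clamp_tokens(requested: i32, model_max: i32) -> i32 {
    requested.min(model_max)
}

fn main() {
    let prompt = "How many tokens does this short prompt use?";
    println!("~{} tokens", estimate_tokens(prompt));
    println!("{}", clamp_tokens(5000, 2048)); // prints 2048
}
```

This is only a heuristic; the tokenizer site mentioned above gives the exact count.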
Example of usage
In this quick example we use davinci to find an answer to the user's question.
```rust
use davinci;
use std::io;
```