//! Use the davinci model from the OpenAI API
//!
//! This library provides a function for asking questions to the OpenAI Davinci model and getting a response.
//! # davinci
//! `davinci` is the main function, and it has 4 parameters:
//! * `api_key` -> String - This is the OpenAI API key.
//! It can be obtained [here](https://beta.openai.com/account/api-keys)
//! * `context` -> String - The context for the question.
//! * `question` -> String - The question or phrase to ask the model.
//! * `tokens` -> i32 - The maximum number of tokens to use in the response.
//!
//! ## `context` and `question`
//! The `context` and `question` are the prompt for the model.
//! A prompt is a text string given to a model as input that gives the model a specific task to perform.
//!
//! Providing strong context to the model
//! (for example, by giving a few high-quality examples of the desired behavior before the new input)
//! makes it easier to obtain the output you want.
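//!
//! For example, a few-shot context might pair a behavior description with a
//! couple of example exchanges (the exact wording below is only illustrative):
//!
//! ```
//! // A few-shot context: example Q/A pairs demonstrating the desired style.
//! let context = String::from(
//!     "Answer concisely.\n\
//!      Q: What is the capital of France?\nA: Paris.\n\
//!      Q: What is the capital of Japan?\nA: Tokyo.",
//! );
//! assert!(context.contains("Paris"));
//! ```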
//!
//! ## `tokens`
//! `tokens` is the maximum number of tokens a model will generate, counting the prompt.
//!
//! The GPT family of models processes text using tokens, which are common sequences of characters found in text.
//! The models understand the statistical relationships between these tokens, and excel at producing the next token in a sequence.
//!
//! A token generally corresponds to ~4 characters of common English text.
//! This translates to roughly ¾ of a word (so 100 tokens ~= 75 words).
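//!
//! Assuming that heuristic, you can make a rough estimate of a prompt's token
//! count from its character count (the real tokenizer may differ):
//!
//! ```
//! // Rough estimate only; actual tokenization is model-specific.
//! fn approx_tokens(text: &str) -> usize {
//!     (text.chars().count() + 3) / 4
//! }
//! assert_eq!(approx_tokens("Hello, world"), 3);
//! ```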
//!
//! Keep in mind that the highest allowed value for `tokens` is 2048 (4096 in newer models).
//!
//! One way to count the tokens in your prompt is to use [this site](https://beta.openai.com/tokenizer).
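//!
//! Since the limit includes the prompt, one option is to clamp a requested
//! value before calling the model (a minimal sketch, assuming the 2048-token
//! limit above):
//!
//! ```
//! // Clamp a requested token budget to the model's maximum.
//! let requested: i32 = 5000;
//! let tokens: i32 = requested.min(2048);
//! assert_eq!(tokens, 2048);
//! ```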
//!
//! ## Example of usage
//! In this quick example we use `davinci` to answer a user's question.
//!
//! ```no_run
//! use davinci::davinci;
//! use std::io;
//!
//! // `davinci` is async, so an async runtime (here, tokio) is required.
//! #[tokio::main]
//! async fn main() {
//!     let api: String = String::from("vj-JZkjskhdksKXOlncknjckukNKKnkJNKJNkNKNk");
//!     let max_tokens: i32 = 100;
//!     let context: String =
//!         String::from("The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?");
//!     println!("What is your question?");
//!     // Read the user's question from standard input.
//!     let mut question: String = String::new();
//!     io::stdin()
//!         .read_line(&mut question)
//!         .expect("Error, you have to write something!");
//!     let response: String = match davinci(api, context, question, max_tokens).await {
//!         Ok(res) => res,
//!         Err(error) => error.to_string(),
//!     };
//!     println!("{}", response);
//! }
//! ```
//!
// The original import paths were lost; an HTTP client and JSON support
// (reconstructed here as `reqwest` and `serde_json`) are the likely dependencies.
use reqwest;
use serde_json;
/// # Parameters
///
/// * `api_key` - The OpenAI API key.
/// An invalid key will cause the request to fail with an error.
/// * `context` - The context for the question.
/// Good context is important for good responses, as it tells the model how it should behave.
/// * `question` - The question or phrase to ask the model.
/// * `tokens` - The maximum number of tokens to use in the response.
///
/// # Returns
///
/// Returns the model's response as `Ok(String)`, or an error if the request fails.
///
pub async