Struct spider::features::openai_common::GPTConfigs
pub struct GPTConfigs {
pub prompt: Prompt,
pub model: String,
pub max_tokens: u16,
pub temperature: Option<f32>,
pub user: Option<String>,
pub top_p: Option<f32>,
pub prompt_url_map: Option<HashMap<CaseInsensitiveString, Self>>,
pub extra_ai_data: bool,
pub paths_map: bool,
pub screenshot: bool,
pub api_key: Option<String>,
pub cache: Option<AICache>,
}
The GPT configs to use for dynamic JavaScript execution and other functionality.
Fields
prompt: Prompt
The prompt to use for the chat. Example: Search for movies. This attempts to generate the code required to perform the action on the page.
model: String
The model to use. Example: gpt-4-1106-preview or gpt-3.5-turbo-16k
max_tokens: u16
The max tokens to use for the request.
temperature: Option<f32>
The sampling temperature, between 0 and 2.
user: Option<String>
The user for the request.
top_p: Option<f32>
The top_p (nucleus sampling) value for the request.
prompt_url_map: Option<HashMap<CaseInsensitiveString, Self>>
Prompts to use for certain URLs. If this is set, only the URLs that match exactly are run.
extra_ai_data: bool
Extra data: this merges the prompts and tries to get the content for you. Example: extracting data from the page.
paths_map: bool
Map to paths. If prompt_url_map has a key called /blog, then all blog pages found under it, like /blog/something, use the same prompt unless an exact match is found.
screenshot: bool
Take a screenshot of the page after each JS script execution. The screenshot is stored as a base64-encoded string.
api_key: Option<String>
The API key to use for the request.
cache: Option<AICache>
Use caching to cache the prompt. This does nothing unless the `cache_openai` feature flag is enabled.
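The interplay of prompt_url_map and paths_map described above can be illustrated with a small self-contained sketch. Note this is a hypothetical re-implementation of the documented lookup order, not the crate's actual code: an exact path match always wins, and with paths_map enabled a page like /blog/something falls back to the prompt registered under /blog.

```rust
use std::collections::HashMap;

// Hypothetical sketch of the documented lookup order: an exact match in the
// prompt map wins; with `paths_map` enabled, "/blog/something" falls back to
// the prompt registered under "/blog".
fn resolve_prompt<'a>(
    prompts: &'a HashMap<String, String>,
    path: &str,
    paths_map: bool,
) -> Option<&'a str> {
    // Exact match always takes priority.
    if let Some(p) = prompts.get(path) {
        return Some(p.as_str());
    }
    if paths_map {
        // Walk up the path segments: "/blog/a/b" -> "/blog/a" -> "/blog".
        let mut current = path;
        while let Some(idx) = current.rfind('/') {
            if idx == 0 {
                break;
            }
            current = &current[..idx];
            if let Some(p) = prompts.get(current) {
                return Some(p.as_str());
            }
        }
    }
    None
}

fn main() {
    let mut prompts = HashMap::new();
    prompts.insert("/blog".to_string(), "Extract the post title.".to_string());
    prompts.insert("/blog/special".to_string(), "Extract the author.".to_string());

    // Exact match wins over the "/blog" prefix.
    assert_eq!(
        resolve_prompt(&prompts, "/blog/special", true),
        Some("Extract the author.")
    );
    // Prefix fallback only applies when paths_map is enabled.
    assert_eq!(
        resolve_prompt(&prompts, "/blog/2024/hello", true),
        Some("Extract the post title.")
    );
    assert!(resolve_prompt(&prompts, "/blog/2024/hello", false).is_none());
}
```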
Implementations
impl GPTConfigs
pub fn new(model: &str, prompt: &str, max_tokens: u16) -> GPTConfigs
GPTConfigs for OpenAI Chrome dynamic scripting.
pub fn new_cache(
    model: &str,
    prompt: &str,
    max_tokens: u16,
    cache: Option<AICache>
) -> GPTConfigs
GPTConfigs for OpenAI Chrome dynamic scripting with caching.
pub fn new_multi<I, S>(model: &str, prompt: I, max_tokens: u16) -> GPTConfigs
GPTConfigs for OpenAI Chrome dynamic scripting with multi-chain prompts.
pub fn new_multi_cache<I, S>(
    model: &str,
    prompt: I,
    max_tokens: u16,
    cache: Option<AICache>
) -> GPTConfigs
GPTConfigs for OpenAI Chrome dynamic scripting with multi-chain prompts and prompt caching. The feature flag `cache_openai` is required.
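As a rough configuration sketch of the constructors above (this assumes the spider crate is compiled with its OpenAI feature; the model name and prompt strings are illustrative, and whether `new_multi` accepts a `Vec<&str>` depends on its actual trait bounds):

```rust
// Configuration sketch; assumes the spider crate with its OpenAI feature.
use spider::features::openai_common::GPTConfigs;

fn main() {
    // Single prompt, capped at 512 completion tokens.
    let _single = GPTConfigs::new("gpt-4-1106-preview", "Search for movies.", 512);

    // Chained prompts executed in order against the page.
    let _multi = GPTConfigs::new_multi(
        "gpt-4-1106-preview",
        vec!["Accept the cookie banner.", "Search for movies."],
        512,
    );
}
```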
Trait Implementations
impl Clone for GPTConfigs
fn clone(&self) -> GPTConfigs
fn clone_from(&mut self, source: &Self)