# NovelAI API

Rust API client for the NovelAI API.
## Overview

This API client gives you access to the NovelAI API; the following endpoints have been implemented.

- Current NovelAI API version: 1.0
## Installation

Add the following to `Cargo.toml` under `[dependencies]`:

```toml
novelai_api = "0.2"
```

or run:

```shell
cargo add novelai_api
```
## Documentation

Documentation can be found at:
## Example Usage

### Generating Text

```rust
use novelai_api::{api::ai_generate_text, model::*};

#[tokio::main]
async fn main() {
    let mut conf = Configuration::new();
    conf.bearer_access_token = Some("Your Token".to_string());

    let prompt = "Tell me about the lost world!".to_string();
    let response = ai_generate_text(
        &conf,
        AiGenerateRequest::new(
            prompt.clone(),
            novelai_api::model::TextModel::KayraV1,
            AiGenerateParameters::new(),
        ),
    )
    .await
    .unwrap();

    println!("{}{}", prompt, response.output);
}
```
### Generating TTS

```rust
use novelai_api::{api::*, model::Configuration};
use std::fs;

#[tokio::main]
async fn main() {
    // Split the prompt into chunks suitable for voice generation.
    let prompt = "Hello world!";
    let prompt: Vec<String> = process_string_for_voice_generation(prompt);

    let mut conf = Configuration::new();
    conf.bearer_access_token = Some("Your Token".to_string());

    for (i, chunk) in prompt.iter().enumerate() {
        let output = ai_generate_voice(&conf, chunk, "Aini", -1.0, true, "v2")
            .await
            .unwrap();
        fs::write(format!("./{}_output.webm", i), output).unwrap();
    }
}
```
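`process_string_for_voice_generation` splits a long prompt into pieces the TTS endpoint can handle one at a time. Its exact rules are internal to the crate, but a rough stand-alone sketch of sentence-based chunking conveys the idea — note that `split_into_chunks` and the character limit below are illustrative assumptions, not the crate's actual logic:

```rust
// Illustrative sketch only: split text into chunks of at most `max_len`
// characters, preferring to break at sentence boundaries. The real
// process_string_for_voice_generation in novelai_api may behave differently.
fn split_into_chunks(text: &str, max_len: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    // split_inclusive keeps the terminating punctuation on each sentence.
    for sentence in text.split_inclusive(&['.', '!', '?'][..]) {
        if !current.is_empty() && current.len() + sentence.len() > max_len {
            chunks.push(current.trim().to_string());
            current = String::new();
        }
        current.push_str(sentence);
    }
    if !current.trim().is_empty() {
        chunks.push(current.trim().to_string());
    }
    chunks
}

fn main() {
    for chunk in split_into_chunks("First sentence. Second one! A third?", 20) {
        println!("{}", chunk);
    }
}
```

Each chunk is then sent to the voice endpoint separately, as in the loop above.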
### Customizing AI Parameters

```rust
use novelai_api::{api::ai_generate_text, model::*};

#[tokio::main]
async fn main() {
    let model = novelai_api::model::TextModel::KayraV1;

    // Override only the parameters you need; the rest keep their defaults.
    let generation_parameters = AiGenerateParameters {
        temperature: Some(2.8),
        min_length: 50,
        max_length: 300,
        top_a: Some(1.0),
        ..Default::default()
    };

    let request_settings = AiGenerateRequest::new(
        "Your Prompt".to_string(),
        model,
        generation_parameters,
    );

    let mut conf = Configuration::new();
    conf.bearer_access_token = Some("Your Token".to_string());

    let output = ai_generate_text(&conf, request_settings).await.unwrap();
    println!("{}", output.output);
}
```
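The `..Default::default()` line uses Rust's struct-update syntax: every field not named explicitly is filled in from the default instance. A stand-alone stdlib illustration (the `Params` struct here is hypothetical, not a type from `novelai_api`):

```rust
// Hypothetical struct illustrating the struct-update syntax used with
// AiGenerateParameters above; not a type from novelai_api.
#[derive(Debug, Default, PartialEq)]
struct Params {
    temperature: Option<f64>,
    min_length: u32,
    max_length: u32,
}

fn main() {
    // Name only the fields you want to override; the rest take the
    // values produced by Params::default().
    let p = Params {
        temperature: Some(2.8),
        min_length: 50,
        ..Default::default()
    };
    println!("{:?}", p);
}
```

This works for any struct that implements `Default`, which is why only `temperature`, `min_length`, `max_length`, and `top_a` need to be spelled out in the example above.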