# LLM API Access
The `llm_api_access` library (available as a Rust crate and a Python package) provides a unified way to interact with different large language model (LLM) providers such as OpenAI, Gemini, and Anthropic. It aims to simplify sending messages, managing conversations, and generating embeddings across providers.
## Key Features
- Unified LLM Access: Interact with OpenAI, Gemini, and Anthropic through a consistent interface.
- Conversation Management: Easily send single messages or manage multi-turn conversations.
- Embeddings Generation: Generate text embeddings using OpenAI.
- Secure Credential Loading: Uses `dotenv` to load API keys securely from a `.env` file.
## Installation
### Rust
Add `llm_api_access` to your `Cargo.toml` so Cargo can install it from crates.io:

```toml
[dependencies]
llm_api_access = "0.1.XX" # Update this to the latest version
tokio = { version = "1.28.0", features = ["full"] } # Required for async operations
```
### Python
Install from PyPI:
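```bash
# Assumes the PyPI package name matches the crate name.
pip install llm_api_access
```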
## Loading API Credentials with dotenv
The `llm_api_access` library uses the `dotenv` library to securely load API credentials from a `.env` file in your project's root directory. This file should contain key-value pairs for each LLM provider you want to use.

Example `.env` structure:
```
OPEN_AI_ORG=your_openai_org
OPENAI_API_KEY=your_openai_api_key
GEMINI_API_KEY=your_gemini_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
```
Steps:
1. Create `.env` File: Create a file named `.env` at the root of your project directory.
2. Add API Keys: Fill in the `.env` file with the format shown above, replacing the placeholders with your actual API keys.
Important Note:

- Never commit your `.env` file to version control systems like Git; it contains sensitive information such as API keys. Add it to your `.gitignore`, as shown below.
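```gitignore
# Keep API credentials out of version control
.env
```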
## Rust Usage
The `llm_api_access` crate provides the `LLM` enum and the `Access` trait for interacting with LLMs.
### LLM Enum
This enum represents the supported LLM providers:

- `OpenAI`: Represents the OpenAI language model.
- `Gemini`: Represents the Gemini language model.
- `Anthropic`: Represents the Anthropic language model.
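In code, that corresponds to a simple enum along these lines (a sketch; any derives or attributes on the real type are omitted):

```rust
// Provider enum as described above; the actual definition may carry
// additional derives or attributes.
pub enum LLM {
    OpenAI,
    Gemini,
    Anthropic,
}
```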
### Access Trait
The `Access` trait defines asynchronous methods for interacting with LLMs:

- `send_single_message`: Sends a single message and returns the generated response.
- `send_convo_message`: Sends a list of messages as a conversation and returns the generated response.
- `get_model_info`: Gets information about a specific LLM model.
- `list_models`: Lists all available LLM models.
- `count_tokens`: Counts the number of tokens in a given text.
The `LLM` enum implements `Access`, providing a specific implementation of each method for the chosen LLM provider.
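For orientation, here is a rough sketch of the trait's shape. The method names come from the list above, but the signatures, error type, and `Message` fields are assumptions, not the crate's exact API:

```rust
// Sketch only: method names follow the docs above; parameter and
// return types here are assumptions.
pub struct Message {
    pub role: String,    // e.g. "user", "model", "system" (assumed fields)
    pub content: String,
}

pub trait Access {
    async fn send_single_message(&self, message: &str) -> Result<String, String>;
    async fn send_convo_message(&self, messages: Vec<Message>) -> Result<String, String>;
    async fn get_model_info(&self) -> Result<String, String>;
    async fn list_models(&self) -> Result<String, String>;
    async fn count_tokens(&self, text: &str) -> Result<String, String>;
}
```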
Note: Currently, `get_model_info`, `list_models`, and `count_tokens` only work for the Gemini LLM. Other providers return an error indicating that this functionality is not yet supported.
### send_single_message Example (Rust)
```rust
// Reconstructed example; the import path is an assumption based on the
// crate layout described above.
use llm_api_access::llm::{Access, LLM};

#[tokio::main]
async fn main() {
    // Credentials are loaded from the .env file described earlier.
    let llm = LLM::OpenAI;

    // Send a single message and await the generated response.
    match llm.send_single_message("Tell me a joke about programmers").await {
        Ok(response) => println!("{}", response),
        Err(err) => eprintln!("Error: {}", err),
    }
}
```
### send_convo_message Example (Rust)
```rust
// Reconstructed example; the import paths and Message fields are
// assumptions (role/content mirror the Python message format below).
use llm_api_access::llm::{Access, LLM};
use llm_api_access::models::Message;

#[tokio::main]
async fn main() {
    let llm = LLM::Gemini;

    // Build a multi-turn conversation as a list of messages.
    let messages = vec![
        Message { role: "user".into(), content: "Pretend you are a pirate.".into() },
        Message { role: "model".into(), content: "Arr matey! What be yer question?".into() },
        Message { role: "user".into(), content: "Where be the treasure?".into() },
    ];

    match llm.send_convo_message(messages).await {
        Ok(response) => println!("{}", response),
        Err(err) => eprintln!("Error: {}", err),
    }
}
```
## Python Usage
The Python package exposes two main asynchronous functions: `call_llm` for interacting with LLMs and `get_embedding` for generating OpenAI embeddings.
### call_llm Function (Python)
This function takes:
- `llm_type`: A string identifying the LLM provider (`"openai"`, `"gemini"`, or `"anthropic"`).
- `messages`: A list of dictionaries, where each dictionary represents a message with a `"role"` (e.g., `"user"`, `"model"`, `"system"`) and `"content"`.
It returns a string containing the LLM's response.
### send_single_message Example (Python)
```python
# Reconstructed example; the import path is an assumption.
import asyncio
from llm_api_access import call_llm

async def main():
    # Send a single message to OpenAI
    messages = [{"role": "user", "content": "Tell me a joke about programmers"}]
    response = await call_llm("openai", messages)
    print(response)

asyncio.run(main())
```
### send_convo_message Example (Python)
```python
# Reconstructed example; the import path is an assumption.
import asyncio
from llm_api_access import call_llm

async def main():
    # Send a conversation to Gemini
    messages = [
        {"role": "user", "content": "Pretend you are a pirate."},
        {"role": "model", "content": "Arr matey! What be yer question?"},
        {"role": "user", "content": "Where be the treasure?"},
    ]
    response = await call_llm("gemini", messages)
    print(response)

asyncio.run(main())
```
## Embeddings
The library provides support for generating text embeddings through the OpenAI API.
### OpenAI Embeddings (Rust)
The `openai` module includes functionality to generate vector embeddings:

```rust
// Signature sketch reconstructed from the description below; the exact
// numeric and error types are assumptions.
pub async fn get_embedding(input: &str, dimensions: Option<u32>) -> Result<Vec<f32>, Box<dyn std::error::Error>>
```
This function takes:
- `input`: The text to generate embeddings for.
- `dimensions`: An optional parameter specifying the number of dimensions (if omitted, the model default is used).

It returns a vector of floating-point values representing the text embedding.
Example Usage (Rust):
```rust
// Reconstructed example; the import path is an assumption based on the
// `openai` module mentioned above.
use llm_api_access::openai::get_embedding;

#[tokio::main]
async fn main() {
    // Generate an embedding with the model's default dimensions.
    match get_embedding("The quick brown fox", None).await {
        Ok(embedding) => println!("First 5 values: {:?}", &embedding[..5]),
        Err(err) => eprintln!("Error: {}", err),
    }
}
```
The `get_embedding` function uses the `text-embedding-3-small` model by default and requires the same environment variables as other OpenAI API calls (`OPENAI_API_KEY` and `OPEN_AI_ORG`).
### get_embedding Function (Python)
This function takes:
- `input`: The text to generate embeddings for.
- `dimensions`: An optional integer specifying the number of dimensions (if `None`, the model default is used).
It returns a list of floating-point values representing the text embedding.
Example Usage (Python):
```python
# Reconstructed example; the import path is an assumption.
import asyncio
from llm_api_access import get_embedding

async def main():
    # Generate an embedding with default dimensions
    embedding = await get_embedding("The quick brown fox")
    # Print first 5 elements
    print(embedding[:5])

    # Generate an embedding with custom dimensions
    small_embedding = await get_embedding("The quick brown fox", dimensions=64)
    assert len(small_embedding) == 64

asyncio.run(main())
```
## Testing
The `llm_api_access` library includes unit tests that showcase usage and expected behavior with the different LLM providers and the embedding functionality.
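If you have cloned the repository, you can run the Rust tests with Cargo; since the tests exercise the providers, the credentials from your `.env` file are likely required:

```bash
# Run the library's test suite; tests that hit provider APIs will need
# the .env credentials described above.
cargo test
```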