# AGENTS.md - Agentic Coding Guidelines for langchainrust

This document provides guidelines for AI agents working on this codebase.

## Project Overview

`langchainrust` is a LangChain-inspired framework for building LLM applications in Rust. It provides:
- LLM integrations (OpenAI-compatible, Qwen)
- Prompt templates (PromptTemplate, ChatPromptTemplate)
- Chains (PromptChain, SequentialChain)
- Tools and Agents (ReActAgent)
- Memory management
- Retrieval components (experimental)

## Build, Lint, and Test Commands

### Build

```bash
cargo build           # Debug build
cargo build --release # Release build
```

### Lint and Format

```bash
cargo clippy         # Run clippy lints
cargo clippy -- -D warnings  # Treat warnings as errors
cargo fmt            # Format code
cargo fmt -- --check # Check formatting without modifying
```

### Test

```bash
cargo test           # Run all tests
cargo test <name>    # Run tests matching <name>
cargo test -- --nocapture  # Show println output during tests
```

Run exactly one test (`--exact` matches the full test name instead of treating it as a substring filter):
```bash
cargo test test_function_name -- --exact --nocapture
```

### Documentation

```bash
cargo doc            # Generate documentation
cargo doc --open     # Generate and open docs
cargo doc --no-deps  # Generate docs without dependencies
```

### Misc

```bash
cargo check          # Check code without building
cargo update         # Update dependencies
cargo tree           # Show dependency tree
```

## Code Style Guidelines

### Formatting

- Use `cargo fmt` for automatic formatting (Rust default style)
- 4-space indentation (Rust standard)
- Maximum line length: 100 characters (default rustfmt)
- Use trailing commas in structs and enums

### Imports

- Use absolute imports via `crate::` for internal modules
- Group imports by crate: std → external → crate
- Use `use` statements rather than full paths in functions
- Example:
```rust
use std::collections::HashMap;
use std::error::Error;

use futures_util::stream::{Stream, StreamExt};
use serde::Deserialize;

use crate::prompts::ChatPromptTemplate;
```

### Naming Conventions

- **Structs/Enums**: PascalCase (e.g., `OpenAIConfig`, `Message`)
- **Functions/Methods**: snake_case (e.g., `invoke_chat_template`)
- **Variables**: snake_case (e.g., `api_key`, `max_concurrent`)
- **Traits**: PascalCase (e.g., `IntoApiMessage`)
- **Constants/Statics**: SCREAMING_SNAKE_CASE for both `const` and `static` items (clippy's `non_upper_case_globals` lint enforces this)
- **Modules**: snake_case (e.g., `mod prompts;`)

### Types

- Use explicit types in public APIs
- Prefer standard library types where possible
- Use `Box<dyn Error>` for error types in async functions
- Use generics appropriately for reusable code
- Derive `Clone`, `Debug`, `Serialize`, `Deserialize` where appropriate

### Error Handling

- Use `Result<T, Box<dyn std::error::Error>>` for async fallible functions
- Use `?` operator for error propagation
- Create custom error types for domain-specific errors
- Include context in error messages:
```rust
.map_err(|e| format!("Template formatting failed: {}", e))?
```

### Async Code

- Use `tokio` for async runtime (already in dependencies)
- Use `async-trait` for async methods in traits
- Handle streaming responses with `Stream` from `futures_util`
- Example streaming pattern:
```rust
type TokenStream = Pin<Box<dyn Stream<Item = Result<String, Box<dyn Error>>> + Send>>;
```

### Module Organization

- Public API in `src/lib.rs` - re-export all public modules
- One module per directory (e.g., `mod prompts;` in `src/prompts/mod.rs`)
- Use `pub use` for re-exporting types
- Feature-gated modules if needed:
```rust
#[cfg(feature = "experimental")]
pub mod retrieval;
```

### Documentation

- Document public APIs with doc comments (`///`)
- Include usage examples in complex functions
- Document configuration options in structs

### Testing

- Integration tests in `tests/` directory
- Unit tests in `src/` using `#[cfg(test)]` module
- Use `#[tokio::test]` for async tests
- Use descriptive test names describing the behavior being tested

### Dependencies

- Keep dependencies minimal
- Use `reqwest` with features for HTTP
- Use `serde` with derive feature for serialization
- Avoid introducing new heavy dependencies without discussion

### Security

- Never log or commit API keys
- Use environment variables for secrets
- Validate API keys at runtime

### Git Conventions

- Commit messages: imperative mood ("Add feature" not "Added feature")
- Keep commits atomic and focused
- No commit of secrets or credentials

## Common Patterns

### Creating an LLM Client

```rust
use crate::llms::{LLM, OpenAIConfig};

let config = OpenAIConfig {
    api_key: std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY must be set"),
    base_url: "https://api.openai.com/v1".to_string(),
    model: "gpt-4".to_string(),
    streaming: false,
};
let llm = LLM::new(config);
```

### Using Prompt Templates

```rust
use crate::prompts::ChatPromptTemplate;
use crate::messages::Message;

let template = ChatPromptTemplate::from_messages([
    Message::system("You are a helpful assistant"),
    Message::human("{question}"),
]);

let values = std::collections::HashMap::from([("question", "What is Rust?")]);

let result = llm.invoke_chat_template(&template, &values).await?;
```

### Running an Agent

```rust
use crate::agent::{Agent, AgentExecutor};
use crate::tools::{Tool, Calculator};

// `tools`, `prompt_template`, and `memory` are constructed elsewhere.
let agent = Agent::new(tools, prompt_template);
let executor = AgentExecutor::new(agent);
let result = executor.run("Your question here", memory).await?;
```