# Mini LangChain ⚡

Next-Gen LLM Framework. Built in Rust. Bindings for Python & Node.js. High performance. Low token overhead. Type safe.
## 🔮 Why Mini LangChain?

Standard frameworks are bloated. Mini LangChain is stripped down to the bare metal.
- 🚀 Blazing Fast: Core logic runs in native Rust. No GIL bottlenecks for heavy lifting.
- 💰 Token Efficient: Native token counting and automatic prompt minification.
- 🌐 Cross-Language: Write your logic in Python or Node.js; let Rust handle the heavy lifting.
- 🧠 Smart Memory: Thread-safe `ConversationBufferMemory` shared across chains.
## ⚡ Features (Ready)
| Module | Status | Description |
|---|---|---|
| 🧠 Memory | ✅ | `ConversationBufferMemory` (context preservation) |
| 📂 Loaders | ✅ | `TextLoader` & `Document` schema |
| 🔍 RAG | ✅ | `InMemoryVectorStore` & embeddings (cosine similarity) |
| 🤖 Agents | ✅ | `AgentExecutor` (zero-shot tool use) |
| ⛓️ Chains | ✅ | `LLMChain` with prompt templates |
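The RAG row above scores documents by cosine similarity. For reference, here is a standalone sketch of that ranking in plain Python (no Mini LangChain API involved):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=1):
    """Rank (text, vector) pairs by similarity to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine_similarity(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy vectors: the first document points the same way as the query.
print(top_k([1.0, 0.0], [("a", [0.9, 0.1]), ("b", [0.0, 1.0])]))  # ['a']
```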
## 🛠️ Installation
Python 🐍
Node.js 💚
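Assuming the package is published as `mini-langchain` on PyPI and npm (a hypothetical name; check the actual registry entries), installation would be:

```shell
# Package name is hypothetical
pip install mini-langchain   # Python
npm install mini-langchain   # Node.js
```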
## 💻 Usage
### 1. RAG & Vector Search 🔍

Embed documents and search by semantic similarity.
**Rust Core 🦀**

```rust
// Module paths and method names below are illustrative of the core crate's API.
use mini_langchain::InMemoryVectorStore;
use std::sync::Arc;

async fn demo() {
    // Arc makes the store a thread-safe, shareable handle.
    let store = Arc::new(InMemoryVectorStore::new());
    store.add_texts(&["Rust is memory safe 🦀"]).await;
    println!("{:?}", store.similarity_search("memory safety", 1).await);
}
```
**Python**

```python
from mini_langchain import InMemoryVectorStore, Embeddings  # import path illustrative

# 1. Initialize
store = InMemoryVectorStore(Embeddings())

# 2. Add Documents
store.add_texts(["Rust is memory safe 🦀", "Python has a GIL"])

# 3. Search
docs = store.similarity_search("Which language is memory safe?", k=1)
print(docs[0])  # "Rust is memory safe 🦀"
```
**Node.js**

```javascript
const { InMemoryVectorStore, Embeddings } = require("mini-langchain"); // package name illustrative

// 1. Initialize
const store = new InMemoryVectorStore(new Embeddings());

// 2. Add Documents
await store.addTexts(["Rust is memory safe 🦀", "Python has a GIL"]);

// 3. Search
const docs = await store.similaritySearch("Which language is memory safe?", 1);
console.log(docs[0]); // "Rust is memory safe 🦀"
```
### 2. Conversational Chains 💬

Maintain context effortlessly.
```python
from mini_langchain import LLMChain, ConversationBufferMemory, PromptTemplate  # import path illustrative

# 1. Setup
memory = ConversationBufferMemory()
prompt = PromptTemplate("{history}\nUser: {input}")
chain = LLMChain(llm=llm, prompt=prompt, memory=memory)  # `llm` is any model wrapper

# 2. Run
chain.run("My name is Alice.")
print(chain.run("What is my name?"))  # Memory remembers!
```
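The `AgentExecutor` listed under Features has no snippet yet. Conceptually, zero-shot tool use is a loop of think → act → observe; here is a toy sketch of that pattern in plain Python (a rule-based stand-in for the model, not the Mini LangChain API):

```python
def calculator(expression):
    """A tool the agent can call."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(question, observations):
    """Stand-in for a real model: pick a tool, or answer from observations."""
    if not observations and any(ch.isdigit() for ch in question):
        return ("calculator", question)     # action: call a tool
    if observations:
        return ("final", observations[-1])  # answer from the tool's output
    return ("final", "I don't know.")

def agent_executor(question, max_steps=3):
    """Zero-shot loop: think -> act -> observe until a final answer."""
    observations = []
    for _ in range(max_steps):
        action, payload = fake_llm(question, observations)
        if action == "final":
            return payload
        observations.append(TOOLS[action](payload))
    return observations[-1]

print(agent_executor("2 + 3"))  # "5"
```

A real executor replaces `fake_llm` with a model call whose prompt lists the available tools; the loop structure stays the same.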
## 🗺️ Roadmap
- Embeddings: Integration with SambaNova/OpenAI embeddings.
- Persistent Storage: SQLite/PGVector support.
- Tools: Search & Calculator implementations.