# Offline Intelligence Library
A high-performance library for offline AI inference with context management, memory optimization, and multi-format model support.
## Overview
Offline Intelligence Library provides developers with powerful tools for running large language models locally without internet connectivity. The library offers intelligent context management, memory optimization, and hybrid search capabilities across multiple programming languages.
## Features
- Offline AI Inference: Run LLMs locally without internet connection
- Context Management: Intelligent conversation context optimization
- Memory Search: Hybrid semantic and keyword search across conversations
- Multi-format Support: Support for GGUF, GGML, ONNX, TensorRT, and Safetensors models
- Cross-platform: Works on Windows, macOS, and Linux
- Multi-language: Native bindings for Python, Java, JavaScript/Node.js, and C++
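The hybrid memory search above blends two signals: literal keyword overlap and embedding (semantic) similarity. As a rough, library-independent sketch of the idea — function names, the whitespace tokenizer, and the 50/50 weighting are illustrative, not the library's actual API:

```python
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that literally appear in the document."""
    terms = query.lower().split()
    doc_terms = set(doc.lower().split())
    return sum(t in doc_terms for t in terms) / len(terms)

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query, doc, query_emb, doc_emb, alpha=0.5):
    """Blend semantic and keyword scores; alpha weights the semantic side."""
    return alpha * cosine(query_emb, doc_emb) + (1 - alpha) * keyword_score(query, doc)
```

Ranking conversation snippets by `hybrid_score` lets exact-term matches and paraphrases both surface, which is the usual motivation for hybrid retrieval.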
## Supported Languages
- Python (PyO3 bindings)
- Java (JNI bindings)
- JavaScript/Node.js (N-API bindings)
- C++ (C FFI bindings)
- Rust (Native library)
## Quick Start

### Prerequisites
- Rust toolchain (latest stable)
- For language-specific bindings, see individual requirements below
### Building

#### Using Build Scripts

```bash
# Linux/macOS (script name illustrative — use the script shipped in the repo)
./build.sh

# Windows
build.bat

# Using Make (Linux/macOS)
make
```
#### Manual Build

```bash
# Build core library
cargo build --release

# Build specific language bindings
# (run each from the repository root; binding paths are illustrative)
cd bindings/python && cargo build --release
cd bindings/java && cargo build --release
cd bindings/nodejs && cargo build --release
cd bindings/cpp && cargo build --release
```
## Language-Specific Usage
### Python

```python
# Module and method names are illustrative — check the generated
# PyO3 bindings for the exact API.
import offline_intelligence

oi = offline_intelligence.OfflineIntelligence()
messages = [{"role": "user", "content": "Hello"}]
result = oi.optimize_context(messages)
```
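To make "context optimization" concrete, here is a toy, library-independent sketch of one common strategy — trimming the oldest messages until the conversation fits a token budget. The function name and the whitespace token counter are illustrative, not the library's actual algorithm:

```python
def optimize_context(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    """Drop the oldest messages until the conversation fits the budget.

    `count_tokens` is a whitespace stand-in for a real tokenizer.
    """
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # discard the oldest message first
    return kept
```

Real implementations often also pin system prompts or summarize dropped turns, but the budget-trimming loop is the core of the idea.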
### Java

```java
// Package and method names are illustrative — check the JNI bindings
// for the exact API.
import com.offlineintelligence.OfflineIntelligence;

OfflineIntelligence oi = new OfflineIntelligence();
Message[] messages = { new Message("user", "Hello") };
OptimizationResult result = oi.optimizeContext(messages);
```
### JavaScript/Node.js

```javascript
// Module and method names are illustrative — check the N-API bindings
// for the exact API. Run inside an async context.
const { OfflineIntelligence } = require('offline-intelligence');

const oi = new OfflineIntelligence();
const messages = [{ role: 'user', content: 'Hello' }];
const result = await oi.optimizeContext(messages);
```
### C++

```cpp
// Header and method names are illustrative — the exact API is defined
// by the C FFI headers shipped with the library.
#include <vector>
#include <offline_intelligence.h>

using namespace offline_intelligence;

OfflineIntelligence oi;
std::vector<Message> messages = { {"user", "Hello"} };
auto result = oi.optimize_context(messages);
```
## Configuration
Set environment variables before using the library:
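For example (the variable names below are hypothetical — substitute the ones the library actually reads):

```shell
# Hypothetical variable names, for illustration only.
export OFFLINE_INTELLIGENCE_MODEL_PATH=/path/to/model.gguf
export OFFLINE_INTELLIGENCE_CACHE_DIR=~/.cache/offline-intelligence
```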
## Documentation

## Development

### Running Tests
```bash
# Run all tests
cargo test

# Run tests for specific crate (crate path illustrative)
cd crates/core && cargo test
```
### Code Formatting

```bash
# Format all code
cargo fmt --all

# Check formatting
cargo fmt --all -- --check
```
### Linting

```bash
# Run clippy
cargo clippy
```
## Contributing

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
## Acknowledgments
- Built with Rust for performance and reliability
- Uses various ML frameworks for model support
- Inspired by the need for offline AI capabilities