# OxiFY - LLM Workflow Orchestration Platform
OxiFY is a graph-based LLM workflow orchestration platform built in Rust, designed to compose complex AI applications using directed acyclic graphs (DAGs). This meta-crate provides unified access to all OxiFY components.
## Features
- **Graph-Based Workflows**: Define LLM applications as visual DAGs
- **Type-Safe Execution**: Compile-time guarantees for workflow structure
- **Multi-Provider Support**: OpenAI, Anthropic, local models, and more
- **Vector Database Integration**: Qdrant, in-memory vector search for RAG workflows
- **Vision/OCR Processing**: Multi-provider OCR with Tesseract, Surya, PaddleOCR
- **MCP Support**: Native support for the Model Context Protocol
- **REST API**: Full-featured API for workflow management
- **Pure Rust**: No C/Fortran dependencies (COOLJAPAN Policy)
## Quick Start

Add OxiFY to your `Cargo.toml`:

```toml
[dependencies]
oxify = "0.1"
tokio = { version = "1", features = ["full"] }
```
### Using the Prelude

The prelude provides convenient access to commonly used types:

```rust
use oxify::prelude::*;
```
### Direct Module Access

You can also access individual modules directly:

```rust
use oxify::model;
use oxify::vector;
use oxify::engine;
use oxify::connect_llm;
```
## Module Overview

This meta-crate re-exports all OxiFY library crates:

| Module | Crate | Description |
|---|---|---|
| `model` | `oxify-model` | Domain models for workflows, nodes, edges, and execution |
| `vector` | `oxify-vector` | High-performance vector search with HNSW indexing |
| `authn` | `oxify-authn` | Authentication (OAuth2, API keys, JWT tokens) |
| `authz` | `oxify-authz` | ReBAC authorization (Zanzibar-style) |
| `server` | `oxify-server` | Axum-based HTTP server infrastructure |
| `mcp` | `oxify-mcp` | Model Context Protocol implementation |
| `connect_llm` | `oxify-connect-llm` | LLM provider integrations (OpenAI, Anthropic, Ollama) |
| `connect_vector` | `oxify-connect-vector` | Vector database integrations (Qdrant) |
| `connect_vision` | `oxify-connect-vision` | Vision/OCR integrations |
| `storage` | `oxify-storage` | Persistent storage layer |
| `engine` | `oxify-engine` | Workflow execution engine |
## Architecture

```text
+------------------------------------------------------------+
|                       OxiFY Platform                       |
+------------------------------------------------------------+
|  API Layer (oxify-server)                                  |
|   +-> Authentication (oxify-authn)                         |
|   +-> Authorization (oxify-authz)                          |
|   +-> Middleware (CORS, logging, compression)              |
+------------------------------------------------------------+
|  Workflow Engine (oxify-engine)                            |
|   +-> DAG Execution                                        |
|   +-> Node Processors (LLM, Vision, Retriever, Code)       |
|   +-> Plugin System                                        |
+------------------------------------------------------------+
|  Connector Layer                                           |
|   +-> LLM Clients (oxify-connect-llm)                      |
|   +-> Vision/OCR (oxify-connect-vision)                    |
|   +-> Vector DB (oxify-connect-vector)                     |
|   +-> Vector Search (oxify-vector)                         |
+------------------------------------------------------------+
```
## Examples

### Creating a Workflow
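A workflow here is a directed acyclic graph of nodes. As a self-contained illustration of that idea, the sketch below builds a tiny DAG and derives an execution order with Kahn's algorithm; the `Workflow` and `Node` types are illustrative stand-ins, not the actual `oxify` API.

```rust
use std::collections::{HashMap, VecDeque};

/// Illustrative workflow node (not the oxify API).
struct Node {
    id: &'static str,
}

/// A workflow is a DAG: nodes plus directed edges (from -> to).
struct Workflow {
    nodes: Vec<Node>,
    edges: Vec<(usize, usize)>,
}

impl Workflow {
    /// Return node ids in a valid execution order (Kahn's algorithm).
    fn execution_order(&self) -> Vec<&'static str> {
        let mut indegree = vec![0usize; self.nodes.len()];
        let mut adj: HashMap<usize, Vec<usize>> = HashMap::new();
        for &(from, to) in &self.edges {
            indegree[to] += 1;
            adj.entry(from).or_default().push(to);
        }
        // Start from nodes with no incoming edges.
        let mut queue: VecDeque<usize> =
            (0..self.nodes.len()).filter(|&i| indegree[i] == 0).collect();
        let mut order = Vec::new();
        while let Some(i) = queue.pop_front() {
            order.push(self.nodes[i].id);
            for &next in adj.get(&i).into_iter().flatten() {
                indegree[next] -= 1;
                if indegree[next] == 0 {
                    queue.push_back(next);
                }
            }
        }
        order
    }
}

fn main() {
    // start -> retriever -> llm -> end, with a direct start -> llm edge too
    let wf = Workflow {
        nodes: vec![
            Node { id: "start" },
            Node { id: "retriever" },
            Node { id: "llm" },
            Node { id: "end" },
        ],
        edges: vec![(0, 1), (1, 2), (0, 2), (2, 3)],
    };
    println!("{:?}", wf.execution_order());
}
```

The engine described under Performance additionally runs independent nodes of the same DAG level in parallel; this sketch only shows a sequential order.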
### Vector Search with HNSW
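`oxify-vector` advertises HNSW indexing; a full HNSW implementation is too long to sketch here, so the snippet below shows the same top-k cosine-similarity query shape with a brute-force scan. The function names are illustrative assumptions, not the `oxify-vector` API.

```rust
/// Cosine similarity between two equal-length vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Brute-force top-k nearest neighbors by cosine similarity.
/// (An HNSW index answers the same query in sub-linear time.)
fn top_k<'a>(query: &[f32], docs: &'a [(&'a str, Vec<f32>)], k: usize) -> Vec<&'a str> {
    let mut scored: Vec<(&str, f32)> = docs
        .iter()
        .map(|(id, v)| (*id, cosine(query, v)))
        .collect();
    // Highest similarity first.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(k).map(|(id, _)| id).collect()
}

fn main() {
    let docs = vec![
        ("rust", vec![1.0, 0.0, 0.1]),
        ("llm", vec![0.0, 1.0, 0.9]),
        ("graph", vec![0.9, 0.1, 0.0]),
    ];
    let hits = top_k(&[1.0, 0.0, 0.0], &docs, 2);
    println!("{:?}", hits); // most similar first
}
```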
### Using LLM Providers
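Multi-provider support typically comes down to a common trait that each provider client implements. The sketch below shows that pattern with a mock provider; the real `oxify-connect-llm` clients are async and call the provider HTTP APIs, and these type names are illustrative assumptions rather than the crate's API.

```rust
/// Minimal provider abstraction (illustrative only; the real clients
/// are async and talk to OpenAI, Anthropic, Ollama, etc.).
trait LlmProvider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> String;
}

/// A mock provider standing in for a concrete client.
struct MockProvider {
    name: String,
}

impl LlmProvider for MockProvider {
    fn name(&self) -> &str {
        &self.name
    }
    fn complete(&self, prompt: &str) -> String {
        format!("[{}] echo: {}", self.name, prompt)
    }
}

/// Workflow code stays provider-agnostic behind the trait object.
fn run(provider: &dyn LlmProvider, prompt: &str) -> String {
    provider.complete(prompt)
}

fn main() {
    let providers: Vec<Box<dyn LlmProvider>> = vec![
        Box::new(MockProvider { name: "openai".into() }),
        Box::new(MockProvider { name: "ollama".into() }),
    ];
    for p in &providers {
        println!("{}", run(p.as_ref(), "hello"));
    }
}
```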
## Node Types
OxiFY supports 16+ workflow node types:
| Category | Node Types |
|---|---|
| Core | Start, End |
| LLM | GPT-3.5/4, Claude 3/3.5, Ollama |
| Vector | Qdrant, In-memory with hybrid search |
| Vision | Tesseract, Surya, PaddleOCR |
| Control | IfElse, Switch, Conditional |
| Loops | ForEach, While, Repeat |
| Error Handling | Try-Catch-Finally |
| Advanced | Sub-workflow, Code execution, HTTP Tool |
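A closed set of node types like the one above is naturally modeled in Rust as an enum with per-variant data. The definition below is an illustrative sketch of that modeling choice, not the actual `oxify-model` type.

```rust
/// Illustrative node-type enum mirroring the table's categories
/// (not the actual oxify-model definition).
enum NodeType {
    Start,
    End,
    Llm { model: String },
    VectorSearch { top_k: usize },
    Ocr { provider: String },
    IfElse,
    ForEach,
    TryCatch,
    SubWorkflow,
}

/// Category labels matching the table's "Category" column.
fn category(n: &NodeType) -> &'static str {
    match n {
        NodeType::Start | NodeType::End => "Core",
        NodeType::Llm { .. } => "LLM",
        NodeType::VectorSearch { .. } => "Vector",
        NodeType::Ocr { .. } => "Vision",
        NodeType::IfElse => "Control",
        NodeType::ForEach => "Loops",
        NodeType::TryCatch => "Error Handling",
        NodeType::SubWorkflow => "Advanced",
    }
}

fn main() {
    let n = NodeType::Llm { model: "gpt-4".into() };
    println!("{}", category(&n));
}
```

An enum like this makes the `match` in the execution engine exhaustive, so adding a node type forces every processor site to handle it at compile time.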
## Supported Providers

### LLM Providers
- OpenAI (GPT-3.5, GPT-4, GPT-4-Turbo)
- Anthropic (Claude 3 Opus, Sonnet, Haiku)
- Ollama (Local models: Llama, Mistral, etc.)
- Gemini, Mistral, Cohere, Bedrock
### Vector Databases
- Qdrant (cloud and self-hosted)
- In-memory HNSW index
### Embedding Providers
- OpenAI (text-embedding-ada-002, text-embedding-3-small/large)
- Ollama (local embeddings)
### Vision/OCR Providers
- Tesseract OCR
- Surya
- PaddleOCR
## Performance
- LLM Response Caching: 1-hour TTL for cost savings
- Execution Plan Caching: 100-entry LRU cache
- Rate Limiting: Configurable (default 500 req/min)
- Parallel Execution: Level-based DAG parallelism
- SIMD Acceleration: Vector operations use SIMD when available
- Retry Logic: Exponential backoff with configurable limits
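The exponential-backoff retry policy listed above can be sketched in a few lines: the delay doubles with each attempt and is capped at a configurable maximum. This function is an illustration of the policy, not OxiFY's actual implementation.

```rust
use std::time::Duration;

/// Exponential backoff schedule: base * 2^attempt, capped at `max`.
/// Illustrative of the retry policy described above.
fn backoff_delay(base: Duration, attempt: u32, max: Duration) -> Duration {
    let exp = base.saturating_mul(2u32.saturating_pow(attempt));
    exp.min(max)
}

fn main() {
    let base = Duration::from_millis(100);
    let max = Duration::from_secs(5);
    for attempt in 0..6 {
        // 100ms, 200ms, 400ms, 800ms, 1.6s, 3.2s
        println!("attempt {attempt}: wait {:?}", backoff_delay(base, attempt, max));
    }
}
```

A "configurable limit" in this shape is just the `max` cap plus a maximum attempt count checked by the caller.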
## Individual Crates

If you only need specific functionality, you can depend on individual crates:

```toml
[dependencies]
# Just the workflow model
oxify-model = "0.1"
# Just vector search
oxify-vector = "0.1"
# Just LLM connections
oxify-connect-llm = "0.1"
```
## Binary Applications

The following binary applications are available separately:

- `oxify-api`: REST API server
- `oxify-cli`: Command-line workflow runner
- `oxify-ui`: Web-based workflow editor (coming soon)
## Development Status
Version 0.1.0 - Production-ready!
- Core Infrastructure: Complete
- LLM Workflow Engine: Complete
- API Layer: Complete
- CLI Tool: Complete
- Web UI: In Progress
## Related Projects
OxiFY is part of the COOLJAPAN ecosystem:
- SciRS2 - Scientific computing in Pure Rust
- NumRS2 - Numerical computing library
- ToRSh - PyTorch-like tensor library
- OxiRS - Semantic web platform
## License

Apache-2.0 - See LICENSE file for details.

## Author

COOLJAPAN OU (Team Kitasan)