§cognis
Implementation layer for the Cognis LLM framework. This crate provides concrete chat model integrations, agent execution, chains, memory strategies, document loaders, text splitters, embedding providers, and built-in tools.
§Chat Model Providers
Each provider is gated behind a feature flag:
| Feature | Provider |
|---|---|
| `anthropic` | Anthropic Claude |
| `openai` | OpenAI GPT |
| `google` | Google Gemini |
| `ollama` | Ollama (local) |
| `azure` | Azure OpenAI |
| `all-providers` | All of the above |
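
A feature from the table above would be enabled in `Cargo.toml` like this (a sketch: the version number is a placeholder, and the crate is assumed to be published under the name `cognis`):

```toml
[dependencies]
cognis = { version = "0.1", features = ["anthropic"] }
```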
§Quick Example
```rust
use cognis::chat_models::anthropic::ChatAnthropic;
use cognis_core::runnables::Runnable;
use serde_json::json;

let model = ChatAnthropic::new("claude-sonnet-4-20250514");
let result = model.invoke(json!({"messages": []}), None).await.unwrap();
```

§Modules

- `chat_models` – Chat model implementations for each provider.
- `embeddings` – OpenAI and Ollama embedding providers.
- `agents` – Agent executor with a pluggable middleware pipeline.
- `chains` – LLM chain, conversation chain, and sequential chain.
- `memory` – Buffer, window, and summary memory strategies.
- `document_loaders` – Text, CSV, JSON, and directory document loaders.
- `text_splitter` – Character, recursive, markdown, HTML, JSON, code, and token splitters.
- `tools` – Calculator, shell, and JSON query tools.
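To make the window memory strategy mentioned above concrete, here is a minimal sliding-window buffer in plain Rust. This is an illustrative sketch of the idea (keep only the last N conversation turns), not cognis's actual `memory` API; the `WindowMemory` type and its methods are hypothetical.

```rust
use std::collections::VecDeque;

/// Illustrative sliding-window memory: retains only the most recent
/// `capacity` messages. (Hypothetical sketch, not cognis's `memory` API.)
struct WindowMemory {
    capacity: usize,
    messages: VecDeque<String>,
}

impl WindowMemory {
    fn new(capacity: usize) -> Self {
        Self { capacity, messages: VecDeque::new() }
    }

    /// Append a message, evicting the oldest one once the window is full.
    fn add(&mut self, message: impl Into<String>) {
        if self.messages.len() == self.capacity {
            self.messages.pop_front();
        }
        self.messages.push_back(message.into());
    }

    /// The messages currently inside the window, oldest first.
    fn history(&self) -> Vec<&str> {
        self.messages.iter().map(String::as_str).collect()
    }
}

fn main() {
    let mut memory = WindowMemory::new(2);
    memory.add("user: hi");
    memory.add("assistant: hello");
    memory.add("user: how are you?");
    // Only the two most recent turns survive in a window of size 2.
    assert_eq!(memory.history(), ["assistant: hello", "user: how are you?"]);
}
```

A buffer memory would simply skip the eviction step, and a summary memory would replace evicted turns with an LLM-generated summary instead of dropping them.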
§Re-exports
pub use cognis_core as core;
§Modules
- agents
- Agent module providing middleware, tool-calling agents, structured output support, and output parsers for converting raw LLM text into structured agent actions.
- cache
- LLM response caching backends.
- caching
- API response caching for LLM calls.
- callbacks
- Centralized callback system for the entire execution lifecycle.
- chains
- Chain abstractions for composing prompts, models, and sequential pipelines.
- chat_models
- Chat model implementations, wrappers, and provider registry.
- chat_sessions
- Chat session manager with persistence and lifecycle management.
- document_loaders
- Document loader implementations for ingesting data from various sources.
- document_transformers
- Document transformer pipeline for processing, filtering, and enriching documents.
- embeddings
- Embeddings factory and provider registry.
- evaluation
- Evaluation framework for LLM outputs.
- indexing
- Indexing pipeline for incremental document ingestion.
- memory
- Conversation memory systems for managing chat history in chains.
- output_parsers
- Output parsers with LLM-based error correction and structured extraction.
- prompts
- Higher-level prompt management for Cognis.
- providers
- Provider integration framework for connecting to LLM APIs.
- resilience
- Resilience patterns for LLM API calls.
- retrievers
- Retriever implementations that compose and extend BaseRetriever.
- stores
- Key-value store implementations.
- streaming
- Streaming utilities for LLM responses.
- text_splitter
- text_splitters
- Text splitting strategies for document chunking.
- tools
- Concrete tool implementations for use with the agent executor.
- vectorstores
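
The character-based splitting strategy behind the text splitter modules can be sketched in plain Rust: greedily pack whitespace-separated words into chunks of at most `max_len` characters. This is an illustration of the strategy only; `split_text` is a hypothetical helper, not cognis's actual splitter API.

```rust
/// Greedily pack words into chunks of at most `max_len` characters.
/// (Illustrative sketch of character-based splitting; not cognis's API.)
fn split_text(text: &str, max_len: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    for word in text.split_whitespace() {
        // Start a new chunk when appending this word would overflow it.
        if !current.is_empty() && current.len() + 1 + word.len() > max_len {
            chunks.push(std::mem::take(&mut current));
        }
        if !current.is_empty() {
            current.push(' ');
        }
        current.push_str(word);
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() {
    let chunks = split_text("the quick brown fox jumps over the lazy dog", 15);
    // Each chunk stays within the limit and word boundaries are preserved.
    assert_eq!(chunks, ["the quick brown", "fox jumps over", "the lazy dog"]);
}
```

The recursive variant applies the same idea with a hierarchy of separators (paragraphs, then sentences, then words), falling back to the next separator whenever a piece is still too large.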