//! # cognis
//!
//! Implementation layer for the Cognis LLM framework. This crate provides
//! concrete chat model integrations, agent execution, chains, memory strategies,
//! document loaders, text splitters, embedding providers, and built-in tools.
//!
//! ## Chat Model Providers
//!
//! Each provider is gated behind a feature flag:
//!
//! | Feature | Provider |
//! |---------|----------|
//! | `anthropic` | Anthropic Claude |
//! | `openai` | OpenAI GPT |
//! | `google` | Google Gemini |
//! | `ollama` | Ollama (local) |
//! | `azure` | Azure OpenAI |
//! | `all-providers` | All of the above |
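//!
//! For example, enabling only the Anthropic and OpenAI backends in
//! `Cargo.toml` (the version requirement below is a placeholder, not a
//! published version):
//!
//! ```toml
//! [dependencies]
//! cognis = { version = "*", features = ["anthropic", "openai"] }
//! ```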
//!
//! ## Quick Example
//!
//! ```rust,ignore
//! use cognis::chat_models::anthropic::ChatAnthropic;
//! use cognis_core::runnables::Runnable;
//! use serde_json::json;
//!
//! # async fn example() {
//! let model = ChatAnthropic::new("claude-sonnet-4-20250514");
//! let result = model.invoke(json!({"messages": []}), None).await.unwrap();
//! # }
//! ```
//!
//! ## Modules
//!
//! - [`chat_models`] -- Chat model implementations for each provider.
//! - [`embeddings`] -- OpenAI and Ollama embedding providers.
//! - [`agents`] -- Agent executor with a pluggable middleware pipeline.
//! - [`chains`] -- LLM chain, conversation chain, and sequential chain.
//! - [`memory`] -- Buffer, window, and summary memory strategies.
//! - [`document_loaders`] -- Text, CSV, JSON, and directory document loaders.
//! - [`text_splitter`] -- Character, recursive, markdown, HTML, JSON, code, and token splitters.
//! - [`tools`] -- Calculator, shell, and JSON query tools.
// Re-export core for convenience
pub use cognis_core as core;
/// `#[cognis::tool]` — attribute macro that generates a `BaseTool`
/// implementation from an `async fn` (or an `impl` block containing one).
/// See [`cognis_core::tool`] for the full documentation.
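///
/// # Example
///
/// A minimal sketch of the attribute on a free `async fn`; the exact set of
/// supported signatures and the generated tool's name are assumptions drawn
/// from the description above, not confirmed API:
///
/// ```rust,ignore
/// use cognis::tool;
///
/// /// Adds two integers and returns the sum as text.
/// #[tool]
/// async fn add(a: i64, b: i64) -> String {
///     (a + b).to_string()
/// }
/// ```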
pub use cognis_core::tool;