# cognis-core
The foundation traits and types for the Cognis LLM framework.
`cognis-core` defines the trait interfaces and types that every crate in the Cognis workspace builds on. It has zero workspace dependencies: everything starts here.
## Core Abstractions

- `BaseChatModel`: chat model providers (Anthropic, OpenAI, ...)
- `BaseTool`: tools that agents can call
- `Runnable`: composable async computation unit (the LCEL equivalent)
- `Message`: `Human` | `AI` | `System` | `Tool` message types
- `Embeddings`: vector embedding providers
- `VectorStore`: similarity search interface
- `Document`: text + metadata, used across loaders and retrievers
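To give a feel for how these pieces relate, the sketch below builds a short conversation and a `Document`. Constructor names and module paths here are assumptions for illustration, not the crate's confirmed API:

```rust
use cognis_core::documents::Document;                        // illustrative path
use cognis_core::messages::{HumanMessage, Message, SystemMessage}; // illustrative path

// A conversation is a sequence of typed messages.
let history: Vec<Message> = vec![
    SystemMessage::new("You are a concise assistant.").into(),
    HumanMessage::new("Summarize the attached document.").into(),
];

// A Document pairs raw text with metadata shared by loaders and retrievers.
let doc = Document::new("Rust is a systems programming language.")
    .with_metadata("source", "intro.md");
```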
## Composable Chains with `chain!`

The `Runnable` trait and its combinators let you compose pipelines:
```rust
use std::sync::Arc;
use cognis_core::chain;
use cognis_core::fake::FakeListChatModel; // module paths here are illustrative
use cognis_core::output_parsers::StrOutputParser;
use cognis_core::prompts::ChatPromptTemplate;
use cognis_core::runnables::Runnable;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // prompt -> model -> parser, composed left to right by chain!
    let prompt = ChatPromptTemplate::from_template("Tell me about {topic}")?;
    let model = Arc::new(FakeListChatModel::new(vec!["Rust is fast."]));
    let pipeline = chain!(prompt, model, StrOutputParser::new());
    let answer = pipeline.invoke(json!({ "topic": "Rust" })).await?;
    println!("{answer}");
    Ok(())
}
```
## Runnable Combinators

| Combinator | What it does |
|---|---|
| `chain!` / `RunnableSequence` | Pipeline: A then B then C |
| `RunnableParallel` | Fan-out: run A, B, C concurrently |
| `RunnableBranch` | Conditional routing based on input |
| `RunnableLambda` | Wrap any closure as a `Runnable` |
| `RunnableWithFallbacks` | Try A, fall back to B on error |
| `RunnableRetry` | Retry with configurable backoff |
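As an illustration of how the combinators compose, here is a sketch of fan-out with `RunnableParallel` and closures wrapped by `RunnableLambda`. The constructor and builder names are assumptions, not the crate's confirmed API:

```rust
use cognis_core::runnables::{Runnable, RunnableLambda, RunnableParallel}; // illustrative path

// Wrap plain async closures as Runnables.
let upper = RunnableLambda::new(|s: String| async move { Ok(s.to_uppercase()) });
let words = RunnableLambda::new(|s: String| async move { Ok(s.split_whitespace().count()) });

// Fan-out: both branches receive the same input and run concurrently;
// the result is a map keyed by branch name.
let parallel = RunnableParallel::new()
    .branch("upper", upper)
    .branch("words", words);

let out = parallel.invoke("hello runnable world".to_string()).await?;
```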
## Testing

Fake model implementations are included for testing without API keys or network access:

- `FakeListChatModel`: cycles through predefined string responses
- `FakeMessagesListChatModel`: cycles through predefined `Message` responses
- `GenericFakeChatModel`: word-level streaming from predefined messages
- `FakeListLLM`: completion-style fake model
- `DeterministicFakeEmbedding`: produces reproducible embeddings
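A typical unit test pins the model's responses up front so the pipeline under test is fully deterministic. The sketch below assumes a `new(responses)` constructor and a `content()` accessor, which are illustrative:

```rust
use cognis_core::fake::FakeListChatModel; // illustrative module path

#[tokio::test]
async fn model_cycles_through_canned_replies() {
    // The fake returns these responses in order; no network is involved.
    let model = FakeListChatModel::new(vec!["first reply", "second reply"]);

    let a = model.invoke("hi").await.unwrap();
    let b = model.invoke("hi again").await.unwrap();
    assert_eq!(a.content(), "first reply");
    assert_eq!(b.content(), "second reply");
}
```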
## Part of the Cognis Workspace
| Crate | Role |
|---|---|
| `cognis-core` | Foundation traits and types (you are here) |
| `cognis` | LLM providers, chains, memory, tools |
| `cognisgraph` | State graph orchestration engine |
| `cognisagent` | High-level agent framework |