StateSet NSR - Neuro-Symbolic Reasoning Framework
A state-of-the-art hybrid AI framework in Rust that combines neural network pattern recognition with symbolic logical reasoning for robust, explainable AI systems.
Research-grade neuro-symbolic AI: Implements ICLR 2024 NSR architecture with advanced features including program synthesis, MCTS-based abduction, Graph-of-Thoughts reasoning, Vector Symbolic Architecture, and compositional generalization.
Table of Contents
- Overview
- Architecture
- The Core NSR Feedback Loop
- Installation
- Quick Start
- API Reference
- Advanced Features
- Examples
- Documentation
- Benchmarks
- Contributing
- License
Overview
StateSet NSR implements a two-tier Neural-Symbolic Reasoning architecture:
Tier 1: NSR Engine (Production)
Traditional hybrid reasoning engine combining symbolic and neural approaches:
- 5 Reasoning Strategies: SymbolicFirst, NeuralFirst, HybridWeighted, Cascading, Ensemble
- Knowledge Graph: Entity-relation-entity triples with rule-based inference
- LLM Integration: OpenAI, Anthropic, Local/Ollama providers
- REST API: Production-ready Axum-based async API
Tier 2: NSR Machine (Research/ICLR 2024)
State-of-the-art neuro-symbolic machine with cutting-edge ML features:
- Grounded Symbol System (GSS): Core representation from ICLR 2024
- Program Synthesis: Functional programs for semantic computation
- 10+ Advanced Modules: MCTS, VSA, Graph-of-Thoughts, Metacognition, and more
Architecture
┌─────────────────────────────────────────────────────────────────────────┐
│ StateSet NSR Framework │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ NSR Engine (Production) │ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │Neural Layer │ │Symbolic Layer│ │ Logic Engine │ │ │
│ │ │• LLM Clients │ │• Facts/Rules │ │• Unification │ │ │
│ │ │• Embeddings │ │• Knowledge │ │• Resolution │ │ │
│ │ │• Inference │ │• Constraints │ │• Chaining │ │ │
│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │
│ │ │ │ │
│ │ 5 Reasoning Strategies │ │
│ │ SymbolicFirst │ NeuralFirst │ HybridWeighted │ Cascading │ Ensemble │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ NSR Machine (Research/ICLR 2024) │ │
│ │ ┌────────────────────────────────────────────────────────────┐ │ │
│ │ │ Grounded Symbol System (GSS) │ │ │
│ │ │ Input (x) → Symbol (s) → Value (v) → Edges (e) │ │ │
│ │ └────────────────────────────────────────────────────────────┘ │ │
│ │ │ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │
│ │ │Perception│ │ Parser │ │Synthesis │ │ Learning │ │ │
│ │ │ Module │ │(Dep Tree)│ │(Programs)│ │(Ded-Abd) │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │ │
│ │ │ │ │
│ │ ┌─────────────────────────────────────────────────────────────┐│ │
│ │ │ Advanced Modules (10+) ││ │
│ │ │ • Graph-of-Thoughts • VSA (Hyperdimensional) ││ │
│ │ │ • MCTS Abduction • Metacognition ││ │
│ │ │ • Library Learning • Continual Learning ││ │
│ │ │ • Differentiable Logic • Probabilistic Inference ││ │
│ │ │ • Inference Scaling • Explainability ││ │
│ │ └─────────────────────────────────────────────────────────────┘│ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ REST API (Axum) │ │
│ │ /api/v1/reason │ /api/v1/entities │ /api/v1/query │ /health │ │
│ └─────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
The Core NSR Feedback Loop
What makes this a true Neural-Symbolic Recursive AI (not just neural + symbolic bolted together) is the feedback loop where each component improves the others through recursive self-modification:
┌─────────────────────────────────────────────────────────────────────────┐
│ NSR RECURSIVE FEEDBACK LOOP │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ │
│ │ INPUT │ Raw data: text, images, numbers │
│ └──────┬──────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ NEURAL PERCEPTION │ │
│ │ │ │
│ │ p(s|x; θ_p) - Maps raw input to symbol probabilities │ │
│ │ "sees fur, whiskers, pointy ears, hears meow" │ │
│ └──────┬──────────────────────────────────────────────────────┘ │
│ │ symbols: [fur, whiskers, meow] │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ DEPENDENCY PARSER │ │
│ │ │ │
│ │ p(e|s; θ_s) - Builds syntactic structure │ │
│ │ Creates tree: meow(fur, whiskers) │ │
│ └──────┬──────────────────────────────────────────────────────┘ │
│ │ edges: [(meow→fur), (meow→whiskers)] │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ SYMBOLIC EVALUATOR │ │
│ │ │ │
│ │ p(v|e,s; θ_l) - Executes functional programs │ │
│ │ Rule: IF meow THEN CAT │ │
│ │ Output: "CAT" │ │
│ └──────┬──────────────────────────────────────────────────────┘ │
│ │ output: CAT │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ COMPARE TO TARGET │ │
│ │ │ │
│ │ output == expected? │ │
│ │ CAT == CAT? ✓ Success → Continue │ │
│ │ CAT != DOG? ✗ Error → Trigger ABDUCTION │ │
│ └──────┬──────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ (on error) │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ ★ RECURSIVE ABDUCTION (The Key Innovation) ★ │ │
│ │ │ │
│ │ Search for modifications that produce correct output: │ │
│ │ │ │
│ │ 1. CHANGE SYMBOLS: Try different symbol assignments │ │
│ │ "Maybe 'tailless' should map to CAT_VARIANT?" │ │
│ │ │ │
│ │ 2. RESTRUCTURE EDGES: Try different parse trees │ │
│ │ "Maybe meow should be the root, not fur?" │ │
│ │ │ │
│ │ 3. UPDATE PROGRAMS: Modify semantic rules │ │
│ │ "Rule update: TAIL is no longer mandatory for CAT" │ │
│ │ │ │
│ │ Uses: MCTS, Beam Search, or Gradient Descent │ │
│ └──────┬──────────────────────────────────────────────────────┘ │
│ │ refined hypothesis │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ TRAIN ALL COMPONENTS │ │
│ │ │ │
│ │ • Update perception weights (better symbol grounding) │ │
│ │ • Update parser weights (better structure learning) │ │
│ │ • Update program library (better semantic rules) │ │
│ │ • Discover abstractions (library learning) │ │
│ └──────┬──────────────────────────────────────────────────────┘ │
│ │ │
│ └──────────────► REPEAT (Recursive Self-Improvement) │
│ │
└─────────────────────────────────────────────────────────────────────────┘
The Manx Cat Example
This feedback loop enables the system to adapt to anomalies automatically:
```text
BEFORE: System knows cats have tails
  Input: [fur, tail, meow]    → CAT ✓
  Input: [fur, NO_TAIL, meow] → ??? (confused)

ABDUCTION TRIGGERED:
  - System encounters tailless cats (Manx breed)
  - Searches for hypothesis modifications
  - Discovers: "meow" is the key discriminator, not "tail"
  - Updates rule: TAIL is no longer mandatory for CAT

AFTER: System adapted its rules
  Input: [fur, NO_TAIL, meow] → CAT ✓ (correct!)
```
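The rule update above can be sketched as a tiny, crate-independent Rust program: classification rules live in a mutable set, and a failed prediction triggers an abduction step that searches for the smallest rule change reproducing the expected label. All names and the rule encoding here are illustrative, not the crate's actual API.

```rust
use std::collections::HashSet;

/// Classify an observation: CAT iff all required features are present.
fn classify(required: &HashSet<&str>, features: &HashSet<&str>) -> &'static str {
    if required.iter().all(|f| features.contains(f)) { "CAT" } else { "UNKNOWN" }
}

/// Toy abduction: if the prediction misses the expected label, try dropping
/// one required feature at a time and keep the first change that fixes it.
fn abduce<'a>(required: &mut HashSet<&'a str>, features: &HashSet<&str>, expected: &str) {
    if classify(required, features) == expected { return; }
    let candidates: Vec<&str> = required.iter().copied().collect();
    for feature in candidates {
        required.remove(feature);
        if classify(required, features) == expected { return; } // e.g. TAIL dropped
        required.insert(feature); // that change didn't help; restore it
    }
}

fn main() {
    let mut required: HashSet<&str> = ["fur", "tail", "meow"].into();
    let manx: HashSet<&str> = ["fur", "meow"].into(); // tailless Manx cat

    assert_eq!(classify(&required, &manx), "UNKNOWN"); // before: confused
    abduce(&mut required, &manx, "CAT");               // rule update: TAIL no longer mandatory
    assert_eq!(classify(&required, &manx), "CAT");     // after: adapted
    println!("adapted rules: {required:?}");
}
```

The real machine searches over symbol assignments, parse trees, and programs rather than a flat feature set, but the loop shape — predict, compare, search for the minimal hypothesis repair — is the same.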
Why This Matters
| Traditional AI | NSR AI |
|---|---|
| Neural OR Symbolic | Neural AND Symbolic in feedback loop |
| Fixed rules | Rules that adapt through abduction |
| Black box | Explainable reasoning |
| Fails on anomalies | Learns from anomalies |
| No self-improvement | Recursive self-modification |
Run the Example
This demonstrates all three pillars:
- Neural Perception: `[fur, whiskers, meow]` → symbol probabilities
- Symbolic Reasoning: `meow → CAT` (learned rule)
- Recursive Learning: the system adapts to Manx cats (35 abduction steps)
Quick Start
⚠️ Default builds use deterministic, exact-match-friendly perception and a lightweight parser with deterministic fallbacks for fast demos. For production-quality neural perception and parsing, configure real backends (e.g., ONNX/Candle/Torch) and train the models before relying on results.
Use OpenAI embeddings for perception
```rust
// NOTE: import paths and argument values were lost in this snapshot and are
// reconstructed as plausible placeholders — verify against the crate docs.
use stateset_nsr::nsr::NSRMachineBuilder;
use stateset_nsr::neural::MockNeuralBackend; // replace with the real OpenAI backend via config/env
use std::sync::Arc;

// Example with a backend-provided embedding dimension
let backend = Arc::new(MockNeuralBackend::new(/* embedding_dim */ 1536));
let machine = NSRMachineBuilder::new()
    .with_neural_backend(backend)
    .add_symbol("cat")
    .add_symbol("dog")
    .build();
```

Set `NSR_NEURAL_BACKEND=openai` (or `backend = "openai"` in `config.toml`) with a valid `OPENAI_API_KEY` to enable the HTTP backend globally.
Defaults:
- Binds to `127.0.0.1` unless you explicitly set `NSR_HOST` or `--host`
- Neural backend defaults to `auto`: prefers OpenAI if a key is present, otherwise FastEmbed (if compiled), otherwise mock. Supported backends: `auto`, `mock`, `fastembed`, `openai` (plus `onnx` with `--features onnx`)
- When binding to a non-loopback host, `NSR_CORS_ORIGINS` (or `server.cors_allowed_origins`) must be set; the server will refuse to start with permissive CORS in that case
- Persistence defaults to `data/knowledge.json` when not configured; set `NSR_KB_PATH` or `NSR_DATABASE_URL` (recommended for non-loopback/prod deployments) to pick an explicit store
- API keys are required; none are printed to stdout
- Postgres persistence: set `NSR_DATABASE_URL` (optional `NSR_DATABASE_MAX_CONNECTIONS`); otherwise a file snapshot or in-memory store is used
Quality & verification
- Offline sanity: `cargo test --test cat_end_to_end` (deterministic perception → program → evaluation).
- Full suite: `cargo test --all-features` (also run in CI).
- Real OpenAI embeddings: `cargo test --test openai_perception -- --nocapture` (requires `OPENAI_API_KEY` or `NSR_NEURAL_API_KEY`).
- Quality/readiness details and the roadmap live in `QUALITY.md`.
Installation
```bash
# Clone the repository (URL elided in this snapshot — use the actual repo location)
git clone <repository-url>
cd stateset-nsr

# Build the project
cargo build --release

# Run examples
cargo run --example <example-name>
```
Basic Usage: NSR Engine
```rust
// Sketch reconstruction — the body of this example was lost in this snapshot;
// type paths and arguments are illustrative, not the crate's exact API.
use stateset_nsr::NSREngine;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let engine = NSREngine::builder().build()?;
    let result = engine.reason("Is socrates mortal?").await?;
    println!("{result:?}");
    Ok(())
}
```
Advanced Usage: NSR Machine
```rust
// Sketch reconstruction — the body of this example was lost in this snapshot;
// builder name and arguments are illustrative.
use stateset_nsr::nsr::NSRMachineBuilder;

#[tokio::main]
async fn main() {
    let mut machine = NSRMachineBuilder::new()
        .with_all_advanced_features()
        .build();
    // `infer` takes &mut self because advanced modules keep state across calls
    let output = machine.infer("example input");
    println!("{output:?}");
}
```
Notes on Running Tests in Containerized/Networked Filesystems
If you see `Invalid cross-device link (os error 18)` while running `cargo test` or `cargo build`, it usually means the build artifacts cannot be hard-linked across mount points (common with Docker volumes or networked filesystems). Point Cargo to a target directory on the same filesystem to avoid the issue:

```bash
# Use a target directory on the same filesystem as the sources
export CARGO_TARGET_DIR=/path/on/same/fs/target

# Optional: disable incremental compilation to avoid hard-linking entirely
export CARGO_INCREMENTAL=0
```
NSR Engine
The NSR Engine is the production-ready hybrid reasoning system combining neural and symbolic AI.
Reasoning Strategies
| Strategy | Description | Use Case |
|---|---|---|
| SymbolicFirst | Try logic first, fall back to neural | Rule-heavy domains |
| NeuralFirst | Use neural embeddings, enhance with symbolic | Pattern matching |
| HybridWeighted | Combine both with configurable weights | Balanced reasoning |
| Cascading | Neural first, symbolic verification | High-confidence needs |
| Ensemble | Multiple approaches vote on result | Maximum accuracy |
```rust
// Import path and strategy argument reconstructed as placeholders.
use stateset_nsr::{NSREngine, ReasoningStrategy};

let engine = NSREngine::builder()
    .with_strategy(ReasoningStrategy::HybridWeighted)
    .build()?;
```
Forward & Backward Chaining
```rust
// Argument values and the goal type are reconstructed placeholders.

// Forward chaining — derive new facts from rules
let inferred = engine.forward_chain()?;
println!("derived {} new facts", inferred.len());

// Backward chaining — prove a goal
let goal = Goal::new("mortal(socrates)");
let proofs = engine.backward_chain(&goal)?;
for proof in proofs {
    println!("{proof:?}");
}
```
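Conceptually, forward chaining just applies rules to the fact set until a fixed point is reached. A crate-independent toy version (rule encoding and names illustrative, not the engine's internals):

```rust
use std::collections::HashSet;

/// A rule: if all premises hold, derive the conclusion.
struct Rule {
    premises: Vec<&'static str>,
    conclusion: &'static str,
}

/// Apply rules repeatedly until no new facts are derived (fixed point).
fn forward_chain(facts: &mut HashSet<&'static str>, rules: &[Rule]) -> Vec<&'static str> {
    let mut derived = Vec::new();
    loop {
        let mut changed = false;
        for rule in rules {
            let applicable = rule.premises.iter().all(|p| facts.contains(p));
            if applicable && facts.insert(rule.conclusion) {
                derived.push(rule.conclusion);
                changed = true;
            }
        }
        if !changed { break; }
    }
    derived
}

fn main() {
    let mut facts: HashSet<&str> = ["human(socrates)"].into();
    let rules = [
        Rule { premises: vec!["human(socrates)"], conclusion: "mortal(socrates)" },
        Rule { premises: vec!["mortal(socrates)"], conclusion: "finite(socrates)" },
    ];
    let new_facts = forward_chain(&mut facts, &rules);
    assert_eq!(new_facts, ["mortal(socrates)", "finite(socrates)"]);
}
```

The production engine adds clause indexing and unification over variables on top of this basic saturation loop.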
Knowledge Graph Operations
```rust
// Argument values are reconstructed placeholders.

// Add entities and relationships
engine.add_entity("socrates", "Person").await?;
engine.add_entity("human", "Class").await?;
engine.add_triple("socrates", "is_a", "human").await?;

// Query the knowledge base
let result = engine.reason("Is socrates human?").await?;
```
To keep your facts across restarts, set `knowledge.persistence_path` (or `NSR_KB_PATH`); the server/agents/REPL will load it on startup and persist it on shutdown.
For Postgres-backed persistence, set `database.url`/`NSR_DATABASE_URL`; SeaORM will auto-create the core tables (entities, relations, triples, rules, constraints, snapshot) and store both rows and a snapshot row that takes precedence over file snapshots.
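For example (paths and credentials illustrative), a persistent store can be pinned via the environment before starting the server:

```shell
# Persist the knowledge base to an explicit file path across restarts
export NSR_KB_PATH=/var/lib/nsr/knowledge.json

# ...or use Postgres-backed persistence instead (its snapshot row takes precedence)
# export NSR_DATABASE_URL=postgres://user:pass@localhost/nsr
```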
NSR Machine
The NSR Machine implements the ICLR 2024 neuro-symbolic recursive architecture with 10+ advanced modules.
When you enable advanced modules, inference now engages them directly: Graph-of-Thoughts seeds a reasoning scaffold, VSA supplies semantic priors, differentiable logic refreshes symbol embeddings, and MCTS can refine low-confidence parses. Because of that stateful interplay, NSRMachine::infer takes &mut self.
Core: Grounded Symbol System (GSS)
The GSS is the unified representation combining:
- Grounded Input (x): Raw input segment (text, image, number)
- Abstract Symbol (s): Learned symbol ID
- Semantic Value (v): Computed meaning
- Edges (e): Dependency structure
```rust
// Import path and node/edge arguments are reconstructed placeholders.
use stateset_nsr::nsr::GroundedSymbolSystem;

let mut gss = GroundedSymbolSystem::new();
let hello = gss.add_node("hello");
let world = gss.add_node("world");
gss.add_edge(hello, world); // hello -> world dependency
```
Advanced Modules
Graph-of-Thoughts (GoT)
Complex reasoning with branching and merging thought paths:
```rust
// Builder name and arguments are reconstructed placeholders.
let mut machine = NSRMachineBuilder::new()
    .with_graph_of_thoughts()
    .build();

// Add thoughts and create a reasoning graph
let root = machine.add_thought("decompose the problem").unwrap();
let branches = machine.branch_thought(root, 3);

// Connect reasoning paths
machine.connect_thoughts(branches[0], branches[1]);
```
Vector Symbolic Architecture (VSA)
Hyperdimensional computing for robust semantic memory (10,000-dimensional vectors):
```rust
// Builder name and argument values are reconstructed placeholders.
let mut machine = NSRMachineBuilder::new()
    .with_vsa()
    .add_symbol("cat")
    .add_symbol("dog")
    .build();

// Encode symbols as hypervectors
let cat_id = machine.vocabulary.get_by_name("cat").unwrap();
let dog_id = machine.vocabulary.get_by_name("dog").unwrap();
let cat_hv = machine.vsa_encode_symbol(cat_id).unwrap();
let dog_hv = machine.vsa_encode_symbol(dog_id).unwrap();

// Bind: create a composite concept (cat-chases-dog)
let bound = machine.vsa_bind(&cat_hv, &dog_hv).unwrap();

// Bundle: create a set union {cat, dog} = animals
let bundled = machine.vsa_bundle(&[&cat_hv, &dog_hv]).unwrap();

// Add named concepts for memory
machine.vsa_add_concept("animals", bundled);
machine.vsa_add_concept("cat_chases_dog", bound);
```
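At the vector level, bind and bundle are simple elementwise operations. A crate-independent toy over ±1 (bipolar) hypervectors — real VSA modules use 10,000 dimensions; 8 are used here only for readability:

```rust
/// Bind two bipolar hypervectors by elementwise multiplication.
/// Binding is its own inverse: bind(bind(a, b), b) == a.
fn bind(a: &[i8], b: &[i8]) -> Vec<i8> {
    a.iter().zip(b).map(|(x, y)| x * y).collect()
}

/// Bundle (superpose) hypervectors by elementwise majority vote,
/// producing a vector similar to every input.
fn bundle(vs: &[&[i8]]) -> Vec<i8> {
    (0..vs[0].len())
        .map(|i| {
            let sum: i32 = vs.iter().map(|v| v[i] as i32).sum();
            if sum >= 0 { 1 } else { -1 }
        })
        .collect()
}

fn main() {
    let cat: [i8; 8] = [1, -1, 1, 1, -1, -1, 1, -1];
    let dog: [i8; 8] = [-1, -1, 1, -1, 1, -1, 1, 1];

    let bound = bind(&cat, &dog);        // composite concept (cat ∘ dog)
    assert_eq!(bind(&bound, &dog), cat); // unbinding recovers `cat`

    let animals = bundle(&[&cat, &dog]); // set-like union {cat, dog}
    println!("{animals:?}");
}
```

Because binding is invertible and bundling preserves similarity, composite structures can be stored in a single vector and queried back out — the property the VSA module exploits for semantic memory.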
MCTS Abduction
Monte Carlo Tree Search for hypothesis exploration:
```rust
// Import path, config fields, and the value wrapper are reconstructed placeholders.
use stateset_nsr::nsr::MCTSConfig;

let mcts_config = MCTSConfig {
    max_iterations: 1_000,
    exploration_constant: 1.4,
    ..Default::default()
};

let mut machine = NSRMachineBuilder::new()
    .with_mcts_config(mcts_config)
    .build();

// Search for a GSS that produces the target output
let target = Value::Integer(42);
if let Some(gss) = machine.mcts_search(&target) {
    println!("found candidate: {gss:?}");
}
```
Metacognition
Self-monitoring with uncertainty estimation:
```rust
// Builder name and argument values are reconstructed placeholders.
let mut machine = NSRMachineBuilder::new()
    .with_metacognition()
    .build();

// Track predictions for uncertainty modeling
machine.track_prediction(0.9, true);
machine.track_prediction(0.4, false);

// Get an uncertainty estimate
if let Some(estimate) = machine.estimate_uncertainty() {
    println!("uncertainty: {estimate}");
}
```
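As a crate-independent sketch of the idea, uncertainty can be estimated from tracked (confidence, correct?) pairs via the calibration gap between stated confidence and observed accuracy. All names here are illustrative:

```rust
/// Tracked prediction: the model's stated confidence and whether it was correct.
struct Prediction {
    confidence: f64,
    correct: bool,
}

/// Calibration-style uncertainty: |mean confidence - observed accuracy|.
/// Returns None until at least one prediction has been tracked.
fn estimate_uncertainty(history: &[Prediction]) -> Option<f64> {
    if history.is_empty() {
        return None;
    }
    let n = history.len() as f64;
    let mean_conf: f64 = history.iter().map(|p| p.confidence).sum::<f64>() / n;
    let accuracy = history.iter().filter(|p| p.correct).count() as f64 / n;
    Some((mean_conf - accuracy).abs())
}

fn main() {
    let history = vec![
        Prediction { confidence: 0.9, correct: true },
        Prediction { confidence: 0.8, correct: false }, // overconfident miss
    ];
    // mean confidence 0.85 vs. observed accuracy 0.5 => gap ≈ 0.35
    let gap = estimate_uncertainty(&history).unwrap();
    assert!((gap - 0.35).abs() < 1e-9);
}
```

A large gap signals miscalibration, which a metacognition module can use to defer, ask for help, or trigger deeper reasoning.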
Inference Scaling
Adaptive computation depth based on confidence:
```rust
// Builder name and argument values are reconstructed placeholders.
let machine = NSRMachineBuilder::new()
    .with_inference_scaling()
    .build();

// Check if more computation is within budget
let can_continue = machine.can_continue_reasoning();

// Get a recommended depth based on confidence
let recommended = machine.scale_reasoning_depth(0.6);
```
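The scaling policy itself is just a mapping from confidence to compute. A toy, crate-independent version (thresholds and the linear schedule are illustrative):

```rust
/// Map confidence to a recommended reasoning depth: low confidence buys
/// more compute, high confidence exits early.
fn scale_depth(confidence: f64, min_depth: usize, max_depth: usize) -> usize {
    let c = confidence.clamp(0.0, 1.0);
    // Linear interpolation: confidence 1.0 -> min_depth, 0.0 -> max_depth.
    min_depth + ((max_depth - min_depth) as f64 * (1.0 - c)).round() as usize
}

fn main() {
    assert_eq!(scale_depth(1.0, 1, 9), 1); // confident: shallow pass
    assert_eq!(scale_depth(0.5, 1, 9), 5); // unsure: medium depth
    assert_eq!(scale_depth(0.0, 1, 9), 9); // lost: maximum depth
}
```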
Library Learning
DreamCoder-style automatic abstraction discovery:
```rust
// Builder name and arguments are reconstructed placeholders.
let mut machine = NSRMachineBuilder::new()
    .with_library_learning()
    .build();

// After training on multiple programs...
let abstractions = machine.learn_library();
for abs in &abstractions {
    println!("discovered abstraction: {abs:?}");
}
```
Continual Learning
Prevent catastrophic forgetting with EWC and replay buffers:
```rust
// Builder name and arguments are reconstructed placeholders.
let mut machine = NSRMachineBuilder::new()
    .with_continual_learning()
    .build();

// Record experiences for replay (argument names illustrative)
machine.record_experience(input, target, loss);
```
All Features at Once
```rust
// Builder name, argument values, and the accessor are reconstructed placeholders.
let machine = NSRMachineBuilder::new()
    .embedding_dim(256)
    .hidden_size(512)
    .with_all_advanced_features() // enable everything
    .build();

// Check enabled features
println!("{:?}", machine.enabled_features());
// ["library_learning", "metacognition", "explainability", "continual_learning",
//  "mcts", "probabilistic", "graph_of_thoughts", "inference_scaling", "vsa",
//  "differentiable_logic"]
```
Presets for Common Tasks
```rust
// Module path reconstructed as a placeholder.
use stateset_nsr::nsr::presets;

// SCAN-like compositional tasks
let machine = presets::scan_machine();

// PCFG string manipulation
let machine = presets::pcfg_machine();

// HINT arithmetic tasks
let machine = presets::hint_machine();

// COGS semantic parsing
let machine = presets::cogs_machine();
```
REST API
Start the Server
Core Endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/metrics` | GET | Prometheus metrics |
| `/api/v1/reason` | POST | Main reasoning endpoint |
| `/api/v1/forward-chain` | POST | Forward chaining inference |
| `/api/v1/backward-chain` | POST | Backward chaining inference |
| `/api/v1/explain` | POST | Generate explanations |
| `/api/v1/entities` | GET/POST | Entity management |
| `/api/v1/entities/:id` | DELETE | Delete an entity |
| `/api/v1/entities/batch` | POST/DELETE | Batch create/delete entities |
| `/api/v1/triples` | POST/DELETE | Create/delete facts |
| `/api/v1/triples/batch` | POST | Batch create triples |
| `/api/v1/rules` | GET/POST | Rule management |
| `/api/v1/rules/:id` | DELETE | Delete a rule |
| `/api/v1/query` | POST | Query knowledge base |
| `/api/v1/sessions` | POST/GET | Session management |
Example Requests
```bash
# Reasoning with strategy selection (request body fields are illustrative)
curl -X POST http://localhost:8080/api/v1/reason \
  -H "Content-Type: application/json" \
  -H "x-api-key: $NSR_API_KEY" \
  -d '{"query": "Is socrates mortal?", "strategy": "HybridWeighted"}'

# Response (body fields were elided in this snapshot; see the API Reference)
```
Examples
The repository includes comprehensive examples demonstrating various features:
Basic Examples
Advanced NSR Machine Examples
Domain-Specific Examples
Project Structure
src/
├── app/ # NSR AI Application Layer
│ ├── engine.rs # NSREngine (main entry point)
│ ├── llm.rs # LLM client abstraction
│ ├── logic.rs # Prolog-like logic engine
│ ├── reasoning.rs # Recursive reasoning loop
│ ├── symbolic.rs # Facts, Rules, Tasks
│ ├── pipeline.rs # NSR pipeline orchestration
│ ├── cache.rs # Query result caching
│ └── tools.rs # Tool registry
├── nsr/ # NSR Machine (ICLR 2024)
│ ├── machine.rs # NSRMachine orchestrator (1,560 lines)
│ ├── gss.rs # Grounded Symbol System (706 lines)
│ ├── perception.rs # Neural perception module
│ ├── parser.rs # Dependency parser
│ ├── program.rs # Program synthesis (1,185 lines)
│ ├── learning.rs # Deduction-Abduction learning
│ ├── graph_of_thoughts.rs # GoT reasoning
│ ├── vsa.rs # Vector Symbolic Architecture (1,045 lines)
│ ├── mcts.rs # Monte Carlo Tree Search (990 lines)
│ ├── metacognition.rs # Self-monitoring (978 lines)
│ ├── library_learning.rs # DreamCoder-style learning (981 lines)
│ ├── continual.rs # Continual learning
│ ├── probabilistic.rs # Probabilistic inference (927 lines)
│ ├── differentiable_logic.rs # Differentiable theorem proving
│ ├── inference_scaling.rs # Adaptive computation
│ ├── explain.rs # Explainability
│ └── transformer.rs # Transformer architecture
├── reasoning/ # NSR Engine reasoning
│ ├── mod.rs # NSREngine + strategies (1,156 lines)
│ ├── logic.rs # First-order logic
│ ├── hybrid.rs # Hybrid reasoning
│ └── explanation.rs # Explanation generation
├── knowledge/ # Knowledge representation
│ ├── graph.rs # Knowledge graph
│ ├── rules.rs # Rule engine
│ ├── query.rs # Query execution
│ └── constraints.rs # Constraint validation
├── neural/ # Neural backends
│ ├── embeddings.rs # Vector embeddings
│ ├── inference.rs # Neural inference
│ ├── fastembed.rs # FastEmbed integration
│ └── ort.rs # ONNX Runtime
├── api/ # REST API (Axum)
│ ├── routes.rs # Route definitions
│ ├── handlers.rs # Endpoint handlers (1,250+ lines)
│ ├── state.rs # Application state
│ └── metrics.rs # Prometheus metrics
├── agents/ # Autonomous agents
├── config/ # Configuration
├── bin/nsr_ai.rs # CLI application
├── main.rs # Server entry point
└── lib.rs # Library exports
Configuration
Config File (`config.toml`)

```toml
# Section and key names below are reconstructed placeholders — the original
# identifiers were elided in this snapshot; verify against the crate's
# configuration reference before use.
[server]
host = "0.0.0.0"
port = 8080
rate_limit = 60             # requests per minute

[reasoning]
confidence_threshold = 0.7
max_depth = 10
neural_weight = 0.8
strategy = "HybridWeighted"

[neural]
backend = "openai"          # auto-detected if an OpenAI key is present
embedding_dim = 1536        # matches text-embedding-3-small
model = "text-embedding-3-small"

[machine]
embedding_dim = 256
hidden_size = 512
max_iterations = 100
beam_width = 8
advanced_features = true

[logging]
level = "info"
format = "json"
```
Environment Variables
```bash
# Variable names below are reconstructed from elsewhere in this README; the
# originals were elided in this snapshot — verify against the configuration
# reference.
NSR_LLM_PROVIDER=openai                   # openai, anthropic, local, mock (name illustrative)
NSR_NEURAL_API_KEY=...                    # API key for the LLM provider
NSR_NEURAL_MODEL=text-embedding-3-small   # Model name
NSR_NEURAL_API=http://localhost:11434     # Base URL for a local LLM
NSR_NEURAL_BACKEND=auto                   # Optional; auto-selected if an OpenAI key is present
```
OpenAI quickstart
```bash
export OPENAI_API_KEY=sk-...
export NSR_NEURAL_MODEL=text-embedding-3-small   # or text-embedding-3-large (3072-dim)
export NSR_NEURAL_BACKEND=openai
# Optional: NSR_NEURAL_API=https://api.openai.com/v1/embeddings
# Optional: NSR_PARSER_CHECKPOINT=/path/to/parser.safetensors (required for non-loopback binds)

cargo run -- serve --host 0.0.0.0 --port 8080
```
On non-loopback hosts the server will refuse to start if it would fall back to the mock backend or if a parser checkpoint is missing. Provide a real backend key and a trained parser checkpoint for production.
For highest embedding quality, use `NSR_NEURAL_MODEL=text-embedding-3-large` and set `NSR_EMBEDDING_DIM=3072`.
Production profile (secure defaults)
- `NSR_NEURAL_BACKEND=openai`, `NSR_NEURAL_MODEL=text-embedding-3-large`, `NSR_EMBEDDING_DIM=3072`, `OPENAI_API_KEY` set.
- `NSR_PUBLIC_ROUTES_REQUIRE_AUTH=true` (default) and `NSR_API_KEYS` configured; always send `x-api-key`.
- `NSR_CORS_ORIGINS=https://your.domain` (comma-separated). Required when binding to non-loopback hosts; the server will refuse to start otherwise.
- `NSR_KB_PATH=/var/lib/nsr/knowledge.json` (or a Postgres URL) so knowledge persists.
- `NSR_PARSER_CHECKPOINT=/var/lib/nsr/parser.safetensors` (required for non-loopback/prod deployments).
- Rate limiting tuned: `NSR_RATE_LIMIT=60` (or stricter) per minute; optionally set per-key `rate_limit_per_minute` in `server.api_keys`. Keep `NSR_MAX_BODY_SIZE` bounded.
- Metrics: the Prometheus `/metrics` endpoint exports LLM/embedding latencies and token counts; scrape and alert on `nsr_llm_requests_total` / `nsr_llm_request_duration_ms`.
Feature Flags
```toml
# Feature names reconstructed from usage elsewhere in this README;
# verify against Cargo.toml.
[features]
default = ["server"]
server = ["axum", "tower", "tower-http"]
onnx = ["ort"]             # ONNX Runtime support
torch = ["tch"]            # PyTorch bindings
embeddings = ["fastembed"] # FastEmbed support
```
Testing & Benchmarks
```bash
# Commands marked (reconstructed) were elided in this snapshot.

# Run all tests
cargo test

# Run specific module tests (reconstructed)
cargo test nsr

# Run integration tests (reconstructed)
cargo test --tests

# Deterministic reasoning and neural backend contract tests
cargo test --test cat_end_to_end

# Opt-in FastEmbed smoke (requires the embeddings feature and local model availability)
NSR_RUN_FASTEMBED_TESTS=1 cargo test --features embeddings

# Run benchmarks
cargo bench

# Run with coverage (reconstructed; requires cargo-llvm-cov)
cargo llvm-cov
```
For planned lightweight eval expectations and how to run them, see docs/EVALS.md.
Performance
The NSR framework is optimized for production use:
- Async/await throughout with Tokio runtime
- DashMap for concurrent thread-safe state
- Query caching for repeated operations
- Clause indexing for efficient rule matching
- Batch inference for neural operations
Research References
The NSR Machine implements concepts from:
- ICLR 2024 NSR Paper: Grounded Symbol System architecture
- DreamCoder: Library learning and program synthesis
- AlphaGo/Zero: MCTS for hypothesis search
- Neural Theorem Provers: Differentiable logic
- Elastic Weight Consolidation: Continual learning
Documentation
- API Reference - REST API documentation
- Advanced Modules - MCTS, GoT, VSA, Metacognition, Library Learning
- Deployment Guide - Production deployment with Docker/Kubernetes
- Benchmarks - Performance benchmarks
- Migration Guide - Basic to advanced NSR
- Optional OpenAI smoke: `NSR_RUN_OPENAI_SMOKE=1 OPENAI_API_KEY=... cargo test -p stateset-nsr --test openai_smoke`
License
This project is licensed under the Business Source License 1.1 (BSL-1.1).
Key Terms:
- Free Use: Development, testing, personal, and non-production use
- Production Use: Allowed, except for offering a competing hosted NSR service
- Change Date: December 8, 2028
- Change License: Apache License 2.0
After the Change Date, this software will be available under the Apache 2.0 license.
Contributing
Contributions welcome! Please read CONTRIBUTING.md first.
Citation
If you use StateSet NSR in your research, please cite:
Related Projects
- stateset-agents - RL agents framework
- stateset-api - Commerce API