TensorLogic Meta Crate
Unified access to all TensorLogic components
This is the top-level umbrella crate that re-exports all TensorLogic components for convenient access. Instead of importing individual crates, you can use this meta crate to access the entire TensorLogic ecosystem.
Overview
TensorLogic compiles logical rules (predicates, quantifiers, implications) into tensor equations (einsum graphs) with a minimal DSL + IR, enabling neural/symbolic/probabilistic models within a unified tensor computation framework.
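For intuition, consider a rule such as `∃y. Knows(x, y) ∧ Trusts(y, z)`: under the default mapping (AND as element-wise product, ∃ as a sum over the bound axis) it lowers to a sum-product contraction, essentially the einsum `"xy,yz->xz"`. The snippet below is a hand-written sketch of that contraction in plain Rust, purely to illustrate the semantics; it does not use the TensorLogic API.

```rust
// Hand-rolled illustration of the lowering idea (no TensorLogic API involved):
// AND -> element-wise product, ∃y -> sum over the y axis, so
// ∃y. Knows(x,y) ∧ Trusts(y,z) contracts like einsum("xy,yz->xz").
fn exists_y_and(knows: &[Vec<f32>], trusts: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (nx, ny, nz) = (knows.len(), trusts.len(), trusts[0].len());
    let mut out = vec![vec![0.0; nz]; nx];
    for x in 0..nx {
        for z in 0..nz {
            for y in 0..ny {
                out[x][z] += knows[x][y] * trusts[y][z]; // product = AND, sum = ∃
            }
        }
    }
    out
}

fn main() {
    let knows = vec![vec![1.0, 0.0], vec![0.5, 1.0]];   // Knows[x][y]
    let trusts = vec![vec![0.0, 1.0], vec![1.0, 0.25]]; // Trusts[y][z]
    println!("{:?}", exists_y_and(&knows, &trusts));    // soft truth of ∃y. K ∧ T
}
```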
Quick Start
Add to your Cargo.toml:
```toml
[dependencies]
tensorlogic = "0.1.0-beta.1"
```
Basic Usage
```rust
use tensorlogic::prelude::*;

// Define logical expressions
let x = Term::var("x");
let y = Term::var("y");
let knows = TLExpr::pred("knows", vec![x, y]);

// Compile to tensor graph
let graph = compile_to_einsum(&knows)?;

// Execute with backend
let mut executor = Scirs2Exec::new();
let result = executor.forward(&graph)?;
```
Architecture
The meta crate provides organized access to three layers:
Planning Layer (Engine-Agnostic)
```rust
use tensorlogic::ir::*;        // AST and IR types
use tensorlogic::compiler::*;  // Logic → tensor compilation
use tensorlogic::infer::*;     // Execution traits
use tensorlogic::adapters::*;  // Symbol tables, domains
```
Components:
- `tensorlogic::ir` - Core IR types (`Term`, `TLExpr`, `EinsumGraph`)
- `tensorlogic::compiler` - Logic-to-tensor mapping with static analysis
- `tensorlogic::infer` - Execution/autodiff traits (`TlExecutor`, `TlAutodiff`)
- `tensorlogic::adapters` - Symbol tables, axis metadata, domain masks
Execution Layer (SciRS2-Powered)
```rust
use tensorlogic::scirs_backend::*; // SciRS2 runtime executor
use tensorlogic::train::*;         // Training infrastructure
```
Components:
- `tensorlogic::scirs_backend` - Runtime executor with CPU/SIMD/GPU features
- `tensorlogic::train` - Training loops, loss wiring, schedules, callbacks
Integration Layer
```rust
use tensorlogic::oxirs_bridge::*;     // RDF*/SHACL integration
use tensorlogic::sklears_kernels::*;  // ML kernels
use tensorlogic::quantrs_hooks::*;    // PGM integration
use tensorlogic::trustformers::*;     // Transformer components
```
Components:
- `tensorlogic::oxirs_bridge` - RDF*/GraphQL/SHACL → TL rules; provenance binding
- `tensorlogic::sklears_kernels` - Logic-derived similarity kernels for SkleaRS
- `tensorlogic::quantrs_hooks` - PGM/message-passing interop for QuantRS2
- `tensorlogic::trustformers` - Transformer-as-rules (attention/FFN as einsum)
Prelude Module
For convenience, commonly used types are available through the prelude:
```rust
use tensorlogic::prelude::*;
```
This imports:
- Core types: `Term`, `TLExpr`, `EinsumGraph`, `EinsumNode`
- Compilation: `compile_to_einsum`, `CompilerContext`, `CompilationConfig`
- Execution: `TlExecutor`, `TlAutodiff`, `Scirs2Exec`
- Errors: `IrError`, `CompilerError`
Examples
This crate includes five comprehensive examples demonstrating all features:

- Basic predicate and compilation
- Existential quantifier with reduction
- Full execution with the SciRS2 backend
- OxiRS bridge with RDF* data
- Comparing the 6 compilation strategy presets
Features
Compilation Strategies
TensorLogic supports 6 preset compilation strategies:
- soft_differentiable - Neural network training (smooth gradients)
- hard_boolean - Discrete Boolean logic (exact semantics)
- fuzzy_godel - Gödel fuzzy logic (min/max operations)
- fuzzy_product - Product fuzzy logic (probabilistic)
- fuzzy_lukasiewicz - Łukasiewicz fuzzy logic (bounded)
- probabilistic - Probabilistic interpretation
```rust
use tensorlogic::compiler::{compile_with_config, CompilationConfig};

// Pick a preset strategy, then compile with it
let config = CompilationConfig::soft_differentiable();
let graph = compile_with_config(&expr, &config)?;
```
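For a feel of how the fuzzy presets differ, the snippet below writes out the textbook Gödel, product, and Łukasiewicz connectives on scalar truth values. It illustrates the standard definitions the preset names refer to; it is not code generated by, or taken from, the compiler.

```rust
// Standard fuzzy connectives behind the preset names (textbook definitions,
// written out by hand for illustration; the crate's generated graphs may differ).
fn godel_and(a: f32, b: f32) -> f32 { a.min(b) }                      // fuzzy_godel
fn godel_or(a: f32, b: f32) -> f32 { a.max(b) }

fn product_and(a: f32, b: f32) -> f32 { a * b }                       // fuzzy_product
fn product_or(a: f32, b: f32) -> f32 { a + b - a * b }

fn lukasiewicz_and(a: f32, b: f32) -> f32 { (a + b - 1.0).max(0.0) }  // fuzzy_lukasiewicz
fn lukasiewicz_or(a: f32, b: f32) -> f32 { (a + b).min(1.0) }

fn main() {
    let (a, b) = (0.7, 0.4);
    println!("Gödel AND:       {}", godel_and(a, b));       // 0.4
    println!("Product AND:     {}", product_and(a, b));     // ≈ 0.28
    println!("Łukasiewicz AND: {}", lukasiewicz_and(a, b)); // ≈ 0.1
    println!("Gödel OR:        {}", godel_or(a, b));        // 0.7
    println!("Product OR:      {}", product_or(a, b));      // ≈ 0.82
    println!("Łukasiewicz OR:  {}", lukasiewicz_or(a, b));  // 1.0
}
```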
Logic-to-Tensor Mapping
Default mappings (configurable per use case):
| Logic Operation | Tensor Equivalent | Notes |
|---|---|---|
| `AND(a, b)` | `a * b` (Hadamard) | Element-wise multiplication |
| `OR(a, b)` | `max(a, b)` | Or soft variant |
| `NOT(a)` | `1 - a` | Or temperature-controlled |
| `∃x. P(x)` | `sum(P, axis=x)` | Or max for hard |
| `∀x. P(x)` | Dual of ∃ | Or product reduction |
| `a → b` | `max(1 - a, b)` | Or ReLU variant |
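To make the quantifier rows concrete, here is a hand-written sketch of the reductions on a vector of truth values, following the table above (sum or max for ∃, and the dual min or a product reduction for ∀). It is illustrative only and independent of the crate's generated graphs.

```rust
// Quantifiers as axis reductions over truth values (per the table above).
// Hard semantics: ∃ -> max, ∀ -> min (its De Morgan dual).
// Soft alternatives include a sum for ∃ and a product reduction for ∀.
fn exists_soft(p: &[f32]) -> f32 { p.iter().sum() }
fn exists_hard(p: &[f32]) -> f32 { p.iter().cloned().fold(0.0, f32::max) }
fn forall_soft(p: &[f32]) -> f32 { p.iter().product() }
fn forall_hard(p: &[f32]) -> f32 { p.iter().cloned().fold(1.0, f32::min) }

fn main() {
    let p = [0.9, 0.2, 0.7]; // P(x) for the three members of x's domain
    println!("∃x soft: {}", exists_soft(&p)); // ≈ 1.8   (sum over the axis)
    println!("∃x hard: {}", exists_hard(&p)); // 0.9     (max)
    println!("∀x soft: {}", forall_soft(&p)); // ≈ 0.126 (product)
    println!("∀x hard: {}", forall_hard(&p)); // 0.2     (min)
}
```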
Feature Flags
Control which components are included:
```toml
[dependencies]
tensorlogic = { version = "0.1.0-beta.1", features = ["simd"] }
```
Available features:
- `simd` - Enable SIMD acceleration in the SciRS2 backend (2-4x speedup)
- `gpu` - Enable GPU support (future)
Documentation
- Project Guide: CLAUDE.md
- API Reference: docs.rs/tensorlogic
- Main README: README.md
- Tutorial: Check individual crate READMEs for detailed guides
Component Documentation
Each component has comprehensive documentation:
- tensorlogic-ir - IR and AST types
- tensorlogic-compiler - Compilation
- tensorlogic-infer - Execution traits
- tensorlogic-scirs-backend - SciRS2 backend
- tensorlogic-train - Training
- tensorlogic-adapters - Symbol tables
- tensorlogic-oxirs-bridge - RDF* integration
- tensorlogic-sklears-kernels - ML kernels
- tensorlogic-quantrs-hooks - PGM integration
- tensorlogic-trustformers - Transformers
Development
Building
```bash
# Build the meta crate
cargo build

# Build with SIMD support
cargo build --features simd

# Run tests
cargo test

# Run examples
cargo run --example <example_name>
```
Testing
The meta crate includes all component tests:
```bash
# Run all tests
cargo test

# Run with nextest (faster)
cargo nextest run
```
Version Compatibility
This meta crate, version 0.1.0-beta.1, includes:
| Component | Version | Status |
|---|---|---|
| tensorlogic-ir | 0.1.0-beta.1 | ✅ Production Ready |
| tensorlogic-compiler | 0.1.0-beta.1 | ✅ Production Ready |
| tensorlogic-infer | 0.1.0-beta.1 | ✅ Production Ready |
| tensorlogic-scirs-backend | 0.1.0-beta.1 | ✅ Production Ready |
| tensorlogic-train | 0.1.0-beta.1 | ✅ Complete |
| tensorlogic-adapters | 0.1.0-beta.1 | ✅ Complete |
| tensorlogic-oxirs-bridge | 0.1.0-beta.1 | ✅ Complete |
| tensorlogic-sklears-kernels | 0.1.0-beta.1 | ✅ Core Features |
| tensorlogic-quantrs-hooks | 0.1.0-beta.1 | ✅ Core Features |
| tensorlogic-trustformers | 0.1.0-beta.1 | ✅ Complete |
All components are synchronized to version 0.1.0-beta.1.
Migration from Individual Crates
If you were using individual crates:
Before:
```toml
[dependencies]
tensorlogic-ir = "0.1.0-beta.1"
tensorlogic-compiler = "0.1.0-beta.1"
tensorlogic-scirs-backend = "0.1.0-beta.1"
```
After:
```toml
[dependencies]
tensorlogic = "0.1.0-beta.1"
```
Your code remains the same; just update the imports:
Before:
```rust
use tensorlogic_ir::{Term, TLExpr};
use tensorlogic_compiler::compile_to_einsum;
```
After:
```rust
use tensorlogic::ir::{Term, TLExpr};
use tensorlogic::compiler::compile_to_einsum;

// Or use the prelude for common types
use tensorlogic::prelude::*;
```
Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
License
Licensed under the Apache License 2.0. See LICENSE for details.
References
- Tensor Logic Paper: https://arxiv.org/abs/2510.12269
- Project Repository: https://github.com/cool-japan/tensorlogic
- Documentation: https://docs.rs/tensorlogic
Part of the COOLJAPAN Ecosystem
For questions and support, please open an issue on GitHub.