Embeddenator – Holographic Computing Substrate
⚠️ EARLY DEVELOPMENT: This project is in active development (v0.20.0-alpha). APIs are unstable and subject to change. Not recommended for production use.
Version 0.20.0-alpha | Experimental Rust implementation of sparse ternary Vector Symbolic Architecture (VSA) for holographic data encoding.
Author: Tyler Zervas tz-dev@vectorweight.com
License: MIT (see LICENSE file)
Component Architecture
Embeddenator has been refactored into a modular component architecture with 6 independent library crates:
- embeddenator-vsa - Sparse ternary VSA primitives
- embeddenator-io - Codebook, manifest, engram I/O
- embeddenator-retrieval - Query engine with shift-sweep search
- embeddenator-fs - FUSE filesystem integration
- embeddenator-interop - Python/FFI bindings
- embeddenator-obs - Observability and metrics
Documentation: Component Architecture | Local Development | Versioning
Docker: Multi-arch images available at ghcr.io/tzervas/embeddenator (amd64 + arm64)
Current Capabilities
Implemented Features
- Engram Encoding/Decoding: Create holographic encodings (`.engram` files) of filesystems
- Bit-Perfect Reconstruction: Verified reconstruction of text and binary files from engrams
- VSA Operations: Bundle, bind, and other vector symbolic operations on sparse ternary vectors
- Hierarchical Encoding: Multi-level chunking for handling larger datasets
- SIMD Support: Optional AVX2/NEON optimizations (experimental, 2-4x speedup on supported hardware)
- CLI Tool: Command-line interface for ingest, extract, and query operations
- Component Architecture: Modular design with 6 independent library crates
- Test Coverage: 160+ integration tests covering core functionality (97.6% pass rate)
Experimental/In Development
- FUSE Filesystem: EmbrFS integration (partial implementation)
- Query Performance: Similarity search and retrieval (basic implementation)
- Docker Support: Multi-arch containers (in development)
- Large-Scale Testing: TB-scale validation (planned)
- OS Container Encoding: Full system encoding (proof-of-concept only)
What's New in v0.20.0-alpha
- Deterministic hierarchical artifacts - Stable manifest/sub-engram generation with sorted iteration
- Optional node sharding - `--max-chunks-per-node` cap for bounded per-node indexing cost
- Multi-input ingest - Ingest files and/or multiple directories with automatic namespacing
- Query performance - Reusable codebook index across shift-sweep + increased candidate pool
- Expanded test coverage - New determinism and E2E hierarchical artifact tests
- Updated documentation - CLI reference, hierarchical format, and selective unfolding guides
What's New in v0.2.0
- 6 comprehensive E2E regression tests, including a critical engram modification test
- Comprehensive test suite (unit + integration + e2e + doc tests)
- Intelligent test runner with accurate counting and debug mode
- Dual versioning strategy for OS builds (LTS + nightly)
- Zero clippy warnings (29 fixes applied)
- Extended OS support: Debian 12 LTS, Debian Testing/Sid, Ubuntu 24.04 LTS, Ubuntu Devel/Rolling
- Native amd64 CI (required pre-merge check) + arm64 ready for self-hosted runners
- Automated documentation with rustdoc and 9 doc tests
Core Concepts
Vector Symbolic Architecture (VSA)
Embeddenator uses sparse ternary vectors to represent data holographically:
- Bundle (⊕): Superposition operation for combining vectors
- Bind (⊗): Compositional operation with an approximate self-inverse property
- Cosine Similarity: Measure of vector similarity for retrieval
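A minimal sketch of these three operations on a sparse ternary vector, assuming a simplified `pos`/`neg` index-set representation. This is illustrative only; it is not the `embeddenator-vsa` crate's actual `SparseVec` API or binding scheme.

```rust
use std::collections::BTreeSet;

/// Simplified sparse ternary vector: only the nonzero indices are stored.
#[derive(Clone, Default)]
struct TernaryVec {
    pos: BTreeSet<usize>, // indices holding +1
    neg: BTreeSet<usize>, // indices holding -1
}

impl TernaryVec {
    fn get(&self, i: usize) -> i32 {
        if self.pos.contains(&i) { 1 } else if self.neg.contains(&i) { -1 } else { 0 }
    }

    fn set(&mut self, i: usize, v: i32) {
        if v > 0 { self.pos.insert(i); } else if v < 0 { self.neg.insert(i); }
    }

    fn support(&self) -> BTreeSet<usize> {
        self.pos.union(&self.neg).copied().collect()
    }

    /// Bundle (⊕): element-wise sum clamped back into {-1, 0, +1}.
    /// The result remains similar to both inputs (superposition).
    fn bundle(&self, other: &Self) -> Self {
        let mut out = Self::default();
        for i in self.support().union(&other.support()).copied() {
            out.set(i, (self.get(i) + other.get(i)).signum());
        }
        out
    }

    /// Bind (⊗): shown here as an element-wise product purely for illustration;
    /// the crate's actual binding construction is defined in embeddenator-vsa.
    fn bind(&self, other: &Self) -> Self {
        let mut out = Self::default();
        for i in self.support().intersection(&other.support()).copied() {
            out.set(i, self.get(i) * other.get(i));
        }
        out
    }

    /// Cosine similarity: dot product divided by the product of norms.
    fn cosine(&self, other: &Self) -> f64 {
        let dot: i32 = self.support().intersection(&other.support())
            .map(|&i| self.get(i) * other.get(i))
            .sum();
        let (na, nb) = (self.support().len() as f64, other.support().len() as f64);
        if na == 0.0 || nb == 0.0 { 0.0 } else { dot as f64 / (na.sqrt() * nb.sqrt()) }
    }
}

fn main() {
    let (mut a, mut b) = (TernaryVec::default(), TernaryVec::default());
    a.set(1, 1); a.set(7, -1);
    b.set(1, 1); b.set(9, 1);
    let sum = a.bundle(&b);
    println!("cos(a, a⊕b) = {:.2}", a.cosine(&sum)); // superposition stays similar to a
}
```

Element-wise product is only one possible ternary binding; sparse VSAs often use other constructions, and the design actually adopted is described in ADR-001.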
The ternary representation {-1, 0, +1} enables efficient computation:
- 39-40 trits can be encoded in a 64-bit register
- Sparse representation reduces memory and computation requirements
- Based on balanced ternary arithmetic
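As a back-of-the-envelope check on the register claim: 3^40 ≈ 1.2 × 10^19 < 2^64, so 40 trits fit in a single 64-bit word (hence the 39-40 range quoted above, depending on convention). A hedged sketch of base-3 packing, purely to illustrate the density argument rather than Embeddenator's actual storage layout:

```rust
/// Pack up to 40 balanced trits (-1, 0, +1) into a u64 using base-3 digits
/// (trit + 1 maps {-1, 0, +1} to {0, 1, 2}). Since 3^40 < 2^64, 40 trits fit.
fn pack_trits(trits: &[i8]) -> u64 {
    assert!(trits.len() <= 40, "at most 40 trits fit in a u64");
    trits.iter().rev().fold(0u64, |acc, &t| {
        debug_assert!((-1..=1).contains(&t));
        acc * 3 + (t + 1) as u64
    })
}

/// Recover `len` trits from a packed word (inverse of pack_trits).
fn unpack_trits(mut packed: u64, len: usize) -> Vec<i8> {
    let mut out = Vec::with_capacity(len);
    for _ in 0..len {
        out.push((packed % 3) as i8 - 1);
        packed /= 3;
    }
    out
}

fn main() {
    let trits = vec![1i8, 0, -1, 1, 1, 0, -1];
    let packed = pack_trits(&trits);
    assert_eq!(unpack_trits(packed, trits.len()), trits);
}
```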
Current Configuration:
- 10,000 dimensions with ~1% sparsity (~100-200 non-zero elements per vector)
- Provides balance between collision resistance and computational efficiency
- Higher dimensions and sparsity configurations are under investigation
Engrams
An engram is a holographic encoding of an entire filesystem or dataset:
- Single root vector containing superposition of all chunks
- Secure codebook with VSA-lens encoded data (not plaintext)
- Manifest tracking file structure and metadata
Data Encoding: The codebook stores encoded vector representations of data chunks. The encoding mechanism:
- Requires the codebook for reconstruction (codebook acts as a key)
- Uses sparse ternary vectors for holographic superposition
- Supports deterministic encoding and decoding
- Security Note: The cryptographic properties of this encoding are under research. Do not use for security-critical applications.
See ADR-007 for details on the encoding model.
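The sketch below illustrates the superposition idea end to end with toy dense ternary vectors: chunk vectors are bound to position keys, bundled into a single root, and a chunk is later recovered by unbinding and cleaning up against codebook entries. Everything here (the vector derivation, the dimension, names such as `ternary_vec` and `pos-1`) is assumed for illustration; the real pipeline achieves bit-perfect reconstruction through the codebook and manifest rather than this noisy nearest-neighbour cleanup.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const DIM: usize = 1024; // toy dimension; the real configuration uses ~10,000 dims

/// Deterministic pseudo-random ternary vector derived from a label
/// (illustrative only; the real codebook derivation is defined by the crate).
fn ternary_vec(label: &str) -> Vec<i8> {
    (0..DIM)
        .map(|i| {
            let mut h = DefaultHasher::new();
            (label, i).hash(&mut h);
            match h.finish() % 3 {
                0 => -1,
                1 => 0,
                _ => 1,
            }
        })
        .collect()
}

/// Bind: element-wise product (illustrative binding choice).
fn bind(a: &[i8], b: &[i8]) -> Vec<i8> {
    a.iter().zip(b).map(|(x, y)| x * y).collect()
}

/// Bundle: element-wise majority vote (sign of the sum).
fn bundle(vectors: &[Vec<i8>]) -> Vec<i8> {
    (0..DIM)
        .map(|i| vectors.iter().map(|v| v[i] as i32).sum::<i32>().signum() as i8)
        .collect()
}

fn cosine(a: &[i8], b: &[i8]) -> f64 {
    let dot: i32 = a.iter().zip(b).map(|(x, y)| (*x as i32) * (*y as i32)).sum();
    let na = (a.iter().filter(|&&x| x != 0).count() as f64).sqrt();
    let nb = (b.iter().filter(|&&x| x != 0).count() as f64).sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot as f64 / (na * nb) }
}

fn main() {
    // Chunk vectors (stand-ins; the real engine encodes chunk bytes via the codebook).
    let chunks = ["chunk-0", "chunk-1", "chunk-2"];
    let chunk_vecs: Vec<Vec<i8>> = chunks.iter().map(|&c| ternary_vec(c)).collect();

    // Bind each chunk vector to a position key, then bundle everything into the root.
    let keyed: Vec<Vec<i8>> = chunk_vecs
        .iter()
        .enumerate()
        .map(|(i, v)| bind(&ternary_vec(&format!("pos-{i}")), v))
        .collect();
    let root = bundle(&keyed); // the "engram" root vector

    // Recover chunk 1: unbind with its position key (bind is ~self-inverse),
    // then clean up by picking the most similar codebook entry.
    let noisy = bind(&ternary_vec("pos-1"), &root);
    let best = chunks
        .iter()
        .copied()
        .max_by(|a, b| {
            cosine(&noisy, &ternary_vec(a))
                .partial_cmp(&cosine(&noisy, &ternary_vec(b)))
                .unwrap()
        })
        .unwrap();
    println!("recovered: {best}"); // expected: chunk-1
}
```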
Quick Start
Installation
# Clone the repository
# Build with Cargo
# Or use the orchestrator
Basic Usage
# Ingest a directory into an engram
# Extract from an engram
# Query similarity
Using the Orchestrator
The orchestrator provides unified build, test, and deployment workflows:
# Quick start: build, test, and package everything
# Run integration tests
# Build Docker image
# Display system info
# Clean all artifacts
CLI Reference
Embeddenator provides the following commands for working with holographic engrams:
embeddenator --help
Get comprehensive help information:
# Show main help with examples
# Show detailed help for a specific command
ingest - Create Holographic Engram
Process one or more files and/or directories and encode them into a holographic engram.
# Basic ingestion
# Mix files and directories (repeat -i/--input)
# With verbose output
# Custom filenames
What it does:
- Recursively scans any input directories
- Ingests any input files directly
- Chunks files (4KB default)
- Encodes chunks using sparse ternary VSA
- Creates holographic superposition in root vector
- Saves engram (holographic data) and manifest (metadata)
extract - Reconstruct Files
Bit-perfect reconstruction of all files from an engram.
# Basic extraction
# With default filenames
# From backup
What it does:
- Loads engram and manifest
- Reconstructs directory structure
- Algebraically unbinds chunks from root vector
- Writes bit-perfect copies of all files
- Preserves file hierarchy and metadata
query - Similarity Search
Compute cosine similarity between a query file and engram contents.
# Query similarity
# With verbose output
# Using default engram
What it does:
- Encodes query file using VSA
- Computes cosine similarity with engram
- Returns similarity score
If --hierarchical-manifest and --sub-engrams-dir are provided, it also runs a store-backed hierarchical query and prints the top hierarchical matches.
Similarity interpretation:
- >0.75: Strong match, likely contains similar content
- 0.3-0.75: Moderate similarity, some shared patterns
- <0.3: Low similarity, likely unrelated content
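These bands are heuristics rather than API guarantees; a small sketch of how a caller might map the reported score onto them:

```rust
/// Rough interpretation bands for the similarity score reported by `query`.
/// The cut-offs mirror the guidance above and are heuristics, not hard thresholds.
fn interpret_similarity(score: f64) -> &'static str {
    match score {
        s if s > 0.75 => "strong match: likely contains similar content",
        s if s >= 0.3 => "moderate similarity: some shared patterns",
        _ => "low similarity: likely unrelated content",
    }
}

fn main() {
    for s in [0.92, 0.55, 0.12] {
        println!("{s:.2} -> {}", interpret_similarity(s));
    }
}
```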
query-text - Similarity Search (Text)
Encode a literal text string as a query vector and run the same retrieval path as query.
# With hierarchical selective unfolding:
bundle-hier - Build Hierarchical Retrieval Artifacts
Build a hierarchical manifest and a directory of sub-engrams from an existing flat root.engram + manifest.json. This enables store-backed selective unfolding queries.
# Optional: deterministically shard large nodes (bounds per-node indexing cost)
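Conceptually, sharding iterates a node's chunks in a stable sorted order and splits them into groups no larger than the cap, so the same input always produces the same sub-engram layout. A hedged sketch of that idea (the function and types are hypothetical, not the hierarchical manifest's real data structures):

```rust
/// Deterministically split a node's chunk IDs into shards of at most
/// `max_chunks_per_node`, mirroring the intent of the `--max-chunks-per-node` cap.
/// Sorting first makes the output independent of input ordering.
fn shard_node(mut chunk_ids: Vec<String>, max_chunks_per_node: usize) -> Vec<Vec<String>> {
    assert!(max_chunks_per_node > 0);
    chunk_ids.sort(); // stable, deterministic iteration order
    chunk_ids
        .chunks(max_chunks_per_node)
        .map(|shard| shard.to_vec())
        .collect()
}

fn main() {
    let ids = vec!["c3".to_string(), "c1".into(), "c4".into(), "c2".into()];
    let shards = shard_node(ids, 2);
    assert_eq!(shards, vec![vec!["c1", "c2"], vec!["c3", "c4"]]);
}
```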
Docker Usage (Experimental)
Note: Docker support is in development and may not be fully functional.
Build Tool Image
Run in Container
# Ingest data
# Extract data
Test Coverage
Embeddenator has comprehensive test coverage:
- 160+ integration tests across 23 test suites
- 97.6% pass rate (166/170 tests passing)
- Test categories: Balanced ternary, codebook operations, VSA properties, error recovery, hierarchical operations, CLI integration
- Continuous testing: All core functionality verified with each build
Verified Capabilities
- ✅ Text file reconstruction: Byte-for-byte identical reconstruction verified
- ✅ Binary file recovery: Exact binary reconstruction tested
- ✅ VSA operations: Bundle, bind, and similarity operations tested
- ✅ Hierarchical encoding: Multi-level chunking verified
- ✅ Error recovery: Corruption and concurrency handling tested
In Development
- ⚠️ Large-scale testing: TB-scale datasets not yet fully validated
- ⚠️ Performance optimization: Benchmarking and tuning ongoing
- ⚠️ Security audit: Cryptographic properties under research
Architecture
Core Components
- SparseVec: Sparse ternary vector implementation
  - pos: indices with +1 value
  - neg: indices with -1 value
  - Efficient operations: bundle, bind, cosine similarity
  - Hardware-optimized: 39-40 trits per 64-bit register
- EmbrFS: Holographic filesystem layer
  - Chunked encoding (4KB default)
  - Manifest for file metadata
  - Codebook for chunk storage
- CLI: Command-line interface
  - Ingest: directory → engram
  - Extract: engram → directory
  - Query: similarity search
Architecture Decision Records (ADRs)
Comprehensive architectural documentation is available in docs/adr/:
- ADR-001: Sparse Ternary VSA
  - Core VSA design and sparse ternary vectors
  - Balanced ternary mathematics and hardware optimization
  - 64-bit register encoding (39-40 trits per register)
- ADR-002: Multi-Agent Workflow System
- ADR-003: Self-Hosted Runner Architecture
- ADR-004: Holographic OS Container Design
  - Configuration-driven builder for Debian/Ubuntu
  - Dual versioning strategy (LTS + nightly)
  - Package isolation capabilities
- ADR-005: Hologram-Based Package Isolation
  - Factorization of holographic containers
  - Balanced ternary encoding for compact representation
  - Package-level granular updates
  - Hardware optimization strategy for 64-bit CPUs
- ADR-006: Dimensionality and Sparsity Scaling
  - Scaling holographic space to TB-scale datasets
  - Adaptive sparsity strategy (maintain constant computational cost)
  - Performance analysis and collision probability projections
  - Impact on the 100% bit-perfect guarantee
  - Deep operation resilience for factorization
- ADR-007: Codebook Security and Reversible Encoding
  - VSA-as-a-lens cryptographic primitive
  - Quantum-resistant encoding mechanism (under research)
  - Decoding is mathematically trivial with the codebook key and designed to be infeasible without it
  - Bulk encryption with selective decryption
  - Integration with holographic indexing
See docs/adr/README.md for the complete ADR index.
File Format
Engram (.engram):
- Binary serialized format (bincode)
- Contains root SparseVec and codebook
- Self-contained holographic state
Manifest (.json):
- Human-readable file listing
- Chunk mapping and metadata
- Required for extraction
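For orientation, here is a hedged sketch of what this serialization shape looks like, assuming `serde`, `bincode` 1.x, and `serde_json` as dependencies. The struct and field names below are illustrative stand-ins, not the crate's real on-disk schema.

```rust
use serde::{Deserialize, Serialize};

/// Illustrative stand-ins for the on-disk shapes; the real definitions live in
/// embeddenator-vsa / embeddenator-io and may differ.
#[derive(Serialize, Deserialize)]
struct SparseVecDisk {
    pos: Vec<u32>, // indices holding +1
    neg: Vec<u32>, // indices holding -1
}

#[derive(Serialize, Deserialize)]
struct EngramDisk {
    root: SparseVecDisk,
    codebook: Vec<(String, SparseVecDisk)>, // chunk id -> encoded chunk vector
}

#[derive(Serialize, Deserialize)]
struct ManifestDisk {
    files: Vec<FileEntry>,
}

#[derive(Serialize, Deserialize)]
struct FileEntry {
    path: String,
    chunk_ids: Vec<String>,
    size_bytes: u64,
}

fn save(engram: &EngramDisk, manifest: &ManifestDisk) -> std::io::Result<()> {
    // .engram: compact binary (bincode); .json: human-readable manifest.
    std::fs::write("root.engram", bincode::serialize(engram).expect("serialize engram"))?;
    std::fs::write("manifest.json", serde_json::to_vec_pretty(manifest).expect("serialize manifest"))?;
    Ok(())
}
```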
Development
API Documentation
Comprehensive API documentation is available:
# Generate and open documentation locally
# Or use the automated script
# View online (after publishing)
# https://docs.rs/embeddenator
The documentation includes:
- Module-level overviews with examples
- Function documentation with usage patterns
- 9 runnable doc tests demonstrating API usage
- VSA operation examples (bundle, bind, cosine)
Running Tests
# Recommended: everything Cargo considers testable (lib/bin/tests/examples/benches)
# Doc tests only
# Optimized build tests (useful before benchmarking)
# Feature-gated correctness/perf gates
# Long-running/expensive tests are explicitly opt-in:
# - QA memory scaling (requires env var + ignored flag)
EMBEDDENATOR_RUN_QA_MEMORY=1
# - Multi-GB soak test (requires env var + ignored flag)
EMBEDDENATOR_RUN_SOAK=1
# Integration tests via orchestrator
# Full test suite
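The opt-in gating for the expensive tests above follows the standard Rust pattern of combining `#[ignore]` with an environment-variable check, roughly as sketched here (the test body is illustrative):

```rust
#[test]
#[ignore] // only runs with `cargo test -- --ignored` (or --include-ignored)
fn qa_memory_scaling() {
    // Double opt-in: the env var must also be set, e.g.
    //   EMBEDDENATOR_RUN_QA_MEMORY=1 cargo test -- --ignored
    if std::env::var("EMBEDDENATOR_RUN_QA_MEMORY").ok().as_deref() != Some("1") {
        eprintln!("skipping: set EMBEDDENATOR_RUN_QA_MEMORY=1 to run");
        return;
    }
    // ... long-running memory-scaling assertions go here ...
}
```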
Notes:
- Seeing many tests marked as "ignored" during `cargo bench` is expected: Cargo runs the unit test harness in libtest's `--bench` mode, which skips normal `#[test]` functions (it prints `i` for each). Use `cargo test` (commands above) to actually execute tests.
- `cargo test --workspace --all-targets` will also compile and run Criterion benches in a fast "smoke" mode (they print `Testing ... Success`). This is intended to catch broken benches early.
CI/CD and Build Monitoring
The project uses separated CI/CD workflows for optimal performance and reliability:
# Test CI build locally with monitoring
# Monitor for specific timeout (in seconds)
CI Workflow Structure:
Three separate workflows eliminate duplication and provide clear responsibilities:
- ci-pre-checks.yml - Fast validation (fmt, clippy, unit tests, doc tests)
- ci-amd64.yml - Full AMD64 build and test (REQUIRED PRE-MERGE CHECK)
- ci-arm64.yml - ARM64 build and test (configured for self-hosted runners)
CI Features:
- Separated workflows prevent duplicate runs
- AMD64 workflow is a required status check - PRs cannot merge until it passes
- Parallel builds using all available cores
- Intelligent timeout management (15min tests, 10min builds, 30min total)
- Build artifact upload on failure
- Performance metrics reporting
- Automatic parallelization with `CARGO_BUILD_JOBS`
Architecture Support:
| Architecture | Status | Runner Type | Trigger | Notes |
|---|---|---|---|---|
| amd64 (x86_64) | ✅ Production | GitHub-hosted (ubuntu-latest) | Every PR (required check) | Stable, 5-7 min |
| arm64 (aarch64) | 🚧 Ready | Self-hosted (pending deployment) | Manual only | Will be enabled on merge to main |
ARM64 Deployment Roadmap:
- ✅ Phase 1: Root cause analysis completed - GitHub doesn't provide standard hosted ARM64 runners
- ✅ Phase 2: Workflow configured for self-hosted runners with labels `["self-hosted", "linux", "ARM64"]`
- 🚧 Phase 3: Deploy self-hosted ARM64 infrastructure (in progress)
- ⏳ Phase 4: Manual testing and validation
- ⏳ Phase 5: Enable automatic trigger on merge to main only
Why Self-Hosted for ARM64?
- GitHub Actions doesn't provide standard hosted ARM64 runners
- Self-hosted provides native execution (no emulation overhead)
- Cost-effective for frequent builds
- Ready to deploy when infrastructure is available
See .github/workflows/README.md for complete CI/CD documentation and ARM64 setup guide.
Self-Hosted Runner Automation
Embeddenator includes a comprehensive Python-based automation system for managing GitHub Actions self-hosted runners with complete lifecycle management and multi-architecture support:
Features:
- Automated registration with short-lived tokens
- Complete lifecycle management (register → run → deregister)
- Configurable auto-deregistration after idle timeout
- Manual mode for persistent runners
- Multi-runner deployment support
- Multi-architecture support (x64, ARM64, RISC-V)
- QEMU emulation for cross-architecture runners
- Health monitoring and status reporting
- Automatic cleanup of Docker resources
- Flexible configuration via .env file or CLI arguments
Supported Architectures:
- x64 (AMD64) - Native x86_64 runners
- ARM64 (aarch64) - ARM64 runners (native or emulated via QEMU)
- RISC-V (riscv64) - RISC-V runners (native or emulated via QEMU)
Quick Start:
# 1. Copy and configure environment file
# Edit .env and set GITHUB_REPOSITORY and GITHUB_TOKEN
# 2. Run in auto mode (registers, starts, monitors, auto-deregisters when idle)
# 3. Or use manual mode (keeps running until stopped)
RUNNER_MODE=manual
Multi-Architecture Examples:
# Deploy ARM64 runners on x86_64 hardware (with emulation, auto-detect runtime)
RUNNER_TARGET_ARCHITECTURES=arm64
# Deploy runners for all architectures
RUNNER_TARGET_ARCHITECTURES=x64,arm64,riscv64 RUNNER_COUNT=6
# Deploy with automatic QEMU installation (requires sudo)
RUNNER_EMULATION_AUTO_INSTALL=true RUNNER_TARGET_ARCHITECTURES=arm64
# Use specific emulation method (docker, podman, or qemu)
RUNNER_EMULATION_METHOD=podman RUNNER_TARGET_ARCHITECTURES=arm64
# Use Docker for emulation
RUNNER_EMULATION_METHOD=docker RUNNER_TARGET_ARCHITECTURES=arm64,riscv64
Individual Commands:
# Register runner(s)
# Start runner service(s)
# Monitor and manage lifecycle
# Check status
# Stop and deregister
Advanced Usage:
# Deploy multiple runners
# Custom labels
# Auto-deregister after 10 minutes of inactivity
RUNNER_IDLE_TIMEOUT=600
Configuration Options:
Key environment variables (see .env.example for full list):
- `GITHUB_REPOSITORY` - Repository to register runners for (required)
- `GITHUB_TOKEN` - Personal access token with repo scope (required)
- `RUNNER_MODE` - Deployment mode: `auto` (default) or `manual`
- `RUNNER_IDLE_TIMEOUT` - Auto-deregister timeout in seconds (default: 300)
- `RUNNER_COUNT` - Number of runners to deploy (default: 1)
- `RUNNER_LABELS` - Comma-separated runner labels
- `RUNNER_EPHEMERAL` - Enable ephemeral runners (deregister after one job)
- `RUNNER_TARGET_ARCHITECTURES` - Target architectures: `x64`, `arm64`, `riscv64` (comma-separated)
- `RUNNER_ENABLE_EMULATION` - Enable QEMU emulation for cross-architecture (default: true)
- `RUNNER_EMULATION_METHOD` - Emulation method: `auto`, `qemu`, `docker`, `podman` (default: auto)
- `RUNNER_EMULATION_AUTO_INSTALL` - Auto-install QEMU if missing (default: false, requires sudo)
See .env.example for complete configuration documentation.
Deployment Modes:
- Auto Mode (default): Runners automatically deregister after being idle for a specified timeout
  - Perfect for cost optimization
  - Ideal for CI/CD pipelines with sporadic builds
  - Runners terminate when the queue is empty
- Manual Mode: Runners keep running until manually stopped
  - Best for development environments
  - Useful for persistent infrastructure
  - Explicit control over runner lifecycle
See .github/workflows/README.md for complete CI/CD documentation and ARM64 setup guide.
Project Structure
embeddenator/
├── Cargo.toml                  # Rust dependencies
├── src/
│   └── main.rs                 # Complete implementation
├── tests/
│   ├── e2e_regression.rs       # 6 E2E tests (includes critical engram modification test)
│   ├── integration_cli.rs      # 7 integration tests
│   └── unit_tests.rs           # 11 unit tests
├── Dockerfile.tool             # Static binary packaging
├── Dockerfile.holographic      # Holographic OS container
├── orchestrator.py             # Unified build/test/deploy
├── runner_manager.py           # Self-hosted runner automation entry point (NEW)
├── runner_automation/          # Runner automation package (NEW)
│   ├── __init__.py             # Package initialization (v1.1.0)
│   ├── config.py               # Configuration management
│   ├── github_api.py           # GitHub API client
│   ├── installer.py            # Runner installation
│   ├── runner.py               # Individual runner lifecycle
│   ├── manager.py              # Multi-runner orchestration
│   ├── emulation.py            # QEMU emulation for cross-arch (NEW)
│   ├── cli.py                  # Command-line interface
│   └── README.md               # Package documentation
├── .env.example                # Runner configuration template (NEW)
├── ci_build_monitor.sh         # CI hang detection and monitoring
├── generate_docs.sh            # Documentation generation
├── .github/
│   └── workflows/
│       ├── ci-pre-checks.yml         # Pre-build validation (every PR)
│       ├── ci-amd64.yml              # AMD64 build (required for merge)
│       ├── ci-arm64.yml              # ARM64 build (self-hosted, pending)
│       ├── build-holographic-os.yml  # OS container builds
│       ├── build-push-images.yml     # Multi-OS image pipeline
│       ├── nightly-builds.yml        # Nightly bleeding-edge builds
│       └── README.md                 # Complete CI/CD documentation
├── input_ws/                   # Example input (gitignored)
├── workspace/                  # Build artifacts (gitignored)
└── README.md                   # This file
Contributing
We welcome contributions to Embeddenator! Here's how you can help:
Getting Started
- Fork the repository on GitHub
- Clone your fork locally:
- Create a feature branch:
Development Workflow
- Make your changes with clear, focused commits
- Add tests for new functionality:
  - Unit tests in `src/` modules
  - Integration tests in `tests/integration_*.rs`
  - End-to-end tests in `tests/e2e_*.rs`
- Run the full test suite:
  # Run all Rust tests
  # Run integration tests via orchestrator
  # Run full validation suite
- Check code quality:
  # Run Clippy linter (zero warnings required)
  # Format code
  # Check Python syntax
- Test cross-platform (if applicable):
  # Build Docker images
  # Test on different architectures
Pull Request Guidelines
- Write clear commit messages describing what and why
- Reference issues in commit messages (e.g., "Fixes #123")
- Keep PRs focused - one feature or fix per PR
- Update documentation if you change CLI options or add features
- Ensure all tests pass before submitting
- Maintain code coverage - aim for >80% test coverage
Code Style
- Rust: Follow standard Rust conventions (use `cargo fmt`)
- Python: Follow the PEP 8 style guide
- Comments: Document complex algorithms, especially VSA operations
- Error handling: Use proper error types; avoid `.unwrap()` in library code
Areas for Contribution
We especially welcome contributions in these areas:
- Performance optimizations for VSA operations
- Benchmarking tools and performance analysis
- Additional test cases covering edge cases
- Documentation improvements and examples
- Bug fixes and error handling improvements
- Multi-platform support (Windows, macOS testing)
- New features (incremental updates, compression options, etc.)
Reporting Issues
When reporting bugs, please include:
- Embeddenator version (`embeddenator --version`)
- Operating system and architecture
- Rust version (`rustc --version`)
- Minimal reproduction steps
- Expected vs. actual behavior
- Relevant log output (use the `--verbose` flag)
Questions and Discussions
- Issues: Bug reports and feature requests
- Discussions: Questions, ideas, and general discussion
- Pull Requests: Code contributions with tests
Code of Conduct
- Be respectful and inclusive
- Provide constructive feedback
- Focus on the technical merits
- Help others learn and grow
Thank you for contributing to Embeddenator!
Advanced Usage
Custom Chunk Size
Modify chunk_size in EmbrFS::ingest_file for different trade-offs:
let chunk_size = 8192; // Larger chunks = better compression, slower reconstruction
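As a rough picture of the knob being tuned, chunking is just a fixed-size split of the file's bytes; fewer, larger chunks mean fewer vectors to superpose and index. The helper below is hypothetical and only mirrors the idea:

```rust
/// Split a file's bytes into fixed-size chunks (the last chunk may be shorter).
/// Hypothetical helper mirroring what ingest does with `chunk_size` (4096 by default).
fn chunk_bytes(data: &[u8], chunk_size: usize) -> Vec<&[u8]> {
    assert!(chunk_size > 0);
    data.chunks(chunk_size).collect()
}

fn main() {
    let data = vec![0u8; 10_000];
    assert_eq!(chunk_bytes(&data, 4096).len(), 3); // default 4 KB chunks
    assert_eq!(chunk_bytes(&data, 8192).len(), 2); // larger chunks -> fewer vectors
}
```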
Hierarchical Encoding
For very large datasets, implement multi-level engrams:
// Level 1: Individual files
// Level 2: Directory summaries
// Level 3: Root engram of all directories
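A hedged sketch of that multi-level idea, using a toy `bundle` like the one in the engram sketch earlier (names and structure are illustrative, not the hierarchical manifest's actual API):

```rust
/// Element-wise majority bundle of several toy dense ternary vectors.
fn bundle(vectors: &[Vec<i8>]) -> Vec<i8> {
    let dim = vectors[0].len();
    (0..dim)
        .map(|i| vectors.iter().map(|v| v[i] as i32).sum::<i32>().signum() as i8)
        .collect()
}

fn main() {
    // Level 1: per-file vectors (stand-ins; real ones come from chunk encoding).
    let file_a = vec![1i8, -1, 0, 1];
    let file_b = vec![-1i8, 1, 1, 0];
    let file_c = vec![0i8, 1, -1, -1];

    // Level 2: one summary vector per directory.
    let dir_docs = bundle(&[file_a, file_b]);
    let dir_src = bundle(&[file_c]);

    // Level 3: root engram bundles the directory summaries.
    let root = bundle(&[dir_docs, dir_src]);
    println!("root summary: {root:?}");
}
```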
Algebraic Operations
Combine multiple engrams:
let combined = engram1.root.bundle(&engram2.root); // method signature illustrative; see the embeddenator-vsa docs
// Now combined contains both datasets holographically
Troubleshooting
Out of Memory
Reduce chunk size or process files in batches:
# Process directories separately
for dir in dir_a dir_b dir_c; do
    embeddenator ingest -i "$dir"   # illustrative; adjust output names and flags per `embeddenator ingest --help`
done
Reconstruction Mismatches
Verify manifest and engram are from the same ingest:
# Check manifest metadata
# Re-ingest if needed
Performance Tips
- Use release builds: `cargo build --release` is 10-100x faster
- Enable SIMD acceleration: for query-heavy workloads, build with `--features simd` and `RUSTFLAGS="-C target-cpu=native"`; see docs/SIMD_OPTIMIZATION.md for details on the 2-4x query speedup
- Batch processing: ingest multiple directories separately for parallel processing
- SSD storage: engram I/O benefits significantly from fast storage
- Memory: ensure sufficient RAM for large codebooks (~100 bytes per chunk)
License
MIT License - see LICENSE file for details
References
Vector Symbolic Architectures (VSA)
- Vector Symbolic Architectures: Kanerva, P. (2009)
- Sparse Distributed Representations
- Holographic Reduced Representations (HRR)
Ternary Computing and Hardware Optimization
- Balanced Ternary - Wikipedia overview
- Ternary Computing - Historical and mathematical foundations
- Three-Valued Logic and Quantum Computing
- Optimal encoding: 39-40 trits in 64-bit registers (39 for signed, 40 for unsigned)
Architecture Documentation
- ADR-001: Sparse Ternary VSA - Core design and hardware optimization
- ADR-005: Hologram Package Isolation - Balanced ternary implementation
- Complete ADR Index - All architecture decision records
Use Cases and Applications
- Specialized AI Assistant Models - Architecture for deploying coding and research assistant LLMs with embeddenator-enhanced retrieval, multi-model parallel execution, and document-driven development workflows
Support
Getting Help
- Documentation: This README and built-in help (`embeddenator --help`)
- Issues: Report bugs or request features at https://github.com/tzervas/embeddenator/issues
- Discussions: Ask questions and share ideas at https://github.com/tzervas/embeddenator/discussions
- Examples: See the `examples/` directory (coming soon) for usage patterns
Common Questions
Q: What file types are supported?
A: All file types - text, binary, executables, images, etc. Embeddenator is file-format agnostic.
Q: Is the reconstruction really bit-perfect?
A: Yes, for files tested so far. We have 160+ tests verifying reconstruction accuracy. However, large-scale (TB) testing is still in progress.
Q: What's the project's development status?
A: This is alpha software (v0.20.0-alpha). Core functionality works and is tested, but APIs are unstable and not recommended for production use. See PROJECT_STATUS.md for details.
Q: Can I combine multiple engrams?
A: Yes! The bundle operation allows combining engrams. This is tested for basic cases but advanced algebraic operations are still experimental.
Q: What's the maximum data size?
A: Hierarchical encoding is designed for large datasets. Currently tested with MB-scale data; TB-scale testing is planned but not yet validated.
Q: How does this compare to compression?
A: Embeddenator is not primarily a compression tool. It creates holographic representations that enable algebraic operations on encoded data. Size characteristics vary by data type.
Security
⚠️ Security Notice: The cryptographic properties of the encoding mechanism are under research. Do not use Embeddenator for security-critical applications or as a replacement for established cryptographic systems.
If you discover a security vulnerability, please email tz-dev@vectorweight.com or create a private security advisory on GitHub rather than opening a public issue.
Documentation
Project Documentation
- PROJECT_STATUS.md - Complete status: what works, what's experimental, what's planned
- TESTING.md - Comprehensive testing guide and infrastructure documentation
- LICENSE - MIT License terms
Technical Documentation
- Component Architecture - Modular crate structure
- Local Development - Development environment setup
- ADR Index - Architecture Decision Records
API Documentation
# Generate and view API documentation
Handoff Documentation
- QA to Documentation Handoff - Latest QA phase completion report
License: MIT - See LICENSE file for full text
Copyright: 2025-2026 Tyler Zervas tz-dev@vectorweight.com
Built with Rust and Vector Symbolic Architecture principles.