docs.rs failed to build mirror-log-0.1.4
mirror-log
An append-only event log for capturing thoughts, notes, and data you do not want to lose.
mirror-log is local-first, SQLite-backed, and designed to be boring in the best way: easy to inspect, easy to script, and hard to accidentally lose context.
What Changed in v0.1.4
- `init_db` now accepts path-like inputs (`&str`, `PathBuf`, etc.) for cleaner integration usage.
- Duplicate events are allowed again (append-only semantics preserved), while hash-based lookup remains indexed for dedupe stats.
- `is_duplicate` now safely returns `false` when no matching hash exists.
- Chunk splitting is more robust and UTF-8 safe for large content.
- Integration tests and CLI-path handling were cleaned up; the test and lint pipeline is green.
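The UTF-8-safe splitting mentioned above boils down to never cutting a chunk boundary through a multi-byte character. A minimal std-only sketch of the idea (`split_chunks` is a hypothetical helper for illustration, not the crate's actual API):

```rust
/// Split `content` into chunks of at most `max_bytes` bytes without ever
/// cutting through a multi-byte UTF-8 character.
/// Returns (byte_offset, chunk_text) pairs.
fn split_chunks(content: &str, max_bytes: usize) -> Vec<(usize, String)> {
    // Any UTF-8 char is at most 4 bytes, so this guarantees progress.
    assert!(max_bytes >= 4, "max_bytes must fit any UTF-8 char");
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < content.len() {
        let mut end = (start + max_bytes).min(content.len());
        // Walk back to the nearest char boundary so we never split a
        // multi-byte character in half.
        while !content.is_char_boundary(end) {
            end -= 1;
        }
        chunks.push((start, content[start..end].to_string()));
        start = end;
    }
    chunks
}
```

Concatenating the chunk texts always reproduces the original string, and each chunk stays within the byte budget even for accented or non-Latin input.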
Core Principles
- Append-only: events are never updated or deleted.
- SQLite is the source of truth: your data stays local and inspectable.
- No hidden layers: direct SQL remains first-class.
- Source-aware logging: every event tracks where it came from.
Installation
Build from source with Cargo:
cargo build --release

Binary output:
target/release/mirror-log

Optional local install:
cargo install --path .
Quick Start
# Add one event
mirror-log add "a thought worth keeping" --source cli

# Add from file
mirror-log add-file notes.md

# Bulk import from stdin (one line = one event)
cat lines.txt | mirror-log stdin --source import

# Show recent
mirror-log show --last 20

# Search full events
mirror-log search "keyword"

# Search chunked content
mirror-log search "keyword" --chunks

# Ingestion stats (total/unique/duplicates)
mirror-log stats

# Database summary
mirror-log info
CLI Overview
Global flags:
- `--db <path>`: SQLite database path (default: `mirror.db`)
- `--batch-size <n>`: stdin ingest batch size (default: `1000`)
Commands:
- `add <content> [--source <name>] [--meta <json-or-text>]`
- `add-file <path> [--source <name>] [--meta <json-or-text>]`
- `stdin [--source <name>] [--meta <json-or-text>]`
- `show [--last <n>] [--source <name>] [--preview <chars>]`
- `search <term> [--preview <chars>] [--chunks]`
- `get <event-id>`
- `stats`
- `info`
Data Model
Main table: `events`
- `id TEXT PRIMARY KEY` (UUID)
- `timestamp INTEGER NOT NULL` (event timestamp)
- `source TEXT NOT NULL`
- `content TEXT NOT NULL`
- `meta TEXT NULL`
- `ingested_at INTEGER NOT NULL`
- `content_hash TEXT NULL` (SHA-256 for dedupe analytics)
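Put together, the columns above correspond to DDL along these lines (a reconstruction for reference, not the crate's exact migration):

```sql
-- Reconstructed from the column list above; names and types as documented.
CREATE TABLE IF NOT EXISTS events (
    id           TEXT PRIMARY KEY,      -- UUID
    timestamp    INTEGER NOT NULL,      -- event timestamp
    source       TEXT NOT NULL,
    content      TEXT NOT NULL,
    meta         TEXT NULL,
    ingested_at  INTEGER NOT NULL,
    content_hash TEXT NULL              -- SHA-256, for dedupe analytics
);
```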
Chunk table: `chunks`
- Stores chunked slices of event content (`event_id`, `chunk_index`, offsets, text, timestamp).
- Used by `search --chunks` and large-content workflows.
Additional enrichment tables (`event_tags`, `event_links`, `event_embeddings`, `enrichment_jobs`) also exist for layering future features without mutating raw events.
Direct SQLite Access
-- Ten most recent events
SELECT datetime(timestamp, 'unixepoch'), source, content
FROM events
ORDER BY timestamp DESC
LIMIT 10;

-- Event counts per source
SELECT source, COUNT(*)
FROM events
GROUP BY source
ORDER BY COUNT(*) DESC;

-- Total vs. unique events (dedupe overview)
SELECT COUNT(*) AS total,
       COUNT(DISTINCT content_hash) AS unique_events
FROM events;
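Since duplicates are kept (append-only) but hashed, the events ingested more than once can also be listed directly. A sketch assuming the schema above:

```sql
-- Content hashes that appear more than once, most-duplicated first
SELECT content_hash, COUNT(*) AS copies
FROM events
WHERE content_hash IS NOT NULL
GROUP BY content_hash
HAVING COUNT(*) > 1
ORDER BY copies DESC;
```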
Development
License
AGPL-3.0-or-later. See LICENSE.