# animedb

animedb is a Rust-first metadata project for local media servers.
**Advisory:** animedb stores and normalizes public metadata, but it does not override provider Terms of Service, attribution requirements, authentication rules, rate limits, or mature-content restrictions. Before enabling bulk sync for a source, verify that your intended usage is allowed by that source and configure conservative sync budgets.
It has two consumption modes that can be used separately or together:
- local-first: manage a local SQLite catalog with schema, downloads, sync, FTS5 search and JSON source payloads
- remote-first: query normalized metadata from remote providers without forcing client applications to manage persistence or provider-specific normalization
The project also ships a Rust GraphQL API on top of the same crate.
See REFERENCE.md for the current library and API reference.
## Supported providers
| Provider | Media kinds | Data source | Episodes | Licensed under |
|---|---|---|---|---|
| AniList | Anime, Manga | GraphQL API | N | AniList Terms |
| Jikan (MyAnimeList) | Anime, Manga | REST API | Y | Jikan MIT |
| Kitsu | Anime, Manga | REST API | Y | Kitsu API Policy |
| TVmaze | Shows | REST API | Y | CC BY-SA 4.0 |
| IMDb | Movies, Shows | Official TSV datasets | Y | IMDb Conditions |
## Workspace

- `crates/animedb` — library crate with SQLite schema management, sync and query APIs
- `crates/animedb-api` — GraphQL API binary built on top of `animedb`
## Feature flags

```toml
# Full featured (local SQLite + all providers) — default
animedb = "0.6.2"

# Remote-only, no SQLite dependency (safe for sqlx-based projects)
animedb = { version = "0.6.2", default-features = false, features = ["remote"] }
```
- `local-db` (default): local SQLite storage, sync state persistence, and the [AnimeDb] type. This feature pulls in `rusqlite` with a bundled SQLite.
- `remote` (default): remote provider clients and the normalized data model. Zero native dependencies.
Why feature gates? `local-db` requires `rusqlite` (bundled SQLite). Many Rust projects already use `sqlx` with its own SQLite linkage, and Cargo rejects putting both native SQLite linkages in the same dependency graph. If your project uses `sqlx`, depend on `animedb` with only `features = ["remote"]` to get all provider clients, normalization types, and sync data structures without any SQLite conflict.
## Current Rust surface

### Local-first
A sketch of the local-first flow. Argument values, IDs, and printed fields below are illustrative reconstructions; see REFERENCE.md for exact signatures:

```rust
use animedb::{generate_database_with_report, sync_database, ExternalSource};

let (db, report) = generate_database_with_report("animedb.sqlite")?;
println!("{report:?}");

let updated = sync_database(&db)?;
println!("synced {updated} records");

// e.g. look up "Monster" by a provider ID
let monster = db.anime_metadata.by_external_id(ExternalSource::Jikan, "19")?;
println!("{monster:?}");

let show = db.show_metadata.search("breaking bad")?;
let show = db.get_by_external_id(ExternalSource::Tvmaze, "169")?;
let movies = db.movie_metadata.search("heat")?;
println!("{movies:?}");
# Ok::<(), Box<dyn std::error::Error>>(())
```
### Remote-first

Query strings and IDs below are illustrative:

```rust
use animedb::AnimeDb;

let remote = AnimeDb::remote_anilist();
let results = remote.anime_metadata.search("monster")?;
let media = remote.anime_metadata.by_id("21")?;
# Ok::<(), Box<dyn std::error::Error>>(())
```
Or choose the provider dynamically:
```rust
use animedb::{AnimeDb, Provider};

// The `Provider` selector name is illustrative.
let remote = AnimeDb::remote(Provider::Tvmaze);
let results = remote.show_metadata.search("breaking bad")?;
# Ok::<(), Box<dyn std::error::Error>>(())
```
## Media kinds

All providers map to one of four supported kinds: anime, manga, show, and movie.
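The classification can be pictured as a simple Rust enum. This is a hypothetical mirror for illustration; the crate's actual type and variant names may differ:

```rust
// Hypothetical mirror of the crate's media-kind classification.
#[derive(Debug, PartialEq)]
enum MediaKind {
    Anime,
    Manga,
    Show,
    Movie,
}

// Per the provider table above, IMDb contributes movies and shows,
// while e.g. TVmaze only ever yields shows.
fn imdb_kinds() -> Vec<MediaKind> {
    vec![MediaKind::Movie, MediaKind::Show]
}
```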
## SQLite schema

The SQLite catalog is created and migrated by the crate itself. The current schema includes:

- `media` — canonical normalized records
- `media_alias` — normalized aliases and synonyms
- `media_external_id` — source-specific identifiers
- `source_record` — raw per-source JSON payloads and source update metadata
- `field_provenance` — winner-per-field audit trail for canonical merge decisions
- `sync_state` — persisted sync checkpoints/cursors
- `media_fts` — FTS5 index for title, alias and synopsis search
- `episode` — episode metadata for anime and shows
## Episodes

The episode table stores enriched episode data fetched from providers. Key fields:

- `season_number`, `episode_number`, `absolute_number` — episode numbering
- `title_display`, `title_original` — localized titles
- `synopsis`, `air_date`, `runtime_minutes`, `thumbnail_url` — metadata
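As a rough in-memory picture, one `episode` row could be modeled as below. Field types here are assumptions for illustration, not the crate's actual definitions; everything beyond the numbering is optional because provider coverage varies:

```rust
// Hypothetical mirror of one `episode` table row.
#[derive(Debug, Default)]
struct EpisodeRow {
    season_number: Option<u32>,
    episode_number: Option<u32>,
    absolute_number: Option<u32>,
    title_display: Option<String>,
    title_original: Option<String>,
    synopsis: Option<String>,
    air_date: Option<String>, // e.g. an ISO-8601 date string
    runtime_minutes: Option<u32>,
    thumbnail_url: Option<String>,
}
```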
## Single Media Sync

Query and merge episodes for a media record (paths and IDs below are illustrative):

```rust
use animedb::{AnimeDb, ExternalSource};

let mut db = AnimeDb::open("animedb.sqlite")?;

// Finds the stored media by Kitsu ID, then tries every episode-capable
// external ID attached to that merged media record (Jikan/MAL, Kitsu, TVmaze).
db.fetch_and_store_episodes_by_external_id(ExternalSource::Kitsu, "1376")?;

// Retrieve the media document with its episode list
let doc = db.media_document_by_external_id(ExternalSource::Kitsu, "1376")?;
println!("{} episodes stored", doc.episodes.len());
# Ok::<(), Box<dyn std::error::Error>>(())
```
For remote-only callers that want one merged episode record per flat episode number, use the
merged aggregation helper. It fetches from every episode-capable external ID, groups by
absolute_number.or(episode_number), skips records with no episode number, and selects each
field from the highest-priority provider that supplied a value. For anime episode data, Jikan
wins over Kitsu when both have a value, while Kitsu can still fill fields missing from Jikan.
IDs and the `remote_jikan` constructor below are illustrative:

```rust
use animedb::{fetch_merged_episodes_from_external_ids, AnimeDb};

let jikan = AnimeDb::remote_jikan();
let media = jikan.anime_metadata.by_id("19")?.unwrap();
let episodes = fetch_merged_episodes_from_external_ids(&media.external_ids)?;
println!("{} merged episodes", episodes.len());
# Ok::<(), Box<dyn std::error::Error>>(())
```
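The precedence rule can be sketched in self-contained Rust. The types below are simplified stand-ins, not the crate's episode model; only the grouping and winner-per-field logic described above is reproduced:

```rust
use std::collections::BTreeMap;

// Simplified episode record. Lower `priority` wins a contested field,
// e.g. 0 = Jikan, 1 = Kitsu for anime episode data.
#[derive(Clone)]
struct ProviderEpisode {
    priority: u8,
    absolute_number: Option<u32>,
    episode_number: Option<u32>,
    title: Option<String>,
    synopsis: Option<String>,
}

// Group by absolute_number.or(episode_number), skip unnumbered records,
// then fill each field from the highest-priority provider that supplied it.
fn merge_episodes(records: Vec<ProviderEpisode>) -> BTreeMap<u32, ProviderEpisode> {
    let mut groups: BTreeMap<u32, Vec<ProviderEpisode>> = BTreeMap::new();
    for rec in records {
        if let Some(n) = rec.absolute_number.or(rec.episode_number) {
            groups.entry(n).or_default().push(rec);
        }
    }
    groups
        .into_iter()
        .map(|(n, mut group)| {
            group.sort_by_key(|r| r.priority);
            let mut merged = group[0].clone();
            for r in &group[1..] {
                // Lower-priority providers only fill fields still missing.
                if merged.title.is_none() {
                    merged.title = r.title.clone();
                }
                if merged.synopsis.is_none() {
                    merged.synopsis = r.synopsis.clone();
                }
            }
            (n, merged)
        })
        .collect()
}
```

With one Jikan record (title only) and one Kitsu record (title and synopsis) for the same episode number, the merged result takes the title from Jikan and the synopsis from Kitsu, matching the rule stated above.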
If you need the raw per-provider records instead, call the lower-level aggregation API directly (IDs and the `remote_jikan` constructor below are illustrative):

```rust
use animedb::{fetch_episodes_from_external_ids, AnimeDb};

let jikan = AnimeDb::remote_jikan();
let media = jikan.anime_metadata.by_id("19")?.unwrap();
let provider_records = fetch_episodes_from_external_ids(&media.external_ids)?;
println!("{} provider record sets", provider_records.len());
# Ok::<(), Box<dyn std::error::Error>>(())
```
If you intentionally want one provider only, call the provider facade directly (the ID is illustrative):

```rust
let episodes = jikan.anime_metadata.episodes("19")?;
```
## Bulk Seeding

To seed the entire database with episode metadata (including high-performance IMDb bulk dump ingestion; the path below is illustrative):

```rust
use animedb::AnimeDb;

let mut db = AnimeDb::open("animedb.sqlite")?;

// Ingest IMDb episode dumps and query APIs for all other providers
let total = db.sync_service.sync_all_episodes()?;
println!("stored {total} episodes");
# Ok::<(), Box<dyn std::error::Error>>(())
```
Note: `media.episodes` is the total episode count from provider metadata.
`MediaDocument.episodes` is the enriched list of persisted episode records
fetched from a specific provider.
The connection is configured with:

- `PRAGMA journal_mode=WAL`
- `PRAGMA synchronous=NORMAL`
- `PRAGMA foreign_keys=ON`
- `PRAGMA busy_timeout=5000`
- `PRAGMA temp_store=MEMORY`
## GraphQL API

The GraphQL API is provided by `animedb-api`. Run it locally with `make debug-api`, which runs the binary via `cargo run`.
Environment variables:

- `ANIMEDB_DATABASE_PATH` — SQLite file path, default `/data/animedb.sqlite`
- `ANIMEDB_LISTEN_ADDR` — bind address, default `0.0.0.0:8080`
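A minimal sketch of resolving these settings with their documented defaults. The `setting` helper is hypothetical, not part of `animedb-api`:

```rust
use std::env;

// Hypothetical helper: read an environment variable, falling back to the
// documented default when it is unset.
fn setting(name: &str, default: &str) -> String {
    env::var(name).unwrap_or_else(|_| default.to_string())
}

// Resolving the two documented variables:
// setting("ANIMEDB_DATABASE_PATH", "/data/animedb.sqlite")
// setting("ANIMEDB_LISTEN_ADDR", "0.0.0.0:8080")
```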
Query example (search shows):

```graphql
{
  search(query: "breaking bad", options: { limit: 5, mediaKind: SHOW }) {
    mediaId
    titleDisplay
    mediaKind
    genres
    externalIds { source sourceId }
  }
}
```
## Docker

Build and run the Rust GraphQL API with `make docker-build` and `make docker-run`.
## Make targets

The repository includes a Makefile for common workflows:

- `make build` — compile the workspace
- `make test` — run the Rust test suite
- `make test-e2e` — run the end-to-end integration test (`scripts/e2e_test.sh`)
- `make docker-build` — build the API image
- `make docker-run` — run the API image locally
- `make debug-api` — run the GraphQL API directly with `cargo run`