
Crate narrate_this


§narrate-this

A Rust SDK that turns text, URLs, or search queries into narrated videos — complete with TTS, captions, and stock visuals.

§Quick start

use narrate_this::{
    ContentPipeline, ContentSource, ElevenLabsConfig, ElevenLabsTts,
    FfmpegRenderer, FirecrawlScraper, FsAudioStorage, OpenAiConfig,
    OpenAiKeywords, PexelsSearch, RenderConfig, StockMediaPlanner,
};

let pipeline = ContentPipeline::builder()
    .content(FirecrawlScraper::new("http://localhost:3002"))
    .tts(ElevenLabsTts::new(ElevenLabsConfig {
        api_key: "your-key".into(),
        ..Default::default()
    }))
    .media(StockMediaPlanner::new(
        OpenAiKeywords::new(OpenAiConfig {
            api_key: "your-key".into(),
            ..Default::default()
        }),
        PexelsSearch::new("your-key"),
    ))
    .renderer(FfmpegRenderer::new(), RenderConfig::default())
    .audio_storage(FsAudioStorage::new("./output"))
    .build()?;

let output = pipeline
    .process(ContentSource::ArticleUrl {
        url: "https://example.com/article".into(),
        title: Some("My Article".into()),
    })
    .await?;

§Pipeline stages

Content Source → Narration → Text Transforms → TTS → Media → Audio Storage → Video Render

Only TTS is required. Everything else is optional — skip content sourcing if you pass raw text, skip media if you just want audio, skip rendering if you don’t need video.
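As a sketch of that optionality (assuming the builder accepts any subset of stages, as described above; all names here are from this crate's re-exports), an audio-only pipeline that skips content sourcing, media, and rendering might look like:

```rust
use narrate_this::{
    ContentPipeline, ContentSource, ElevenLabsConfig, ElevenLabsTts, FsAudioStorage,
};

// Audio-only: TTS is the sole required stage; no media planner, no renderer.
let pipeline = ContentPipeline::builder()
    .tts(ElevenLabsTts::new(ElevenLabsConfig {
        api_key: "your-key".into(),
        ..Default::default()
    }))
    .audio_storage(FsAudioStorage::new("./output"))
    .build()?;

// Passing raw text also skips the content-sourcing stage.
let output = pipeline
    .process(ContentSource::Text("Just narrate this.".into()))
    .await?;
```

The output here would contain the synthesized audio (and captions, if the TTS provider produces them) but no video.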

§Local / self-hosted AI

All OpenAI-backed providers accept a base_url for any OpenAI-compatible server (Ollama, LM Studio, vLLM, LocalAI, llama.cpp, etc.):

use narrate_this::{
    ContentPipeline, ContentSource, OpenAiConfig, OpenAiKeywords,
    OpenAiTts, OpenAiTtsConfig, PexelsSearch, StockMediaPlanner,
};

let pipeline = ContentPipeline::builder()
    .tts(OpenAiTts::new(OpenAiTtsConfig {
        base_url: "http://localhost:8880".into(), // e.g. Kokoro
        caption_model: Some("whisper-1".into()),
        ..Default::default()
    }))
    .media(StockMediaPlanner::new(
        OpenAiKeywords::new(OpenAiConfig {
            base_url: "http://localhost:11434".into(), // e.g. Ollama
            model: "llama3".into(),
            ..Default::default()
        }),
        PexelsSearch::new("your-key"),
    ))
    .build()?;

let output = pipeline
    .process(ContentSource::Text("Hello world".into()))
    .await?;

§Custom providers

Swap any stage by implementing the matching trait: TtsProvider, ContentProvider, KeywordExtractor, MediaSearchProvider, MediaPlanner, TextTransformer, AudioStorage, CacheProvider, or VideoRenderer.
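For illustration only, here is the general shape of plugging in a custom stage. The real trait signatures live in the traits module and may differ; the stand-in trait below is an assumption that only demonstrates the pattern of implementing a trait and handing it to the pipeline:

```rust
// Stand-in for `narrate_this::traits::TextTransformer` -- the actual method
// signature may differ; this is a self-contained sketch of the plug-in shape.
trait TextTransformer {
    fn transform(&self, text: &str) -> String;
}

/// A transformer that strips markdown heading markers so the
/// narrator doesn't read "#" characters aloud.
struct StripHeadings;

impl TextTransformer for StripHeadings {
    fn transform(&self, text: &str) -> String {
        text.lines()
            .map(|line| line.trim_start_matches('#').trim_start())
            .collect::<Vec<_>>()
            .join("\n")
    }
}

fn main() {
    let t = StripHeadings;
    let out = t.transform("## Intro\nHello world");
    assert_eq!(out, "Intro\nHello world");
    println!("{out}");
}
```

A custom implementation like this would then be registered on the builder in the same way as the built-in providers shown above.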

Re-exports§

pub use traits::AudioStorage;
pub use traits::CacheCategory;
pub use traits::CacheProvider;
pub use traits::ContentProvider;
pub use traits::KeywordExtractor;
pub use traits::MediaPlanner;
pub use traits::MediaSearchProvider;
pub use traits::MediaSearchResult;
pub use traits::PlannedMedia;
pub use traits::RenderConfig;
pub use traits::TextTransformer;
pub use traits::TtsProvider;
pub use traits::VideoRenderer;
pub use providers::elevenlabs::ElevenLabsConfig;
pub use providers::elevenlabs::ElevenLabsTts;
pub use providers::firecrawl::FirecrawlConfig;
pub use providers::firecrawl::FirecrawlScraper;
pub use providers::ffmpeg_renderer::FfmpegRenderer;
pub use providers::fs_storage::FsAudioStorage;
pub use providers::openai::LlmMediaPlanner;
pub use providers::openai::OpenAiConfig;
pub use providers::openai::OpenAiKeywords;
pub use providers::openai::OpenAiTransform;
pub use providers::openai_tts::OpenAiTts;
pub use providers::openai_tts::OpenAiTtsConfig;
pub use providers::pexels::PexelsSearch;
pub use providers::stock_planner::StockMediaPlanner;

Modules§

providers
traits

Structs§

AudioTrack
A background audio track to mix with the narration audio.
CaptionSegment
A single word-level caption with timing information.
ContentOutput
Complete pipeline output returned by ContentPipeline::process.
ContentPipeline
The main pipeline that orchestrates narration, TTS, media, and video rendering.
KeywordResult
Output from keyword extraction.
MediaAsset
A user-provided media asset with a description for AI-based media planning.
MediaSegment
A media segment tied to a time range in the narration audio.
NarrationStyle
Configurable style variables for narration prompt templates.
PipelineBuilder
Builder for ContentPipeline.
TimedChunk
A narration chunk with timing information, used by media planners.
TtsResult
Output from a TTS synthesis call.

Enums§

ContentSource
Input source for content creation.
MediaFallback
What to do when the media planner can’t match a user asset to a narration chunk.
MediaKind
The type of media asset (image or video).
MediaSource
The source of a media asset — a URL, local file path, or raw bytes.
PipelineProgress
Progress events during pipeline execution.
SdkError
Errors returned by the SDK.

Type Aliases§

Result
Convenience alias for std::result::Result<T, SdkError>.