pub struct FilterGraphBuilder { /* private fields */ }
Builder for constructing a FilterGraph.
Create one with FilterGraph::builder(), chain the desired filter
methods, then call build to obtain the graph.
§Examples
use ff_filter::{FilterGraph, ScaleAlgorithm, ToneMap};

let graph = FilterGraph::builder()
    .scale(1280, 720, ScaleAlgorithm::Fast)
    .tone_map(ToneMap::Hable)
    .build()?;

§Implementations
impl FilterGraphBuilder
pub fn afade_in(self, start_sec: f64, duration_sec: f64) -> Self
Audio fade-in from silence, starting at start_sec seconds and reaching
full volume after duration_sec seconds.
build returns FilterError::InvalidConfig if
duration_sec is ≤ 0.0.
pub fn afade_out(self, start_sec: f64, duration_sec: f64) -> Self
Audio fade-out to silence, starting at start_sec seconds and reaching
full silence after duration_sec seconds.
build returns FilterError::InvalidConfig if
duration_sec is ≤ 0.0.
pub fn areverse(self) -> Self
Reverse audio playback using FFmpeg’s areverse filter.
Warning: areverse buffers the entire clip in memory before producing
any output. Only use this on short clips to avoid excessive memory usage.
pub fn loudness_normalize(
    self,
    target_lufs: f32,
    true_peak_db: f32,
    lra: f32,
) -> Self
Apply EBU R128 two-pass loudness normalization.
target_lufs is the target integrated loudness (e.g. −23.0),
true_peak_db is the true-peak ceiling (e.g. −1.0), and
lra is the target loudness range in LU (e.g. 7.0).
Pass 1 measures integrated loudness with the ebur128 filter.
Pass 2 applies a linear volume correction. All audio frames are
buffered in memory between the two passes — use only for clips that
fit comfortably in RAM.
build returns FilterError::InvalidConfig if
target_lufs >= 0.0, true_peak_db > 0.0, or lra <= 0.0.
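The linear correction in pass 2 reduces to a single gain in dB: the distance between the target and the pass-1 measurement. A minimal sketch of that arithmetic (correction_db and measured_lufs are illustrative names, not part of ff_filter):

```rust
// Hypothetical sketch of the pass-2 linear correction: the gain (in dB)
// is the difference between the target and the measured loudness.
// `correction_db` is an illustrative helper, not part of ff_filter.
fn correction_db(target_lufs: f32, measured_lufs: f32) -> f32 {
    target_lufs - measured_lufs
}

fn main() {
    // A clip measured at -18.3 LUFS with a -23.0 LUFS target needs
    // roughly -4.7 dB of attenuation, applied as a volume filter.
    let gain = correction_db(-23.0, -18.3);
    assert!((gain + 4.7).abs() < 1e-4);
}
```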
pub fn normalize_peak(self, target_db: f32) -> Self
Normalize the audio peak level to target_db dBFS using a two-pass approach.
Pass 1 measures the true peak with astats=metadata=1.
Pass 2 applies volume={gain}dB so the output peak reaches target_db.
All audio frames are buffered in memory between the two passes — use only
for clips that fit comfortably in RAM.
build returns FilterError::InvalidConfig if
target_db > 0.0 (cannot normalize above digital full scale).
pub fn agate(self, threshold_db: f32, attack_ms: f32, release_ms: f32) -> Self
Apply a noise gate to suppress audio below a given threshold.
Uses FFmpeg’s agate filter. Audio below threshold_db (dBFS) is
attenuated; audio above it passes through unmodified. The threshold is
converted from dBFS to the linear amplitude ratio expected by agate.
build returns FilterError::InvalidConfig if
attack_ms or release_ms is ≤ 0.0.
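The dBFS-to-linear conversion mentioned above follows the standard amplitude formula 10^(dB/20). A sketch (dbfs_to_linear is an illustrative helper, not part of ff_filter):

```rust
// Sketch of the dBFS-to-linear conversion described above; agate's
// threshold option expects a linear amplitude ratio, not decibels.
// `dbfs_to_linear` is an illustrative helper, not part of ff_filter.
fn dbfs_to_linear(db: f32) -> f32 {
    10f32.powf(db / 20.0)
}

fn main() {
    // -20 dBFS is a linear amplitude of 0.1; 0 dBFS is full scale (1.0).
    assert!((dbfs_to_linear(-20.0) - 0.1).abs() < 1e-6);
    assert!((dbfs_to_linear(0.0) - 1.0).abs() < 1e-6);
}
```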
pub fn compressor(
    self,
    threshold_db: f32,
    ratio: f32,
    attack_ms: f32,
    release_ms: f32,
    makeup_db: f32,
) -> Self
Apply a dynamic range compressor to the audio.
Uses FFmpeg’s acompressor filter. Audio peaks above threshold_db
(dBFS) are reduced by ratio:1. makeup_db applies additional gain
after compression to restore perceived loudness.
build returns FilterError::InvalidConfig if
ratio < 1.0, attack_ms ≤ 0.0, or release_ms ≤ 0.0.
pub fn stereo_to_mono(self) -> Self
Downmix stereo audio to mono by equally mixing both channels.
Uses FFmpeg’s pan filter with the expression
mono|c0=0.5*c0+0.5*c1. The output has a single channel.
pub fn channel_map(self, mapping: &str) -> Self
Remap audio channels using FFmpeg’s channelmap filter.
mapping is a |-separated list of output channel names taken from
input channels, e.g. "FR|FL" swaps left and right.
build returns FilterError::InvalidConfig if
mapping is empty.
pub fn audio_delay(self, ms: f64) -> Self
Shift audio for A/V sync correction.
Positive ms: uses FFmpeg’s adelay filter to delay the audio
(audio plays later). Negative ms: uses FFmpeg’s atrim filter to
advance the audio by trimming the start (audio plays earlier).
Zero ms is a no-op.
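The positive/negative split above can be sketched as a choice between two filter strings. This is a hypothetical illustration of the mapping; the crate's actual output may differ in option spelling:

```rust
// Hypothetical sketch of the positive/negative split described above.
// `delay_filter` is an illustrative helper, not part of ff_filter.
fn delay_filter(ms: f64) -> Option<String> {
    if ms > 0.0 {
        // Delay all channels by `ms` milliseconds (audio plays later).
        Some(format!("adelay=delays={}:all=1", ms as i64))
    } else if ms < 0.0 {
        // Trim |ms| from the start and reset timestamps (plays earlier).
        Some(format!("atrim=start={},asetpts=PTS-STARTPTS", -ms / 1000.0))
    } else {
        None // zero is a no-op
    }
}

fn main() {
    assert_eq!(delay_filter(500.0).as_deref(), Some("adelay=delays=500:all=1"));
    assert_eq!(delay_filter(0.0), None);
}
```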
pub fn concat_audio(self, n_segments: u32) -> Self
Concatenate n_segments sequential audio inputs using FFmpeg’s concat filter.
Requires n_segments audio input slots (push to slots 0 through
n_segments - 1 in order). build returns
FilterError::InvalidConfig if n_segments < 2.
pub fn volume(self, gain_db: f64) -> Self
Adjust audio volume by gain_db decibels (negative = quieter).
pub fn equalizer(self, bands: Vec<EqBand>) -> Self
Apply a multi-band parametric equalizer.
Each EqBand maps to one FFmpeg filter node chained in sequence:
- EqBand::LowShelf → lowshelf
- EqBand::HighShelf → highshelf
- EqBand::Peak → equalizer
build returns FilterError::InvalidConfig if bands
is empty.
impl FilterGraphBuilder
pub fn trim(self, start: f64, end: f64) -> Self
Trim the stream to the half-open interval [start, end) in seconds.
pub fn scale(self, width: u32, height: u32, algorithm: ScaleAlgorithm) -> Self
Scale the video to width × height pixels using the given resampling
algorithm.
Use ScaleAlgorithm::Fast for the best speed/quality trade-off.
For highest quality use ScaleAlgorithm::Lanczos at the cost of
additional CPU time.
pub fn crop(self, x: u32, y: u32, width: u32, height: u32) -> Self
Crop a rectangle starting at (x, y) with the given dimensions.
pub fn fade_in(self, start_sec: f64, duration_sec: f64) -> Self
Fade in from black, starting at start_sec seconds and reaching full
brightness after duration_sec seconds.
pub fn fade_out(self, start_sec: f64, duration_sec: f64) -> Self
Fade out to black, starting at start_sec seconds and reaching full
black after duration_sec seconds.
pub fn fade_in_white(self, start_sec: f64, duration_sec: f64) -> Self
Fade in from white, starting at start_sec seconds and reaching full
brightness after duration_sec seconds.
pub fn fade_out_white(self, start_sec: f64, duration_sec: f64) -> Self
Fade out to white, starting at start_sec seconds and reaching full
white after duration_sec seconds.
pub fn rotate(self, angle_degrees: f64, fill_color: &str) -> Self
Rotate the video clockwise by angle_degrees, filling exposed corners
with fill_color.
fill_color accepts any color string understood by FFmpeg — for example
"black", "white", "0x00000000" (transparent), or "gray".
Pass "black" to reproduce the classic solid-background rotation.
pub fn tone_map(self, algorithm: ToneMap) -> Self
Apply HDR-to-SDR tone mapping using the given algorithm.
pub fn lut3d(self, path: &str) -> Self
Apply a 3D LUT colour grade from a .cube or .3dl file.
Uses FFmpeg’s lut3d filter with trilinear interpolation.
§Validation
build returns FilterError::InvalidConfig if:
- the extension is not .cube or .3dl, or
- the file does not exist at build time.
pub fn eq(self, brightness: f32, contrast: f32, saturation: f32) -> Self
Adjust brightness, contrast, and saturation using FFmpeg’s eq filter.
Valid ranges:
- brightness: −1.0 – 1.0 (neutral: 0.0)
- contrast: 0.0 – 3.0 (neutral: 1.0)
- saturation: 0.0 – 3.0 (neutral: 1.0; 0.0 = grayscale)
§Validation
build returns FilterError::InvalidConfig if any
value is outside its valid range.
pub fn curves(
    self,
    master: Vec<(f32, f32)>,
    r: Vec<(f32, f32)>,
    g: Vec<(f32, f32)>,
    b: Vec<(f32, f32)>,
) -> Self
Apply per-channel RGB color curves using FFmpeg’s curves filter.
Each argument is a list of (input, output) control points in [0.0, 1.0].
Pass an empty Vec for any channel that needs no adjustment.
§Validation
build returns FilterError::InvalidConfig if any
control point coordinate is outside [0.0, 1.0].
pub fn white_balance(self, temperature_k: u32, tint: f32) -> Self
Correct white balance using FFmpeg’s colorchannelmixer filter.
RGB channel multipliers are derived from temperature_k via Tanner
Helland’s Kelvin-to-RGB algorithm. The tint offset shifts the green
channel (positive = more green, negative = more magenta).
Valid ranges:
- temperature_k: 1000–40000 K (neutral daylight ≈ 6500 K)
- tint: −1.0 – 1.0
§Validation
build returns FilterError::InvalidConfig if either
value is outside its valid range.
pub fn hue(self, degrees: f32) -> Self
Rotate hue by degrees using FFmpeg’s hue filter.
Valid range: −360.0–360.0. A value of 0.0 is a no-op.
§Validation
build returns FilterError::InvalidConfig if
degrees is outside [−360.0, 360.0].
pub fn gamma(self, r: f32, g: f32, b: f32) -> Self
Apply per-channel gamma correction using FFmpeg’s eq filter.
Valid range per channel: 0.1–10.0. A value of 1.0 is neutral.
Values above 1.0 brighten midtones; values below 1.0 darken them.
§Validation
build returns FilterError::InvalidConfig if any
channel value is outside [0.1, 10.0].
pub fn three_way_cc(self, lift: Rgb, gamma: Rgb, gain: Rgb) -> Self
Apply a three-way colour corrector (lift / gamma / gain) using FFmpeg’s
curves filter.
Each parameter is an Rgb triplet; neutral for all three is
Rgb::NEUTRAL (r=1.0, g=1.0, b=1.0).
- lift: shifts shadows (blacks). Values below 1.0 darken shadows.
- gamma: shapes midtones via a power curve. Values above 1.0 brighten midtones; values below 1.0 darken them.
- gain: scales highlights (whites). Values above 1.0 boost whites.
§Validation
build returns FilterError::InvalidConfig if any
gamma component is ≤ 0.0 (division by zero in the power curve).
pub fn vignette(self, angle: f32, x0: f32, y0: f32) -> Self
Apply a vignette effect using FFmpeg’s vignette filter.
Darkens the corners of the frame with a smooth radial falloff.
- angle: radius angle in radians (0.0 – π/2 ≈ 1.5708). Default: π/5 ≈ 0.628.
- x0: horizontal centre of the vignette. Pass 0.0 to use the video centre (w/2).
- y0: vertical centre of the vignette. Pass 0.0 to use the video centre (h/2).
§Validation
build returns FilterError::InvalidConfig if
angle is outside [0.0, π/2].
pub fn hflip(self) -> Self
Flip the video horizontally (mirror left–right) using FFmpeg’s hflip filter.
pub fn vflip(self) -> Self
Flip the video vertically (mirror top–bottom) using FFmpeg’s vflip filter.
pub fn reverse(self) -> Self
Reverse video playback using FFmpeg’s reverse filter.
Warning: reverse buffers the entire clip in memory before producing
any output. Only use this on short clips to avoid excessive memory usage.
pub fn pad(self, width: u32, height: u32, x: i32, y: i32, color: &str) -> Self
Pad the frame to width × height pixels, placing the source at (x, y)
and filling the exposed borders with color.
Pass a negative value for x or y to centre the source on that axis
(x = -1 → (width − source_w) / 2).
color accepts any color string understood by FFmpeg — for example
"black", "white", "0x000000".
§Validation
build returns FilterError::InvalidConfig if
width or height is zero.
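The centring rule above can be sketched as a tiny helper (pad_coord is an illustrative name, not part of ff_filter):

```rust
// Sketch of the centring rule above: a negative coordinate resolves to
// (canvas - source) / 2 on that axis. Illustrative only.
fn pad_coord(requested: i32, canvas: u32, source: u32) -> u32 {
    if requested < 0 {
        (canvas - source) / 2
    } else {
        requested as u32
    }
}

fn main() {
    // A 1280-wide source centred on a 1920-wide canvas sits at x = 320.
    assert_eq!(pad_coord(-1, 1920, 1280), 320);
    assert_eq!(pad_coord(10, 1920, 1280), 10);
}
```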
pub fn fit_to_aspect(self, width: u32, height: u32, color: &str) -> Self
Scale the source frame to fit within width × height while preserving its
aspect ratio, then centre it on a width × height canvas filled with
color (letterbox / pillarbox).
Wide sources (wider aspect ratio than the target) get horizontal black bars (letterbox); tall sources get vertical bars (pillarbox).
color accepts any color string understood by FFmpeg — for example
"black", "white", "0x000000".
§Validation
build returns FilterError::InvalidConfig if
width or height is zero.
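The aspect-fit geometry described above can be sketched as follows. This is an illustrative helper only; real code would also round to even dimensions for 4:2:0 pixel formats:

```rust
// Sketch of the letterbox/pillarbox geometry described above.
// `fit` is an illustrative helper, not part of ff_filter.
fn fit(src_w: u32, src_h: u32, dst_w: u32, dst_h: u32) -> (u32, u32) {
    // Compare aspect ratios with integer cross-multiplication.
    if u64::from(src_w) * u64::from(dst_h) >= u64::from(dst_w) * u64::from(src_h) {
        // Source is wider than the target: full width, letterbox bars.
        (dst_w, src_h * dst_w / src_w)
    } else {
        // Source is taller than the target: full height, pillarbox bars.
        (src_w * dst_h / src_h, dst_h)
    }
}

fn main() {
    // 2.4:1 source into a 16:9 canvas -> letterbox.
    assert_eq!(fit(1920, 800, 1280, 720), (1280, 533));
    // Portrait source into a 16:9 canvas -> pillarbox.
    assert_eq!(fit(1080, 1920, 1280, 720), (405, 720));
}
```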
pub fn gblur(self, sigma: f32) -> Self
Apply a Gaussian blur with the given sigma (blur radius).
sigma controls the standard deviation of the Gaussian kernel.
Values near 0.0 are nearly a no-op; values up to 10.0 produce
progressively stronger blur.
§Validation
build returns FilterError::InvalidConfig if
sigma is negative.
pub fn unsharp(self, luma_strength: f32, chroma_strength: f32) -> Self
Sharpen or blur the image using an unsharp mask on luma and chroma.
Positive values sharpen; negative values blur. Pass 0.0 for either
channel to leave it unchanged.
Valid ranges: luma_strength and chroma_strength each −1.5 – 1.5.
§Validation
build returns FilterError::InvalidConfig if either
value is outside [−1.5, 1.5].
pub fn hqdn3d(
    self,
    luma_spatial: f32,
    chroma_spatial: f32,
    luma_tmp: f32,
    chroma_tmp: f32,
) -> Self
Apply High Quality 3D (hqdn3d) noise reduction.
Typical values: luma_spatial=4.0, chroma_spatial=3.0,
luma_tmp=6.0, chroma_tmp=4.5. All values must be ≥ 0.0.
§Validation
build returns FilterError::InvalidConfig if any
value is negative.
pub fn nlmeans(self, strength: f32) -> Self
Apply non-local means (nlmeans) noise reduction.
strength controls denoising intensity; range 1.0–30.0.
Higher values remove more noise at the cost of significantly more CPU.
NOTE: nlmeans is CPU-intensive; avoid for real-time pipelines.
pub fn yadif(self, mode: YadifMode) -> Self
Deinterlace using yadif (Yet Another Deinterlacing Filter).
mode controls whether one frame or two fields are emitted per input
frame and whether the spatial interlacing check is enabled.
pub fn xfade(
    self,
    transition: XfadeTransition,
    duration: f64,
    offset: f64,
) -> Self
Apply a cross-dissolve transition between two video streams using xfade.
Requires two input slots: slot 0 is clip A (first clip), slot 1 is clip B
(second clip). Call FilterGraph::push_video with slot 0 for clip A
frames and slot 1 for clip B frames.
- transition: the visual transition style.
- duration: length of the overlap in seconds. Must be > 0.0.
- offset: PTS offset (seconds) at which clip B starts playing.
pub fn join_with_dissolve(
    self,
    clip_a_end_sec: f64,
    clip_b_start_sec: f64,
    dissolve_dur_sec: f64,
) -> Self
Join two video streams with a cross-dissolve transition.
Requires two video input slots: push clip A frames to slot 0 and clip B
frames to slot 1. Internally expands to
trim + setpts → xfade ← setpts + trim.
- clip_a_end_sec: timestamp (seconds) where clip A ends. Must be > 0.0.
- clip_b_start_sec: timestamp (seconds) where clip B content starts (before the overlap region).
- dissolve_dur_sec: cross-dissolve overlap length in seconds. Must be > 0.0.
build returns FilterError::InvalidConfig if
dissolve_dur_sec ≤ 0.0 or clip_a_end_sec ≤ 0.0.
pub fn speed(self, factor: f64) -> Self
Change playback speed by factor.
factor > 1.0 = fast motion (e.g. 2.0 = double speed).
factor < 1.0 = slow motion (e.g. 0.5 = half speed).
Video: uses setpts=PTS/{factor}.
Audio: uses chained atempo filters (each in [0.5, 2.0]) so the
full range 0.1–100.0 is covered without quality degradation.
build returns FilterError::InvalidConfig if
factor is outside [0.1, 100.0].
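The chaining described above splits an arbitrary factor into steps that each fit atempo's window. A sketch of one possible decomposition (atempo_chain is an illustrative name; the crate's actual split may differ):

```rust
// Sketch of the atempo chaining described above: split a speed factor
// into steps that each stay within atempo's [0.5, 2.0] window.
// Illustrative only, not ff_filter's internal code.
fn atempo_chain(mut factor: f64) -> Vec<f64> {
    let mut chain = Vec::new();
    while factor > 2.0 {
        chain.push(2.0);
        factor /= 2.0;
    }
    while factor < 0.5 {
        chain.push(0.5);
        factor /= 0.5;
    }
    chain.push(factor);
    chain
}

fn main() {
    // 4x fast motion -> atempo=2.0,atempo=2.0
    assert_eq!(atempo_chain(4.0), vec![2.0, 2.0]);
    // Quarter-speed slow motion -> atempo=0.5,atempo=0.5
    assert_eq!(atempo_chain(0.25), vec![0.5, 0.5]);
    // The product of the steps recovers the original factor.
    assert!((atempo_chain(3.0).iter().product::<f64>() - 3.0).abs() < 1e-12);
}
```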
pub fn concat_video(self, n_segments: u32) -> Self
Concatenate n_segments sequential video inputs using FFmpeg’s concat filter.
Requires n_segments video input slots (push to slots 0 through
n_segments - 1 in order). build returns
FilterError::InvalidConfig if n_segments < 2.
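For intuition, the step corresponds to a video-only concat filter string like the one sketched below (an assumption about the generated graph, not crate internals):

```rust
// Illustrative sketch of the filter string a video-only concat step
// corresponds to: n sequential segments, 1 video stream, 0 audio.
fn concat_video_filter(n_segments: u32) -> String {
    format!("concat=n={}:v=1:a=0", n_segments)
}

fn main() {
    assert_eq!(concat_video_filter(3), "concat=n=3:v=1:a=0");
}
```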
pub fn freeze_frame(self, pts_sec: f64, duration_sec: f64) -> Self
Freeze the frame at pts_sec for duration_sec seconds using FFmpeg’s loop filter.
The frame nearest to pts_sec is held for duration_sec seconds before
playback resumes. Frame numbers are approximated using a 25 fps assumption;
accuracy depends on the source stream’s actual frame rate.
build returns FilterError::InvalidConfig if
pts_sec is negative or duration_sec is ≤ 0.0.
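The 25 fps approximation noted above amounts to the following arithmetic (frame_at_25fps is an illustrative helper, not part of ff_filter):

```rust
// Sketch of the 25 fps approximation noted above: the target frame
// index is derived from pts_sec assuming 25 frames per second.
fn frame_at_25fps(pts_sec: f64) -> u64 {
    (pts_sec * 25.0).round() as u64
}

fn main() {
    assert_eq!(frame_at_25fps(2.0), 50);
    // On a true 30 fps source, 2.0 s is really frame 60, not 50 --
    // which is why accuracy depends on the stream's actual frame rate.
    assert_eq!(frame_at_25fps(3.2), 80);
}
```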
pub fn drawtext(self, opts: DrawTextOptions) -> Self
Overlay text onto the video using the drawtext filter.
See DrawTextOptions for all configurable fields including position,
font, size, color, opacity, and optional background box.
pub fn ticker(
    self,
    text: &str,
    y: &str,
    speed_px_per_sec: f32,
    font_size: u32,
    font_color: &str,
) -> Self
Scroll text from right to left as a news ticker.
Uses FFmpeg’s drawtext filter with the expression x = w - t * speed
so the text enters from the right edge at playback start and advances
left by speed_px_per_sec pixels per second.
y is an FFmpeg expression string for the vertical position,
e.g. "h-50" for 50 pixels above the bottom.
build returns FilterError::InvalidConfig if:
- text is empty, or
- speed_px_per_sec is ≤ 0.0.
pub fn subtitles_srt(self, srt_path: &str) -> Self
Burn SRT subtitles into the video (hard subtitles).
Subtitles are read from the .srt file at srt_path and rendered
at the timecodes defined in the file using FFmpeg’s subtitles filter.
build returns FilterError::InvalidConfig if:
- the extension is not .srt, or
- the file does not exist at build time.
pub fn subtitles_ass(self, ass_path: &str) -> Self
Burn ASS/SSA styled subtitles into the video (hard subtitles).
Subtitles are read from the .ass or .ssa file at ass_path and
rendered with full styling using FFmpeg’s dedicated ass filter,
which preserves fonts, colours, and positioning better than the generic
subtitles filter.
build returns FilterError::InvalidConfig if:
- the extension is not
.assor.ssa, or - the file does not exist at build time.
pub fn overlay_image(self, path: &str, x: &str, y: &str, opacity: f32) -> Self
Composite a PNG image (watermark / logo) over video.
The image at path is loaded once at graph construction time via
FFmpeg’s movie source filter. Its alpha channel is scaled by
opacity using a lut filter, then composited onto the main stream
with the overlay filter at position (x, y).
x and y are FFmpeg expression strings, e.g. "10", "W-w-10".
build returns FilterError::InvalidConfig if:
- the extension is not .png,
- the file does not exist at build time, or
- opacity is outside [0.0, 1.0].
impl FilterGraphBuilder
pub fn hardware(self, hw: HwAccel) -> Self
Enable hardware-accelerated filtering.
When set, hwupload and hwdownload filters are inserted around the
filter chain automatically.
pub fn build(self) -> Result<FilterGraph, FilterError>
Build the FilterGraph.
§Errors
Returns FilterError::BuildFailed if steps is empty (there is
nothing to filter). The actual FFmpeg graph is constructed lazily on the
first push_video or
push_audio call.