pub struct FilterGraphBuilder { /* private fields */ }
Builder for constructing a FilterGraph.
Create one with FilterGraph::builder(), chain the desired filter
methods, then call build to obtain the graph.
§Examples
use ff_filter::{FilterGraph, ScaleAlgorithm, ToneMap};

let graph = FilterGraph::builder()
    .scale(1280, 720, ScaleAlgorithm::Fast)
    .tone_map(ToneMap::Hable)
    .build()?;
Implementations§
impl FilterGraphBuilder
pub fn afade_in(self, start_sec: f64, duration_sec: f64) -> Self
Audio fade-in from silence, starting at start_sec seconds and reaching
full volume after duration_sec seconds.
build returns FilterError::InvalidConfig if
duration_sec is ≤ 0.0.
pub fn afade_out(self, start_sec: f64, duration_sec: f64) -> Self
Audio fade-out to silence, starting at start_sec seconds and reaching
full silence after duration_sec seconds.
build returns FilterError::InvalidConfig if
duration_sec is ≤ 0.0.
pub fn areverse(self) -> Self
Reverse audio playback using FFmpeg’s areverse filter.
Warning: areverse buffers the entire clip in memory before producing
any output. Only use this on short clips to avoid excessive memory usage.
pub fn loudness_normalize(self, target_lufs: f32, true_peak_db: f32, lra: f32) -> Self
Apply EBU R128 two-pass loudness normalization.
target_lufs is the target integrated loudness (e.g. −23.0),
true_peak_db is the true-peak ceiling (e.g. −1.0), and
lra is the target loudness range in LU (e.g. 7.0).
Pass 1 measures integrated loudness with the ebur128 filter.
Pass 2 applies a linear volume correction. All audio frames are
buffered in memory between the two passes — use only for clips that
fit comfortably in RAM.
build returns FilterError::InvalidConfig if
target_lufs >= 0.0, true_peak_db > 0.0, or lra <= 0.0.
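The pass-2 gain derivation can be sketched as follows. This is an illustration of the two-pass idea, not the crate's internal code: assuming the linear correction is the difference between target and measured loudness, capped so the corrected true peak never exceeds the ceiling.

```rust
/// Hypothetical sketch: gain (dB) to move a measured clip to the target
/// loudness, limited by the true-peak headroom.
fn linear_correction_db(
    measured_lufs: f64,
    measured_peak_db: f64,
    target_lufs: f64,
    true_peak_db: f64,
) -> f64 {
    let gain = target_lufs - measured_lufs;
    // Never push the true peak above the ceiling.
    let headroom = true_peak_db - measured_peak_db;
    gain.min(headroom)
}

fn main() {
    // Quiet clip: -30 LUFS measured, peak -8 dBTP, target -23 LUFS / -1 dBTP.
    assert_eq!(linear_correction_db(-30.0, -8.0, -23.0, -1.0), 7.0);
    // Hot peak: only 2 dB of headroom, so the +7 dB gain is capped.
    assert_eq!(linear_correction_db(-30.0, -3.0, -23.0, -1.0), 2.0);
}
```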
pub fn normalize_peak(self, target_db: f32) -> Self
Normalize the audio peak level to target_db dBFS using a two-pass approach.
Pass 1 measures the true peak with astats=metadata=1.
Pass 2 applies volume={gain}dB so the output peak reaches target_db.
All audio frames are buffered in memory between the two passes — use only
for clips that fit comfortably in RAM.
build returns FilterError::InvalidConfig if
target_db > 0.0 (cannot normalize above digital full scale).
pub fn agate(self, threshold_db: f32, attack_ms: f32, release_ms: f32) -> Self
Apply a noise gate to suppress audio below a given threshold.
Uses FFmpeg’s agate filter. Audio below threshold_db (dBFS) is
attenuated; audio above it passes through unmodified. The threshold is
converted from dBFS to the linear amplitude ratio expected by agate.
build returns FilterError::InvalidConfig if
attack_ms or release_ms is ≤ 0.0.
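The dBFS-to-linear conversion mentioned above follows the standard amplitude formula; a standalone sketch (the helper name is an assumption, not part of the crate API):

```rust
/// Convert a level in dBFS to the linear amplitude ratio expected by agate.
fn dbfs_to_linear(db: f64) -> f64 {
    10f64.powf(db / 20.0)
}

fn main() {
    assert!((dbfs_to_linear(0.0) - 1.0).abs() < 1e-12);   // full scale
    assert!((dbfs_to_linear(-20.0) - 0.1).abs() < 1e-12); // -20 dBFS = 0.1
    assert!((dbfs_to_linear(-6.0) - 0.501).abs() < 1e-3); // ≈ half amplitude
}
```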
pub fn compressor(self, threshold_db: f32, ratio: f32, attack_ms: f32, release_ms: f32, makeup_db: f32) -> Self
Apply a dynamic range compressor to the audio.
Uses FFmpeg’s acompressor filter. Audio peaks above threshold_db
(dBFS) are reduced by ratio:1. makeup_db applies additional gain
after compression to restore perceived loudness.
build returns FilterError::InvalidConfig if
ratio < 1.0, attack_ms ≤ 0.0, or release_ms ≤ 0.0.
pub fn stereo_to_mono(self) -> Self
Downmix stereo audio to mono by equally mixing both channels.
Uses FFmpeg’s pan filter with the expression
mono|c0=0.5*c0+0.5*c1. The output has a single channel.
pub fn channel_map(self, mapping: &str) -> Self
Remap audio channels using FFmpeg’s channelmap filter.
mapping is a |-separated list of output channel names taken from
input channels, e.g. "FR|FL" swaps left and right.
build returns FilterError::InvalidConfig if
mapping is empty.
pub fn audio_delay(self, ms: f64) -> Self
Shift audio for A/V sync correction.
Positive ms: uses FFmpeg’s adelay filter to delay the audio
(audio plays later). Negative ms: uses FFmpeg’s atrim filter to
advance the audio by trimming the start (audio plays earlier).
Zero ms is a no-op.
pub fn concat_audio(self, n_segments: u32) -> Self
Concatenate n_segments sequential audio inputs using FFmpeg’s concat filter.
Requires n_segments audio input slots (push to slots 0 through
n_segments - 1 in order). build returns
FilterError::InvalidConfig if n_segments < 2.
pub fn volume(self, gain_db: f64) -> Self
Adjust audio volume by gain_db decibels (negative = quieter).
pub fn equalizer(self, bands: Vec<EqBand>) -> Self
Apply a multi-band parametric equalizer.
Each EqBand maps to one FFmpeg filter node chained in sequence:
- EqBand::LowShelf → lowshelf
- EqBand::HighShelf → highshelf
- EqBand::Peak → equalizer
build returns FilterError::InvalidConfig if bands
is empty.
impl FilterGraphBuilder
pub fn tone_map(self, algorithm: ToneMap) -> Self
Apply HDR-to-SDR tone mapping using the given algorithm.
pub fn lut3d(self, path: &str) -> Self
Apply a 3D LUT colour grade from a .cube or .3dl file.
Uses FFmpeg’s lut3d filter with trilinear interpolation.
§Validation
build returns FilterError::InvalidConfig if:
- the extension is not .cube or .3dl, or
- the file does not exist at build time.
pub fn eq(self, brightness: f32, contrast: f32, saturation: f32) -> Self
Adjust brightness, contrast, and saturation using FFmpeg’s eq filter.
Valid ranges:
- brightness: −1.0 – 1.0 (neutral: 0.0)
- contrast: 0.0 – 3.0 (neutral: 1.0)
- saturation: 0.0 – 3.0 (neutral: 1.0; 0.0 = grayscale)
Shorthand for eq_animated with static values and a
neutral gamma (1.0).
§Validation
build returns FilterError::InvalidConfig if any
value is outside its valid range.
pub fn eq_animated(self, brightness: AnimatedValue<f64>, contrast: AnimatedValue<f64>, saturation: AnimatedValue<f64>, gamma: AnimatedValue<f64>) -> Self
Adjust brightness, contrast, saturation, and gamma using FFmpeg’s eq filter,
with optionally animated parameters.
When an AnimatedValue::Track is supplied for any parameter, the animation
is registered for per-frame avfilter_graph_send_command updates (#363).
The initial filter graph is built from values at Duration::ZERO.
Filter node names are assigned deterministically: the first call produces
"eq_0", the second "eq_1", and so on.
Valid ranges (at Duration::ZERO):
- brightness: −1.0 – 1.0
- contrast: 0.0 – 3.0
- saturation: 0.0 – 3.0
- gamma: 0.1 – 10.0
§Validation
build returns FilterError::InvalidConfig if any
parameter evaluates outside its valid range at Duration::ZERO.
pub fn color_correct(self, lift: (f64, f64, f64), gamma: (f64, f64, f64), gain: (f64, f64, f64)) -> Self
Apply a three-way color balance (lift / gamma / gain) using FFmpeg’s
colorbalance filter.
Each parameter is an (R, G, B) tuple; neutral for all three is (0.0, 0.0, 0.0).
- lift: additive correction for shadows. Range per component: −1.0 – 1.0.
- gamma: additive correction for midtones. Range per component: −1.0 – 1.0.
- gain: additive correction for highlights. Range per component: −1.0 – 1.0.
Shorthand for color_correct_animated with
static values.
§Validation
build returns FilterError::InvalidConfig if any
component is outside [−1.0, 1.0].
pub fn color_correct_animated(self, lift: AnimatedValue<(f64, f64, f64)>, gamma: AnimatedValue<(f64, f64, f64)>, gain: AnimatedValue<(f64, f64, f64)>) -> Self
Apply a three-way color balance (lift / gamma / gain) using FFmpeg’s
colorbalance filter, with optionally animated parameters.
When an AnimatedValue::Track is supplied, the animation is registered
for per-frame avfilter_graph_send_command updates (#363). For tuple tracks
three separate entries are registered (one per RGB channel).
Filter node names: "colorbalance_0", "colorbalance_1", …
FFmpeg param names per parameter:
- lift → "rs", "gs", "bs" (shadows)
- gamma → "rm", "gm", "bm" (midtones)
- gain → "rh", "gh", "bh" (highlights)
Valid range per component at Duration::ZERO: −1.0 – 1.0.
§Validation
build returns FilterError::InvalidConfig if any
component evaluates outside [−1.0, 1.0] at Duration::ZERO.
pub fn curves(self, master: Vec<(f32, f32)>, r: Vec<(f32, f32)>, g: Vec<(f32, f32)>, b: Vec<(f32, f32)>) -> Self
Apply per-channel RGB color curves using FFmpeg’s curves filter.
Each argument is a list of (input, output) control points in [0.0, 1.0].
Pass an empty Vec for any channel that needs no adjustment.
§Validation
build returns FilterError::InvalidConfig if any
control point coordinate is outside [0.0, 1.0].
pub fn white_balance(self, temperature_k: u32, tint: f32) -> Self
Correct white balance using FFmpeg’s colorchannelmixer filter.
RGB channel multipliers are derived from temperature_k via Tanner
Helland’s Kelvin-to-RGB algorithm. The tint offset shifts the green
channel (positive = more green, negative = more magenta).
Valid ranges:
- temperature_k: 1000–40000 K (neutral daylight ≈ 6500 K)
- tint: −1.0 – 1.0
§Validation
build returns FilterError::InvalidConfig if either
value is outside its valid range.
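The Kelvin-to-RGB mapping can be sketched with Tanner Helland's published approximation. The constants below are from Helland's curve fit as commonly reproduced; the crate's exact implementation may differ, and the helper name is an assumption.

```rust
// Tanner Helland's Kelvin-to-RGB approximation, normalised to channel
// multipliers in [0.0, 1.0]. Illustration only, not the crate's code.
fn kelvin_to_rgb(temperature_k: u32) -> (f64, f64, f64) {
    let t = temperature_k as f64 / 100.0;
    let clamp = |v: f64| v.max(0.0).min(255.0);

    let r = if t <= 66.0 {
        255.0
    } else {
        clamp(329.698727446 * (t - 60.0).powf(-0.1332047592))
    };
    let g = if t <= 66.0 {
        clamp(99.4708025861 * t.ln() - 161.1195681661)
    } else {
        clamp(288.1221695283 * (t - 60.0).powf(-0.0755148492))
    };
    let b = if t >= 66.0 {
        255.0
    } else if t <= 19.0 {
        0.0
    } else {
        clamp(138.5177312231 * (t - 10.0).ln() - 305.0447927307)
    };
    (r / 255.0, g / 255.0, b / 255.0)
}

fn main() {
    // Warm candlelight: strong red, almost no blue.
    let (r, g, b) = kelvin_to_rgb(2000);
    assert!(r > g && g > b);
    // Near the neutral crossover (~6600 K) all channels approach 1.0.
    let (r, g, b) = kelvin_to_rgb(6600);
    assert!(r > 0.95 && g > 0.95 && b > 0.95);
}
```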
pub fn hue(self, degrees: f32) -> Self
Rotate hue by degrees using FFmpeg’s hue filter.
Valid range: −360.0–360.0. A value of 0.0 is a no-op.
§Validation
build returns FilterError::InvalidConfig if
degrees is outside [−360.0, 360.0].
pub fn gamma(self, r: f32, g: f32, b: f32) -> Self
Apply per-channel gamma correction using FFmpeg’s eq filter.
Valid range per channel: 0.1–10.0. A value of 1.0 is neutral.
Values above 1.0 brighten midtones; values below 1.0 darken them.
§Validation
build returns FilterError::InvalidConfig if any
channel value is outside [0.1, 10.0].
pub fn three_way_cc(self, lift: Rgb, gamma: Rgb, gain: Rgb) -> Self
Apply a three-way colour corrector (lift / gamma / gain) using FFmpeg’s
curves filter.
Each parameter is an Rgb triplet; neutral for all three is
Rgb::NEUTRAL (r=1.0, g=1.0, b=1.0).
- lift: shifts shadows (blacks). Values below 1.0 darken shadows.
- gamma: shapes midtones via a power curve. Values above 1.0 brighten midtones; values below 1.0 darken them.
- gain: scales highlights (whites). Values above 1.0 boost whites.
§Validation
build returns FilterError::InvalidConfig if any
gamma component is ≤ 0.0 (division by zero in the power curve).
pub fn vignette(self, angle: f32, x0: f32, y0: f32) -> Self
Apply a vignette effect using FFmpeg’s vignette filter.
Darkens the corners of the frame with a smooth radial falloff.
- angle: radius angle in radians (0.0 – π/2 ≈ 1.5708). Default: π/5 ≈ 0.628.
- x0: horizontal centre of the vignette. Pass 0.0 to use the video centre (w/2).
- y0: vertical centre of the vignette. Pass 0.0 to use the video centre (h/2).
§Validation
build returns FilterError::InvalidConfig if
angle is outside [0.0, π/2].
impl FilterGraphBuilder
pub fn blend(self, top: FilterGraphBuilder, mode: BlendMode, opacity: f32) -> Self
Blend a top layer over self (the bottom) using the given BlendMode
and opacity.
opacity is clamped to [0.0, 1.0] before being stored.
§Normal mode
The bottom stream is self; the top stream is pushed on input slot 1.
When opacity == 1.0 the filter chain is:
[bottom][top]overlay=format=auto:shortest=1[out]
When opacity < 1.0 a colorchannelmixer=aa=<opacity> step is applied
to the top stream first:
[top]colorchannelmixer=aa=<opacity>[top_faded];
[bottom][top_faded]overlay=format=auto:shortest=1[out]
§Unimplemented modes
All modes other than BlendMode::Normal are defined but not yet
implemented. Calling build with an
unimplemented mode returns
FilterError::InvalidConfig.
pub fn chromakey(self, color: &str, similarity: f32, blend: f32) -> Self
Key out pixels matching color using FFmpeg’s chromakey filter.
- color: FFmpeg color string, e.g. "green", "0x00FF00", "#00FF00".
- similarity: match radius in [0.0, 1.0]; higher = more pixels removed.
- blend: edge softness in [0.0, 1.0]; 0.0 = hard edge.
similarity and blend are validated in build;
out-of-range values return FilterError::InvalidConfig.
The output pixel format is yuva420p (adds an alpha channel).
Use this for YCbCr-encoded sources (most video).
pub fn alpha_matte(self, matte: FilterGraphBuilder) -> Self
Apply a grayscale matte as the alpha channel of self.
White (255) in the matte produces fully opaque output; black (0) produces
fully transparent output. Wraps FFmpeg’s alphamerge filter.
The matte pipeline is applied to the second input slot (slot 1).
Call push_video with slot=1 to
supply matte frames at runtime.
pub fn spill_suppress(self, key_color: &str, strength: f32) -> Self
Reduce color spill from the key color on subject edges.
Applies FFmpeg’s hue filter with saturation 1.0 - strength.
The typical pipeline is chromakey → spill_suppress.
strength must be in [0.0, 1.0]; out-of-range values return
FilterError::InvalidConfig from build.
pub fn lumakey(self, threshold: f32, tolerance: f32, softness: f32, invert: bool) -> Self
Key out pixels by luminance value using FFmpeg’s lumakey filter.
- threshold: luma cutoff in [0.0, 1.0]; 0.0 = black, 1.0 = white.
- tolerance: match radius around the threshold in [0.0, 1.0].
- softness: edge feather width in [0.0, 1.0]; 0.0 = hard edge.
- invert: when false, keys out pixels matching the threshold; when true, the alpha channel is negated after keying, making the complementary region transparent (useful for dark-background sources).
threshold, tolerance, and softness are validated in
build; out-of-range values return
FilterError::InvalidConfig.
The output pixel format is yuva420p (adds an alpha channel).
pub fn rect_mask(self, x: u32, y: u32, width: u32, height: u32, invert: bool) -> Self
Apply a rectangular alpha mask using FFmpeg’s geq filter.
Pixels inside the rectangle (x, y, width, height) are fully
opaque; pixels outside are fully transparent. When invert is true
the roles are swapped: inside becomes transparent and outside becomes
opaque.
width and height must be > 0; zero values return
FilterError::InvalidConfig from build.
The output carries an alpha channel (rgba).
pub fn feather_mask(self, radius: u32) -> Self
Feather (soften) the alpha channel edges using a Gaussian blur.
Splits the stream into a color copy and an alpha copy, blurs the alpha
plane with gblur=sigma=<radius>, then re-merges via alphamerge.
radius is the blur kernel half-size in pixels and must be > 0.
A value of 0 returns FilterError::InvalidConfig from
build.
Typically chained after a keying or masking step such as
chromakey,
rect_mask, or
polygon_matte.
pub fn polygon_matte(self, vertices: Vec<(f32, f32)>, invert: bool) -> Self
Apply a polygon alpha mask defined by normalised vertex coordinates.
Pixels inside the polygon are fully opaque; pixels outside are fully
transparent. When invert is true the roles are swapped.
- vertices: polygon corners as (x, y) in [0.0, 1.0] (normalised to frame size). Minimum 3, maximum 16.
- invert: when false, inside = opaque; when true, outside = opaque.
Vertex count and coordinates are validated in
build; out-of-range values return
FilterError::InvalidConfig.
The output carries an alpha channel (rgba).
pub fn colorkey(self, color: &str, similarity: f32, blend: f32) -> Self
Key out pixels matching color in RGB space using FFmpeg’s colorkey filter.
- color: FFmpeg color string, e.g. "green", "0x00FF00", "#00FF00".
- similarity: match radius in [0.0, 1.0]; higher = more pixels removed.
- blend: edge softness in [0.0, 1.0]; 0.0 = hard edge.
similarity and blend are validated in build;
out-of-range values return FilterError::InvalidConfig.
The output pixel format is rgba.
Use this for RGB-encoded sources; prefer chromakey
for YCbCr-encoded video.
impl FilterGraphBuilder
pub fn trim(self, start: f64, end: f64) -> Self
Trim the stream to the half-open interval [start, end) in seconds.
pub fn speed(self, factor: f64) -> Self
Change playback speed by factor.
factor > 1.0 = fast motion (e.g. 2.0 = double speed).
factor < 1.0 = slow motion (e.g. 0.5 = half speed).
Video: uses setpts=PTS/{factor}.
Audio: uses chained atempo filters (each in [0.5, 2.0]) so the
full range 0.1–100.0 is covered without quality degradation.
build returns FilterError::InvalidConfig if
factor is outside [0.1, 100.0].
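The atempo chaining can be sketched as follows. This illustrates one way to decompose an overall tempo factor into stages that each stay inside atempo's [0.5, 2.0] range; the crate's own splitting strategy may differ.

```rust
// Split a tempo factor in [0.1, 100.0] into a chain of atempo stages,
// each within atempo's supported [0.5, 2.0] range. Illustration only.
fn atempo_chain(mut factor: f64) -> Vec<f64> {
    assert!((0.1..=100.0).contains(&factor));
    let mut stages = Vec::new();
    while factor > 2.0 {
        stages.push(2.0);
        factor /= 2.0;
    }
    while factor < 0.5 {
        stages.push(0.5);
        factor /= 0.5;
    }
    stages.push(factor);
    stages
}

fn main() {
    assert_eq!(atempo_chain(4.0), vec![2.0, 2.0]);
    assert_eq!(atempo_chain(0.25), vec![0.5, 0.5]);
    // The product of the stages always reproduces the requested factor.
    let product: f64 = atempo_chain(7.5).iter().product();
    assert!((product - 7.5).abs() < 1e-9);
}
```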
pub fn reverse(self) -> Self
Reverse video playback using FFmpeg’s reverse filter.
Warning: reverse buffers the entire clip in memory before producing
any output. Only use this on short clips to avoid excessive memory usage.
pub fn concat_video(self, n_segments: u32) -> Self
Concatenate n_segments sequential video inputs using FFmpeg’s concat filter.
Requires n_segments video input slots (push to slots 0 through
n_segments - 1 in order). build returns
FilterError::InvalidConfig if n_segments < 2.
pub fn freeze_frame(self, pts_sec: f64, duration_sec: f64) -> Self
Freeze the frame at pts_sec for duration_sec seconds using FFmpeg’s loop filter.
The frame nearest to pts_sec is held for duration_sec seconds before
playback resumes. Frame numbers are approximated using a 25 fps assumption;
accuracy depends on the source stream’s actual frame rate.
build returns FilterError::InvalidConfig if
pts_sec is negative or duration_sec is ≤ 0.0.
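The 25 fps approximation noted above can be sketched as a standalone helper (a hypothetical name, not part of the crate API): the hold point and hold length are converted to frame counts assuming 25 frames per second.

```rust
/// Approximate (start_frame, hold_frames) for a freeze at pts_sec held
/// for duration_sec, under the documented 25 fps assumption.
fn freeze_params_25fps(pts_sec: f64, duration_sec: f64) -> (u64, u64) {
    let start_frame = (pts_sec * 25.0).round() as u64;
    let hold_frames = (duration_sec * 25.0).round() as u64;
    (start_frame, hold_frames)
}

fn main() {
    // Freeze the frame nearest 2.0 s and hold it for 1.6 s.
    assert_eq!(freeze_params_25fps(2.0, 1.6), (50, 40));
    assert_eq!(freeze_params_25fps(0.2, 1.0), (5, 25));
}
```

Because the source stream's real frame rate may not be 25 fps, the frame actually frozen can land slightly off the requested timestamp.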
impl FilterGraphBuilder
pub fn gblur(self, sigma: f32) -> Self
Apply a Gaussian blur with the given sigma (blur radius).
sigma controls the standard deviation of the Gaussian kernel.
Values near 0.0 are nearly a no-op; values up to 10.0 produce
progressively stronger blur.
Shorthand for gblur_animated with a static value.
§Validation
build returns FilterError::InvalidConfig if
sigma is negative.
pub fn gblur_animated(self, sigma: AnimatedValue<f64>) -> Self
Apply a Gaussian blur with an optionally animated sigma (blur radius).
When an AnimatedValue::Track is supplied, the animation is registered
for per-frame avfilter_graph_send_command updates (#363).
The initial filter graph is built from the value at Duration::ZERO.
Filter node names are assigned deterministically: the first call produces
"gblur_0", the second "gblur_1", and so on.
§Validation
build returns FilterError::InvalidConfig if
sigma evaluates to a negative value at Duration::ZERO.
pub fn unsharp(self, luma_strength: f32, chroma_strength: f32) -> Self
Sharpen or blur the image using an unsharp mask on luma and chroma.
Positive values sharpen; negative values blur. Pass 0.0 for either
channel to leave it unchanged.
Valid ranges: luma_strength and chroma_strength each −1.5 – 1.5.
§Validation
build returns FilterError::InvalidConfig if either
value is outside [−1.5, 1.5].
pub fn hqdn3d(self, luma_spatial: f32, chroma_spatial: f32, luma_tmp: f32, chroma_tmp: f32) -> Self
Apply High Quality 3D (hqdn3d) noise reduction.
Typical values: luma_spatial=4.0, chroma_spatial=3.0,
luma_tmp=6.0, chroma_tmp=4.5. All values must be ≥ 0.0.
§Validation
build returns FilterError::InvalidConfig if any
value is negative.
impl FilterGraphBuilder
pub fn scale(self, width: u32, height: u32, algorithm: ScaleAlgorithm) -> Self
Scale the video to width × height pixels using the given resampling
algorithm.
Use ScaleAlgorithm::Fast for the best speed/quality trade-off.
For highest quality use ScaleAlgorithm::Lanczos at the cost of
additional CPU time.
pub fn crop(self, x: u32, y: u32, width: u32, height: u32) -> Self
Crop a rectangle starting at (x, y) with the given dimensions.
Shorthand for crop_animated with static values.
pub fn crop_animated(self, x: AnimatedValue<f64>, y: AnimatedValue<f64>, width: AnimatedValue<f64>, height: AnimatedValue<f64>) -> Self
Crop with optionally animated boundaries (pixels).
When an AnimatedValue::Track is supplied, the corresponding animation
is registered for per-frame avfilter_graph_send_command updates (#363).
The initial filter graph is built from the value at Duration::ZERO.
Filter node names are assigned deterministically: the first call produces
"crop_0", the second "crop_1", and so on.
pub fn overlay_image(self, path: &str, x: &str, y: &str, opacity: f32) -> Self
Composite a PNG image (watermark / logo) over video.
The image at path is loaded once at graph construction time via
FFmpeg’s movie source filter. Its alpha channel is scaled by
opacity using a lut filter, then composited onto the main stream
with the overlay filter at position (x, y).
x and y are FFmpeg expression strings, e.g. "10", "W-w-10".
build returns FilterError::InvalidConfig if:
- the extension is not .png,
- the file does not exist at build time, or
- opacity is outside [0.0, 1.0].
pub fn rotate(self, angle_degrees: f64, fill_color: &str) -> Self
Rotate the video clockwise by angle_degrees, filling exposed corners
with fill_color.
fill_color accepts any color string understood by FFmpeg — for example
"black", "white", "0x00000000" (transparent), or "gray".
Pass "black" to reproduce the classic solid-background rotation.
pub fn hflip(self) -> Self
Flip the video horizontally (mirror left–right) using FFmpeg’s hflip filter.
pub fn vflip(self) -> Self
Flip the video vertically (mirror top–bottom) using FFmpeg’s vflip filter.
pub fn pad(self, width: u32, height: u32, x: i32, y: i32, color: &str) -> Self
Pad the frame to width × height pixels, placing the source at (x, y)
and filling the exposed borders with color.
Pass a negative value for x or y to centre the source on that axis
(x = -1 → (width − source_w) / 2).
color accepts any color string understood by FFmpeg — for example
"black", "white", "0x000000".
§Validation
build returns FilterError::InvalidConfig if
width or height is zero.
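The centring rule described above can be sketched per axis (a hypothetical helper, not the crate's code): a negative offset centres the source on that axis, a non-negative offset is used as-is.

```rust
/// Resolve a pad offset on one axis: negative means "centre the source".
fn pad_offset(requested: i32, canvas: u32, source: u32) -> u32 {
    if requested < 0 {
        (canvas - source) / 2
    } else {
        requested as u32
    }
}

fn main() {
    assert_eq!(pad_offset(-1, 1920, 1280), 320); // (1920 - 1280) / 2
    assert_eq!(pad_offset(10, 1920, 1280), 10);  // explicit offset kept
}
```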
pub fn fit_to_aspect(self, width: u32, height: u32, color: &str) -> Self
Scale the source frame to fit within width × height while preserving its
aspect ratio, then centre it on a width × height canvas filled with
color (letterbox / pillarbox).
Wide sources (wider aspect ratio than the target) get horizontal black bars (letterbox); tall sources get vertical bars (pillarbox).
color accepts any color string understood by FFmpeg — for example
"black", "white", "0x000000".
§Validation
build returns FilterError::InvalidConfig if
width or height is zero.
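The letterbox/pillarbox arithmetic can be sketched as follows, assuming the scaled size is simply the largest aspect-preserving fit inside the target (rounding and even-dimension handling in the crate may differ):

```rust
/// Scaled dimensions that fit (src_w, src_h) inside (dst_w, dst_h)
/// while preserving the source aspect ratio. Illustration only.
fn fit_dims(src_w: u32, src_h: u32, dst_w: u32, dst_h: u32) -> (u32, u32) {
    // Compare aspect ratios via cross-multiplication to stay in integers.
    if src_w as u64 * dst_h as u64 >= src_h as u64 * dst_w as u64 {
        // Source is wider than the target: full width, horizontal bars.
        (dst_w, (src_h as u64 * dst_w as u64 / src_w as u64) as u32)
    } else {
        // Source is taller: full height, vertical bars.
        ((src_w as u64 * dst_h as u64 / src_h as u64) as u32, dst_h)
    }
}

fn main() {
    // 2.39:1 source into a 16:9 canvas → letterbox.
    assert_eq!(fit_dims(2390, 1000, 1920, 1080), (1920, 803));
    // 9:16 phone video into 16:9 → pillarbox.
    assert_eq!(fit_dims(1080, 1920, 1920, 1080), (607, 1080));
}
```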
impl FilterGraphBuilder
pub fn drawtext(self, opts: DrawTextOptions) -> Self
Overlay text onto the video using the drawtext filter.
See DrawTextOptions for all configurable fields including position,
font, size, color, opacity, and optional background box.
pub fn ticker(self, text: &str, y: &str, speed_px_per_sec: f32, font_size: u32, font_color: &str) -> Self
Scroll text from right to left as a news ticker.
Uses FFmpeg’s drawtext filter with the expression x = w - t * speed
so the text enters from the right edge at playback start and advances
left by speed_px_per_sec pixels per second.
y is an FFmpeg expression string for the vertical position,
e.g. "h-50" for 50 pixels above the bottom.
build returns FilterError::InvalidConfig if:
- text is empty, or
- speed_px_per_sec is ≤ 0.0.
pub fn subtitles_srt(self, srt_path: &str) -> Self
Burn SRT subtitles into the video (hard subtitles).
Subtitles are read from the .srt file at srt_path and rendered
at the timecodes defined in the file using FFmpeg’s subtitles filter.
build returns FilterError::InvalidConfig if:
- the extension is not .srt, or
- the file does not exist at build time.
pub fn subtitles_ass(self, ass_path: &str) -> Self
Burn ASS/SSA styled subtitles into the video (hard subtitles).
Subtitles are read from the .ass or .ssa file at ass_path and
rendered with full styling using FFmpeg’s dedicated ass filter,
which preserves fonts, colours, and positioning better than the generic
subtitles filter.
build returns FilterError::InvalidConfig if:
- the extension is not .ass or .ssa, or
- the file does not exist at build time.
impl FilterGraphBuilder
pub fn fade_in(self, start_sec: f64, duration_sec: f64) -> Self
Fade in from black, starting at start_sec seconds and reaching full
brightness after duration_sec seconds.
pub fn fade_out(self, start_sec: f64, duration_sec: f64) -> Self
Fade out to black, starting at start_sec seconds and reaching full
black after duration_sec seconds.
pub fn fade_in_white(self, start_sec: f64, duration_sec: f64) -> Self
Fade in from white, starting at start_sec seconds and reaching full
brightness after duration_sec seconds.
pub fn fade_out_white(self, start_sec: f64, duration_sec: f64) -> Self
Fade out to white, starting at start_sec seconds and reaching full
white after duration_sec seconds.
pub fn xfade(self, transition: XfadeTransition, duration: f64, offset: f64) -> Self
Apply a cross-dissolve transition between two video streams using xfade.
Requires two input slots: slot 0 is clip A (first clip), slot 1 is clip B
(second clip). Call FilterGraph::push_video with slot 0 for clip A
frames and slot 1 for clip B frames.
- transition: the visual transition style.
- duration: length of the overlap in seconds. Must be > 0.0.
- offset: PTS offset (seconds) at which clip B starts playing.
pub fn join_with_dissolve(self, clip_a_end_sec: f64, clip_b_start_sec: f64, dissolve_dur_sec: f64) -> Self
Join two video streams with a cross-dissolve transition.
Requires two video input slots: push clip A frames to slot 0 and clip B
frames to slot 1. Internally expands to
trim + setpts → xfade ← setpts + trim.
- clip_a_end_sec: timestamp (seconds) where clip A ends. Must be > 0.0.
- clip_b_start_sec: timestamp (seconds) where clip B content starts (before the overlap region).
- dissolve_dur_sec: cross-dissolve overlap length in seconds. Must be > 0.0.
build returns FilterError::InvalidConfig if
dissolve_dur_sec ≤ 0.0 or clip_a_end_sec ≤ 0.0.
impl FilterGraphBuilder
pub fn hardware(self, hw: HwAccel) -> Self
Enable hardware-accelerated filtering.
When set, hwupload and hwdownload filters are inserted around the
filter chain automatically.
pub fn build(self) -> Result<FilterGraph, FilterError>
Build the FilterGraph.
§Errors
Returns FilterError::BuildFailed if steps is empty (there is
nothing to filter). The actual FFmpeg graph is constructed lazily on the
first push_video or
push_audio call.
Trait Implementations§
impl Clone for FilterGraphBuilder
fn clone(&self) -> FilterGraphBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.