Struct ffmpeg_sys_the_third::AVFrame

#[repr(C)]
pub struct AVFrame {
    pub data: [*mut u8; 8],
    pub linesize: [c_int; 8],
    pub extended_data: *mut *mut u8,
    pub width: c_int,
    pub height: c_int,
    pub nb_samples: c_int,
    pub format: c_int,
    pub key_frame: c_int,
    pub pict_type: AVPictureType,
    pub sample_aspect_ratio: AVRational,
    pub pts: i64,
    pub pkt_pts: i64,
    pub pkt_dts: i64,
    pub coded_picture_number: c_int,
    pub display_picture_number: c_int,
    pub quality: c_int,
    pub opaque: *mut c_void,
    pub error: [u64; 8],
    pub repeat_pict: c_int,
    pub interlaced_frame: c_int,
    pub top_field_first: c_int,
    pub palette_has_changed: c_int,
    pub reordered_opaque: i64,
    pub sample_rate: c_int,
    pub channel_layout: u64,
    pub buf: [*mut AVBufferRef; 8],
    pub extended_buf: *mut *mut AVBufferRef,
    pub nb_extended_buf: c_int,
    pub side_data: *mut *mut AVFrameSideData,
    pub nb_side_data: c_int,
    pub flags: c_int,
    pub color_range: AVColorRange,
    pub color_primaries: AVColorPrimaries,
    pub color_trc: AVColorTransferCharacteristic,
    pub colorspace: AVColorSpace,
    pub chroma_location: AVChromaLocation,
    pub best_effort_timestamp: i64,
    pub pkt_pos: i64,
    pub pkt_duration: i64,
    pub metadata: *mut AVDictionary,
    pub decode_error_flags: c_int,
    pub channels: c_int,
    pub pkt_size: c_int,
    pub qscale_table: *mut i8,
    pub qstride: c_int,
    pub qscale_type: c_int,
    pub qp_table_buf: *mut AVBufferRef,
    pub hw_frames_ctx: *mut AVBufferRef,
    pub opaque_ref: *mut AVBufferRef,
    pub crop_top: usize,
    pub crop_bottom: usize,
    pub crop_left: usize,
    pub crop_right: usize,
    pub private_ref: *mut AVBufferRef,
}

This structure describes decoded (raw) audio or video data.

AVFrame must be allocated using av_frame_alloc(). Note that this only allocates the AVFrame itself, the buffers for the data must be managed through other means (see below). AVFrame must be freed with av_frame_free().

AVFrame is typically allocated once and then reused multiple times to hold different data (e.g. a single AVFrame to hold frames received from a decoder). In such a case, av_frame_unref() will free any references held by the frame and reset it to its original clean state before it is reused again.

The data described by an AVFrame is usually reference counted through the AVBuffer API. The underlying buffer references are stored in AVFrame.buf / AVFrame.extended_buf. An AVFrame is considered to be reference counted if at least one reference is set, i.e. if AVFrame.buf[0] != NULL. In such a case, every single data plane must be contained in one of the buffers in AVFrame.buf or AVFrame.extended_buf. There may be a single buffer for all the data, or one separate buffer for each plane, or anything in between.
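The buf[0] test above can be sketched in plain Rust. The helper name is ours, not an FFmpeg API, and *mut u8 stands in for *mut AVBufferRef so the sketch needs no FFmpeg bindings:

```rust
use std::ptr;

/// A frame is reference counted iff buf[0] is non-NULL.
/// `buf` stands in for AVFrame.buf (really [*mut AVBufferRef; 8]).
fn is_ref_counted(buf: &[*mut u8; 8]) -> bool {
    !buf[0].is_null()
}

fn main() {
    // No references set: not reference counted.
    let empty: [*mut u8; 8] = [ptr::null_mut(); 8];
    assert!(!is_ref_counted(&empty));

    // At least one reference set: reference counted.
    let mut backing = 0u8;
    let mut owned = empty;
    owned[0] = &mut backing as *mut u8;
    assert!(is_ref_counted(&owned));
}
```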

sizeof(AVFrame) is not a part of the public ABI, so new fields may be added to the end with a minor bump.

Fields can be accessed through AVOptions; the name string used matches the C structure field name for fields accessible through AVOptions. The AVClass for AVFrame can be obtained from avcodec_get_frame_class().

Fields§

§data: [*mut u8; 8]

pointer to the picture/channel planes. This might be different from the first allocated byte.

Some decoders access areas outside 0,0 - width,height, please see avcodec_align_dimensions2(). Some filters and swscale can read up to 16 bytes beyond the planes, if these filters are to be used, then 16 extra bytes must be allocated.

NOTE: Except for hwaccel formats, pointers not needed by the format MUST be set to NULL.

§linesize: [c_int; 8]

For video, size in bytes of each picture line. For audio, size in bytes of each plane.

For audio, only linesize[0] may be set. For planar audio, each channel plane must be the same size.

For video, the linesizes should be multiples of the CPU's alignment preference, which is 16 or 32 for modern desktop CPUs. Some code requires such alignment, other code can be slower without correct alignment, and for yet other code it makes no difference.

@note The linesize may be larger than the size of usable data – there may be extra padding present for performance reasons.
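The padding rule above can be sketched as a round-up to the alignment boundary. The helper name is ours, not an FFmpeg API (FFmpeg computes linesizes internally in av_frame_get_buffer() and friends):

```rust
/// Row width in bytes rounded up to `align`, which must be a power
/// of two; `align` would typically be 16 or 32 on desktop CPUs.
fn aligned_linesize(width_bytes: usize, align: usize) -> usize {
    assert!(align.is_power_of_two());
    (width_bytes + align - 1) & !(align - 1)
}

fn main() {
    // A 1918-byte row padded up to a 32-byte boundary: 2 padding bytes.
    assert_eq!(aligned_linesize(1918, 32), 1920);
    // An already-aligned row needs no padding.
    assert_eq!(aligned_linesize(1920, 32), 1920);
}
```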

§extended_data: *mut *mut u8

pointers to the data planes/channels.

For video, this should simply point to data[].

For planar audio, each channel has a separate data pointer, and linesize[0] contains the size of each channel buffer. For packed audio, there is just one data pointer, and linesize[0] contains the total size of the buffer for all channels.

Note: Both data and extended_data should always be set in a valid frame, but for planar audio with more channels than can fit in data, extended_data must be used in order to access all channels.
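The planar/packed layout rule above can be illustrated with plain arithmetic; the helper is hypothetical, not part of the bindings:

```rust
/// Total bytes backing all channels. For planar audio linesize[0] is
/// one channel plane's size; for packed audio it is already the
/// total size for all interleaved channels.
fn total_buffer_size(planar: bool, linesize0: usize, channels: usize) -> usize {
    if planar { linesize0 * channels } else { linesize0 }
}

fn main() {
    let samples = 1024;
    let bytes_per_sample = 4; // e.g. 32-bit float samples
    let channels = 2;

    // Planar: linesize[0] covers a single channel.
    let planar_ls0 = samples * bytes_per_sample;
    // Packed: linesize[0] covers every channel.
    let packed_ls0 = samples * bytes_per_sample * channels;

    // Either way the same amount of audio occupies the same space.
    assert_eq!(total_buffer_size(true, planar_ls0, channels),
               total_buffer_size(false, packed_ls0, channels));
}
```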

§width: c_int

Video frames only. The coded dimensions (in pixels) of the video frame, i.e. the size of the rectangle that contains some well-defined values.

@note The part of the frame intended for display/presentation is further restricted by the cropping rectangle (see the crop_top/crop_bottom/crop_left/crop_right fields).

§height: c_int

Video frames only. The coded dimensions (in pixels) of the video frame, i.e. the size of the rectangle that contains some well-defined values.

@note The part of the frame intended for display/presentation is further restricted by the cropping rectangle (see the crop_top/crop_bottom/crop_left/crop_right fields).

§nb_samples: c_int

number of audio samples (per channel) described by this frame

§format: c_int

format of the frame, -1 if unknown or unset. Values correspond to enum AVPixelFormat for video frames and enum AVSampleFormat for audio frames.

§key_frame: c_int

1 -> keyframe, 0 -> not

§pict_type: AVPictureType

Picture type of the frame.

§sample_aspect_ratio: AVRational

Sample aspect ratio for the video frame, 0/1 if unknown/unspecified.
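How a sample aspect ratio is typically applied can be sketched as follows; the helper is ours, and the (num, den) pair stands in for the AVRational field:

```rust
/// Display aspect ratio = (width / height) scaled by the sample
/// (pixel) aspect ratio num/den.
fn display_aspect_ratio(width: i32, height: i32, sar_num: i32, sar_den: i32) -> f64 {
    (width as f64 * sar_num as f64) / (height as f64 * sar_den as f64)
}

fn main() {
    // 720x576 PAL DV widescreen with a 64:45 SAR displays at 16:9.
    let dar = display_aspect_ratio(720, 576, 64, 45);
    assert!((dar - 16.0 / 9.0).abs() < 1e-12);
    // Square pixels: DAR equals the raw width/height ratio.
    let dar = display_aspect_ratio(1920, 1080, 1, 1);
    assert!((dar - 16.0 / 9.0).abs() < 1e-12);
}
```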

§pts: i64

Presentation timestamp in time_base units (time when frame should be shown to user).

§pkt_pts: i64

PTS copied from the AVPacket that was decoded to produce this frame. @deprecated use the pts field instead

§pkt_dts: i64

DTS copied from the AVPacket that triggered returning this frame (if frame threading isn't used). This is also the presentation time of this AVFrame, calculated from only AVPacket.dts values without pts values.

§coded_picture_number: c_int

picture number in bitstream order

§display_picture_number: c_int

picture number in display order

§quality: c_int

quality (between 1 (good) and FF_LAMBDA_MAX (bad))

§opaque: *mut c_void

for some private data of the user

§error: [u64; 8]

@deprecated unused

§repeat_pict: c_int

When decoding, this signals how much the picture must be delayed. extra_delay = repeat_pict / (2*fps)
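The quoted formula can be written out directly; the helper name is ours, not an FFmpeg API:

```rust
/// extra_delay = repeat_pict / (2 * fps), in seconds.
fn extra_delay(repeat_pict: i32, fps: f64) -> f64 {
    repeat_pict as f64 / (2.0 * fps)
}

fn main() {
    // repeat_pict == 1 (e.g. soft telecine repeating one field) at
    // 30 fps delays the next picture by half a frame duration.
    let d = extra_delay(1, 30.0);
    assert!((d - 1.0 / 60.0).abs() < 1e-12);
    // repeat_pict == 0: no extra delay.
    assert_eq!(extra_delay(0, 25.0), 0.0);
}
```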

§interlaced_frame: c_int

The content of the picture is interlaced.

§top_field_first: c_int

If the content is interlaced, is top field displayed first.

§palette_has_changed: c_int

Tell user application that palette has changed from previous frame.

§reordered_opaque: i64

reordered opaque 64 bits (generally an integer or a double precision float PTS, but can be anything). The user sets AVCodecContext.reordered_opaque to represent the input at that time; the decoder reorders values as needed and sets AVFrame.reordered_opaque to exactly one of the values provided by the user through AVCodecContext.reordered_opaque.

§sample_rate: c_int

Sample rate of the audio data.

§channel_layout: u64

Channel layout of the audio data.

§buf: [*mut AVBufferRef; 8]

AVBuffer references backing the data for this frame. If all elements of this array are NULL, then this frame is not reference counted. This array must be filled contiguously – if buf[i] is non-NULL then buf[j] must also be non-NULL for all j < i.

There may be at most one AVBuffer per data plane, so for video this array always contains all the references. For planar audio with more than AV_NUM_DATA_POINTERS channels, there may be more buffers than can fit in this array. Then the extra AVBufferRef pointers are stored in the extended_buf array.
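The contiguity invariant above says the non-NULL entries must form a prefix of the array. A sketch of the check, with *mut u8 standing in for *mut AVBufferRef and a helper name of our own:

```rust
use std::ptr;

/// True iff every non-NULL entry precedes every NULL entry, i.e.
/// if buf[i] is non-NULL then buf[j] is non-NULL for all j < i.
fn buf_is_contiguous(buf: &[*mut u8; 8]) -> bool {
    let first_null = buf.iter().position(|p| p.is_null()).unwrap_or(buf.len());
    buf[first_null..].iter().all(|p| p.is_null())
}

fn main() {
    let mut x = 0u8;
    let mut a: [*mut u8; 8] = [ptr::null_mut(); 8];
    assert!(buf_is_contiguous(&a)); // all NULL: trivially contiguous

    a[0] = &mut x as *mut u8;
    assert!(buf_is_contiguous(&a)); // one leading reference: fine

    a[2] = &mut x as *mut u8;
    assert!(!buf_is_contiguous(&a)); // gap at index 1: invalid
}
```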

§extended_buf: *mut *mut AVBufferRef

For planar audio which requires more than AV_NUM_DATA_POINTERS AVBufferRef pointers, this array will hold all the references which cannot fit into AVFrame.buf.

Note that this is different from AVFrame.extended_data, which always contains all the pointers. This array only contains the extra pointers, which cannot fit into AVFrame.buf.

This array is always allocated using av_malloc() by whoever constructs the frame. It is freed in av_frame_unref().

§nb_extended_buf: c_int

Number of elements in extended_buf.

§side_data: *mut *mut AVFrameSideData

§nb_side_data: c_int

§flags: c_int

Frame flags, a combination of @ref lavu_frame_flags

§color_range: AVColorRange

MPEG vs JPEG YUV range.

  • encoding: Set by user
  • decoding: Set by libavcodec
§color_primaries: AVColorPrimaries

§color_trc: AVColorTransferCharacteristic

§colorspace: AVColorSpace

YUV colorspace type.

  • encoding: Set by user
  • decoding: Set by libavcodec
§chroma_location: AVChromaLocation

§best_effort_timestamp: i64

frame timestamp estimated using various heuristics, in stream time base

  • encoding: unused
  • decoding: set by libavcodec, read by user.
§pkt_pos: i64

reordered pos from the last AVPacket that has been input into the decoder

  • encoding: unused
  • decoding: Read by user.
§pkt_duration: i64

duration of the corresponding packet, expressed in AVStream->time_base units, 0 if unknown.

  • encoding: unused
  • decoding: Read by user.
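What "expressed in AVStream->time_base units" means can be sketched with a hypothetical conversion helper; the (num, den) pair stands in for an AVRational time base:

```rust
/// seconds = duration * num / den, where num/den is the stream
/// time base.
fn duration_seconds(duration: i64, num: i32, den: i32) -> f64 {
    duration as f64 * num as f64 / den as f64
}

fn main() {
    // 3003 units in the common 1/90000 MPEG-TS time base is one
    // NTSC frame duration (~33.37 ms).
    let s = duration_seconds(3003, 1, 90000);
    assert!((s - 3003.0 / 90000.0).abs() < 1e-12);
}
```

In real code one would use FFmpeg's rational helpers (e.g. av_q2d on the time base) rather than raw floating point, to avoid precision loss on large timestamps.
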
§metadata: *mut AVDictionary

metadata.

  • encoding: Set by user.
  • decoding: Set by libavcodec.
§decode_error_flags: c_int

decode error flags of the frame, set to a combination of FF_DECODE_ERROR_xxx flags if the decoder produced a frame, but there were errors during the decoding.

  • encoding: unused
  • decoding: set by libavcodec, read by user.
§channels: c_int

number of audio channels, only used for audio.

  • encoding: unused
  • decoding: Read by user.
§pkt_size: c_int

size of the corresponding packet containing the compressed frame. It is set to a negative value if unknown.

  • encoding: unused
  • decoding: set by libavcodec, read by user.
§qscale_table: *mut i8

QP table

§qstride: c_int

QP store stride

§qscale_type: c_int

§qp_table_buf: *mut AVBufferRef

§hw_frames_ctx: *mut AVBufferRef

For hwaccel-format frames, this should be a reference to the AVHWFramesContext describing the frame.

§opaque_ref: *mut AVBufferRef

AVBufferRef for free use by the API user. FFmpeg will never check the contents of the buffer ref. FFmpeg calls av_buffer_unref() on it when the frame is unreferenced. av_frame_copy_props() calls create a new reference with av_buffer_ref() for the target frame’s opaque_ref field.

This is unrelated to the opaque field, although it serves a similar purpose.

§crop_top: usize

Video frames only. The number of pixels to discard from the top/bottom/left/right border of the frame to obtain the sub-rectangle of the frame intended for presentation.
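The presentation rectangle implied by the crop fields is simply the coded dimensions minus the discarded borders; the helper below is ours, not an FFmpeg API (FFmpeg applies cropping via av_frame_apply_cropping()):

```rust
/// Presentation (width, height) after discarding the cropped borders
/// from the coded dimensions.
fn cropped_size(width: usize, height: usize,
                crop_left: usize, crop_right: usize,
                crop_top: usize, crop_bottom: usize) -> (usize, usize) {
    (width - crop_left - crop_right, height - crop_top - crop_bottom)
}

fn main() {
    // Classic H.264 case: 1920x1088 coded frame, 8 lines cropped at
    // the bottom, presented as 1920x1080.
    assert_eq!(cropped_size(1920, 1088, 0, 0, 0, 8), (1920, 1080));
}
```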

§crop_bottom: usize

§crop_left: usize

§crop_right: usize

§private_ref: *mut AVBufferRef

AVBufferRef for internal use by a single libav* library. Must not be used to transfer data between libraries. Has to be NULL when ownership of the frame leaves the respective library.

Code outside the FFmpeg libs should never check or change the contents of the buffer ref.

FFmpeg calls av_buffer_unref() on it when the frame is unreferenced. av_frame_copy_props() calls create a new reference with av_buffer_ref() for the target frame’s private_ref field.

Trait Implementations§

impl Clone for AVFrame

fn clone(&self) -> AVFrame

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for AVFrame

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl PartialEq for AVFrame

fn eq(&self, other: &AVFrame) -> bool

Tests whether self and other are equal; used by ==.

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient and should not be overridden without very good reason.

impl Copy for AVFrame

impl Eq for AVFrame

impl StructuralPartialEq for AVFrame

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.