easy_ffprobe

Struct VideoStream

pub struct VideoStream {
    pub width: i64,
    pub height: i64,
    pub coded_height: i64,
    pub coded_width: i64,
    pub sample_aspect_ratio: Option<Ratio>,
    pub display_aspect_ratio: Option<Ratio>,
    pub bits_per_raw_sample: Option<i64>,
    pub chroma_location: Option<String>,
    pub closed_captions: i64,
    pub codec_long_name: String,
    pub codec_name: String,
    pub color_primaries: Option<String>,
    pub color_range: Option<String>,
    pub color_space: Option<String>,
    pub color_transfer: Option<String>,
    pub field_order: Option<String>,
    pub film_grain: i64,
    pub has_b_frames: i64,
    pub is_avc: Option<bool>,
    pub level: i64,
    pub nal_length_size: Option<i64>,
    pub pix_fmt: Option<String>,
    pub profile: Option<String>,
    pub duration_ts: Option<u64>,
    pub refs: i64,
    pub tags: Option<VideoTags>,
    pub bit_rate: Option<i64>,
    /* private fields */
}

Stream of type video

Fields§

§width: i64

Width of the video, in pixels.

§height: i64

Height of the video, in pixels.

§coded_height: i64

Height before cropping. See https://superuser.com/questions/1523944/whats-the-difference-between-coded-width-and-width-in-ffprobe

§coded_width: i64

Width before cropping. See https://superuser.com/questions/1523944/whats-the-difference-between-coded-width-and-width-in-ffprobe

§sample_aspect_ratio: Option<Ratio>

Ratio of the width to the height of individual pixels in the video; it describes how the pixels are stored in the video file.

§display_aspect_ratio: Option<Ratio>

Ratio of the width to the height of the video as it is intended to be viewed; this ratio dictates the shape of the displayed image on the screen.
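
The two ratios are linked through the frame dimensions: DAR = SAR × width / height. A minimal sketch of that relationship (the function name and reduced-fraction return type are illustrative, not part of this crate):

```rust
/// Compute the display aspect ratio from the sample (pixel) aspect ratio
/// and the stored frame dimensions: DAR = SAR * width / height.
/// Returns the ratio as a reduced (numerator, denominator) pair.
fn display_aspect_ratio(width: i64, height: i64, sar_num: i64, sar_den: i64) -> (i64, i64) {
    // DAR numerator/denominator before reduction.
    let (mut n, mut d) = (width * sar_num, height * sar_den);
    // Reduce the fraction with a simple Euclidean gcd.
    fn gcd(a: i64, b: i64) -> i64 {
        if b == 0 { a } else { gcd(b, a % b) }
    }
    let g = gcd(n, d);
    n /= g;
    d /= g;
    (n, d)
}

fn main() {
    // Anamorphic PAL DVD-style video: 720x576 stored pixels with a
    // 64:45 SAR displays as 16:9.
    assert_eq!(display_aspect_ratio(720, 576, 64, 45), (16, 9));
    // Square pixels: the DAR is just width:height reduced.
    assert_eq!(display_aspect_ratio(1920, 1080, 1, 1), (16, 9));
}
```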

§bits_per_raw_sample: Option<i64>

Number of bits used to represent each component of a pixel. For example, with 8-bit raw samples, each color component (e.g., red, green, and blue in an RGB format) is represented by 8 bits, allowing 256 distinct levels per component.
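
The level count follows directly from the bit depth as 2^bits; a quick illustrative sketch (the helper name is made up for this example):

```rust
/// Number of distinct levels a single color component can take,
/// given its bit depth: 2^bits.
fn levels_per_component(bits: u32) -> u64 {
    1u64 << bits
}

fn main() {
    assert_eq!(levels_per_component(8), 256);   // standard 8-bit video
    assert_eq!(levels_per_component(10), 1024); // 10-bit (e.g., HDR) content
}
```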

§chroma_location: Option<String>

Location of chroma samples in the video (e.g., left, center). Chroma samples refer to the color information in a video image. In video and image processing, the image is typically represented in a color space where the luminance (brightness) and chrominance (color) are separated. The chrominance components (chroma) are often sub-sampled to reduce the amount of data that needs to be processed and stored. TODO: enum

§closed_captions: i64

Indicates the presence of closed captions in the video (0 or 1). Closed captioning (CC) and subtitling are both processes of displaying text on a television, video screen, or other visual display to provide additional or interpretive information. Both are typically used as a transcription of the audio portion of a program as it occurs (either verbatim or in edited form), sometimes including descriptions of non-speech elements.

§codec_long_name: String

Long name of the codec used for the video stream.

§codec_name: String

Short name of the codec used for the video stream. Example: h264

§color_primaries: Option<String>

Indicates the color primaries used in the video (e.g., BT.709).

§color_range: Option<String>

Indicates the color range used in the video (e.g., full, limited).

§color_space: Option<String>

Indicates the color space used in the video (e.g., YUV, RGB).

§color_transfer: Option<String>

Indicates the color transfer characteristic used in the video (e.g., BT.709).

§field_order: Option<String>

Order in which fields are interlaced in the video (e.g., top first, bottom first). Interlaced video consists of two fields per frame, each containing a subset of the lines in the frame. Field order determines how these fields are displayed:

  • Top field first: The first field contains the topmost lines (odd lines), followed by the second field with the even lines.
  • Bottom field first: The first field contains the bottommost lines (even lines), followed by the second field with the odd lines.
  • Progressive: The video is not interlaced; each frame is displayed as a whole.
  • Unknown: The field order is not specified.

§film_grain: i64

Indicates the presence of film grain in the video.

§has_b_frames: i64

Number of B-frames between I-frames and P-frames in the video.

MPEG-2 includes three basic types of coded frames: intra-coded frames (I-frames), predictive-coded frames (P-frames), and bidirectionally-predictive-coded frames (B-frames). An I-frame is a separately-compressed version of a single uncompressed (raw) frame. The coding of an I-frame takes advantage of spatial redundancy and of the inability of the eye to detect certain changes in the image. Unlike P-frames and B-frames, I-frames do not depend on data in the preceding or the following frames, and so their coding is very similar to how a still photograph would be coded (roughly similar to JPEG picture coding).

Briefly, the raw frame is divided into 8 pixel by 8 pixel blocks. The data in each block is transformed by the discrete cosine transform (DCT). The result is an 8×8 matrix of coefficients that have real number values. The transform converts spatial variations into frequency variations, but it does not change the information in the block; if the transform is computed with perfect precision, the original block can be recreated exactly by applying the inverse cosine transform (also with perfect precision). The conversion from 8-bit integers to real-valued transform coefficients actually expands the amount of data used at this stage of the processing, but the advantage of the transformation is that the image data can then be approximated by quantizing the coefficients. Many of the transform coefficients, usually the higher frequency components, will be zero after the quantization, which is basically a rounding operation. The penalty of this step is the loss of some subtle distinctions in brightness and color. The quantization may either be coarse or fine, as selected by the encoder. If the quantization is not too coarse and one applies the inverse transform to the matrix after it is quantized, one gets an image that looks very similar to the original image but is not quite the same.

Next, the quantized coefficient matrix is itself compressed. Typically, one corner of the 8×8 array of coefficients contains only zeros after quantization is applied. By starting in the opposite corner of the matrix, then zigzagging through the matrix to combine the coefficients into a string, then substituting run-length codes for consecutive zeros in that string, and then applying Huffman coding to that result, one reduces the matrix to a smaller quantity of data. It is this entropy coded data that is broadcast or that is put on DVDs. In the receiver or the player, the whole process is reversed, enabling the receiver to reconstruct, to a close approximation, the original frame.

The processing of B-frames is similar to that of P-frames except that B-frames use the picture in a subsequent reference frame as well as the picture in a preceding reference frame. As a result, B-frames usually provide more compression than P-frames. B-frames are never reference frames in MPEG-2 Video. Typically, every 15th frame or so is made into an I-frame. P-frames and B-frames might follow an I-frame like this, IBBPBBPBBPBB(I), to form a Group of Pictures (GOP); however, the standard is flexible about this. The encoder selects which pictures are coded as I-, P-, and B-frames.
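
As a small sketch of the GOP structure described above, counting the frame types in the example pattern "IBBPBBPBBPBB" shows why B-frames dominate a typical GOP (the helper function is illustrative, not part of this crate):

```rust
/// Count I-, P-, and B-frames in a GOP pattern string such as the
/// "IBBPBBPBBPBB" example from the field description.
fn frame_type_counts(gop: &str) -> (usize, usize, usize) {
    let i = gop.chars().filter(|&c| c == 'I').count();
    let p = gop.chars().filter(|&c| c == 'P').count();
    let b = gop.chars().filter(|&c| c == 'B').count();
    (i, p, b)
}

fn main() {
    // The 12-frame GOP named in the text: 1 I-frame, 3 P-frames, 8 B-frames.
    assert_eq!(frame_type_counts("IBBPBBPBBPBB"), (1, 3, 8));
}
```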

§is_avc: Option<bool>

Indicates whether the video stream is in AVC format. Advanced Video Coding (AVC), also referred to as H.264 or MPEG-4 Part 10, is a video compression standard based on block-oriented, motion-compensated coding. It is by far the most commonly used format for the recording, compression, and distribution of video content, used by 91% of video industry developers as of September 2019. It supports a maximum resolution of 8K UHD.

§level: i64

Level of the codec profile used for the video stream. TODO: explain

§nal_length_size: Option<i64>

Number of bytes used to encode the length of each NAL (Network Abstraction Layer) unit in the video stream.

§pix_fmt: Option<String>

Pixel format used in the video stream (e.g., yuv420p).
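
As a sketch of what a pixel format implies in practice: the common yuv420p layout stores one 8-bit luma sample per pixel plus two chroma planes subsampled by 2 in each dimension, i.e. 1.5 bytes per pixel for an uncompressed frame (the helper below is illustrative, not part of this crate):

```rust
/// Bytes needed for one uncompressed frame in the yuv420p pixel format:
/// a full-resolution 8-bit luma (Y) plane plus two chroma (U, V) planes
/// each subsampled by 2 horizontally and vertically.
fn yuv420p_frame_bytes(width: i64, height: i64) -> i64 {
    let luma = width * height;               // Y plane: one byte per pixel
    let chroma = (width / 2) * (height / 2); // each of the U and V planes
    luma + 2 * chroma
}

fn main() {
    // A 1920x1080 frame: 1920 * 1080 * 1.5 = 3_110_400 bytes.
    assert_eq!(yuv420p_frame_bytes(1920, 1080), 3_110_400);
}
```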

§profile: Option<String>

Profile of the codec used for the video stream (e.g., Main, High). TODO: enum

§duration_ts: Option<u64>

Duration of the video stream in timestamp units.
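
To convert duration_ts into seconds, multiply by the stream's time_base, which ffprobe reports separately (it is not a field of this struct). A minimal sketch; the 1/90000 time base used below is an assumed example, common for MPEG-TS video:

```rust
/// Convert a duration expressed in stream timestamp units into seconds.
/// `time_base_num / time_base_den` is the length of one tick in seconds,
/// taken from the stream's time_base elsewhere in the ffprobe output.
fn duration_seconds(duration_ts: u64, time_base_num: u64, time_base_den: u64) -> f64 {
    duration_ts as f64 * time_base_num as f64 / time_base_den as f64
}

fn main() {
    // Assuming a 1/90000 time base, 900_000 ticks is ten seconds.
    assert_eq!(duration_seconds(900_000, 1, 90_000), 10.0);
}
```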

§refs: i64

Number of reference frames in the video stream. TODO: Explain

§tags: Option<VideoTags>

Metadata tags associated with the video stream.

§bit_rate: Option<i64>

Bit rate of the video stream. The bit_rate represents the number of bits that are processed per unit of time in the video stream. It is a measure of the video stream’s data rate, indicating how much data is encoded for each second of video.
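
One practical use of this field: multiplying the bit rate by the stream duration gives a rough estimate of the stream's payload size. The helper below is an illustrative sketch, not part of this crate:

```rust
/// Rough payload size in bytes of a stream: bit_rate (bits per second)
/// times duration in seconds, divided by 8 bits per byte.
fn approx_stream_bytes(bit_rate: i64, duration_secs: f64) -> f64 {
    bit_rate as f64 * duration_secs / 8.0
}

fn main() {
    // A 5 Mb/s stream running for 60 seconds is about 37.5 MB.
    assert_eq!(approx_stream_bytes(5_000_000, 60.0), 37_500_000.0);
}
```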

Trait Implementations§

impl Clone for VideoStream

fn clone(&self) -> VideoStream

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for VideoStream

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl<'de> Deserialize<'de> for VideoStream

fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where __D: Deserializer<'de>,

Deserialize this value from the given Serde deserializer.

impl PartialEq for VideoStream

fn eq(&self, other: &VideoStream) -> bool

Tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl Serialize for VideoStream

fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>
where __S: Serializer,

Serialize this value into the given Serde serializer.

impl Eq for VideoStream

impl StructuralPartialEq for VideoStream

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> CloneToUninit for T
where T: Clone,

unsafe fn clone_to_uninit(&self, dst: *mut T)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dst.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> DeserializeOwned for T
where T: for<'de> Deserialize<'de>,