#[repr(C)]
pub struct AVCaptureDeviceInput { /* private fields */ }
Available on crate feature AVCaptureInput only.
AVCaptureDeviceInput is a concrete subclass of AVCaptureInput that provides an interface for capturing media from an AVCaptureDevice.
Instances of AVCaptureDeviceInput are input sources for AVCaptureSession that provide media data from devices connected to the system, represented by instances of AVCaptureDevice.
See also Apple’s documentation
Implementations§
impl AVCaptureDeviceInput
pub unsafe fn deviceInputWithDevice_error(
    device: &AVCaptureDevice,
) -> Result<Retained<Self>, Retained<NSError>>
Available on crate feature AVCaptureDevice only.
Returns an AVCaptureDeviceInput instance that provides media data from the given device.
Parameter device: An AVCaptureDevice instance to be used for capture.
Parameter outError: On return, if the given device cannot be used for capture, points to an NSError describing the problem.
Returns: An AVCaptureDeviceInput instance that provides data from the given device, or nil if the device could not be used for capture.
This method returns an instance of AVCaptureDeviceInput that can be used to capture data from an AVCaptureDevice in an AVCaptureSession. This method attempts to open the device for capture, taking exclusive control of it if necessary. If the device cannot be opened because it is no longer available or because it is in use, for example, this method returns nil, and the optional outError parameter points to an NSError describing the problem.
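As a rough sketch of how this might look from Rust, the constructor can be wrapped in a small helper. The helper name and the crate paths for Retained and NSError are assumptions about a typical objc2 setup; only deviceInputWithDevice_error itself is taken from this page.
use objc2::rc::Retained;
use objc2_av_foundation::{AVCaptureDevice, AVCaptureDeviceInput};
use objc2_foundation::NSError;
// Hypothetical helper: the Objective-C nil/outError convention surfaces here as a
// Result, with Ok(input) on success and Err(error) if the device cannot be opened.
fn make_device_input(
    device: &AVCaptureDevice,
) -> Result<Retained<AVCaptureDeviceInput>, Retained<NSError>> {
    // The binding marks this call as unsafe; see the crate docs for its exact safety requirements.
    unsafe { AVCaptureDeviceInput::deviceInputWithDevice_error(device) }
}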
pub unsafe fn initWithDevice_error(
    this: Allocated<Self>,
    device: &AVCaptureDevice,
) -> Result<Retained<Self>, Retained<NSError>>
Available on crate feature AVCaptureDevice only.
Creates an AVCaptureDeviceInput instance that provides media data from the given device.
Parameter device: An AVCaptureDevice instance to be used for capture.
Parameter outError: On return, if the given device cannot be used for capture, points to an NSError describing the problem.
Returns: An AVCaptureDeviceInput instance that provides data from the given device, or nil if the device could not be used for capture.
This method creates an instance of AVCaptureDeviceInput that can be used to capture data from an AVCaptureDevice in an AVCaptureSession. This method attempts to open the device for capture, taking exclusive control of it if necessary. If the device cannot be opened because it is no longer available or because it is in use, for example, this method returns nil, and the optional outError parameter points to an NSError describing the problem.
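The same construction via the explicit alloc/init pattern might look roughly as follows. This is a sketch: it assumes the alloc() associated function provided by objc2 (via its AnyThread trait in recent versions) is available for this class.
use objc2::rc::Retained;
use objc2::AnyThread; // assumed: the trait providing alloc() in recent objc2 versions
use objc2_av_foundation::{AVCaptureDevice, AVCaptureDeviceInput};
use objc2_foundation::NSError;
fn make_device_input_via_init(
    device: &AVCaptureDevice,
) -> Result<Retained<AVCaptureDeviceInput>, Retained<NSError>> {
    // alloc() yields the Allocated<Self> that initWithDevice_error consumes.
    let this = AVCaptureDeviceInput::alloc();
    unsafe { AVCaptureDeviceInput::initWithDevice_error(this, device) }
}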
pub unsafe fn device(&self) -> Retained<AVCaptureDevice>
Available on crate feature AVCaptureDevice only.
The device from which the receiver provides data.
The value of this property is the AVCaptureDevice instance that was used to create the receiver.
pub unsafe fn unifiedAutoExposureDefaultsEnabled(&self) -> bool
Specifies whether the source device should use the same default auto exposure behaviors for -[AVCaptureSession setSessionPreset:] and -[AVCaptureDevice setActiveFormat:].
AVCaptureDevice’s activeFormat property may be set two different ways. 1) You set it directly using one of the formats in the device’s -formats array, or 2) the AVCaptureSession sets it on your behalf when you set the AVCaptureSession’s sessionPreset property. Depending on the device and format, the default auto exposure behavior may be configured differently when you use one method or the other, resulting in non-uniform auto exposure behavior. Auto exposure defaults include min frame rate, max frame rate, and max exposure duration. If you wish to ensure that consistent default behaviors are applied to the device regardless of the API you use to configure the activeFormat, you may set the device input’s unifiedAutoExposureDefaultsEnabled property to YES. Default value for this property is NO.
Note that if you manually set the device’s min frame rate, max frame rate, or max exposure duration, your custom values will override the device defaults regardless of whether you’ve set this property to YES.
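A minimal sketch of opting in, assuming input is an AVCaptureDeviceInput obtained as shown above:
// Ask for consistent auto exposure defaults regardless of whether the active
// format is set directly or via -[AVCaptureSession setSessionPreset:].
unsafe {
    if !input.unifiedAutoExposureDefaultsEnabled() {
        input.setUnifiedAutoExposureDefaultsEnabled(true);
    }
}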
pub unsafe fn setUnifiedAutoExposureDefaultsEnabled(
    &self,
    unified_auto_exposure_defaults_enabled: bool,
)
Setter for unifiedAutoExposureDefaultsEnabled.
pub unsafe fn portsWithMediaType_sourceDeviceType_sourceDevicePosition(
    &self,
    media_type: Option<&AVMediaType>,
    source_device_type: Option<&AVCaptureDeviceType>,
    source_device_position: AVCaptureDevicePosition,
) -> Retained<NSArray<AVCaptureInputPort>>
Available on crate features AVCaptureDevice and AVMediaFormat only.
An accessor method used to retrieve a virtual device’s constituent device ports for use in an AVCaptureMultiCamSession.
Parameter mediaType: The AVMediaType of the port for which you’re searching, or nil if all media types should be considered.
Parameter sourceDeviceType: The AVCaptureDeviceType of the port for which you’re searching, or nil if the source device type is irrelevant.
Parameter sourceDevicePosition: The AVCaptureDevicePosition of the port for which you’re searching. AVCaptureDevicePositionUnspecified is germane to audio devices, indicating omnidirectional audio. For other types of capture devices (e.g. cameras), AVCaptureDevicePositionUnspecified means all positions should be considered in the search.
Returns: An array of AVCaptureInputPorts satisfying the search criteria, or an empty array if none could be found.
When using AVCaptureMultiCamSession, multiple devices may be run simultaneously. You may also run simultaneous streams from a virtual device such as the Dual Camera. By inspecting a virtual device’s constituentDevices property, you can find its underlying physical devices and, using this method, search for ports originating from one of those constituent devices. Note that the AVCaptureInput.ports array does not include constituent device ports for virtual devices. You must use this accessor method to discover the ports for which you’re specifically looking. These constituent device ports may be used to make connections to outputs for use with an AVCaptureMultiCamSession. Using the Dual Camera as an example, the AVCaptureInput.ports property exposes only those ports supported by the virtual device (it switches automatically between wide and telephoto cameras according to the zoom factor). You may use this method to find the video ports for the constituentDevices.
AVCaptureInputPort *wideVideoPort = [dualCameraInput portsWithMediaType:AVMediaTypeVideo
                                                       sourceDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera
                                                   sourceDevicePosition:AVCaptureDevicePositionBack].firstObject;
AVCaptureInputPort *teleVideoPort = [dualCameraInput portsWithMediaType:AVMediaTypeVideo
                                                       sourceDeviceType:AVCaptureDeviceTypeBuiltInTelephotoCamera
                                                   sourceDevicePosition:AVCaptureDevicePositionBack].firstObject;
These ports may be used to create connections, say, to two AVCaptureVideoDataOutput instances, allowing for synchronized full frame rate delivery of both wide and telephoto streams.
As of iOS 13, constituent device ports may not be connected to AVCapturePhotoOutput instances. Clients who wish to capture multiple photos from a virtual device should use AVCapturePhotoOutput’s virtualDeviceConstituentPhotoDeliveryEnabled feature.
When used in conjunction with an audio device, this method allows you to discover microphones in different AVCaptureDevicePositions. When you intend to work with an AVCaptureMultiCamSession, you may use these ports to make connections and capture both front-facing and back-facing audio simultaneously to two different outputs. When used with an AVCaptureMultiCamSession, the audio device port whose sourceDevicePosition is AVCaptureDevicePositionUnspecified produces omnidirectional sound.
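A rough Rust equivalent of the Objective-C snippet above might look like the following. The dual_camera_input variable is hypothetical, and the exposed names of the AVMediaTypeVideo, AVCaptureDeviceTypeBuiltInWideAngleCamera and AVCaptureDevicePosition::Back constants are assumptions about the crate’s generated bindings; the method itself is the one documented here.
use objc2_av_foundation::{
    AVCaptureDevicePosition, AVCaptureDeviceTypeBuiltInWideAngleCamera, AVMediaTypeVideo,
};
// Look up the wide-angle constituent port of a virtual (e.g. Dual Camera) device input.
let wide_video_port = unsafe {
    dual_camera_input
        .portsWithMediaType_sourceDeviceType_sourceDevicePosition(
            Some(AVMediaTypeVideo),
            Some(AVCaptureDeviceTypeBuiltInWideAngleCamera),
            AVCaptureDevicePosition::Back,
        )
        .firstObject() // None if no matching constituent port exists
};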
pub unsafe fn videoMinFrameDurationOverride(&self) -> CMTime
Available on crate feature objc2-core-media only.
A property that acts as a modifier to the AVCaptureDevice’s activeVideoMinFrameDuration property. Default value is kCMTimeInvalid.
An AVCaptureDevice’s activeVideoMinFrameDuration property is the reciprocal of its active maximum frame rate. To limit the max frame rate of the capture device, clients may set the device’s activeVideoMinFrameDuration to a value supported by the receiver’s activeFormat (see AVCaptureDeviceFormat’s videoSupportedFrameRateRanges property). Changes you make to the device’s activeVideoMinFrameDuration property take effect immediately without disrupting preview. Therefore, the AVCaptureSession must always allocate sufficient resources to allow the device to run at its activeFormat’s max allowable frame rate. If you wish to use a particular device format but only ever run it at lower frame rates (for instance, only run a 1080p240 fps format at a max frame rate of 60), you can set the AVCaptureDeviceInput’s videoMinFrameDurationOverride property to the reciprocal of the max frame rate you intend to use before starting the session (or within a beginConfiguration / commitConfiguration block while running the session).
When a device input is added to a session, this property reverts back to the default of kCMTimeInvalid (no override).
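For example, capping a device input at 60 fps before starting the session might look roughly like this. How a CMTime is constructed from the objc2-core-media crate (the struct-literal form and the CMTimeFlags::Valid name below) is an assumption that may need adjusting to your crate version; only the override property itself comes from this page.
use objc2_core_media::{CMTime, CMTimeFlags};
// Assumed construction of a CMTime of 1/60 s (value/timescale fields mirroring the C struct).
let one_sixtieth_second = CMTime {
    value: 1,
    timescale: 60,
    flags: CMTimeFlags::Valid,
    epoch: 0,
};
unsafe {
    // Cap delivery at 60 fps even if the active format supports higher frame rates.
    // Apply this before starting the session, or inside a
    // beginConfiguration/commitConfiguration block while it is running.
    input.setVideoMinFrameDurationOverride(one_sixtieth_second);
}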
pub unsafe fn setVideoMinFrameDurationOverride(
    &self,
    video_min_frame_duration_override: CMTime,
)
Available on crate feature objc2-core-media only.
Setter for videoMinFrameDurationOverride.
pub unsafe fn isMultichannelAudioModeSupported(
    &self,
    multichannel_audio_mode: AVCaptureMultichannelAudioMode,
) -> bool
Returns whether the receiver supports the given multichannel audio mode.
Parameter multichannelAudioMode: An AVCaptureMultichannelAudioMode to be checked.
Returns: YES if the receiver supports the given multichannel audio mode, NO otherwise.
The receiver’s multichannelAudioMode property can only be set to a certain mode if this method returns YES for that mode.
Multichannel audio modes are not supported when used in conjunction with AVCaptureMultiCamSession.
pub unsafe fn multichannelAudioMode(&self) -> AVCaptureMultichannelAudioMode
Indicates the multichannel audio mode to apply when recording audio.
This property only takes effect when audio is being routed through the built-in microphone, and is ignored if an external microphone is in use.
The default value is AVCaptureMultichannelAudioModeNone, in which case the default single channel audio recording is used.
pub unsafe fn setMultichannelAudioMode(
    &self,
    multichannel_audio_mode: AVCaptureMultichannelAudioMode,
)
Setter for multichannelAudioMode.
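Combining the support check with the setter might look roughly like this. AVCaptureMultichannelAudioMode::Stereo is an assumption about how the AVCaptureMultichannelAudioModeStereo constant is exposed, and input is again a hypothetical audio AVCaptureDeviceInput.
use objc2_av_foundation::AVCaptureMultichannelAudioMode;
unsafe {
    // Only modes reported as supported may be set on the receiver.
    if input.isMultichannelAudioModeSupported(AVCaptureMultichannelAudioMode::Stereo) {
        input.setMultichannelAudioMode(AVCaptureMultichannelAudioMode::Stereo);
    }
}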
pub unsafe fn isWindNoiseRemovalSupported(&self) -> bool
Returns whether or not the device supports wind noise removal during audio capture.
Returns: YES if the device supports wind noise removal, NO otherwise.
pub unsafe fn isWindNoiseRemovalEnabled(&self) -> bool
Specifies whether or not wind noise is removed during audio capture.
Wind noise removal is available when the AVCaptureDeviceInput multichannelAudioMode property is set to any value other than AVCaptureMultichannelAudioModeNone.
pub unsafe fn setWindNoiseRemovalEnabled(
    &self,
    wind_noise_removal_enabled: bool,
)
Setter for isWindNoiseRemovalEnabled.
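Putting the audio-related pieces together, a sketch of enabling wind noise removal (same assumptions as above regarding the stereo constant and the input variable):
use objc2_av_foundation::AVCaptureMultichannelAudioMode;
unsafe {
    // Wind noise removal requires a multichannel audio mode other than None.
    if input.isWindNoiseRemovalSupported()
        && input.isMultichannelAudioModeSupported(AVCaptureMultichannelAudioMode::Stereo)
    {
        input.setMultichannelAudioMode(AVCaptureMultichannelAudioMode::Stereo);
        input.setWindNoiseRemovalEnabled(true);
    }
}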
Methods from Deref<Target = AVCaptureInput>§
pub unsafe fn ports(&self) -> Retained<NSArray<AVCaptureInputPort>>
The ports owned by the receiver.
The value of this property is an array of AVCaptureInputPort objects, each exposing an interface to a single stream of media data provided by an input.
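For instance, the ports could be enumerated like this (a sketch; it assumes NSArray’s iter() and AVCaptureInputPort’s mediaType accessor are available under the enabled features, and input is a hypothetical AVCaptureDeviceInput):
unsafe {
    for port in input.ports().iter() {
        // Each port exposes a single stream of media data provided by the input.
        println!("port media type: {:?}", port.mediaType());
    }
}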
Methods from Deref<Target = NSObject>§
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>§
pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
§Panics
May panic if the object is invalid (which may be the case for objects returned from unavailable init/new methods).
§Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;
let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());
pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
§Safety
The object must have an instance variable with the given name, and it must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want to convert a retained object to another type.
§Mutable classes
Some classes have immutable and mutable variants, such as NSString and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert a NSString to a NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.
See Apple’s documentation on mutability and on isKindOfClass: for more details.
§Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the type-parameters are AnyObject.
§Panics
This works internally by calling isKindOfClass:. That means that the object must have the instance method of that name, and an exception will be thrown (if CoreFoundation is linked) or the process will abort if that is not the case. In the vast majority of cases, you don’t need to worry about this, since both root objects NSObject and NSProxy implement this method.
§Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};
let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();
Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};
let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());
Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();
This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};
let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
if let Some(data) = elem.downcast_ref::<NSString>() {
// handle `data`
}
}
Trait Implementations§
impl AsRef<AVCaptureInput> for AVCaptureDeviceInput
fn as_ref(&self) -> &AVCaptureInput
impl AsRef<AnyObject> for AVCaptureDeviceInput
impl AsRef<NSObject> for AVCaptureDeviceInput
impl Borrow<AVCaptureInput> for AVCaptureDeviceInput
fn borrow(&self) -> &AVCaptureInput
impl Borrow<AnyObject> for AVCaptureDeviceInput
impl Borrow<NSObject> for AVCaptureDeviceInput
impl ClassType for AVCaptureDeviceInput
const NAME: &'static str = "AVCaptureDeviceInput"
type Super = AVCaptureInput
type ThreadKind = <<AVCaptureDeviceInput as ClassType>::Super as ClassType>::ThreadKind
impl Debug for AVCaptureDeviceInput
impl Deref for AVCaptureDeviceInput
impl Hash for AVCaptureDeviceInput
impl Message for AVCaptureDeviceInput
impl NSObjectProtocol for AVCaptureDeviceInput
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref