pub unsafe trait AVCaptureFileOutputDelegate: NSObjectProtocol {
// Provided methods
unsafe fn captureOutputShouldProvideSampleAccurateRecordingStart(
&self,
output: &AVCaptureFileOutput,
) -> bool
where Self: Sized + Message { ... }
unsafe fn captureOutput_didOutputSampleBuffer_fromConnection(
&self,
output: &AVCaptureFileOutput,
sample_buffer: &CMSampleBuffer,
connection: &AVCaptureConnection,
)
where Self: Sized + Message { ... }
}
Available on crate feature AVCaptureFileOutput only.
Defines an interface for delegates of AVCaptureFileOutput to monitor and control recordings along exact sample boundaries.
See also Apple’s documentation
Provided Methods
unsafe fn captureOutputShouldProvideSampleAccurateRecordingStart(
&self,
output: &AVCaptureFileOutput,
) -> bool
Available on crate feature AVCaptureOutputBase only.
Allows a client to opt in to frame-accurate record start in captureOutput:didOutputSampleBuffer:fromConnection:.
Parameter output: The AVCaptureFileOutput instance with which the delegate is associated.
In apps linked before macOS 10.8, delegates that implement the captureOutput:didOutputSampleBuffer:fromConnection: method can ensure frame-accurate start/stop of a recording by calling startRecordingToOutputFileURL:recordingDelegate: from within the callback. Frame-accurate start requires the capture output to apply outputSettings when the session starts running, so it is ready to record on any given frame boundary. Compressing all the time while the session is running has power, thermal, and CPU implications.

In apps linked on or after macOS 10.8, delegates must implement captureOutputShouldProvideSampleAccurateRecordingStart: to indicate whether frame-accurate start/stop recording is required (returning YES) or not (returning NO). The output calls this method as soon as the delegate is added, and never again. If your delegate returns NO, the capture output applies compression settings when startRecordingToOutputFileURL:recordingDelegate: is called, and disables them after the recording is stopped.
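As a sketch, a delegate that opts out of sample-accurate start (returning NO) might be declared with objc2's define_class! macro roughly as follows. The macro syntax differs between objc2 versions, and the class name here is illustrative; only the trait, selector, and parameter types come from this crate:

```rust
use objc2::{define_class, runtime::NSObjectProtocol};
use objc2_av_foundation::{AVCaptureFileOutput, AVCaptureFileOutputDelegate};
use objc2_foundation::NSObject;

define_class!(
    // "RecordingDelegate" is a hypothetical name for this sketch.
    #[unsafe(super(NSObject))]
    #[name = "RecordingDelegate"]
    struct RecordingDelegate;

    unsafe impl NSObjectProtocol for RecordingDelegate {}

    unsafe impl AVCaptureFileOutputDelegate for RecordingDelegate {
        #[unsafe(method(captureOutputShouldProvideSampleAccurateRecordingStart:))]
        fn should_provide_sample_accurate_start(
            &self,
            _output: &AVCaptureFileOutput,
        ) -> bool {
            // Returning false (NO) lets the output defer applying compression
            // settings until startRecordingToOutputFileURL:recordingDelegate:
            // is called, avoiding the power/thermal cost of always-on encoding.
            false
        }
    }
);
```

Because the output queries this once, when the delegate is added, the return value cannot be changed later without swapping the delegate.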
unsafe fn captureOutput_didOutputSampleBuffer_fromConnection(
&self,
output: &AVCaptureFileOutput,
sample_buffer: &CMSampleBuffer,
connection: &AVCaptureConnection,
)
Available on crate features AVCaptureOutputBase and AVCaptureSession and objc2-core-media only.
Gives the delegate the opportunity to inspect samples as they are received by the output and optionally start and stop recording at exact times.
Parameter output: The capture file output that is receiving the media data.
Parameter sampleBuffer: A CMSampleBuffer object containing the sample data and additional information about the sample, such as its format and presentation time.
Parameter connection: The AVCaptureConnection object attached to the file output from which the sample data was received.
This method is called whenever the file output receives a single sample buffer (a single video frame or audio buffer, for example) from the given connection. This gives delegates an opportunity to start and stop recording or change output files at an exact sample boundary if -captureOutputShouldProvideSampleAccurateRecordingStart: returns YES. If called from within this method, the file output’s startRecordingToOutputFileURL:recordingDelegate: and resumeRecording methods are guaranteed to include the received sample buffer in the new file, whereas calls to stopRecording and pauseRecording are guaranteed to include all samples leading up to those in the current sample buffer in the existing file.
Delegates can gather information particular to the samples by inspecting the CMSampleBuffer object. Sample buffers always contain a single frame of video if called from this method but may also contain multiple samples of audio. For B-frame video formats, samples are always delivered in presentation order.
Clients that need to reference the CMSampleBuffer object outside of the scope of this method must CFRetain it and then CFRelease it when they are finished with it.
Note that to maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device-native capture, where memory blocks are copied as little as possible. If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory, and those samples will be dropped.

If your application is causing samples to be dropped by retaining the provided CMSampleBuffer objects for too long, but it needs access to the sample data for a long period of time, consider copying the data into a new buffer and then calling CFRelease on the sample buffer (if it was previously retained) so that the memory it references can be reused.
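The retain/release contract above might look like the following in Rust, assuming a CFRetained smart pointer along the lines of objc2-core-foundation's (the exact retain API surface varies by crate version, so treat this as a sketch rather than a verified call):

```rust
use objc2_core_foundation::CFRetained;
use objc2_core_media::CMSampleBuffer;

// Hypothetical helper for this sketch: keep a sample buffer alive past the
// delegate callback. The `.retain()` call is assumed here; it plays the role
// of CFRetain, and dropping the returned CFRetained performs the balancing
// CFRelease.
fn keep_buffer(sample_buffer: &CMSampleBuffer) -> CFRetained<CMSampleBuffer> {
    // Hold the retained buffer as briefly as possible: buffers reference
    // pooled device memory, and hoarding them starves the capture pipeline
    // and causes frames to be dropped.
    sample_buffer.retain()
}
```

If the sample data is needed for a long time, the safer pattern is to copy the bytes out into an owned buffer and drop the retained CMSampleBuffer immediately, as the note above describes.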
Clients should not assume that this method will be called on a specific thread. In addition, this method is called periodically, so it must be efficient to prevent capture performance problems.