pub struct SFSpeechAudioBufferRecognitionRequest { /* private fields */ }
Available on crate feature SFSpeechRecognitionRequest only.
A request to recognize speech from captured audio content, such as audio from the device’s microphone.
Use an SFSpeechAudioBufferRecognitionRequest object to perform speech recognition on live audio, or on a set of existing audio buffers. For example, use this request object to route audio from a device’s microphone to the speech recognizer.
The request object contains no audio initially. As you capture audio, call appendAudioPCMBuffer or appendAudioSampleBuffer to add audio samples to the request object. The speech recognizer continuously analyzes the audio you append, stopping only when you call the endAudio() method. You must call endAudio() explicitly to stop the speech recognition process.
For a complete example of how to use audio buffers with speech recognition, see SpeakToMe: Using Speech Recognition with AVAudioEngine.
See also Apple’s documentation
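The append-then-end lifecycle described above can be sketched as follows. This is a minimal sketch, not a complete program: it assumes the objc2-speech crate with the objc2-avf-audio feature enabled, and `next_buffer()` is a hypothetical stand-in for your audio capture source (for example, an AVAudioEngine input-node tap delivering AVAudioPCMBuffer values).

```rust
use objc2_speech::SFSpeechAudioBufferRecognitionRequest;

unsafe {
    let request = SFSpeechAudioBufferRecognitionRequest::new();
    // The request starts out empty; append audio as it is captured.
    while let Some(buffer) = next_buffer() {
        request.appendAudioPCMBuffer(&buffer);
    }
    // Recognition keeps analyzing appended audio until you explicitly
    // signal the end of the stream.
    request.endAudio();
}
```

In a real app the appends happen from the audio tap's callback while a recognition task consumes results concurrently; the loop above only illustrates the ordering requirement that endAudio() comes last.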
Implementations
impl SFSpeechAudioBufferRecognitionRequest
pub unsafe fn nativeAudioFormat(&self) -> Retained<AVAudioFormat>
Available on crate feature objc2-avf-audio only.
The preferred audio format for optimal speech recognition.
Use the audio format in this property as a hint for optimal recording, but don’t depend on the value remaining unchanged.
pub unsafe fn appendAudioPCMBuffer(&self, audio_pcm_buffer: &AVAudioPCMBuffer)
Available on crate feature objc2-avf-audio only.
Appends audio in the PCM format to the end of the recognition request.
The audio must be in a native format and uncompressed.
- Parameters:
- audioPCMBuffer: An audio buffer that contains audio in the PCM format.
pub unsafe fn appendAudioSampleBuffer(&self, sample_buffer: &CMSampleBuffer)
Available on crate feature objc2-core-media only.
Appends audio to the end of the recognition request.
The audio must be in a native format.
- Parameters:
- sampleBuffer: A buffer of audio.
Methods from Deref<Target = SFSpeechRecognitionRequest>
pub unsafe fn taskHint(&self) -> SFSpeechRecognitionTaskHint
Available on crate feature SFSpeechRecognitionTaskHint only.
A value that indicates the type of speech recognition being performed.
The default value of this property is SFSpeechRecognitionTaskHint::Unspecified. For a valid list of values, see SFSpeechRecognitionTaskHint.
pub unsafe fn setTaskHint(&self, task_hint: SFSpeechRecognitionTaskHint)
Available on crate feature SFSpeechRecognitionTaskHint only.
Setter for taskHint.
pub unsafe fn shouldReportPartialResults(&self) -> bool
A Boolean value that indicates whether you want intermediate results returned for each utterance.
The default value of this property is true. If you want only final results (and you don’t care about intermediate results), set this property to false to prevent the system from doing extra work.
pub unsafe fn setShouldReportPartialResults(&self, should_report_partial_results: bool)
Setter for shouldReportPartialResults.
pub unsafe fn contextualStrings(&self) -> Retained<NSArray<NSString>>
An array of phrases that should be recognized, even if they are not in the system vocabulary.
Use this property to specify short custom phrases that are unique to your app. You might include phrases with the names of characters, products, or places that are specific to your app. You might also include domain-specific terminology or unusual or made-up words. Assigning custom phrases to this property improves the likelihood of those phrases being recognized.
Keep phrases relatively brief, limiting them to one or two words whenever possible. Lengthy phrases are less likely to be recognized. In addition, try to limit each phrase to something the user can say without pausing.
Limit the total number of phrases to no more than 100.
pub unsafe fn setContextualStrings(&self, contextual_strings: &NSArray<NSString>)
Setter for contextualStrings.
This is copied when set.
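Supplying contextual strings can be sketched as below. This is a minimal sketch assuming the objc2-speech and objc2-foundation crates; the phrase list is purely illustrative.

```rust
use objc2_foundation::{NSArray, NSString};
use objc2_speech::SFSpeechAudioBufferRecognitionRequest;

unsafe {
    let request = SFSpeechAudioBufferRecognitionRequest::new();
    // Short, app-specific phrases the recognizer should favor.
    let phrases = NSArray::from_retained_slice(&[
        NSString::from_str("Fizzwidget"),
        NSString::from_str("quark loop"),
    ]);
    // The array is copied when set, so later edits to `phrases`
    // do not affect the request.
    request.setContextualStrings(&phrases);
}
```

Per the guidance above, phrases of one or two words work best, and the total should stay under 100 entries.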
pub unsafe fn interactionIdentifier(&self) -> Option<Retained<NSString>>
👎Deprecated: Not used anymore
An identifier string that you use to describe the type of interaction associated with the speech recognition request.
If different parts of your app have different speech recognition needs, you can use this property to identify the part of your app that is making each request. For example, if one part of your app lets users speak phone numbers and another part lets users speak street addresses, consistently identifying the part of the app that makes a recognition request may help improve the accuracy of the results.
pub unsafe fn setInteractionIdentifier(&self, interaction_identifier: Option<&NSString>)
👎Deprecated: Not used anymore
Setter for interactionIdentifier.
This is copied when set.
pub unsafe fn requiresOnDeviceRecognition(&self) -> bool
A Boolean value that determines whether a request must keep its audio data on the device.
Set this property to true to prevent an SFSpeechRecognitionRequest from sending audio over the network. However, on-device requests won’t be as accurate.
Note: The request honors this setting only if the SFSpeechRecognizer's supportsOnDeviceRecognition property is also true.
pub unsafe fn setRequiresOnDeviceRecognition(&self, requires_on_device_recognition: bool)
Setter for requiresOnDeviceRecognition.
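The interaction between this setting and the recognizer's capability can be sketched as follows. This is a hedged sketch assuming the objc2-speech crate; the recognizer is an SFSpeechRecognizer obtained elsewhere in your app.

```rust
use objc2_speech::{SFSpeechAudioBufferRecognitionRequest, SFSpeechRecognizer};

/// Ask for on-device recognition, but only when the recognizer
/// actually supports it, since the request honors the flag only
/// in that case (and on-device results may be less accurate).
unsafe fn keep_audio_on_device(
    request: &SFSpeechAudioBufferRecognitionRequest,
    recognizer: &SFSpeechRecognizer,
) {
    if recognizer.supportsOnDeviceRecognition() {
        request.setRequiresOnDeviceRecognition(true);
    }
}
```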
pub unsafe fn addsPunctuation(&self) -> bool
A Boolean value that indicates whether to add punctuation to speech recognition results.
Set this property to true for the speech framework to automatically include punctuation in the recognition results. Punctuation includes a period or question mark at the end of a sentence, and a comma within a sentence.
pub unsafe fn setAddsPunctuation(&self, adds_punctuation: bool)
Setter for addsPunctuation.
pub unsafe fn customizedLanguageModel(&self) -> Option<Retained<SFSpeechLanguageModelConfiguration>>
Available on crate feature SFSpeechLanguageModel only.

pub unsafe fn setCustomizedLanguageModel(&self, customized_language_model: Option<&SFSpeechLanguageModelConfiguration>)
Available on crate feature SFSpeechLanguageModel only.
Setter for customizedLanguageModel.
This is copied when set.
Methods from Deref<Target = NSObject>
pub fn doesNotRecognizeSelector(&self, sel: Sel) -> !
Handle messages the object doesn’t recognize.
See Apple’s documentation for details.
Methods from Deref<Target = AnyObject>

pub fn class(&self) -> &'static AnyClass
Dynamically find the class of this object.
Panics
May panic if the object is invalid (which may be the case for objects returned from unavailable init/new methods).
Example
Check that an instance of NSObject has the precise class NSObject.
use objc2::ClassType;
use objc2::runtime::NSObject;

let obj = NSObject::new();
assert_eq!(obj.class(), NSObject::class());

pub unsafe fn get_ivar<T>(&self, name: &str) -> &T
where
    T: Encode,
👎Deprecated: this is difficult to use correctly, use Ivar::load instead.
Use Ivar::load instead.
Safety
The object must have an instance variable with the given name, and it
must be of type T.
See Ivar::load_ptr for details surrounding this.
pub fn downcast_ref<T>(&self) -> Option<&T>
where
    T: DowncastTarget,
Attempt to downcast the object to a class of type T.
This is the reference-variant. Use Retained::downcast if you want
to convert a retained object to another type.
Mutable classes
Some classes have immutable and mutable variants, such as NSString
and NSMutableString.
When some Objective-C API signature says it gives you an immutable class, it generally expects you to not mutate that, even though it may technically be mutable “under the hood”.
So using this method to convert an NSString to an NSMutableString, while not unsound, is generally frowned upon unless you created the string yourself, or the API explicitly documents the string to be mutable.
See Apple’s documentation on mutability and on
isKindOfClass: for more details.
Generic classes
Objective-C generics are called “lightweight generics”, and that’s because they aren’t exposed in the runtime. This makes it impossible to safely downcast to generic collections, so this is disallowed by this method.
You can, however, safely downcast to generic collections where all the
type-parameters are AnyObject.
Panics
This works internally by calling isKindOfClass:. That means that the
object must have the instance method of that name, and an exception
will be thrown (if CoreFoundation is linked) or the process will abort
if that is not the case. In the vast majority of cases, you don’t need
to worry about this, since both root objects NSObject and
NSProxy implement this method.
Examples
Cast an NSString back and forth from NSObject.
use objc2::rc::Retained;
use objc2_foundation::{NSObject, NSString};

let obj: Retained<NSObject> = NSString::new().into_super();
let string = obj.downcast_ref::<NSString>().unwrap();
// Or with `downcast`, if we do not need the object afterwards
let string = obj.downcast::<NSString>().unwrap();

Try (and fail) to cast an NSObject to an NSString.
use objc2_foundation::{NSObject, NSString};

let obj = NSObject::new();
assert!(obj.downcast_ref::<NSString>().is_none());

Try to cast to an array of strings.
use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
// This is invalid and doesn't type check.
let arr = arr.downcast_ref::<NSArray<NSString>>();

This fails to compile, since it would require enumerating over the array to ensure that each element is of the desired type, which is a performance pitfall.
Downcast when processing each element instead.
use objc2_foundation::{NSArray, NSObject, NSString};

let arr = NSArray::from_retained_slice(&[NSObject::new()]);
for elem in arr {
    if let Some(data) = elem.downcast_ref::<NSString>() {
        // handle `data`
    }
}

Trait Implementations
impl AsRef<SFSpeechRecognitionRequest> for SFSpeechAudioBufferRecognitionRequest
fn as_ref(&self) -> &SFSpeechRecognitionRequest

impl Borrow<SFSpeechRecognitionRequest> for SFSpeechAudioBufferRecognitionRequest
fn borrow(&self) -> &SFSpeechRecognitionRequest

impl ClassType for SFSpeechAudioBufferRecognitionRequest
const NAME: &'static str = "SFSpeechAudioBufferRecognitionRequest"
type Super = SFSpeechRecognitionRequest
type ThreadKind = <<SFSpeechAudioBufferRecognitionRequest as ClassType>::Super as ClassType>::ThreadKind

impl NSObjectProtocol for SFSpeechAudioBufferRecognitionRequest
fn isEqual(&self, other: Option<&AnyObject>) -> bool
fn hash(&self) -> usize
fn isKindOfClass(&self, cls: &AnyClass) -> bool
fn is_kind_of<T>(&self) -> bool
👎Deprecated: use isKindOfClass directly, or cast your objects with AnyObject::downcast_ref