luis_sys 0.4.5



Rust FFI bindings for Microsoft LUIS API.

A rust style wrapper for Microsoft LUIS C/C++ SDK.


Add luis_sys to the dependencies section of your project's Cargo.toml:

luis_sys = "^0.3.8"

Note: The crate includes Cognitive Services Speech SDK Linux Version 1.3.1. Windows version is not tested.
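
The examples below also pull in futures, tokio, log and env_logger. A fuller dependencies section might look like the following sketch (version numbers here are illustrative, not prescribed by the crate):

```toml
[dependencies]
luis_sys = "^0.3.8"
futures = "0.1"     # the examples use the futures 0.1 Future/Stream traits
tokio = "0.1"       # tokio::run drives the asynchronous examples
log = "0.4"
env_logger = "0.6"
```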


Create the entry main function, using the luis_sys, env_logger, log, futures and tokio crates.

use env_logger;
use futures::{Future, Stream};
use log::{error, info};
use luis_sys::{builder::RecognizerConfig, events::Flags, Result};
use std::env;
use tokio;

fn main() {
    env::set_var("RUST_BACKTRACE", "1");
    env::set_var("RUST_LOG", "debug");
    env_logger::init();

    info!("Start ASR test...");
    // ... build the recognizer and run recognition here ...
    info!("Stop ASR test...");
}

Construct a builder from your subscription info and configure it. The audio input is a wav file in the example folder.

    let mut factory = RecognizerConfig::from_subscription(
        "YOUR_SUBSCRIPTION_KEY", // placeholder: your Cognitive Services key
        "YOUR_REGION",           // placeholder: the service region
    )?;

    // Choose the events to subscribe.
    let flags = Flags::Recognition
        | Flags::SpeechDetection
        | Flags::Session
        | Flags::Connection
        | Flags::Canceled;
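
The ORed flags above form a bitmask of subscribed event kinds. A minimal sketch of the idea, using plain u32 constants rather than the crate's actual Flags type (the bit values are illustrative):

```rust
// Hypothetical stand-ins for the crate's Flags bits (illustrative values).
const RECOGNITION: u32 = 0b0000_0001;
const SPEECH_DETECTION: u32 = 0b0000_0010;
const SESSION: u32 = 0b0000_0100;

fn main() {
    // OR the bits together to subscribe to several event kinds at once.
    let mask = RECOGNITION | SPEECH_DETECTION;

    // An event kind is subscribed when its bit is set in the mask.
    assert!(mask & SPEECH_DETECTION != 0);
    assert!(mask & SESSION == 0);

    println!("mask = {:#010b}", mask);
}
```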

    // Add intents if you want an intent recognizer. They are phrases or intent names from a pre-trained language understanding model.
    let intents = vec![
        "your_intent_name".to_string(), // placeholder: an intent from your LUIS model
    ];


factory.recognizer() builds a speech-recognition-only recognizer; factory.intent_recognizer() builds a speech intent recognizer.

Start blocking intent recognition; it returns after a single utterance. The end of a single utterance is determined by listening for silence at the end, or after a maximum of 15 seconds of audio is processed.

fn recognize_once(factory: &RecognizerConfig) -> Result {
    info!("Synchronous ASR");
    let recognizer = factory.recognizer()?;
    let result = recognizer.recognize()?;
    info!("done: {}", result);
    Ok(())
}

Asynchronous intent recognition in the tokio runtime.

fn recognize_stream(factory: &RecognizerConfig) -> Result {
    info!("Asynchronous ASR, streaming Event object");
    let mut reco = factory.intent_recognizer()?;
    let promise = reco
        .start()?
        // Add an event filter to choose the events you care about.
        .set_filter(Flags::Recognized | Flags::SpeechDetection)
        .for_each(|msg| {
            info!("result: {:?}", msg.into_result());
            Ok(())
        })
        .map_err(|err| error!("{}", err));
    tokio::run(promise);
    Ok(())
}

Translate and synthesize audio.

    // Add one or more target languages to translate speech into.
    // Enable audio synthesis output.
    // Select a voice name appropriate for the target language.
    .put_voice_name("Microsoft Server Speech Text to Speech Voice (en-US, JessaRUS)")?;

info!("Asynchronous translation and audio synthesis");
let mut reco = factory.translator()?;
let promise = reco
    .start()?
    .set_filter(Flags::Recognized | Flags::Synthesis)
    .for_each(|evt| {
        // Handle the translation or synthesis result.
        info!("event: {:?}", evt.into_result());
        Ok(())
    })
    .map_err(|err| error!("{}", err));
tokio::run(promise);


The EventStream returned by Recognizer::start implements futures::Stream for asynchronous operation. It can be refined by set_filter, resulting, json and text to pump results in different formats, and you can do that and more with Future/Stream combinators.
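
The flag-filtered pumping pattern can be sketched with plain iterators. This is a simplified model only; the `Event` enum and flag values here are illustrative, not the crate's actual types:

```rust
// Simplified model of flag-filtered event pumping, mirroring set_filter.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Event {
    Recognized(u32),
    SpeechDetection,
    Session,
}

// Map each event kind to its (illustrative) flag bit.
fn flag_of(e: &Event) -> u32 {
    match e {
        Event::Recognized(_) => 0b001,
        Event::SpeechDetection => 0b010,
        Event::Session => 0b100,
    }
}

fn main() {
    // Subscribe to Recognized | SpeechDetection only.
    let mask = 0b001 | 0b010;
    let events = vec![Event::Session, Event::Recognized(1), Event::SpeechDetection];

    // Keep only the events whose flag bit is in the mask.
    let passed: Vec<Event> = events
        .into_iter()
        .filter(|e| flag_of(e) & mask != 0)
        .collect();

    assert_eq!(passed.len(), 2); // Session was filtered out
    println!("{:?}", passed);
}
```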


See the change log.


  • The crate is a work in progress; be careful if you apply it in production.

  • Only the speech SDK of the LUIS service has a C/C++ version, so the current version supports very few features of LUIS while the LUIS SDK is in a fast evolution phase.

  • Windows version SDK is not tested.

  • The Linux version of the SDK currently only supports the Ubuntu distribution.

  • Please read the prerequisites first.