# brainwires-hardware

Hardware I/O for the Brainwires Agent Framework.

Provides a unified hardware abstraction layer covering audio, GPIO, Bluetooth, network, camera, and USB hardware — all behind opt-in feature flags so you only compile what you need.
## Modules

| Module | Feature | Description |
|---|---|---|
| `audio` | `audio` | Audio capture/playback, STT, TTS (16 cloud providers + local Whisper) |
| `audio/vad` | (always) / `vad` | Voice activity detection — `EnergyVad` (always) + `WebRtcVad` (`vad`) |
| `audio/wake_word` | `wake-word` | Wake word detection — `EnergyTriggerDetector` + optional ML backends |
| `audio/assistant` | `voice-assistant` | End-to-end voice assistant pipeline |
| `gpio` | `gpio` | GPIO pin management with safety allow-lists and PWM (Linux) |
| `bluetooth` | `bluetooth` | BLE advertisement scanning and adapter enumeration |
| `network` | `network` | NIC enumeration, IP config, ARP host discovery, port scanning |
| `camera` | `camera` | Webcam/camera frame capture (V4L2/AVFoundation/MSMF) |
| `usb` | `usb` | Raw USB device enumeration and transfers (no libusb) |
## Getting started

```toml
[dependencies]
# Pick only what you need:
brainwires-hardware = { version = "0.8", features = ["audio"] }
brainwires-hardware = { version = "0.8", features = ["gpio"] }
brainwires-hardware = { version = "0.8", features = ["bluetooth"] }
brainwires-hardware = { version = "0.8", features = ["network"] }

# Or enable everything:
brainwires-hardware = { version = "0.8", features = ["full"] }
```
## Feature flags

| Feature | Description |
|---|---|
| `audio` | Hardware audio I/O via CPAL + 16 cloud STT/TTS providers |
| `flac` | FLAC encode/decode |
| `local-stt` | Local Whisper STT inference via whisper-rs (heavy dep, opt-in) |
| `vad` | WebRTC VAD algorithm (`EnergyVad` is always available with `audio`) |
| `wake-word` | Wake word detection — `EnergyTriggerDetector` (zero deps) |
| `wake-word-rustpotter` | `RustpotterDetector` — pure-Rust ML wake word (opt-in, see notes) |
| `wake-word-porcupine` | `PorcupineDetector` — Picovoice Porcupine (requires AccessKey + git dep) |
| `voice-assistant` | Full pipeline: capture → wake word → VAD → STT → handler → TTS |
| `gpio` | GPIO pin control via Linux character device API (gpio-cdev) |
| `bluetooth` | BLE scanning and adapter enumeration via btleplug |
| `network` | NIC enumeration, IP config, ARP discovery, port scanning |
| `camera` | Webcam/camera capture via nokhwa (V4L2/AVFoundation/MSMF) |
| `usb` | Raw USB device access and transfers via nusb (no libusb) |
| `full` | All features (except `local-stt`, `wake-word-rustpotter`, `wake-word-porcupine`) |
## Audio

Supports hardware capture and playback via CPAL, plus cloud STT/TTS integrations:

- **STT**: OpenAI, Azure, Deepgram, ElevenLabs, Fish Audio
- **TTS**: OpenAI, Azure, Deepgram, ElevenLabs, Fish Audio, Google, Murf, Cartesia

A sketch of the TTS flow (type and module paths illustrative):

```rust
use brainwires_hardware::audio::tts::OpenAiTts; // path illustrative

let tts = OpenAiTts::new(api_key);
let audio = tts.synthesize("Hello, world!").await?;
```
## GPIO (Linux)

Safe GPIO access with explicit allow-lists — no pin can be used unless it appears in the configured policy.

```rust
use brainwires_hardware::gpio::{GpioManager, GpioDirection}; // paths illustrative

let mut manager = GpioManager::from_config(&config)?;
let pin = manager.acquire(17, GpioDirection::Output)?; // pin number illustrative
```
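The allow-list policy might live in configuration along these lines (the source does not show the config schema, so the field names here are hypothetical):

```toml
[gpio]
# Only these BCM pin numbers may be acquired; any other pin is rejected
# by the manager before it ever touches the character device.
allowed_pins = [17, 22, 27]
```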
## Bluetooth

Cross-platform BLE scanning using btleplug:

```rust
use brainwires_hardware::bluetooth::scan_ble; // path illustrative
use std::time::Duration;

let devices = scan_ble(Duration::from_secs(5)).await?;
for d in &devices {
    println!("{d:?}");
}
```
## Network

```rust
// Function names from the feature table; argument shapes illustrative.
use brainwires_hardware::network::{arp_scan, get_ip_configs, list_interfaces, scan_common_ports};
use std::time::Duration;

// List interfaces
for iface in list_interfaces()? {
    println!("{iface:?}");
}

// IP config with gateways
for cfg in get_ip_configs()? {
    println!("{cfg:?}");
}

// Port scan
let results = scan_common_ports("192.168.1.10", Duration::from_secs(2)).await?;

// ARP host discovery (requires CAP_NET_RAW)
let hosts = arp_scan("eth0", Duration::from_secs(2)).await?;
```
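ARP discovery needs raw-socket privileges; rather than running the whole binary as root, one option is to grant just `CAP_NET_RAW` to the compiled binary (binary path illustrative):

```shell
# Grant only the raw-socket capability to the release binary
sudo setcap cap_net_raw+ep target/release/my-scanner

# Verify the capability took effect
getcap target/release/my-scanner
```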
## Voice Activity Detection

`EnergyVad` is always available (no extra feature needed beyond `audio`). `WebRtcVad` requires the `vad` feature.

```rust
use brainwires_hardware::audio::vad::EnergyVad; // path illustrative

let vad = EnergyVad::default(); // -40 dBFS threshold
if vad.is_speech(&frame) {
    // handle a speech frame
}
```
## Wake Word Detection

```rust
use brainwires_hardware::audio::wake_word::EnergyTriggerDetector; // path illustrative

let mut detector = EnergyTriggerDetector::new(config); // config construction not shown
// Feed 30 ms i16 frames from the mic:
if let Some(detection) = detector.process_frame(&frame) {
    println!("wake word: {detection:?}");
}
```
## Voice Assistant Pipeline

```rust
use brainwires_hardware::audio::assistant::VoiceAssistant; // paths illustrative
use brainwires_hardware::audio::Transcript;
use async_trait::async_trait;

// A handler implementing the assistant's transcript-handling trait
// goes here (its definition is not shown in this excerpt).

// Build and run
let mut assistant = VoiceAssistant::builder() // builder name illustrative
    .with_playback(playback)
    .with_tts(tts)
    .with_wake_word(detector)
    .build()?;
assistant.run().await?;
```
## Migration from brainwires-audio

```toml
# Before
brainwires-audio = "0.8"

# After
brainwires-hardware = { version = "0.8", features = ["audio"] }
```

All public types and traits are re-exported from the crate root — existing code using `brainwires_audio::*` can switch to `brainwires_hardware::*` with no further changes.
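Since the re-exports are 1:1, the switch is a mechanical rename. One way to apply it tree-wide (GNU sed assumed; review the diff before committing):

```shell
# Rewrite every `brainwires_audio` path under src/ in place
grep -rl 'brainwires_audio' src/ | xargs sed -i 's/brainwires_audio/brainwires_hardware/g'
```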
## Examples

Example and binary names below are illustrative; check `examples/` for the actual targets.

```sh
# Wake word demo (prints detections from mic)
cargo run --example wake_word --features wake-word

# Full voice assistant demo (requires OPENAI_API_KEY)
cargo run --example voice_assistant --features voice-assistant

# Standalone voice assistant binary
cargo run --bin voice-assistant --features voice-assistant
```