text2audio
A high-performance Rust library for converting text to audio files using Zhipu AI's GLM models, featuring intelligent text segmentation, parallel processing, and advanced audio merging capabilities.
Features
- 🤖 AI-Powered Text Segmentation - Intelligent semantic text splitting using GLM models for natural-sounding audio
- 🎵 Multiple Voice Options - Support for 7 distinct voices with customizable speed and volume
- ⚡ Parallel Processing - Concurrent audio generation for improved performance on long texts
- 🔄 Automatic Retry - Built-in exponential backoff retry mechanism for robust API calls
- 🛠️ Flexible Configuration - Builder pattern API for intuitive customization
- 📦 Zero-Dependency Audio Processing - Built-in WAV audio merging without external tools
- 🎯 Smart Modes - Automatic direct conversion for short texts, segmented processing for long texts
Supported AI Models
Text Segmentation Models
Used for intelligent text splitting and semantic analysis:
- GLM-4.7 - Latest flagship model with superior semantic understanding
- GLM-4.6 - Advanced reasoning model for complex text analysis
- GLM-4.5 - High-performance general-purpose model
- GLM-4.5-Flash - Optimized for speed (default)
- GLM-4.5-Air - Lightweight and cost-effective model
Text-to-Speech Model
- GLM-TTS - Zhipu AI's dedicated text-to-speech model for high-quality audio generation
Prerequisites
- Rust 1.70 or later
- Zhipu AI API Key - Get one from Zhipu AI Platform
- Network Connection - Required for API calls
Environment Setup
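Before running any examples, make your Zhipu AI API key available to your program. A minimal sketch (the exact environment variable name is an assumption; check the crate's documentation for the name it actually reads):

```shell
# Export your Zhipu AI API key for the current session.
# The variable name ZHIPU_API_KEY is illustrative.
export ZHIPU_API_KEY="your-api-key"
```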
Quick Start
Add to your Cargo.toml:

```toml
[dependencies]
text2audio = "0.1.0"
tokio = { version = "1", features = ["full"] }
```

Basic usage:

```rust
use text2audio::Text2Audio;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("ZHIPU_API_KEY")?;
    let converter = Text2Audio::new(api_key);
    converter.convert("Hello, world!", "output.wav").await?;
    Ok(())
}
```
Usage Examples
1. Basic Text to Audio

```rust
use text2audio::Text2Audio;

let converter = Text2Audio::new(api_key);
converter.convert("Hello, world!", "output.wav").await?;
```
2. Custom Voice and Speed

```rust
use text2audio::{Text2Audio, Voice};

let converter = Text2Audio::new(api_key)
    .with_voice(Voice::Chuichui)
    .with_speed(1.5)   // 50% faster
    .with_volume(2.0); // Louder
converter.convert("Your text here", "output.wav").await?;
```
3. Long Text with AI Segmentation

```rust
use text2audio::{Text2Audio, Model};

let long_text = "A very long text...";
let converter = Text2Audio::new(api_key)
    .with_model(Model::GLM4_7)    // Use best model for segmentation
    .with_max_segment_length(300) // Shorter segments for better flow
    .with_thinking(true);         // Enable thinking mode
converter.convert(long_text, "output.wav").await?;
```
4. Parallel Processing for Performance

```rust
use std::time::Duration;
use text2audio::{Text2Audio, Voice};

let converter = Text2Audio::new(api_key)
    .with_voice(Voice::Xiaochen)
    .with_parallel(5) // Process up to 5 segments concurrently
    .with_retry_config(3, Duration::from_millis(100));
converter.convert("Your text here", "output.wav").await?;
```
5. Using Builder Pattern

```rust
use std::time::Duration;
use text2audio::{Text2Audio, Model, Voice};

let converter = Text2Audio::builder()
    .model(Model::GLM4_5Flash)
    .voice(Voice::Tongtong)
    .speed(1.2)
    .volume(1.5)
    .max_segment_length(500)
    .parallel(3)
    .thinking(false)
    .retry_config(3, Duration::from_millis(100))
    .build();
converter.convert("Your text here", "output.wav").await?;
```
Configuration Reference
Text2Audio Methods
| Method | Type | Range | Default | Description |
|---|---|---|---|---|
| `with_model()` | `Model` | enum | `GLM4_5Flash` | AI model for text segmentation |
| `with_voice()` | `Voice` | enum | `Tongtong` | Voice selection for TTS |
| `with_speed()` | `f32` | 0.5 - 2.0 | 1.0 | Speech speed multiplier |
| `with_volume()` | `f32` | 0.0 - 10.0 | 1.0 | Audio volume level |
| `with_max_segment_length()` | `usize` | 100 - 1024 | 500 | Max characters per segment |
| `with_parallel()` | `usize` | 1 - 10 | disabled | Enable concurrent processing |
| `with_thinking()` | `bool` | true/false | false | Enable AI thinking mode |
| `with_coding_plan()` | `bool` | true/false | false | Use coding plan endpoint |
| `with_retry_config()` | `(u32, Duration)` | custom | (3, 100ms) | Retry attempts and delay |
Voice Options
All voices are provided by Zhipu AI's TTS service:
- Voice::Tongtong (童童) - Default female voice, clear and natural
- Voice::Chuichui (锤锤) - Warm and friendly male voice
- Voice::Xiaochen (晓辰) - Professional narration voice
- Voice::Jam - Youthful and energetic voice
- Voice::Kazi - Deep and authoritative voice
- Voice::Douji (豆鸡) - Cute and playful voice
- Voice::Luodo - Mature and calm voice
AI Models
Choose the appropriate model based on your needs:
- GLM-4.7: Best for long, complex texts requiring deep semantic understanding
- GLM-4.6: Good balance of quality and speed for most use cases
- GLM-4.5: Reliable general-purpose model
- GLM-4.5-Flash: Fastest processing, ideal for simple texts
- GLM-4.5-Air: Most cost-effective for high-volume processing
Error Handling
The library provides detailed error types for robust error handling:
```rust
use text2audio::Text2Audio;

match converter.convert("Your text here", "output.wav").await {
    Ok(()) => println!("Audio saved successfully"),
    // Match the crate's specific error variants (see error.rs)
    // for finer-grained handling.
    Err(e) => eprintln!("Conversion failed: {e}"),
}
```
Architecture
```
text2audio/
├── src/
│   ├── lib.rs          # Main API and Text2Audio struct
│   ├── client.rs       # Zhipu AI API client
│   ├── ai_splitter.rs  # AI-powered text segmentation
│   ├── audio_merger.rs # WAV audio file merging
│   ├── config.rs       # Voice and configuration types
│   └── error.rs        # Error types and Result alias
├── examples/           # Usage examples
├── assets/             # Sample text files
└── target/             # Build output
```
Workflow
- Input Validation: Check if text is empty
- Length Detection:
- Short text (β€ max_segment_length): Direct TTS conversion
- Long text (> max_segment_length): AI-powered segmentation
- Text Segmentation: AI model splits text at semantic boundaries
- Audio Generation:
- Sequential: One segment at a time
- Parallel: Multiple segments concurrently (if enabled)
- Audio Merging: Combine all audio segments into final WAV file
- Retry Handling: Automatic retry with exponential backoff on failures
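The merging step above can be illustrated with a simplified sketch: concatenating per-segment sample buffers in order. This assumes every segment shares the same sample rate, bit depth, and channel count (the crate's `audio_merger` works on actual WAV files; the function below is a hypothetical reduction of the idea):

```rust
/// Naive audio merge: append each segment's samples in generation order.
/// Assumes identical audio format across segments, which holds when all
/// segments come from the same TTS configuration.
fn merge_segments(segments: Vec<Vec<i16>>) -> Vec<i16> {
    segments.into_iter().flatten().collect()
}
```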
Running Examples
The project includes comprehensive examples demonstrating various features:
Basic Example
Converts a short Chinese text to audio using default settings.
AI Segmentation Example
Demonstrates AI-powered semantic segmentation for long texts.
Custom Voice Example
Shows voice customization and parameter tuning.
Parallel Processing Example
Illustrates concurrent audio generation for performance.
File Input Example
Converts text from a file with optimized settings for long-form content.
Direct AI Splitter Usage
Demonstrates direct usage of the AiSplitter component.
Performance Tips
- Choose the Right Model: Use GLM-4.5-Flash for simple texts, GLM-4.7 for complex content
- Enable Parallel Processing: Set `with_parallel(3-5)` for long texts to significantly reduce total time
- Optimize Segment Length:
  - 300-500 chars for narrative content
  - 800-1024 chars for technical content
- Adjust Retry Config: Increase retries and delays for unstable networks
- Use Thinking Mode: Enable for texts requiring deep semantic understanding
Requirements
- Minimum Rust Version: 1.70.0
- Dependencies: tokio (async runtime), zai-rs (Zhipu AI client), hound (WAV handling)
- Network: Stable internet connection for API calls
- API Key: Valid Zhipu AI API key with TTS service enabled
License
This project is licensed under the MIT License - see the LICENSE file for details.
Contributing
Contributions are welcome! Areas for improvement:
- Additional audio format support (MP3, OGG)
- Custom voice training integration
- Local model inference support
- Batch processing utilities
- Audio post-processing effects
Please feel free to submit issues, feature requests, or pull requests.