# Baidu NetDisk Rust SDK

A Rust SDK for the Baidu NetDisk Open Platform API, providing file management, upload/download, and other functionality.
## Features

- **API Coverage**: File management, upload/download, media processing
- **High Performance**: Supports parallel upload, streaming download, and multi-threaded download
- **Elegant Error Handling**: Layered error types with Chinese error descriptions
- **Thread Safety**: Uses `RwLock` for concurrent safety
- **Flexible Configuration**: Builder pattern for easy client configuration
- **Async First**: Built on the `tokio` async runtime
- **Convenient Shortcut APIs**: The client automatically encapsulates submodule methods and manages tokens internally
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
# Substitute the crate's published name on crates.io
baidu-netdisk-sdk = "0.1.3"
tokio = { version = "1.0", features = ["full"] }
```
## Quick Start

> Note: `BaiduNetDiskClient` encapsulates all submodule methods and manages tokens internally. Most operations can be called directly on the client without accessing submodules or passing tokens explicitly.
### 1. Create Client

```rust
// Crate path and credential values are illustrative placeholders
use baidu_netdisk_sdk::BaiduNetDiskClient;

let client = BaiduNetDiskClient::builder()
    .app_key("your_app_key")
    .app_secret("your_app_secret")
    .build()?;
```
### 2. Authorization

```rust
// Get device code for authorization
let device_code = client.authorize().get_device_code().await?;
// Field names on the device-code response are illustrative
println!("Visit: {}", device_code.verification_url);
println!("Enter code: {}", device_code.user_code);

// Poll for access token (polling interval illustrative)
let token = loop {
    match client.authorize().request_access_token(&device_code.device_code).await {
        Ok(token) => break token,
        Err(_) => tokio::time::sleep(std::time::Duration::from_secs(5)).await,
    }
};
```
### 3. File Operations

```rust
// Paths and keywords below are illustrative placeholders

// List directory - shortcut API (no explicit token required)
let files = client.list_directory("/").await?;

// Search files - shortcut API
let results = client.search_files("keyword").await?;

// Upload file - shortcut API
client.upload_file("local.txt", "/apps/demo/remote.txt").await?;

// Download file - shortcut API
client.download_single("/apps/demo/remote.txt", "downloaded.txt").await?;
```
### 4. TokenScopedClient for Multi-User Scenarios

For multi-user scenarios where multiple tokens are used concurrently, create a `TokenScopedClient`:

```rust
// Create a scoped client bound to a specific token
let scoped_client = client.with_token(token);

// Use the scoped client - its token is applied automatically
let files = scoped_client.list_directory("/").await?;
let quota = scoped_client.get_quota().await?;

// Each scoped client has its own isolated token context;
// multiple scoped clients can be used concurrently without conflicts.
```
**Key benefits of `TokenScopedClient`:**

- **Thread-safe**: Each scoped client has its own token context
- **Isolation**: Changes to one scoped client don't affect others
- **Convenience**: No need to pass a token explicitly on each call
- **Cacheable**: Scoped clients can be cached per user for better performance
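The isolation idea can be sketched with plain std types. This is a minimal illustration of the design, not the SDK's actual definitions: the base client is shared behind an `Arc`, while each scoped handle owns its own token.

```rust
use std::sync::Arc;

// Illustrative sketch only - not the SDK's real types.
// The shared base client would hold HTTP/connection state once...
struct BaseClient;

// ...while each scoped handle owns an independent token, so handles
// for different users never interfere with each other.
#[derive(Clone)]
struct TokenScoped {
    base: Arc<BaseClient>,
    token: String,
}

impl TokenScoped {
    fn new(base: Arc<BaseClient>, token: &str) -> Self {
        Self { base, token: token.to_string() }
    }

    // Every request reads this handle's own token - no shared mutable state.
    fn auth_header(&self) -> String {
        format!("Bearer {}", self.token)
    }
}
```

Because the handle is cheap to clone (an `Arc` bump plus a token copy), caching one per user is inexpensive.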
## Configuration

### Environment Variables

#### Silently Read During Builder Initialization

These variables are read automatically when `ClientBuilder::default()` is called (i.e., when `BaiduNetDiskClient::builder()` is invoked). Values from environment variables serve as defaults and can be overridden by explicit builder calls.
| Variable | Description |
|---|---|
| `BD_NETDISK_APP_ID` | Application ID (optional) |
| `BD_NETDISK_APP_KEY` | Application Key |
| `BD_NETDISK_SECRET_KEY` | Application Secret |
| `BD_NETDISK_APP_NAME` | Application Name (optional, for identifying multiple apps) |
> Note: If you explicitly set these via the builder (e.g., `.app_key("...")`), they override any environment variable values.
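The precedence rule can be sketched as a simple resolver (a hypothetical helper, not part of the SDK):

```rust
// Hypothetical helper illustrating the precedence rule: an explicit
// builder value always wins; the environment value is only a fallback.
fn resolve(explicit: Option<&str>, from_env: Option<&str>) -> Option<String> {
    explicit.or(from_env).map(str::to_string)
}
```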
#### Explicitly Loaded (Manual Reading)

These variables are **not** read automatically during builder initialization; you must call `load_token_from_env()` explicitly to load them.
| Variable | Required | Description |
|---|---|---|
| `BD_NETDISK_ACCESS_TOKEN` | Yes | Access Token |
| `BD_NETDISK_REFRESH_TOKEN` | Yes | Refresh Token |
| `BD_NETDISK_EXPIRES_IN` | Yes | Token expiration in seconds |
| `BD_NETDISK_SCOPE` | No | Permission scope (default: `basic netdisk`) |
| `BD_NETDISK_SESSION_KEY` | No | Session key |
| `BD_NETDISK_SESSION_SECRET` | No | Session secret |
| `BD_NETDISK_ACQUIRED_AT` | No | Token acquisition timestamp (for testing) |
#### Avoiding Duplicate Configuration

To avoid environment variable interference, set all required values explicitly via the builder:

```rust
let client = BaiduNetDiskClient::builder()
    .app_key("your_app_key")       // Explicitly set, overrides BD_NETDISK_APP_KEY
    .app_secret("your_app_secret") // Explicitly set, overrides BD_NETDISK_SECRET_KEY
    .build()?;

// Tokens must still be loaded explicitly or set manually
client.load_token_from_env()?;   // Loads token variables
// OR
client.set_access_token(token)?; // Set manually
```
### Client Builder Options

```rust
use std::time::Duration;

// Values shown are illustrative placeholders
let client = BaiduNetDiskClient::builder()
    .app_key("your_app_key")          // Application Key
    .app_secret("your_app_secret")    // Application Secret
    .app_name("my_app")               // Application Name (optional)
    .timeout(Duration::from_secs(30)) // Request timeout
    .auto_refresh(true)               // Auto refresh token
    .refresh_ahead_seconds(86_400)    // Refresh 24h before expiration
    .max_retries(3)                   // Max retry attempts
    .build()?;
```
## API Modules
### File Management (`client.file()`)

Core file operations:

- `list_directory()` - List directory files
- `list_all()` - List all files recursively
- `get_file_info()` - Get file information by path
- `get_file_meta()` - Get file metadata (with download link `dlink`) by `fs_id`
- `search_files()` - Search files by keyword
- `semantic_search()` - Semantic search
- `create_folder()` - Create directory
- `rename()` - Rename file/folder
- `move_file()` - Move file/folder
- `copy_file()` - Copy file/folder
- `delete()` - Delete file/folder
### Download (`client.download()`)

Download methods:

- `get_dlink_from_path()` - Get download link by file path
- `get_dlink_from_fsid()` - Get download link by `fs_id`
- `auto_download()` - Auto-select the best method based on file size
- `auto_download_by_fsid()` - Auto-select by `fs_id`
- `download_single()` - Single-threaded download (by path)
- `download_single_by_fsid()` - Single-threaded download (by `fs_id`)
- `download_single_with_meta()` - Single-threaded download (with `FileMeta`)
- `download_parallel()` - Multi-threaded parallel download (by path)
- `download_parallel_by_fsid()` - Multi-threaded parallel download (by `fs_id`)
- `download_parallel_multi_threaded()` - Multi-threaded parallel download (with `FileMeta`)
- `download_streaming()` - Async concurrent download (by path)
- `download_streaming_by_fsid()` - Async concurrent download (by `fs_id`)
- `download_streaming_with_meta()` - Streaming download (with `FileMeta`)
### Upload (`client.upload()`)

The SDK provides multiple upload methods for different scenarios:

#### Upload Methods Comparison
| Method | Data Source | Memory Usage | Streaming | Best For |
|---|---|---|---|---|
| `upload_file()` | File path | ~80MB | ✅ | Most common scenarios |
| `upload_reader()` | Reader + size | ~80MB | ✅ | Custom readers, wrapped streams |
| `upload_bytes()` | `&[u8]` slice | Full data | ❌ | Small data in memory |
#### Key Features

- **Resumable Upload**: Automatically detects partially uploaded chunks and skips them
- **Parallel Upload**: Uploads multiple chunks concurrently (default: 10 in parallel)
- **Memory Optimized**: For large files, memory is bounded by the batch size (~80MB by default)
- **Automatic Chunking**: Files are automatically split into 4MB chunks
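The chunking arithmetic behind these features reduces to a ceiling division; a quick sketch (the helper itself is illustrative, not part of the SDK):

```rust
// Default chunk size from the list above.
const CHUNK_SIZE: u64 = 4 * 1024 * 1024; // 4MB

// Ceiling division: a 9MB file needs 3 chunks (4MB + 4MB + 1MB).
fn chunk_count(file_size: u64) -> u64 {
    file_size.div_ceil(CHUNK_SIZE)
}
```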
#### 1. Upload File from Path

The simplest way to upload a file:

```rust
use baidu_netdisk_sdk::BaiduNetDiskClient;

let client = BaiduNetDiskClient::builder().build()?;
client.load_token_from_env()?;

// Simple upload - shortcut API (no explicit token required; paths illustrative)
let response = client.upload_file("local.bin", "/apps/demo/local.bin").await?;
println!("Uploaded: {:?}", response);
```
With custom options:

```rust
// `UploadOptions` is an assumed name for the options type
use baidu_netdisk_sdk::UploadOptions;

let options = UploadOptions::default()
    .chunk_size(8 * 1024 * 1024) // 8MB chunks
    .max_concurrency(20);        // 20 parallel uploads

let response = client
    .upload_file_with_options("local.bin", "/apps/demo/local.bin", options)
    .await?;
```
#### 2. Upload from Reader

For streaming upload with custom readers (requires `Read + Seek`):

```rust
use std::fs::File;
use std::io::BufReader;

let file = File::open("local.bin")?;
let file_size = file.metadata()?.len();
let mut reader = BufReader::new(file);

// Argument order is illustrative
let response = client
    .upload()
    .upload_reader(&mut reader, file_size, "/apps/demo/local.bin")
    .await?;
```
#### 3. Upload Bytes from Memory

For data already in memory:

```rust
let data = b"Hello, World!";

// Shortcut API (no explicit token required; remote path illustrative)
let response = client.upload_bytes(data, "/apps/demo/hello.txt").await?;
println!("Uploaded: {:?}", response);
```
#### How Resumable Upload Works

1. **First pass**: Read the file and calculate an MD5 for each chunk
2. **Precreate**: Call the API to get an `uploadid` and the list of existing chunks
3. **Second pass**: Read the file again and upload only the missing chunks
4. **Create**: Merge the chunks into the final file
This means if an upload is interrupted, restarting will only upload the missing chunks.
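The "only upload the missing chunks" step boils down to a set difference over chunk indices; a minimal sketch (types and the server-report shape are illustrative):

```rust
// Given the total chunk count and the chunk indices the server already
// holds (as reported by precreate), return the indices still to upload.
fn missing_chunks(total: usize, already_uploaded: &[usize]) -> Vec<usize> {
    (0..total)
        .filter(|i| !already_uploaded.contains(i))
        .collect()
}
```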
### Authorization (`client.authorize()`)

- `get_device_code()` - Get authorization device code
- `request_access_token()` - Poll for access token
### User & Quota

- `client.user().info()` - Get user info
- `client.quota().info()` - Get storage quota
### Playlist (`client.playlist()`)

Playlist and media functionality:

**Playlist Operations:**

- `get_playlist_list()` - List playlists
- `get_playlist_file_list()` - List files in a playlist

**Media Playback:**

- `get_media_play_info()` - Get media playback info (supports path or `fs_id`)
- `get_media_m3u8_content()` - Get raw m3u8 content by path

**Convenience Methods (Quality Enums):**

- `get_video_m3u8()` - Get video m3u8 with a `VideoQuality`
- `get_video_m3u8_highest()` - Get video m3u8 at the highest quality for the VIP level
- `get_audio_m3u8()` - Get audio m3u8 with an `AudioQuality`
- `get_audio_m3u8_default()` - Get audio m3u8 at the default quality (128K)

**Transcoding Status Check:**

- `fetch_m3u8()` - Fetch m3u8 content from a URL
- `is_media_fully_transcoded()` - Check whether media is fully transcoded (`#EXT-X-ENDLIST`)

**Quality Enums:**

- `VideoQuality` - Video quality levels (480P, 720P, 1080P)
- `AudioQuality` - Audio quality levels (MP3 128K)
- Quality methods: `to_media_type()`, `highest_for_vip_level()`, `available_for_vip_level()` - automatic quality selection based on VIP level
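As an illustration of the VIP-based selection these methods describe (the SDK's actual enum variants and VIP mapping may differ):

```rust
// Illustrative sketch - variant names and the VIP mapping are assumptions.
#[derive(Debug, Clone, Copy, PartialEq)]
enum VideoQuality {
    P480,
    P720,
    P1080,
}

// Assumed rule: higher VIP levels unlock higher resolutions.
fn highest_for_vip_level(vip_level: u8) -> VideoQuality {
    match vip_level {
        0 => VideoQuality::P480,
        1 => VideoQuality::P720,
        _ => VideoQuality::P1080,
    }
}
```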
## Error Handling

```rust
match result {
    Ok(value) => println!("Success: {:?}", value),
    Err(e) => eprintln!("Error: {}", e),
}
```
## Token Management

```rust
// Set token manually (type name and constructor arguments are illustrative)
let token = TokenInfo::new("access_token", "refresh_token", 2_592_000);
client.set_access_token(token)?;

// Load token from environment
let token = client.load_token_from_env()?;

// Validate token status
match client.validate_token() {
    Ok(status) => println!("Token status: {:?}", status),
    Err(e) => eprintln!("Token invalid: {}", e),
}
```
## Examples

Run the bundled examples with `cargo run --example <name>`; see the `examples/` directory for the exact names. They cover:

- Authorization flow
- File operations
- Search
- Upload
- Download
- Download comparison
- Token test
- User info
- Quota info
- Playlist
## Download Strategy Guide

This SDK provides multiple download strategies for different scenarios:

### Concurrency vs Parallelism

**Concurrency (`download_streaming`):**

- Uses async tasks on a single thread (or thread pool)
- Efficient for many small files, or when the network is the bottleneck
- Lower memory overhead
- Ideal for: downloading many small files, limited-memory environments

**Parallelism (`download_parallel`):**

- Uses true multi-threading with dedicated OS threads
- Higher throughput for large files (maximizes network bandwidth)
- Higher memory usage (each thread has its own stack)
- Ideal for: large files (>100MB), maximum-speed requirements
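A parallel downloader like the one described above typically splits the file into per-thread byte ranges for HTTP `Range` requests; a minimal sketch of that split (not the SDK's internals):

```rust
// Split `file_size` bytes into up to `parts` inclusive byte ranges
// (start, end), suitable for `Range: bytes=start-end` request headers.
fn byte_ranges(file_size: u64, parts: u64) -> Vec<(u64, u64)> {
    if file_size == 0 || parts == 0 {
        return Vec::new();
    }
    let part_size = file_size.div_ceil(parts);
    (0..parts)
        .map(|i| {
            let start = i * part_size;
            let end = ((i + 1) * part_size).min(file_size).saturating_sub(1);
            (start, end)
        })
        // Drop empty trailing ranges when the file is smaller than `parts` chunks
        .filter(|(start, end)| start <= end && *start < file_size)
        .collect()
}
```

Each range is then fetched on its own thread and written at its offset in the output file.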
### When to Use Which

| Scenario | Recommendation |
|---|---|
| Small files (<10MB) | `auto_download()` or concurrent streaming |
| Medium files (10-100MB) | `auto_download()` will choose the best method |
| Large files (>100MB) | Parallel multi-threaded |
| Multiple files | Concurrent streaming |
| Memory constrained | Single-threaded or concurrent streaming |
| Maximum speed | Parallel multi-threaded |
### Quick Reference

```rust
// Auto-select based on file size (recommended for most cases):
// - < 10MB: single-threaded
// - > 10MB: futures concurrent streaming (good performance regardless of CPU cores)
// Note: for maximum speed, use `download_parallel` manually.
// Shortcut API (no explicit token required; paths illustrative)
client.auto_download("/apps/demo/file.bin", "file.bin").await?;

// For maximum speed with large files (6+ cores recommended)
client.download_parallel("/apps/demo/file.bin", "file.bin").await?;

// For many small files or limited cores (<= 4)
client.download_streaming("/apps/demo/file.bin", "file.bin").await?;
```
### Not Sure Which to Use? Run the Comparison Test!

If you're unsure about the best download method for your hardware, run the download-comparison example. It asks for your CPU core count (e.g., 4, 8, 12) and then:

1. Downloads using Streaming (futures/concurrency)
2. Downloads using Parallel (multi-threaded)
3. Shows a side-by-side speed comparison

Use the results to decide which method works best for your specific hardware.

**Key findings from tests:**

- **4 cores**: Futures (streaming) is often 1.5-2x faster than Parallel
- **6-8 cores**: Both methods perform similarly
- **8+ cores**: Parallel pulls ahead slightly due to better multi-core utilization
## Performance Tips

- **Large File Upload**: Use `upload_file()`, which automatically chunks and parallelizes
- **Large File Download**: Use `download_parallel_multi_threaded()` for maximum speed
- **Token Refresh**: Set `refresh_ahead_seconds` to a value that matches your usage pattern
## License

MIT License - see the LICENSE file for details.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.