NNTP Proxy
A high-performance NNTP proxy server written in Rust, with round-robin load balancing and optional per-command routing.
Key Features
- 🔄 Round-robin load balancing - Distributes connections across multiple backend servers
- ⚡ High performance - Lock-free routing, zero-allocation command parsing, optimized I/O
- 🏥 Health checking - Automatic backend health monitoring with failure detection
- 🔐 Authentication - Proxy-level authentication, backends pre-authenticated
- 🔒 TLS/SSL support - Secure backend connections with system certificate store
- 🔀 Per-command routing mode - Optional stateless routing for resource efficiency
- 📊 Connection pooling - Efficient connection reuse with configurable limits
- ⚙️ TOML configuration - Simple, readable configuration with sensible defaults
- 🔍 Structured logging - Detailed tracing for debugging and monitoring
- 🧩 Modular architecture - Clean separation of concerns, well-tested codebase
Table of Contents
- Overview
- Quick Start
- Configuration
- Usage
- Architecture
- Performance
- Limitations
- Building
- Testing
- License
Overview
This NNTP proxy offers two operating modes:
- Standard mode (default) - Full NNTP proxy with complete command support
- Per-command routing mode (`--per-command-routing` or `-r`) - Stateless routing for resource efficiency
Design Goals
- Load balancing - Distribute connections across multiple backend servers
- Health monitoring - Automatic detection and routing around unhealthy backends
- High performance - Lock-free routing, zero-allocation parsing, optimized I/O
- Flexible deployment - Choose between full compatibility or resource efficiency
When to Use This Proxy
✅ Standard mode - Good for:
- Traditional newsreaders (tin, slrn, Thunderbird)
- Any NNTP client requiring stateful operations
- Load balancing with full protocol support
- Drop-in replacement for direct backend connections
✅ Per-command routing mode - Good for:
- Message-ID based article retrieval
- Indexing and search tools
- Metadata-heavy workloads
- Distributing load across multiple backends
❌ Not suitable for:
- Applications requiring custom NNTP extensions (unless in standard mode)
- Scenarios requiring true concurrent request processing (NNTP doesn't support this)
Limitations
Per-Command Routing Mode Restrictions
When running in per-command routing mode (`--per-command-routing` or `-r`), the proxy rejects stateful commands:
Rejected in per-command routing mode:
- Group navigation: `GROUP`, `NEXT`, `LAST`, `LISTGROUP`
- Article retrieval by number: `ARTICLE 123`, `HEAD 123`, `BODY 123`
- Overview commands: `XOVER`, `OVER`, `XHDR`, `HDR`

Always supported:
- ✅ Article by Message-ID: `ARTICLE <message-id@example.com>`
- ✅ Metadata: `LIST`, `HELP`, `DATE`, `CAPABILITIES`, `POST`
- ✅ Authentication: `AUTHINFO USER`/`AUTHINFO PASS` (intercepted by proxy)
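In per-command routing mode a session might look like the following sketch (the rejection text is illustrative; response codes follow RFC 3977 conventions):

```
C: GROUP comp.lang.rust
S: 500 command not supported in per-command routing mode
C: ARTICLE <abc123@news.example.com>
S: 220 0 <abc123@news.example.com> article follows
...
```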
Standard Mode (Default)
In standard mode (without `--per-command-routing`):
- ✅ All NNTP commands are supported - full bidirectional forwarding
- ✅ Compatible with traditional newsreaders (tin, slrn, Thunderbird)
- ✅ Stateful operations work normally (GROUP, NEXT, LAST, etc.)
- Each client gets a dedicated backend connection (1:1 mapping)
Quick Start
Prerequisites
- Rust 1.85+ (or use the included Nix flake)
- Optional: Nix with flakes for reproducible development environment
Installation
- Clone the repository
- Build the release version
- The binary will be in `target/release/nntp-proxy`
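These steps correspond to roughly the following commands (the repository URL is omitted here; substitute your clone source):

```shell
# Clone the repository and enter it
git clone <repository-url>
cd nntp-proxy

# Build the release version
cargo build --release

# The binary will be in:
ls target/release/nntp-proxy
```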
Using Nix (Optional)
- Enter the development environment
- Or use direnv
- Build and run
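Assuming the flake follows the standard layout (a dev shell plus a default package for the proxy), the usual commands are:

```shell
# Enter development environment
nix develop

# Or use direnv
direnv allow

# Build and run (arguments after -- go to the proxy)
nix run . -- --config config.toml
```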
First Run
- Create a configuration file (see Configuration section)
- Run the proxy:
- Connect with a client:
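A minimal first run, assuming a release build and a `config.toml` in the current directory:

```shell
# Start the proxy on the default port (8119)
./target/release/nntp-proxy --config config.toml

# In another terminal, connect with a client
telnet localhost 8119
```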
Configuration
The proxy uses a TOML configuration file. Create `config.toml`:
```toml
# Backend servers (at least one required)
[[servers]]
host = "news.example.com"
port = 119
name = "Primary News Server"
username = "your_username"   # Optional
password = "your_password"   # Optional
max_connections = 20         # Optional, default: 10

[[servers]]
host = "news2.example.com"
port = 119
name = "Secondary News Server"
max_connections = 10

# Health check configuration (optional)
[health_check]
interval_secs = 30       # Seconds between checks (default: 30)
timeout_secs = 5         # Timeout per check (default: 5)
unhealthy_threshold = 3  # Failures before marking unhealthy (default: 3)
```
Configuration Reference
Server Configuration
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `host` | string | Yes | - | Backend server hostname or IP |
| `port` | integer | Yes | - | Backend server port |
| `name` | string | Yes | - | Friendly name for logging |
| `username` | string | No | - | Authentication username |
| `password` | string | No | - | Authentication password |
| `max_connections` | integer | No | 10 | Max concurrent connections to this backend |
| `use_tls` | boolean | No | false | Enable TLS/SSL encryption |
| `tls_verify_cert` | boolean | No | true | Verify server certificates (uses system CA store) |
| `tls_cert_path` | string | No | - | Path to additional CA certificate (PEM format) |
TLS/SSL Support
The proxy supports TLS/SSL encrypted connections to backend servers using your operating system's trusted certificate store by default.
Basic TLS Configuration
For servers with valid SSL certificates from recognized CAs:
```toml
[[servers]]
host = "secure.newsserver.com"
port = 563              # Standard NNTPS port
name = "Secure News Server"
use_tls = true
tls_verify_cert = true  # Uses system certificate store (default)
max_connections = 20
```
That's it! No additional certificate configuration needed. The proxy will:
- Use your operating system's trusted certificate store automatically
- Verify the server's certificate
- Establish a secure TLS connection
Private/Self-Signed CA
For servers using certificates from a private CA:
```toml
[[servers]]
host = "internal.newsserver.local"
port = 563
name = "Internal News Server"
use_tls = true
tls_verify_cert = true
tls_cert_path = "/etc/nntp-proxy/internal-ca.pem"  # PEM format
max_connections = 10
```
Note: The custom certificate is added to the system certificates, not replacing them.
System Certificate Stores
| Operating System | Certificate Store |
|---|---|
| Linux (Debian/Ubuntu) | /etc/ssl/certs/ca-certificates.crt |
| Linux (RHEL/CentOS) | /etc/pki/tls/certs/ca-bundle.crt |
| macOS | Security.framework (Keychain) |
| Windows | SChannel (Windows Certificate Store) |
Port Reference
| Port | Protocol | Description |
|---|---|---|
| 119 | NNTP | Unencrypted, standard NNTP |
| 563 | NNTPS | NNTP over TLS/SSL (encrypted) |
| 8119 | Custom | Common alternative port |
Security Best Practices
✅ Always verify certificates in production (`tls_verify_cert = true`)
✅ Keep system certificates updated via OS package manager
✅ Use standard NNTPS port 563 for encrypted connections
✅ Monitor TLS handshake failures in logs
⚠️ Never set `tls_verify_cert = false` in production - this disables all certificate verification!
Environment Variable Overrides for Servers
Backend servers can be configured entirely via environment variables, useful for Docker/container deployments. If any NNTP_SERVER_N_HOST variable is found, environment variables take precedence over the config file.
Per-server variables (N = 0, 1, 2, ...):
| Variable | Required | Default | Description |
|---|---|---|---|
| `NNTP_SERVER_N_HOST` | Yes | - | Backend hostname/IP (presence triggers env mode) |
| `NNTP_SERVER_N_PORT` | No | 119 | Backend port |
| `NNTP_SERVER_N_NAME` | No | "Server N" | Friendly name for logging |
| `NNTP_SERVER_N_USERNAME` | No | - | Backend authentication username |
| `NNTP_SERVER_N_PASSWORD` | No | - | Backend authentication password |
| `NNTP_SERVER_N_MAX_CONNECTIONS` | No | 10 | Max concurrent connections |
Example Docker deployment:
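A sketch of such a deployment, using a hypothetical `nntp-proxy:latest` image name and the per-server variables above:

```shell
docker run -d --name nntp-proxy \
  -p 8119:8119 \
  -e NNTP_SERVER_0_HOST=news.example.com \
  -e NNTP_SERVER_0_PORT=119 \
  -e NNTP_SERVER_0_NAME="Primary" \
  -e NNTP_SERVER_0_MAX_CONNECTIONS=20 \
  nntp-proxy:latest
```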
Health Check Configuration
| Field | Type | Default | Description |
|---|---|---|---|
| `interval_secs` | integer | 30 | Seconds between health checks |
| `timeout_secs` | integer | 5 | Health check timeout in seconds |
| `unhealthy_threshold` | integer | 3 | Consecutive failures before marking unhealthy |
Authentication
The proxy handles authentication in two ways:
1. Backend authentication (when credentials are configured)
   - Configure `username` and `password` in the server config
   - The proxy authenticates to backends during connection pool initialization
   - Connections are pre-authenticated, eliminating per-command overhead
2. Client authentication
   - Client `AUTHINFO USER`/`AUTHINFO PASS` commands are always intercepted by the proxy
   - The proxy returns success without forwarding to the backend
   - No actual client credential validation is performed (the proxy trusts connected clients)
   - To restrict access, use firewall rules or network isolation
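Interception means the exchange completes at the proxy; a typical client-side session might look like this sketch (response text is illustrative, codes per RFC 4643):

```
C: AUTHINFO USER alice
S: 381 Password required
C: AUTHINFO PASS secret
S: 281 Authentication accepted
```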
Usage
Command Line Options
| Option | Short | Environment Variable | Description | Default |
|---|---|---|---|---|
| `--port <PORT>` | `-p` | `NNTP_PROXY_PORT` | Listen port | 8119 |
| `--per-command-routing` | `-r` | `NNTP_PROXY_PER_COMMAND_ROUTING` | Enable per-command routing mode | false |
| `--config <FILE>` | `-c` | `NNTP_PROXY_CONFIG` | Config file path | config.toml |
| `--threads <NUM>` | `-t` | `NNTP_PROXY_THREADS` | Tokio worker threads | CPU cores |
| `--help` | `-h` | - | Show help | - |
| `--version` | `-V` | - | Show version | - |
Note: Environment variables take precedence over default values but are overridden by command-line arguments.
Examples
- Standard mode with defaults
- Custom port and config
- Per-command routing mode (long form `--per-command-routing`, short form `-r`)
- Single-threaded for debugging
- Production setup
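These scenarios map to invocations like the following (paths and port values are illustrative):

```shell
# Standard mode with defaults
./target/release/nntp-proxy

# Custom port and config
./target/release/nntp-proxy --port 8120 --config /etc/nntp-proxy/config.toml

# Per-command routing mode (long and short form)
./target/release/nntp-proxy --per-command-routing
./target/release/nntp-proxy -r

# Single-threaded for debugging
./target/release/nntp-proxy --threads 1

# Production setup
./target/release/nntp-proxy -c /etc/nntp-proxy/config.toml -p 8119 -t 4
```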
```shell
# Using environment variables for configuration
NNTP_PROXY_PORT=8119 \
NNTP_PROXY_THREADS=4 \
NNTP_SERVER_0_HOST=news.example.com \
NNTP_SERVER_0_PORT=119 \
NNTP_SERVER_0_NAME="Primary" \
./target/release/nntp-proxy
```
Operating Modes
Standard Mode (default)
- One backend connection per client
- Simple 1:1 connection forwarding
- All NNTP commands supported
- Lower overhead, easier debugging
Per-Command Routing Mode (-r / --per-command-routing)
- Each command routed to next backend (round-robin)
- Commands processed serially (one at a time)
- Multiple clients share backend pool
- Health-aware routing
- Better resource distribution
- Stateful commands rejected
Architecture
Module Organization
The codebase is organized into focused modules with clear responsibilities:
| Module | Purpose |
|---|---|
| `auth/` | Client and backend authentication handling |
| `command/` | NNTP command parsing and classification |
| `config/` | Configuration loading and validation |
| `constants/` | Centralized configuration constants |
| `health/` | Backend health monitoring system |
| `pool/` | Connection and buffer pooling |
| `protocol/` | NNTP protocol constants and parsing |
| `router/` | Backend selection and load balancing |
| `session/` | Client session lifecycle management |
| `types/` | Core type definitions (IDs, etc.) |
How It Works
Standard Mode Flow
```
Client Connection
      ↓
Select Backend (round-robin)
      ↓
Get Pooled Connection
      ↓
Pre-authenticated Connection
      ↓
Bidirectional Data Forwarding
      ↓
Connection Cleanup
```
Per-Command Routing Mode Flow
```
Client Connection
      ↓
Read Command
      ↓
Classify Command
      ↓
Route to Healthy Backend (round-robin)
      ↓
Execute on Backend Connection (BLOCKS)
      ↓
Stream Response to Client
      ↓
Repeat (commands processed serially)
```
Key Design Decisions
1. Serial Processing
   - NNTP processes one command at a time
   - Each command blocks until its response is received
   - No concurrent request handling is possible
   - Round-robin distributes load across backends
2. Connection Pooling
   - Pre-authenticated connections
   - Reduces setup overhead
   - Configurable pool sizes per backend
3. Health Checking
   - Periodic `DATE` command probes
   - Automatic failure detection
   - Router skips unhealthy backends
4. Lock-Free Routing
   - Atomic operations for pending counts
   - Eliminates `RwLock` contention
   - Significant CPU reduction with many clients
Performance
Optimizations
This proxy implements several performance optimizations:
| Optimization | Impact | Description |
|---|---|---|
| Zero-allocation parsing | -0.92% CPU | Direct byte comparison, no to_ascii_uppercase() |
| Lock-free routing | -10-15% CPU | Atomic operations instead of RwLock |
| Pre-authenticated connections | High | No per-command auth overhead |
| Buffer reuse | ~200+ allocs/sec saved | Pre-allocated buffers in hot paths |
| Frequency-ordered matching | Better branch prediction | Common commands (ARTICLE, BODY) checked first |
| 64KB read buffers | Fewer syscalls | Optimized for large article transfers |
Performance Characteristics
- CPU Usage: Low overhead with lock-free routing and zero-allocation parsing
- Per-command routing mode: ~15% of one core for 80 connections at 105MB/s (AMD Ryzen 9 5950X, single-threaded configuration)
- Memory: Constant usage regardless of article size (no response buffering)
- Throughput: Typically limited by backend servers, not the proxy
- Scalability: Efficiently handles hundreds of concurrent connections
Profiling
To generate a performance flamegraph for analysis:
- Install cargo-flamegraph (if using Nix, it's already available)
- Run the proxy under flamegraph in per-command routing mode
- Open `flamegraph.svg` in a browser to analyze CPU hotspots
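With the `cargo-flamegraph` subcommand installed, profiling looks like this (flags after `--` are passed to the proxy):

```shell
# Install cargo-flamegraph
cargo install flamegraph

# Run with flamegraph profiling (per-command routing mode)
cargo flamegraph -- --per-command-routing

# Open flamegraph.svg in a browser to analyze CPU hotspots
```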
Building
Development Build
Release Build
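Standard Cargo invocations apply for both:

```shell
# Development build (output in target/debug/)
cargo build

# Release build (output in target/release/nntp-proxy)
cargo build --release
```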
Production Deployment
1. Build the optimized binary
2. Copy the binary to the deployment location
3. Create the config directory
4. Copy the config
5. Run as a service (example systemd unit included)
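One possible deployment sequence; the paths and unit name are conventional choices, not mandated by the project:

```shell
# Build optimized binary
cargo build --release

# Copy binary to deployment location
sudo cp target/release/nntp-proxy /usr/local/bin/

# Create config directory and copy config
sudo mkdir -p /etc/nntp-proxy
sudo cp config.toml /etc/nntp-proxy/

# Run as a service (example systemd unit included)
sudo systemctl start nntp-proxy
```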
Static Binary (Optional)
For maximum portability, build a fully static binary:
- Install the musl target
- Build the static binary
- The result is a static binary with no runtime dependencies
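Assuming an x86-64 Linux host, the standard musl recipe is:

```shell
# Install musl target
rustup target add x86_64-unknown-linux-musl

# Build static binary
cargo build --release --target x86_64-unknown-linux-musl

# Verify it has no dynamic dependencies
file target/x86_64-unknown-linux-musl/release/nntp-proxy
```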
Testing
Running Tests
- All tests
- Unit tests only
- Integration tests only
- With output
- Quiet mode
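The usual Cargo test invocations cover each case (the integration-test filter assumes tests live under `tests/`):

```shell
# All tests
cargo test

# Unit tests only
cargo test --lib

# Integration tests only
cargo test --test '*'

# With output
cargo test -- --nocapture

# Quiet mode
cargo test --quiet
```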
Test Coverage
The codebase includes:
- 165 unit tests covering all modules
- Integration tests for end-to-end scenarios
- 100% pass rate
Manual Testing
Test with telnet or netcat:
Connect to the proxy on its listen port (default 8119). You should see a greeting like `200 news.example.com ready`; then try commands such as `DATE`, `CAPABILITIES`, or `LIST`.
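A quick manual session might look like this sketch (the greeting and response bodies come from your backend; lines after the command are illustrative):

```shell
telnet localhost 8119
# 200 news.example.com ready
# Type: DATE
# 111 20250101120000
# Type: QUIT
```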
Load Testing
For performance testing, create custom scripts that:
- Open multiple concurrent NNTP connections
- Issue realistic command sequences
- Measure throughput and latency
- Monitor CPU and memory usage
Dependencies
Core Dependencies
| Crate | Purpose |
|---|---|
| `tokio` | Async runtime and networking |
| `tokio-native-tls` | TLS/SSL support for async streams |
| `native-tls` | Native TLS backend (uses system certificate store) |
| `tracing` | Structured logging framework |
| `anyhow` | Error handling |
| `clap` | Command-line argument parsing |
| `serde` | Serialization framework |
| `toml` | TOML configuration parsing |
| `deadpool` | Connection pooling |
Development Dependencies
- `tempfile` - Temporary files for testing
- Test helpers included in `tests/test_helpers.rs`
Troubleshooting
Common Issues
"Connection refused" when starting
- Check if the port is already in use: `lsof -i :8119`
- Try a different port: `--port 8120`
"Backend authentication failed"
- Verify credentials in config.toml
- Test direct connection to backend
- Check backend server logs
"Command not supported" errors
- In per-command routing mode, stateful commands are rejected (GROUP, NEXT, etc.)
- Use message-ID based retrieval instead
- For stateful operations, use standard mode or connect directly to backend
High CPU usage
- Try per-command routing mode: `-r` or `--per-command-routing`
- Reduce worker threads: `--threads 1`
- Check the health check interval (increase it if checks are too frequent)
Backends marked unhealthy
- Check backend server status
- Verify network connectivity
- Review health check configuration
- Check logs for specific errors
Logging
Control log verbosity with RUST_LOG:
```shell
# Info level (default)
RUST_LOG=info

# Debug level
RUST_LOG=debug

# Specific module
RUST_LOG=nntp_proxy::router=debug

# Multiple modules
RUST_LOG=nntp_proxy::router=debug,nntp_proxy::health=debug
```
Roadmap
Planned Features
- Prometheus metrics endpoint
- Configuration hot-reload
- IPv6 support
- Connection affinity mode
- Admin/stats HTTP endpoint
Completed
- SSL/TLS support (NNTPS) with system certificate store
- Lock-free routing
- Zero-allocation command parsing
- Health checking system
- Per-command routing mode
- Pre-authenticated connections
- TOML configuration
- Connection pooling
- Renamed terminology from "multiplexing" to "per-command routing"
Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Ensure all tests pass: `cargo test`
- Submit a pull request
License
MIT License - see LICENSE file for details.
Acknowledgments
Built with Rust and the excellent Tokio async ecosystem.