# bayes-rs

A Rust library for Bayesian inference, with MCMC samplers, common statistical distributions, and convergence diagnostics.

## Features

- MCMC Samplers: Metropolis-Hastings, Gibbs, and Hamiltonian Monte Carlo (HMC)
- Statistical Distributions: Normal, Multivariate Normal, Gamma, Beta, Exponential, Uniform, Student's t
- MCMC Diagnostics: Effective sample size, R-hat statistic, autocorrelation analysis, trace plots
- Best Practices: Comprehensive error handling, extensive testing, performance benchmarks
- Easy to Use: Clean API with extensive documentation and examples
## Quick Start

Add this to your `Cargo.toml`:

```toml
[dependencies]
bayes-rs = "0.1.0"
```
### Simple Example

```rust
use bayes_rs::{
    distributions::Normal,
    samplers::{MetropolisHastings, Sampler},
    prelude::*,
};
use nalgebra::DVector;

// Log-posterior over (mu, log_sigma). This toy model has priors only;
// a real model would also add a data likelihood term.
let log_posterior = |params: &DVector<f64>| -> f64 {
    let mu = params[0];
    let log_sigma = params[1];
    let prior_mu = Normal::new(0.0, 10.0).unwrap();
    let prior_log_sigma = Normal::new(0.0, 1.0).unwrap();
    prior_mu.log_pdf(mu) + prior_log_sigma.log_pdf(log_sigma)
};

let initial_state = DVector::from_vec(vec![0.0, 0.0]);
let proposal_std = DVector::from_vec(vec![0.5, 0.2]);
let mut sampler = MetropolisHastings::new(
    log_posterior,
    initial_state,
    proposal_std,
).unwrap();

let samples = sampler.sample(10_000);
println!("Generated {} samples", samples.len());
```
## MCMC Samplers

### Metropolis-Hastings

The workhorse of MCMC sampling, with adaptive proposal tuning:
```rust
use bayes_rs::{samplers::MetropolisHastings, prelude::*};

let mut sampler = MetropolisHastings::new(
    log_posterior_fn,
    initial_state,
    proposal_std,
)?;

// Tune the proposal toward a target acceptance rate (0.44 is a common
// choice for component-wise random-walk proposals).
sampler.adapt_proposal(0.44);

let samples = sampler.sample(10_000);
println!("Acceptance rate: {:.3}", sampler.acceptance_rate().unwrap());
```
### Hamiltonian Monte Carlo (HMC)

Efficient sampling using gradient information:

```rust
use bayes_rs::{samplers::HamiltonianMonteCarlo, prelude::*};

let gradient_fn = |params: &DVector<f64>| -> DVector<f64> {
    // Return the gradient of the log-posterior at `params`.
    // Placeholder: for a standard normal target the gradient is -params.
    -params
};

let mut hmc_sampler = HamiltonianMonteCarlo::new(
    log_posterior_fn,
    gradient_fn,
    initial_state,
    step_size,
    n_leapfrog,
)?;

let samples = hmc_sampler.sample(5000);
```
### Gibbs Sampling

For models with known conditional distributions:

```rust
use bayes_rs::{samplers::GibbsSampler, prelude::*};

let conditional_samplers = vec![
    |params: &DVector<f64>, idx: usize, rng: &mut ThreadRng| -> f64 {
        // Draw a new value for params[idx] from its full conditional,
        // given the current values of all other parameters.
        // Placeholder: keep the current value unchanged.
        let _ = rng;
        params[idx]
    },
];

let mut gibbs_sampler = GibbsSampler::new(
    conditional_samplers,
    initial_state,
)?;
```
## Statistical Distributions

### Univariate Distributions

```rust
use bayes_rs::distributions::*;

let normal = Normal::new(0.0, 1.0)?;
println!("PDF at x=1: {}", normal.pdf(1.0));
println!("Log PDF at x=1: {}", normal.log_pdf(1.0));

let gamma = Gamma::new(2.0, 1.0)?;
println!("Mean: {}, Variance: {}", gamma.mean(), gamma.variance());

let beta = Beta::new(2.0, 3.0)?;
println!("PDF at x=0.5: {}", beta.pdf(0.5));

let t_dist = StudentT::new(3.0, 0.0, 1.0)?;
println!("Degrees of freedom: {}", t_dist.degrees_of_freedom());
```
### Multivariate Distributions

```rust
use bayes_rs::distributions::MultivariateNormal;
use nalgebra::{DVector, DMatrix};

let mu = DVector::from_vec(vec![0.0, 0.0]);
let cov = DMatrix::from_vec(2, 2, vec![1.0, 0.5, 0.5, 1.0]);
let mvn = MultivariateNormal::new(mu, cov)?;

let x = DVector::from_vec(vec![1.0, -1.0]);
println!("Log PDF: {}", mvn.log_pdf(&x));
```
## MCMC Diagnostics

Diagnostic tools to assess convergence and sample quality:

```rust
use bayes_rs::diagnostics::{McmcDiagnostics, TracePlot};

// Summaries from a single chain.
let diagnostics = McmcDiagnostics::from_single_chain(&samples)?;
println!("Effective sample sizes: {:?}", diagnostics.effective_sample_size);
println!("Parameter means: {:?}", diagnostics.mean);
println!("Parameter std devs: {:?}", diagnostics.std_dev);

// R-hat requires multiple chains.
let diagnostics = McmcDiagnostics::from_multiple_chains(&chains)?;
if let Some(r_hat) = &diagnostics.r_hat {
    println!("R-hat values: {:?}", r_hat);
    println!("Converged: {}", diagnostics.has_converged());
}

// Trace plot for parameter 0.
let trace_plot = TracePlot::new(&samples, 0)?;
```
## Real-World Example: Bayesian Linear Regression

```rust
use bayes_rs::{
    distributions::Normal,
    samplers::{MetropolisHastings, Sampler},
    diagnostics::McmcDiagnostics,
    prelude::*,
};

fn bayesian_linear_regression(x_data: &[f64], y_data: &[f64]) -> Result<Vec<DVector<f64>>> {
    let log_posterior = |params: &DVector<f64>| -> f64 {
        let beta0 = params[0];
        let beta1 = params[1];
        let log_sigma = params[2];
        let sigma = log_sigma.exp();
        if !sigma.is_finite() || sigma <= 0.0 {
            return f64::NEG_INFINITY;
        }

        // Weakly informative priors on the coefficients and noise scale.
        let prior_beta0 = Normal::new(0.0, 10.0).unwrap();
        let prior_beta1 = Normal::new(0.0, 10.0).unwrap();
        let prior_log_sigma = Normal::new(0.0, 1.0).unwrap();
        let prior_log_prob = prior_beta0.log_pdf(beta0)
            + prior_beta1.log_pdf(beta1)
            + prior_log_sigma.log_pdf(log_sigma);

        // Gaussian likelihood on the residuals.
        let likelihood_dist = Normal::new(0.0, sigma).unwrap();
        let likelihood_log_prob: f64 = x_data
            .iter()
            .zip(y_data.iter())
            .map(|(&x_i, &y_i)| {
                let predicted = beta0 + beta1 * x_i;
                likelihood_dist.log_pdf(y_i - predicted)
            })
            .sum();

        prior_log_prob + likelihood_log_prob
    };

    let initial_state = DVector::from_vec(vec![0.0, 0.0, 0.0]);
    let proposal_std = DVector::from_vec(vec![0.5, 0.1, 0.1]);
    let mut sampler = MetropolisHastings::new(log_posterior, initial_state, proposal_std)?;
    Ok(sampler.sample(10_000))
}

let x_data = vec![1.0, 2.0, 3.0, 4.0, 5.0];
let y_data = vec![2.1, 3.9, 6.1, 8.0, 9.9];
let samples = bayesian_linear_regression(&x_data, &y_data)?;

let diagnostics = McmcDiagnostics::from_single_chain(&samples)?;
println!("Intercept estimate: {:.3} ± {:.3}", diagnostics.mean[0], diagnostics.std_dev[0]);
println!("Slope estimate: {:.3} ± {:.3}", diagnostics.mean[1], diagnostics.std_dev[1]);
```
## Performance

Run benchmarks to see performance characteristics:

```shell
cargo bench
```

The library is optimized for:
- Efficient matrix operations using `nalgebra`
- Minimal memory allocations during sampling
- Fast distribution computations with precomputed constants
## Error Handling

The library uses comprehensive error handling via the `BayesError` enum:

```rust
use bayes_rs::error::{BayesError, Result};

match Normal::new(0.0, -1.0) {
    Ok(_dist) => println!("Created distribution"),
    Err(BayesError::InvalidParameter { message }) => {
        println!("Invalid parameter: {}", message);
    },
    Err(e) => println!("Other error: {}", e),
}
```
## Testing

Run the test suite:

```shell
# All unit and integration tests
cargo test

# Integration tests only
cargo test --test integration_tests

# Show test output
cargo test -- --nocapture
```
## Examples

See the `examples/` directory for complete examples:

```shell
cargo run --example linear_regression
```
## Contributing

Contributions are welcome! Feel free to submit a pull request. For major changes, please open an issue first to discuss what you would like to change.

### Development Setup

```shell
git clone https://github.com/SyntaxSpirits/bayes-rs.git
cd bayes-rs
cargo test
cargo doc --open
```
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Citation

If you use this library in your research, please cite:

```bibtex
@software{bayes_rs,
  title  = {bayes-rs: A Rust Library for Bayesian Inference},
  author = {Alex Kholodniak},
  year   = {2025},
  url    = {https://github.com/SyntaxSpirits/bayes-rs}
}
```
## Related Projects

- PyMC - Python library for Bayesian modeling
- Stan - Platform for statistical modeling and high-performance statistical computation
- Edward - Python library for probabilistic modeling
- Turing.jl - Julia library for Bayesian inference