# alloc-track
This project allows per-thread and per-backtrace realtime memory profiling.
## Use Cases
- Diagnosing memory fragmentation (in the form of volatile allocations)
- Diagnosing memory leaks
- Profiling memory consumption of individual components
## Usage

1. Add the following dependency to your project:

   ```toml
   alloc-track = "0.2.3"
   ```

2. Set a global allocator wrapped by `alloc_track::AllocTrack`.

   Default Rust allocator:

   ```rust
   use alloc_track::{AllocTrack, BacktraceMode};
   use std::alloc::System;

   #[global_allocator]
   static GLOBAL_ALLOC: AllocTrack<System> = AllocTrack::new(System, BacktraceMode::Short);
   ```

   Jemallocator allocator:

   ```rust
   use alloc_track::{AllocTrack, BacktraceMode};
   use jemallocator::Jemalloc;

   #[global_allocator]
   static GLOBAL_ALLOC: AllocTrack<Jemalloc> = AllocTrack::new(Jemalloc, BacktraceMode::Short);
   ```

3. Call `alloc_track::thread_report()` or `alloc_track::backtrace_report()` to generate a report (see the sketch after this list). Note that `backtrace_report` requires the `backtrace` feature and the `BacktraceMode::Short` or `BacktraceMode::Full` flag to be passed to `AllocTrack::new`.
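As a minimal sketch of step 3 (a sketch only: the two-argument filter closure and the `Display` formatting of the reports are assumptions here, not guaranteed API details), a report can be generated and printed like this:

```rust
use alloc_track::{backtrace_report, thread_report};

// Sketch: dump both reports to stdout. Assumes the `backtrace` feature is
// enabled and the allocator was created with BacktraceMode::Short or Full.
fn dump_memory_reports() {
    // Per-thread allocation metrics.
    let threads = thread_report();
    println!("THREADS\n{threads}");

    // Per-backtrace allocation metrics; the closure filters which backtrace
    // records are included (keep everything here).
    let backtraces = backtrace_report(|_, _| true);
    println!("BACKTRACES\n{backtraces}");
}
```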
## Performance

In `BacktraceMode::None`, or without the `backtrace` feature enabled, the thread memory profiling is reasonably performant. It is not something you would want to run in a production environment though, so feature-gating it is a good idea (see the sketch below).
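For instance, a minimal sketch of such feature-gating, assuming a hypothetical `mem-profiling` cargo feature that makes `alloc-track` an optional dependency:

```rust
// Only install the tracking allocator when the (hypothetical) `mem-profiling`
// feature is enabled; otherwise the program keeps the default system allocator.
#[cfg(feature = "mem-profiling")]
use alloc_track::{AllocTrack, BacktraceMode};
#[cfg(feature = "mem-profiling")]
use std::alloc::System;

#[cfg(feature = "mem-profiling")]
#[global_allocator]
static GLOBAL_ALLOC: AllocTrack<System> = AllocTrack::new(System, BacktraceMode::Short);
```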
When backtrace logging is enabled, performance degrades substantially depending on the number of allocations and stack depth. Symbol resolution is deferred, but a lot of allocations still means a lot of backtraces to capture. `backtrace_report` takes a single argument, which is a filter for individual backtrace records. Filtering out uninteresting backtraces both makes the report easier to read and makes it substantially faster to generate, since symbol resolution can be skipped for filtered-out records. See `examples/example.rs` for an example.
## Real World Example
At LeakSignal, we had extreme memory fragmentation in a high-bandwidth/high-concurrency gRPC service. We suspected a known hyper issue with high concurrency, but needed to confirm the cause and fix the issue ASAP. Existing tooling (bpftrace, valgrind) wasn't able to give us a concrete cause. I had created a prototype of this project back in 2019 or so, and its time had come to shine. In a staging environment, I added an HTTP endpoint to generate a thread and backtrace report. I was able to identify a location where a large multi-allocation object was being cloned and dropped very often. A quick fix there solved our memory fragmentation issue.
## Structs

- `AllocTrack` — Global memory allocator wrapper that can track per-thread and per-backtrace memory usage.
- `BacktraceMetric` — Allocation information pertaining to a specific backtrace.
- `BacktraceReport` — A report of all (post-filter) backtraces and their associated allocation metrics.
- `HashedBacktrace`
- `Size` — Size display helper.
- `SizeF64` — Size display helper.
- `ThreadMetric`
- `ThreadReport` — A comprehensive report of all thread allocation metrics.
## Enums

- `BacktraceMode`
## Functions

- `backtrace_report` — Generate a memory usage report for backtraces, if enabled.
- `thread_report` — Generate a memory usage report. Note that the numbers are not a synchronized snapshot and have slight timing skew.