pub struct RangeBarProcessor { /* private fields */ }
Range bar processor with non-lookahead bias guarantee
Implementations§
impl RangeBarProcessor
pub fn new(threshold_decimal_bps: u32) -> Result<Self, ProcessingError>
Create new processor with given threshold
Uses default behavior: prevent_same_timestamp_close = true (Issue #36)
§Arguments
threshold_decimal_bps - Threshold in decimal basis points
- Example: 250 → 25 bps = 0.25%
- Example: 10 → 1 bps = 0.01%
- Minimum: 1 → 0.1 bps = 0.001%
§Breaking Change (v3.0.0)
Prior to v3.0.0, threshold_decimal_bps was in 1bps units.
Migration: Multiply all threshold values by 10.
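The decimal-bps unit and the v3.0.0 migration reduce to plain arithmetic. A self-contained sketch (illustrative helper functions, not crate code):

```rust
/// Convert a threshold in decimal basis points to a price fraction.
/// 1 decimal bps = 0.1 bps = 0.001% = 0.00001 of price.
fn decimal_bps_to_fraction(threshold_decimal_bps: u32) -> f64 {
    threshold_decimal_bps as f64 / 100_000.0
}

/// v3.0.0 migration: pre-v3 values were whole bps, so multiply by 10.
fn migrate_pre_v3_threshold(old_bps: u32) -> u32 {
    old_bps * 10
}

fn main() {
    assert_eq!(decimal_bps_to_fraction(250), 0.0025); // 25 bps = 0.25%
    assert_eq!(decimal_bps_to_fraction(10), 0.0001);  // 1 bps = 0.01%
    assert_eq!(migrate_pre_v3_threshold(25), 250);    // old 25 bps -> new 250
}
```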
pub fn with_options(
    threshold_decimal_bps: u32,
    prevent_same_timestamp_close: bool,
) -> Result<Self, ProcessingError>
Create new processor with explicit timestamp gating control
§Arguments
threshold_decimal_bps - Threshold in decimal basis points
prevent_same_timestamp_close - If true, bars cannot close until the timestamp advances past open_time. This prevents “instant bars” during flash crashes. Set to false for legacy behavior (pre-v9).
§Example
// Default behavior (v9+): timestamp gating enabled
let processor = RangeBarProcessor::new(250)?;
// Legacy behavior: allow instant bars
let processor = RangeBarProcessor::with_options(250, false)?;

pub fn prevent_same_timestamp_close(&self) -> bool
Get the prevent_same_timestamp_close setting
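The gating rule itself is simple to state: a bar may only close once the closing trade's timestamp is strictly later than the bar's open_time. A self-contained model of that check (illustrative, not the crate's implementation):

```rust
/// Illustrative model of the same-timestamp close gate: a bar that has
/// breached its threshold may still not close until time advances.
fn may_close(open_time_ms: i64, trade_time_ms: i64, prevent_same_timestamp_close: bool) -> bool {
    !prevent_same_timestamp_close || trade_time_ms > open_time_ms
}

fn main() {
    // Flash crash: breaching trade arrives in the same millisecond the bar opened.
    assert!(!may_close(1_000, 1_000, true)); // gated: no instant bar
    assert!(may_close(1_000, 1_001, true));  // timestamp advanced: may close
    assert!(may_close(1_000, 1_000, false)); // legacy behavior: instant bar allowed
}
```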
pub fn with_inter_bar_config(self, config: InterBarConfig) -> Self
Enable inter-bar feature computation with the given configuration (Issue #59)
When enabled, the processor maintains a trade history buffer and computes lookback-based microstructure features on each bar close. Features are computed from trades that occurred BEFORE each bar’s open_time, ensuring no lookahead bias.
§Arguments
config - Configuration controlling lookback mode and feature tiers
§Example
use rangebar_core::processor::RangeBarProcessor;
use rangebar_core::interbar::{InterBarConfig, LookbackMode};
let processor = RangeBarProcessor::new(1000)?
.with_inter_bar_config(InterBarConfig {
lookback_mode: LookbackMode::FixedCount(500),
compute_tier2: true,
compute_tier3: true,
    });

pub fn inter_bar_enabled(&self) -> bool
Check if inter-bar features are enabled
pub fn with_intra_bar_features(self) -> Self
Enable intra-bar feature computation (Issue #59)
When enabled, the processor accumulates trades during bar construction and computes 22 features from trades WITHIN each bar at bar close:
- 8 ITH features (Investment Time Horizon)
- 12 statistical features (OFI, intensity, Kyle lambda, etc.)
- 2 complexity features (Hurst exponent, permutation entropy)
§Memory Note
Trades are accumulated per-bar and freed when the bar closes. Typical 1000 dbps bar: ~50-500 trades, ~2-24 KB overhead.
§Example
let processor = RangeBarProcessor::new(1000)?
    .with_intra_bar_features();

pub fn intra_bar_enabled(&self) -> bool
Check if intra-bar features are enabled
pub fn set_inter_bar_config(&mut self, config: InterBarConfig)
Re-enable inter-bar features on an existing processor (Issue #97).
Used after from_checkpoint() to restore microstructure config that
is not preserved in checkpoint state.
pub fn set_intra_bar_features(&mut self, enabled: bool)
Re-enable intra-bar features on an existing processor (Issue #97).
pub fn process_single_trade(
    &mut self,
    trade: AggTrade,
) -> Result<Option<RangeBar>, ProcessingError>
Process a single trade and return completed bar if any
Maintains internal state for streaming use case. State persists across calls until a bar completes (threshold breach), enabling get_incomplete_bar().
§Arguments
trade - Single aggregated trade to process
§Returns
Some(RangeBar) if a bar was completed, None otherwise
§State Management
- First trade: Initializes new bar state
- Subsequent trades: Updates existing bar or closes on breach
- Breach: Returns completed bar, starts new bar with breaching trade
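The state machine above can be modeled in a few lines. This is a simplified sketch with f64 prices and a toy Bar struct, not the crate's RangeBar or AggTrade types: thresholds are fixed from the bar's open price, and a breaching trade both completes the old bar and seeds the new one.

```rust
/// Toy model of range-bar streaming: thresholds are fixed at bar open;
/// a breaching trade closes the bar and starts the next one.
struct Bar { open: f64, high: f64, low: f64, close: f64 }

struct Model {
    threshold: f64,       // e.g. 0.0025 for 250 decimal bps
    current: Option<Bar>, // incomplete bar, if any
}

impl Model {
    fn process(&mut self, price: f64) -> Option<Bar> {
        let Some(mut bar) = self.current.take() else {
            // First trade initializes a new bar.
            self.current = Some(Bar { open: price, high: price, low: price, close: price });
            return None;
        };
        bar.high = bar.high.max(price);
        bar.low = bar.low.min(price);
        bar.close = price;
        // Thresholds computed from the bar's open, never recomputed.
        let upper = bar.open * (1.0 + self.threshold);
        let lower = bar.open * (1.0 - self.threshold);
        if price >= upper || price <= lower {
            // Breach: emit completed bar, start new bar with the breaching trade.
            self.current = Some(Bar { open: price, high: price, low: price, close: price });
            Some(bar)
        } else {
            self.current = Some(bar);
            None
        }
    }
}

fn main() {
    let mut m = Model { threshold: 0.0025, current: None };
    assert!(m.process(100.0).is_none());    // first trade opens a bar
    assert!(m.process(100.1).is_none());    // within range: bar stays open
    let closed = m.process(100.3).unwrap(); // 100.3 >= 100.25: breach
    assert_eq!(closed.open, 100.0);
    assert_eq!(closed.close, 100.3);
}
```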
pub fn get_incomplete_bar(&self) -> Option<RangeBar>
Get any incomplete bar currently being processed
Returns a clone of the current bar state for inspection without consuming it. Useful for the final bar at stream end or for progress monitoring.
§Returns
Some(RangeBar) if bar is in progress, None if no active bar
pub fn process_agg_trade_records_with_incomplete(
    &mut self,
    agg_trade_records: &[AggTrade],
) -> Result<Vec<RangeBar>, ProcessingError>
Process AggTrade records into range bars including incomplete bars for analysis
§Arguments
agg_trade_records - Slice of AggTrade records sorted by (timestamp, agg_trade_id)
§Returns
Vector of range bars including incomplete bars at end of data
§Warning
This method is for analysis purposes only. Incomplete bars violate the fundamental range bar algorithm and should not be used for production trading.
pub fn process_agg_trade_records(
    &mut self,
    agg_trade_records: &[AggTrade],
) -> Result<Vec<RangeBar>, ProcessingError>
Process Binance aggregated trade records into range bars
This is the primary method for converting AggTrade records (which aggregate multiple individual trades) into range bars based on price movement thresholds.
§Parameters
agg_trade_records - Slice of AggTrade records sorted by (timestamp, agg_trade_id). Each record represents multiple individual trades aggregated at the same price.
§Returns
Vector of completed range bars (ONLY bars that breached thresholds). Each bar tracks both individual trade count and AggTrade record count.
pub fn process_agg_trade_records_with_options(
    &mut self,
    agg_trade_records: &[AggTrade],
    include_incomplete: bool,
) -> Result<Vec<RangeBar>, ProcessingError>
Process AggTrade records with options for including incomplete bars
Batch processing mode: Clears any existing state before processing. Use process_single_trade() for stateful streaming instead.
§Parameters
agg_trade_records - Slice of AggTrade records sorted by (timestamp, agg_trade_id)
include_incomplete - Whether to include incomplete bars at the end of processing
§Returns
Vector of range bars (completed + incomplete if requested)
pub fn create_checkpoint(&self, symbol: &str) -> Checkpoint
Create checkpoint for cross-file continuation
Captures current processing state for seamless continuation:
- Incomplete bar (if any) with FIXED thresholds
- Position tracking (timestamp, trade_id if available)
- Price hash for verification
§Arguments
symbol - Symbol being processed (e.g., “BTCUSDT”, “EURUSD”)
§Example
let bars = processor.process_agg_trade_records(&trades)?;
let checkpoint = processor.create_checkpoint("BTCUSDT");
let json = serde_json::to_string(&checkpoint)?;
std::fs::write("checkpoint.json", json)?;

pub fn from_checkpoint(checkpoint: Checkpoint) -> Result<Self, CheckpointError>
Resume processing from checkpoint
Restores incomplete bar state with IMMUTABLE thresholds. Next trade continues building the bar until threshold breach.
§Errors
CheckpointError::MissingThresholds - Checkpoint has a bar but no thresholds
§Example
let json = std::fs::read_to_string("checkpoint.json")?;
let checkpoint: Checkpoint = serde_json::from_str(&json)?;
let mut processor = RangeBarProcessor::from_checkpoint(checkpoint)?;
let bars = processor.process_agg_trade_records(&next_file_trades)?;

pub fn verify_position(&self, first_trade: &AggTrade) -> PositionVerification
Verify we’re at the right position in the data stream
Call with first trade of new file to verify continuity. Returns verification result indicating if there’s a gap or exact match.
§Arguments
first_trade - First trade of the new file/chunk
§Example
let processor = RangeBarProcessor::from_checkpoint(checkpoint)?;
let verification = processor.verify_position(&next_file_trades[0]);
match verification {
PositionVerification::Exact => println!("Perfect continuation!"),
PositionVerification::Gap { missing_count, .. } => {
println!("Warning: {} trades missing", missing_count);
}
PositionVerification::TimestampOnly { gap_ms } => {
println!("Exness data: {}ms gap", gap_ms);
}
}

pub fn anomaly_summary(&self) -> &AnomalySummary
Get the current anomaly summary
pub fn threshold_decimal_bps(&self) -> u32
Get the threshold in decimal basis points
pub fn reset_at_ouroboros(&mut self) -> Option<RangeBar>
Reset processor state at an ouroboros boundary (year/month/week).
Clears the incomplete bar and position tracking while preserving the threshold configuration. Use this when starting fresh at a known boundary for reproducibility.
§Returns
The orphaned incomplete bar (if any), so the caller can decide whether to include it in results with the is_orphan flag set.
§Example
// At year boundary (Jan 1 00:00:00 UTC)
let orphaned = processor.reset_at_ouroboros();
if let Some(bar) = orphaned {
// Handle incomplete bar from previous year
}
// Continue processing new year's data with clean state