DataFusion Tracing
DataFusion Tracing is an extension for Apache DataFusion that helps you monitor and debug queries. It uses tracing and OpenTelemetry to gather DataFusion metrics, trace execution steps, and preview partial query results.
Note: This is not an official Apache Software Foundation release.
Overview
When you run queries with DataFusion Tracing enabled, it:
- automatically adds tracing spans around execution steps,
- records all native DataFusion metrics, such as execution time and output row count,
- lets you preview partial query results for easier debugging, and
- integrates with OpenTelemetry for distributed tracing.
This makes it simpler to understand and improve query performance.
See it in action
Here's what DataFusion Tracing can look like in practice: see the screenshots in the repository's docs/ directory.
Getting Started
Installation
Include DataFusion Tracing in your project's Cargo.toml:
    [dependencies]
    datafusion = "51.0.0"
    datafusion-tracing = "51.0.0"
Quick Start Example
    use datafusion::prelude::*;
    use datafusion_tracing::{instrument_with_info_spans, InstrumentationOptions};
    use std::sync::Arc;
    use tracing::field;

    async fn run_traced_query() -> datafusion::error::Result<()> {
        // Full setup is shown in the sketch below.
        Ok(())
    }
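For orientation, here is a minimal sketch of how the pieces fit together: build the instrumentation rule, register it as the last physical optimizer rule, and run queries through the resulting session. It is a sketch under assumptions, not the crate's authoritative API: in particular, InstrumentationOptions::default(), the exact instrument_with_info_spans! field syntax, and the tokio attribute are assumptions to verify against the crate documentation.

    use datafusion::execution::SessionStateBuilder;
    use datafusion::prelude::*;
    use datafusion_tracing::{instrument_with_info_spans, InstrumentationOptions};
    use tracing::field;

    #[tokio::main]
    async fn main() -> datafusion::error::Result<()> {
        // Options controlling what gets recorded; ::default() is assumed here
        // (the crate may expose a builder for metrics/preview settings instead).
        let options = InstrumentationOptions::default();

        // Create the instrumentation rule; the extra `env` span field is illustrative.
        let instrument_rule = instrument_with_info_spans!(
            options: options,
            env = field::Empty,
        );

        // Register the rule last so it wraps the fully optimized physical plan.
        let state = SessionStateBuilder::new()
            .with_default_features()
            .with_physical_optimizer_rule(instrument_rule)
            .build();
        let ctx = SessionContext::new_with_state(state);

        // Any query executed through this context now emits tracing spans,
        // which your OpenTelemetry subscriber can export to a collector.
        let df = ctx.sql("SELECT 1 AS one").await?;
        df.show().await?;
        Ok(())
    }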
A more complete example can be found in the examples directory.
Setting Up a Collector
Before diving into DataFusion Tracing, you'll need to set up an OpenTelemetry collector to receive and process the tracing data. There are several options available:
Jaeger (Local Development)
For local development and testing, Jaeger is a great choice. It's an open-source distributed tracing system that's easy to set up. You can run it with Docker using:
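For example, the Jaeger all-in-one image bundles the collector and UI; the ports below are the Jaeger UI port and the default OTLP gRPC/HTTP ports, and the image tag and mappings are just one reasonable setup:

    docker run --rm --name jaeger \
      -e COLLECTOR_OTLP_ENABLED=true \
      -p 16686:16686 \
      -p 4317:4317 \
      -p 4318:4318 \
      jaegertracing/all-in-one:latest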
Once running, you can access the Jaeger UI at http://localhost:16686. For more details, check out their getting started guide.
DataDog (Cloud-Native)
For a cloud-native approach, DataDog offers a hosted solution for OpenTelemetry data. You can send your traces directly to their platform by configuring your DataDog API key and endpoint - their OpenTelemetry integration guide has all the details.
Other Collectors
Of course, you can use any OpenTelemetry-compatible collector. The official OpenTelemetry Collector is a good starting point if you want to build a custom setup.
Using with Multiple Optimizer Rules
If you're using custom physical optimizer rules alongside the instrumentation rule, always register the instrumentation rule last in your physical optimizer chain so that:
- You capture the final optimized plan, not an intermediate one.
- Instrumentation stays purely observational—other optimizer rules never have to deal with instrumented nodes.
To keep the instrumentation rule last in the chain, either chain calls:
    // custom_rule_one, custom_rule_two, and instrument_rule are placeholder rule values
    builder
        .with_physical_optimizer_rule(custom_rule_one)
        .with_physical_optimizer_rule(custom_rule_two)
        .with_physical_optimizer_rule(instrument_rule) // instrumentation rule registered last
Or pass a vector:
    builder.with_physical_optimizer_rules(vec![custom_rule_one, custom_rule_two, instrument_rule])
InstrumentedExec visibility
Instrumentation is designed to be mostly invisible: with the rule registered last, other optimizer rules typically never see InstrumentedExec at all. The wrapper itself is intentionally private so downstream code cannot depend on its internals; the supported surface is the optimizer rule and the standard ExecutionPlan trait.
Repository Structure
The repository is organized as follows:
- datafusion-tracing/: Core tracing functionality for DataFusion
- instrumented-object-store/: Object store instrumentation
- integration-utils/: Integration utilities and helpers for examples and tests (not for production use)
- examples/: Example applications demonstrating the library usage
- tests/: Integration tests
- docs/: Documentation, including logos and screenshots
Building and Testing
Build and test with the standard Cargo workflow.
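A typical local run (CI may add extra flags or feature selections) looks like:

    cargo build
    cargo test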
Test data: generate TPCH Parquet files
Integration tests and examples expect the TPCH tables, in Parquet format, to be present in integration-utils/data (they are not checked in). Generate them locally with the data-generation script included in the repository. The script produces all TPCH tables at scale factor 0.1 as single Parquet files in integration-utils/data; CI installs tpchgen-cli and runs the same script automatically before tests. If a required file is missing, the helper library returns a clear error instructing you to run the script.
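If you prefer to generate the files by hand, the underlying generator is tpchgen-cli; something along these lines should produce equivalent data, though the exact flags are assumptions to check against tpchgen-cli --help:

    # Install the generator, then write scale-factor 0.1 Parquet files into integration-utils/data
    cargo install tpchgen-cli
    tpchgen-cli --scale-factor 0.1 --format=parquet --output-dir integration-utils/data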
Contributing
Contributions are welcome. Make sure your code passes all tests, follow existing formatting and coding styles, and include tests and documentation. See CONTRIBUTING.md for detailed guidelines.
License
Licensed under the Apache License, Version 2.0. See LICENSE.
Acknowledgments
This project includes software developed at Datadog (info@datadoghq.com).