# copc_converter
A fast, memory-efficient converter that turns LAS/LAZ point cloud files into COPC (Cloud-Optimized Point Cloud) files.
## Features
- Produces spec-compliant COPC 1.0 files (LAS 1.4, point format 6, 7, or 8 — automatically chosen from input)
- Merges multiple input files into a single COPC output
- Out-of-core processing with a configurable memory budget — handles datasets larger than RAM
- Parallel reading, octree construction, and LAZ compression via rayon
- Preserves WKT CRS from input files
- Optional temporal index for GPS-time-based filtering
## Installation
Requires Rust 1.85+.
### From crates.io
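The standard `cargo install` form, assuming the crate is published on crates.io under the same name as its binary:

```sh
cargo install copc_converter
```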
### From source
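From a local checkout of the repository (the repository URL is not given in this README):

```sh
# inside a clone of the repository
cargo install --path .
```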
This installs the `copc_converter` binary to `~/.cargo/bin/`, which should be on your `PATH`.
### Pre-built binaries
Download pre-built binaries from the GitHub releases page. These target a generic CPU baseline for broad compatibility, so they run on virtually any machine at some cost in performance.
For best performance, prefer installing from source via `cargo install` — this automatically compiles with `target-cpu=native`, optimizing for your specific CPU's instruction set (AVX2, NEON, etc.).
## Usage
Typical invocations (the argument forms shown are illustrative):

```sh
# Single file
copc_converter input.laz output.copc

# Directory of LAZ/LAS files
copc_converter tiles/ merged.copc
```
### Options
| Flag | Description | Default |
|---|---|---|
| `--memory-limit` | Max memory budget (`16G`, `4096M`, etc.) | auto-detected |
| `--threads` | Max parallel threads | all cores |
| `--temp-dir` | Directory for intermediate files | system temp |
| `--temporal-index` | Write a temporal index EVLR for time-based queries | off |
| `--temporal-stride` | Sampling stride for the temporal index (every n-th point) | 1000 |
| `--progress` | Progress output format: `bar`, `plain`, or `json` | `bar` |
| `--temp-compression` | Compress scratch temp files: `none` or `lz4` | `none` |
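For example, a memory- and disk-constrained run might combine several of these flags (the flags are as documented above; the positional input/output arguments are illustrative):

```sh
copc_converter tiles/ merged.copc \
  --memory-limit 16G \
  --threads 8 \
  --temp-dir /mnt/scratch \
  --temp-compression lz4 \
  --progress plain
```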
### Temp file compression
Chunked-build scratch files hold `RawPoint` records (38 bytes each) and are highly compressible. On a large run (tens of billions of points) the temp directory can approach the full raw-point footprint, which becomes the limiting resource on space-constrained workers.

`--temp-compression=lz4` wraps each temp-file write in a self-contained LZ4 frame. Expect roughly a 3-4× reduction in scratch-disk usage at a modest CPU cost (LZ4 compresses at >1 GB/s per core). On fast local NVMe this trades CPU for disk without a clear wall-time win; on network filesystems (EFS/NFS) it typically also reduces wall time, because the bottleneck shifts from I/O to compute.
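A quick back-of-the-envelope on scratch sizing (the run size is hypothetical; the 38 bytes per point and 3-4× reduction are from the section above):

```rust
/// Scratch bytes needed to spill `points` RawPoint records at 38 bytes each.
fn raw_scratch_bytes(points: u64) -> u64 {
    points * 38
}

fn main() {
    let points: u64 = 20_000_000_000; // hypothetical 20-billion-point run
    let raw = raw_scratch_bytes(points);
    // Assume ~3.5x, the middle of the 3-4x reduction quoted above.
    let with_lz4 = raw * 2 / 7;
    println!(
        "~{} GB raw scratch vs ~{} GB with --temp-compression=lz4",
        raw / 1_000_000_000,
        with_lz4 / 1_000_000_000
    ); // prints "~760 GB raw scratch vs ~217 GB with --temp-compression=lz4"
}
```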
## Examples
```sh
# With temporal index (useful for multi-pass mobile mapping data)
# (positional arguments are illustrative)
copc_converter drive1.laz drive2.laz merged.copc --temporal-index
```
## Library usage
The crate exposes a typestate pipeline API that enforces correct step ordering at compile time:
```rust
// Sketch: the import list, config construction, and pipeline constructor are
// assumptions here; see the crate docs for the authoritative API.
use copc_converter::{collect_input_files, Pipeline, PipelineConfig};

let files = collect_input_files("tiles/")?;  // discover LAS/LAZ inputs
let config = PipelineConfig::default();      // memory budget, threads, temp dir, ...

// Typestate chain: each step consumes the previous state, so calling the
// steps out of order fails to compile.
Pipeline::new(files, config)
    .scan()?
    .validate()?
    .distribute()?
    .build()?
    .write()?;
```
## Tools
Optional analysis tools are available behind the `tools` feature:
### inspect_copc
Inspect a COPC file's structure, or compare two files side-by-side. Works with local files and HTTP URLs.
```sh
# (invocations are illustrative)
# Inspect a single file
inspect_copc points.copc

# Compare two files
inspect_copc points.copc other.copc
```
Prints node counts, point distribution, compressed sizes, and compression ratios per octree level.
### inspect_temporal
Inspect the temporal index EVLR of a COPC file:
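The exact command is not shown in this README; a plausible invocation, assuming the tool takes the COPC file path as its only argument:

```sh
inspect_temporal points.copc
```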
Prints GPS time range, per-level temporal coverage, a time histogram showing node overlap across time windows, and sample density stats.
## How it works
- **Scan** — reads headers from all input files in parallel to determine bounds, CRS, point format, and point count.
- **Validate** — checks that all input files share the same CRS and point format, and selects the appropriate COPC output format (6, 7, or 8).
- **Count** — first full pass over the input: populates an occupancy grid used by the chunk planner to carve the dataset into thousands of roughly equal-sized chunks via counting sort.
- **Distribute** — second full pass over the input: streams every point into its chunk's scratch file on disk, bounded by the configured memory budget.
- **Build** — each chunk's sub-octree is built independently in memory in parallel, then merged at coarse levels up to a single global root, thinning points at each level to produce multi-resolution LODs.
- **Write** — encodes and compresses nodes in parallel into a single COPC file with a hierarchy EVLR for spatial indexing.
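The Count and Distribute passes follow the counting-sort pattern from the paper cited in the acknowledgments: count points per grid cell, prefix-sum the counts into per-cell offsets, then scatter points into contiguous runs. A minimal in-memory sketch of that idea (not the crate's actual code; the `Point` layout and grid resolution are illustrative):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point { x: f64, y: f64, z: f64 }

/// Map a point to a flat cell index on a `dim`^3 occupancy grid over `min..max`.
fn cell_index(p: Point, min: f64, max: f64, dim: usize) -> usize {
    let scale = dim as f64 / (max - min);
    let clamp = |v: f64| ((v - min) * scale).min(dim as f64 - 1.0).max(0.0) as usize;
    (clamp(p.z) * dim + clamp(p.y)) * dim + clamp(p.x)
}

/// Counting sort: the first pass counts points per cell (the occupancy grid),
/// the second pass scatters points into contiguous per-cell runs.
/// Returns (per-cell start offsets, reordered points).
fn distribute(points: &[Point], min: f64, max: f64, dim: usize) -> (Vec<usize>, Vec<Point>) {
    let mut counts = vec![0usize; dim * dim * dim];
    for &p in points {
        counts[cell_index(p, min, max, dim)] += 1;
    }
    // Exclusive prefix sum -> start offset of each cell's run.
    let mut offsets = vec![0usize; counts.len() + 1];
    for i in 0..counts.len() {
        offsets[i + 1] = offsets[i] + counts[i];
    }
    let mut cursor = offsets.clone();
    let mut out = vec![Point { x: 0.0, y: 0.0, z: 0.0 }; points.len()];
    for &p in points {
        let c = cell_index(p, min, max, dim);
        out[cursor[c]] = p;
        cursor[c] += 1;
    }
    (offsets, out)
}

fn main() {
    let pts = vec![
        Point { x: 0.1, y: 0.1, z: 0.1 },
        Point { x: 0.9, y: 0.9, z: 0.9 },
        Point { x: 0.2, y: 0.1, z: 0.1 },
    ];
    let (offsets, sorted) = distribute(&pts, 0.0, 1.0, 2);
    // Cell 0 holds the two low-corner points; the high corner lands in the last cell.
    assert_eq!(offsets[1] - offsets[0], 2);
    assert_eq!(sorted[2], Point { x: 0.9, y: 0.9, z: 0.9 });
    println!("per-cell offsets: {:?}", offsets);
}
```

In the real out-of-core build, each cell's run becomes a scratch file on disk rather than a slice of one in-memory vector.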
## Acknowledgments
The chunked octree build is based on the counting-sort approach described in:
Markus Schütz, Stefan Ohrhallinger, and Michael Wimmer. "Fast Out-of-Core Octree Generation for Massive Point Clouds." Computer Graphics Forum, 2020. doi:10.1111/cgf.14134
## License
MIT