
csv-nose

A Rust port of the Table Uniformity Method for CSV dialect detection, designed as a drop-in replacement for qsv-sniffer in qsv.

Background

This crate implements the algorithm from "Detecting CSV File Dialects by Table Uniformity Measurement and Data Type Inference" by W. García. The Table Uniformity Method achieves ~95% accuracy on real-world messy CSV files by:

  1. Testing multiple potential dialects (delimiter × quote × line terminator combinations)
  2. Scoring each dialect based on table uniformity (consistent field counts)
  3. Scoring based on type detection (consistent data types within columns)
  4. Selecting the dialect with the highest combined gamma score (a toy sketch of the scoring idea follows this list)
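
To make the scoring idea concrete, here is a toy sketch, not csv-nose's actual implementation: it ranks candidate delimiters by field-count uniformity alone, and the log-width weighting is only an illustrative tie-breaker so a trivial one-column parse never wins. The real method also scores quote characters, line terminators, and per-column type consistency.

// Toy illustration of the Table Uniformity idea (not csv-nose internals).
fn uniformity(sample: &str, delim: char) -> f64 {
    // Field count per row under this candidate delimiter.
    let counts: Vec<usize> = sample.lines().map(|l| l.split(delim).count()).collect();
    if counts.is_empty() {
        return 0.0;
    }
    // Most common field count across rows.
    let modal = *counts
        .iter()
        .max_by_key(|&&c| counts.iter().filter(|&&x| x == c).count())
        .unwrap();
    let agree = counts.iter().filter(|&&c| c == modal).count() as f64;
    // Fraction of rows agreeing with the modal width, weighted by ln(width)
    // so that a one-column "parse" scores zero.
    (agree / counts.len() as f64) * (modal as f64).ln()
}

fn main() {
    let sample = "a;b;c\n1;2;3\n4;5;6\n";
    let best = [',', ';', '\t', '|']
        .into_iter()
        .max_by(|a, b| uniformity(sample, *a).total_cmp(&uniformity(sample, *b)))
        .unwrap();
    println!("best delimiter: {best:?}"); // prints ';'
}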

Installation

As a library

[dependencies]
csv-nose = "0.1"

As a CLI tool

cargo install csv-nose

Library Usage

use csv_nose::{Sniffer, SampleSize};

let mut sniffer = Sniffer::new();
// Sniff using only the first 100 records instead of the whole file.
sniffer.sample_size(SampleSize::Records(100));

// Detect the dialect, header row, field names, and column types.
let metadata = sniffer.sniff_path("data.csv").unwrap();

println!("Delimiter: {}", metadata.dialect.delimiter as char);
println!("Has header: {}", metadata.dialect.header.has_header_row);
println!("Fields: {:?}", metadata.fields);
println!("Types: {:?}", metadata.types);

CLI Usage

csv-nose data.csv                    # Sniff a single file
csv-nose *.csv                       # Sniff multiple files
csv-nose -f json data.csv            # Output as JSON
csv-nose --delimiter-only data.csv   # Output only the delimiter
csv-nose -v data.csv                 # Verbose output with field types

API Compatibility

The public API mirrors qsv-sniffer for easy migration:

use csv_nose::{Sniffer, Metadata, Dialect, Header, Quote, Type, SampleSize, DatePreference};

let mut sniffer = Sniffer::new();
sniffer
    .sample_size(SampleSize::Records(50))
    .date_preference(DatePreference::MdyFormat)
    .delimiter(b',')
    .quote(Quote::Some(b'"'));
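
Because the public API mirrors qsv-sniffer's, one low-friction migration path is Cargo's dependency renaming, which keeps existing use qsv_sniffer::... imports compiling unchanged (assuming your code only touches the mirrored surface):

[dependencies]
# Expose csv-nose under the qsv-sniffer name.
qsv-sniffer = { package = "csv-nose", version = "0.2" }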

Benchmarks

csv-nose is benchmarked against the same test datasets used by CSVsniffer, enabling direct accuracy comparison with other CSV dialect detection tools.

Success Ratio

The table below shows the dialect detection success ratio for each tool. Accuracy is measured using only files that do not produce errors during dialect inference, i.e., correct detections divided by the number of files processed without error.

| Data set | csv-nose | CSVsniffer MADSE | CSVsniffer | CleverCSV | csv.Sniffer | DuckDB sniff_csv |
|----------|----------|------------------|------------|-----------|-------------|------------------|
| POLLOCK | 95.95% | 95.27% | 96.55% | 95.17% | 96.35% | 84.14% |
| W3C-CSVW | 94.12% | 94.52% | 95.39% | 61.11% | 97.69% | 99.08% |
| CSV Wrangling | 91.06% | 90.50% | 89.94% | 87.99% | 84.26% | 91.62% |
| CSV Wrangling CODEC | 90.85% | 90.14% | 90.14% | 89.44% | 84.18% | 92.25% |
| CSV Wrangling MESSY | 89.68% | 89.60% | 89.60% | 89.60% | 83.06% | 91.94% |

Error Ratio

The table below shows the error ratio (the share of files that produced errors during dialect detection) for each tool.

Note: "Errors" are files that caused crashes or exceptions during processing (e.g., encoding issues, malformed data). This is distinct from "failures" where a file was successfully processed but the wrong dialect was detected. A 0% error rate means all files were processed without crashes, even if some detections were incorrect.

| Data set | csv-nose | CSVsniffer MADSE | CSVsniffer | CleverCSV | csv.Sniffer | DuckDB sniff_csv |
|----------|----------|------------------|------------|-----------|-------------|------------------|
| POLLOCK [148 files] | 0.00% | 0.00% | 2.03% | 2.03% | 7.43% | 2.03% |
| W3C-CSVW [221 files] | 0.00% | 0.91% | 1.81% | 2.26% | 41.18% | 1.81% |
| CSV Wrangling [179 files] | 0.00% | 0.00% | 0.56% | 0.56% | 39.66% | 0.00% |
| CSV Wrangling CODEC [142 files] | 0.00% | 0.00% | 0.00% | 0.00% | 38.03% | 0.00% |
| CSV Wrangling MESSY [126 files] | 0.00% | 0.79% | 0.79% | 0.79% | 42.06% | 0.79% |

F1 Score

The F1 score is the harmonic mean of precision and recall, providing a balanced measure of dialect detection accuracy.
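
Concretely, with precision P and recall R:

    F1 = 2PR / (P + R)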

| Data set | csv-nose | CSVsniffer MADSE | CSVsniffer | CleverCSV | csv.Sniffer | DuckDB sniff_csv |
|----------|----------|------------------|------------|-----------|-------------|------------------|
| POLLOCK | 0.959 | 0.976 | 0.972 | 0.965 | 0.943 | 0.904 |
| W3C-CSVW | 0.941 | 0.967 | 0.967 | 0.748 | 0.730 | 0.986 |
| CSV Wrangling | 0.911 | 0.950 | 0.945 | 0.935 | 0.724 | 0.956 |
| CSV Wrangling CODEC | 0.908 | 0.948 | 0.948 | 0.944 | 0.728 | 0.959 |
| CSV Wrangling MESSY | 0.897 | 0.943 | 0.943 | 0.943 | 0.705 | 0.956 |

Component Accuracy

csv-nose's delimiter and quote detection accuracy on each dataset:

| Data set | Delimiter Accuracy | Quote Accuracy |
|----------|--------------------|----------------|
| POLLOCK | 96.62% | 98.65% |
| W3C-CSVW | 99.55% | 94.57% |
| CSV Wrangling | 93.30% | 96.65% |
| CSV Wrangling CODEC | 92.96% | 97.18% |
| CSV Wrangling MESSY | 92.06% | 96.03% |

Benchmark Setup

The benchmark test files are not included in this repository. To run benchmarks, first clone CSVsniffer and copy the test files:

# Clone CSVsniffer (if not already available)
git clone https://github.com/ws-garcia/CSVsniffer.git /path/to/CSVsniffer

# Copy test files to csv-nose
cp -r /path/to/CSVsniffer/CSV/* tests/data/pollock/
cp -r /path/to/CSVsniffer/W3C-CSVW/* tests/data/w3c-csvw/
cp -r "/path/to/CSVsniffer/CSV_Wrangling/data/github/Curated files/"* tests/data/csv-wrangling/

Running Benchmarks

Once the test files are in place:

# Run benchmark on POLLOCK dataset
cargo run --release -- --benchmark tests/data/pollock

# Run benchmark on W3C-CSVW dataset
cargo run --release -- --benchmark tests/data/w3c-csvw

# Run benchmark on CSV Wrangling dataset (all 179 files)
cargo run --release -- --benchmark tests/data/csv-wrangling

# Run benchmark on CSV Wrangling filtered CODEC (142 files)
cargo run --release -- --benchmark tests/data/csv-wrangling --annotations tests/data/annotations/csv-wrangling-codec.txt

# Run benchmark on CSV Wrangling MESSY (126 non-normal files)
cargo run --release -- --benchmark tests/data/csv-wrangling --annotations tests/data/annotations/csv-wrangling-messy.txt

# Run integration tests with detailed output
cargo test --test benchmark_accuracy -- --nocapture

License

MIT OR Apache-2.0

Naming

The name "csv-nose" is a play on words, combining "CSV" (Comma-Separated Values) with "nose," suggesting the tool's ability to "sniff out" the correct CSV dialect. "Nose" also sounds like "knows," implying expertise in CSV dialect detection.

AI Contributions

Claude Code using Opus 4.5 was used to assist in code generation and documentation. All AI-generated content has been reviewed and edited by human contributors to ensure accuracy and quality.