Rust Crate: Markdown AI Citation Removal
"Five years of AI evolution and flirting with AGI. Zero libraries to remove
[1][2][3]from markdown ๐. Remember me well when they take over."
Remove AI-generated citations and annotations from Markdown text at the speed of Rust
High-performance Rust library for removing citations from ChatGPT, Claude, Perplexity, and other AI markdown responses. Removes inline citations (`[1][2]`), reference links (`[1]: https://...`), and bibliography sections with 100% accuracy.
📖 CLI Guide • Benchmarking Guide • FAQ • Documentation Index
⚡ Performance-First
- 100+ MB/s throughput on typical documents
- Zero-copy processing where possible
- Regex-optimized with lazy compilation
- Thread-safe stateless design
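"Lazy compilation" here follows a standard Rust pattern: compile each regex once, on first use, and reuse it across calls and threads. A minimal sketch of that pattern using the `regex` and `once_cell` crates (illustrative of the technique, not the crate's actual internals):

```rust
use once_cell::sync::Lazy;
use regex::Regex;

// Compiled once on first use, then shared by every call and every thread.
static INLINE_CITATION: Lazy<Regex> =
    Lazy::new(|| Regex::new(r"\[\d+\]").expect("valid pattern"));

fn strip_inline_citations(text: &str) -> String {
    INLINE_CITATION.replace_all(text, "").into_owned()
}
```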
🎯 Use Cases
Real-World Applications
1. Blog Publishing Pipeline
```rust
// Remove citations from AI-generated blog posts before publishing.
// Crate path is assumed from the CLI name; `fetch_from_chatgpt` and
// `publish_to_cms` are placeholders for your own pipeline functions.
use mdcr::remove_citations;

let ai_draft = fetch_from_chatgpt();
let clean_post = remove_citations(&ai_draft);
publish_to_cms(&clean_post);
```
2. Documentation Generation
```rust
// Remove citations from AI-generated documentation.
// `generate_docs_with_ai` and `write_to_file` are placeholders.
let docs = generate_docs_with_ai();
let clean_docs = remove_citations(&docs);
write_to_file("docs.md", &clean_docs);
```
3. Content Aggregation
```rust
// Remove citations from multiple AI responses (crate path assumed).
use mdcr::CitationRemover;

let cleaner = CitationRemover::new();
let responses = vec![response_a, response_b, response_c]; // placeholder inputs
let cleaned: Vec<String> = responses
    .iter()
    .map(|r| cleaner.remove_citations(r))
    .collect();
```
4. Streaming API Processing
```rust
// Remove citations from AI responses in real-time.
// The original async example was lost; this sketch assumes a
// per-chunk cleanup step inside your streaming handler.
async fn clean_chunk(chunk: String) -> String {
    remove_citations(&chunk)
}
```
5. Simple File Processing (CLI)
```bash
# Remove citations and auto-generate output file
mdcr ai_response.md
# Creates: ai_response__cite_removed.md
```
Common Scenarios
- ✅ Remove citations from AI chatbot responses (ChatGPT, Claude, Perplexity, Gemini)
- ✅ Prepare markdown for blog posts and articles
- ✅ Remove citations before website publishing
- ✅ Process streaming API responses in real-time
- ✅ Batch document cleaning for content pipelines
- ✅ Remove citations from documentation generated by AI tools
- ✅ Prepare content for CMS ingestion
- ✅ Remove annotations from research summaries
📦 Installation
Prerequisites
Minimum Requirements:
- Rust 1.70 or later
- Cargo (comes with Rust)
Optional (for enhanced benchmarking):
- Gnuplot (for benchmark visualization)
  - macOS: `brew install gnuplot`
  - Ubuntu/Debian: `sudo apt-get install gnuplot`
  - Windows: download from http://www.gnuplot.info/
Library Installation
Add to your `Cargo.toml`:

```toml
[dependencies]
mdcr = "0.1"  # crate name assumed from the CLI binary name
```
CLI Installation
Install the command-line tool globally:
```bash
# Install from crates.io (when published)
cargo install mdcr

# Or install from local source
cargo install --path .
```

Verify installation:

```bash
mdcr --version   # assuming the standard --version flag
```
🚀 Quick Start
Quick Reference
| Task | Command |
|---|---|
| Remove citations from stdin | `echo "Text[1]" \| mdcr` |
| Auto-generate output file | `mdcr input.md` |
| Specify output file | `mdcr input.md -o output.md` |
| Verbose output | `mdcr input.md --verbose` |
| Run tests | `cargo test` |
| Run benchmarks | `cargo bench` |
| View docs | `cargo doc --open` |
Library Usage
```rust
use mdcr::remove_citations; // crate path assumed

let markdown = "AI research shows promise[1][2].\n\n[1]: https://example.com\n[2]: https://test.com";
let result = remove_citations(markdown);
// Expected: inline citations and reference links stripped.
assert_eq!(result, "AI research shows promise.");
```
CLI Usage
Basic Examples
1. Process from stdin to stdout (pipe mode):

```bash
echo "Text here[1][2]." | mdcr
# Output: Text here.
```

2. Auto-generate output file (easiest!):

```bash
mdcr ai_response.md
# Creates: ai_response__cite_removed.md
```

3. Specify custom output file:

```bash
mdcr input.md -o output.md
```

4. Verbose output (shows processing details):

```bash
mdcr input.md --verbose
# Output:
# Reading from file: input.md
# Removing citations (input size: 1234 bytes)...
# Citations removed (output size: 1100 bytes)
# Writing to file: input__cite_removed.md
# Done!
```
Advanced CLI Usage
Process multiple files (auto-generated output):

```bash
# Process all markdown files in current directory
for f in *.md; do
  mdcr "$f"
done
# Creates: file1__cite_removed.md, file2__cite_removed.md, etc.
```

Integration with other tools:

```bash
# Remove citations from AI output fetched with curl (URL is a placeholder)
curl -s https://api.example.com/ai-response | mdcr

# Remove citations and preview
cat input.md | mdcr | less

# Remove citations and count words
cat input.md | mdcr | wc -w

# Chain with other markdown processors (pandoc shown as one example)
cat input.md | mdcr | pandoc -f markdown -t html
```
Advanced shell script example:
For more complex workflows, create a custom shell script; a starting-point sketch follows the list below. See the CLI Guide for advanced automation examples including:
- Batch processing with custom naming
- Directory watching and auto-processing
- Git pre-commit hooks
- CI/CD integration
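As a starting point, here is a minimal batch-processing sketch with a custom naming scheme (the `-o` flag is documented above; the directory layout is illustrative):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Clean every markdown file under drafts/ into published/,
# keeping the original filename instead of the __cite_removed suffix.
mkdir -p published
for f in drafts/*.md; do
  mdcr "$f" -o "published/$(basename "$f")"
done
```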
🔧 Features
- ✅ Remove inline numeric citations: `[1][2][3]`
- ✅ Remove named citations: `[source:1][ref:2][cite:3][note:4]`
- ✅ Remove reference link lists: `[1]: https://...`
- ✅ Remove reference section headers: `## References`, `# Citations`, `### Sources`
- ✅ Remove bibliographic entries: `[1] Author (2024). Title...`
- ✅ Preserve markdown formatting (bold, italic, links, lists, etc.)
- ✅ Whitespace normalization
- ✅ Configurable cleaning options
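To make the formatting guarantee concrete, a quick sketch (crate path assumed; the assertions are illustrative):

```rust
use mdcr::remove_citations; // crate path assumed

let input = "**Rust** is [fast](https://www.rust-lang.org)[1].\n\n[1]: https://example.com";
let output = remove_citations(input);

// Bold text and the ordinary markdown link survive; the citation does not.
assert!(output.contains("**Rust**"));
assert!(output.contains("[fast](https://www.rust-lang.org)"));
assert!(!output.contains("[1]"));
```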
📚 Documentation
- FAQ - Frequently asked questions and troubleshooting
- CLI Usage Guide - Complete command-line tool documentation
- Benchmarking Guide - Understanding performance metrics
- Documentation Index - Complete guide to all documentation
- API Documentation - Full API reference
- Examples - Working code examples
🚀 Advanced Usage
Custom Configuration
```rust
use mdcr::{CitationRemover, RemoverConfig}; // crate path assumed

// Remove only inline citations, keep reference sections
let config = RemoverConfig::inline_only();
let cleaner = CitationRemover::with_config(config);
let result = cleaner.remove_citations(markdown);

// Remove only reference sections, keep inline citations
let config = RemoverConfig::references_only();
let cleaner = CitationRemover::with_config(config);
let result = cleaner.remove_citations(markdown);

// Full custom configuration (field list elided in the source; see the API docs)
let config = RemoverConfig {
    // ...set individual options here...
    ..RemoverConfig::default() // assumes RemoverConfig implements Default
};
```
Reusable Cleaner Instance
```rust
use mdcr::CitationRemover; // crate path assumed

let cleaner = CitationRemover::new();

// Reuse the same instance for multiple documents
// (input1..input3 are placeholder documents).
let doc1 = cleaner.remove_citations(input1);
let doc2 = cleaner.remove_citations(input2);
let doc3 = cleaner.remove_citations(input3);
```
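Because the design is stateless and thread-safe (see Performance-First above), a single instance can also be shared across threads. A sketch using scoped threads, assuming `CitationRemover` is `Sync` and `remove_citations` takes `&self`:

```rust
use std::thread;
use mdcr::CitationRemover; // crate path assumed

let cleaner = CitationRemover::new();
let docs = ["First doc[1].", "Second doc[2].", "Third doc[3]."];

// Scoped threads may borrow `cleaner` directly; no Mutex is needed
// because the remover holds no mutable state.
let cleaned: Vec<String> = thread::scope(|s| {
    let handles: Vec<_> = docs
        .iter()
        .map(|&doc| {
            let cleaner = &cleaner;
            s.spawn(move || cleaner.remove_citations(doc))
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
});
```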
🧪 Examples
See the examples/ directory for more:
- `basic_usage.rs` - Simple examples
- `custom_config.rs` - Configuration options

Run examples:

```bash
cargo run --example basic_usage
cargo run --example custom_config
```
🏎️ Performance
Running Benchmarks
```bash
# Run all benchmarks
cargo bench

# Run specific benchmark (substitute a benchmark name)
cargo bench <benchmark_name>

# Save baseline for comparison
cargo bench -- --save-baseline main

# Compare against baseline
cargo bench -- --baseline main

# View results (after running benchmarks; macOS `open`, use xdg-open on Linux)
open target/criterion/report/index.html
```
Note about benchmark output:
- Tests shown as "ignored" during `cargo bench` are normal behavior - regular tests are skipped during benchmarking to avoid interference
- Outliers (3-13% of measurements) are normal due to OS scheduling and CPU frequency scaling
- "Gnuplot not found" warning is harmless - Criterion uses an alternative plotting backend
- With Gnuplot installed: interactive HTML reports with charts are generated in `target/criterion/report/`
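For context, benchmarks like those below are defined with Criterion. A minimal sketch of such a bench file (benchmark names and crate path are illustrative, not the crate's actual bench suite):

```rust
use criterion::{criterion_group, criterion_main, Criterion, Throughput};
use mdcr::remove_citations; // crate path assumed

fn bench_simple_inline(c: &mut Criterion) {
    let input = "AI research shows promise[1][2].";
    let mut group = c.benchmark_group("simple_inline");
    // Report throughput (MiB/s) alongside wall time, as in the table below.
    group.throughput(Throughput::Bytes(input.len() as u64));
    group.bench_function("remove_citations", |b| b.iter(|| remove_citations(input)));
    group.finish();
}

criterion_group!(benches, bench_simple_inline);
criterion_main!(benches);
```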
Performance Characteristics
Typical performance on modern hardware (Apple Silicon M-series):
| Benchmark | Time | Throughput | Notes |
|---|---|---|---|
| Simple inline citations | ~580 ns | 91 MiB/s | Single sentence |
| Complex document | ~2.5 μs | 287 MiB/s | Multiple sections |
| Real ChatGPT output | ~18 μs | 645 MiB/s | 11.8 KB document |
| Real Perplexity output | ~245 μs | 224 MiB/s | 54.9 KB document |
| Batch (5 documents) | ~2.2 μs | 43 MiB/s | Total for all 5 |
| No citations (passthrough) | ~320 ns | 393 MiB/s | Fastest path |
Key Insights:
- Throughput: 100-650 MB/s depending on document complexity
- Latency: Sub-microsecond to ~250 μs for large documents
- Scalability: Linear with document size
- Memory: ~200-300 bytes per operation
🧪 Testing
This library has 100% test coverage with comprehensive edge-case testing.
Running Tests
```bash
# Run all tests (unit + integration + doc tests)
cargo test

# Run with output visible
cargo test -- --nocapture

# Run specific test (substitute a test name)
cargo test <test_name>

# Run only unit tests
cargo test --lib

# Run only integration tests
cargo test --tests

# Run tests with all features enabled
cargo test --all-features
```
Test Coverage
- 58 total tests covering all functionality
- 18 unit tests - Core logic, patterns, configuration
- 36 integration tests - Real-world scenarios, edge cases
- 4 doc tests - Documentation examples
What's tested:
- ✅ All citation formats (numeric, named, reference links)
- ✅ Real AI outputs (ChatGPT, Perplexity)
- ✅ Edge cases (empty strings, no citations, only citations)
- ✅ Unicode and emoji support
- ✅ Large documents (1000+ citations)
- ✅ Configuration variations
- ✅ Markdown preservation (formatting, links, images)
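For a flavor of the suite, a representative integration test for the unicode case might look like this (a sketch assuming the top-level `remove_citations` API; the real suite's assertions may differ):

```rust
use mdcr::remove_citations; // crate path assumed

#[test]
fn preserves_unicode_and_emoji() {
    let input = "Rust is fast 🚀 and safe[1]. Café naïve 日本語[2].";
    let output = remove_citations(input);
    // Citations are stripped; non-ASCII text is left untouched.
    assert!(!output.contains("[1]"));
    assert!(output.contains("🚀"));
    assert!(output.contains("日本語"));
}
```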
Understanding Test Output
When running `cargo bench`, you'll see tests marked as "ignored" - this is normal. Rust automatically skips regular tests during benchmarking to avoid timing interference. All tests pass when running `cargo test`.
🔧 Troubleshooting
Common Issues
Q: Why do tests show as "ignored" when running cargo bench?
A: This is normal Rust behavior. When running benchmarks, regular tests are automatically skipped to avoid interfering with timing measurements. All tests pass when you run cargo test. See BENCHMARKING.md for details.
Q: What does "Gnuplot not found, using plotters backend" mean?
A: This is just an informational message. Criterion (the benchmarking library) can use Gnuplot for visualization, but falls back to an alternative plotting backend if it's not installed. Benchmarks still run correctly. To install Gnuplot:
- macOS: `brew install gnuplot`
- Ubuntu/Debian: `sudo apt-get install gnuplot`
- Windows: download from http://www.gnuplot.info/
Q: Why are there performance outliers in benchmarks?
A: Outliers (typically 3-13% of measurements) are normal due to:
- Operating system scheduling
- CPU frequency scaling
- Background processes
- Cache effects
This is expected and doesn't indicate a problem. Criterion automatically detects and reports outliers.
Q: The CLI tool isn't found after installation
A: Make sure Cargo's bin directory is in your PATH:
```bash
# Add to ~/.bashrc, ~/.zshrc, or equivalent
export PATH="$HOME/.cargo/bin:$PATH"

# Then reload your shell
source ~/.bashrc   # or: source ~/.zshrc
```
Q: How do I know if citations were actually removed?
A: Use the `--verbose` flag to see before/after sizes:

```bash
mdcr input.md --verbose
```
Getting Help
- Issues: Report bugs or request features on GitHub
- Documentation: Run `cargo doc --open` for full API docs
- Examples: Check the `examples/` directory for working code
🤝 Contributing
Built by OpenSite AI for the developer community.
Contributions welcome! Please feel free to submit a Pull Request.
Development Setup
```bash
# Clone the repository (URL elided in the source)
git clone <repository-url>
cd <repository>

# Run tests
cargo test

# Run benchmarks
cargo bench

# Build documentation
cargo doc --open

# Format code
cargo fmt

# Run linter
cargo clippy
```
📄 License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.