OxiRS CLI
Command-line interface for OxiRS semantic web operations
Status: v0.2.2 - Released March 16, 2026
⚡ Production-Ready: APIs are stable and tested, with comprehensive documentation.
Overview
oxirs is the unified command-line tool for the OxiRS ecosystem, providing comprehensive functionality for RDF data management, SPARQL operations, server administration, and development workflows. It's designed to be the Swiss Army knife for semantic web developers and data engineers working with knowledge graphs and semantic data.
What's New in v0.1.0 (January 7, 2026)
- API Stability: All CLI commands and flags are now stable with semantic versioning guarantees
- Enhanced Documentation: Comprehensive help text, examples, and error messages for all commands
- Production Hardening: Improved error handling, logging, and resource management
- Performance Improvements: Faster query execution, import/export operations, and batch processing
- Better User Experience: Enhanced progress indicators, colored output, and interactive prompts
- Security Enhancements: Input validation, secure credential handling, and audit logging
Features
- Data Operations: Import, export, validate, and transform RDF data
- Query Execution: Run SPARQL queries against local and remote endpoints
- Server Management: Start, stop, and configure OxiRS servers
- Development Tools: Schema validation, query optimization, and debugging
- Benchmarking: Performance testing and dataset generation
- Migration Tools: Convert between RDF formats and upgrade datasets
- Configuration Management: Manage server and client configurations
- Interactive Mode: REPL for exploratory data analysis
Installation
From Crates.io
```bash
# Install the latest release
cargo install oxirs

# Or install with all optional features
cargo install oxirs --all-features
```
From Source
```bash
# Build from a source checkout
cargo build --release

# Or with all features
cargo build --release --all-features
```
Shell Completion
Generate shell completion scripts for your shell:
# Bash
# Zsh
# Fish
# PowerShell
Quick Start
Dataset Name Rules
Dataset names must follow these rules:
- Only alphanumeric characters, underscores (_), and hyphens (-) are allowed
- No dots (.), slashes (/), or file extensions (e.g., .oxirs)
- Maximum length: 255 characters
- Cannot be empty
✅ Valid: mydata, my_dataset, test-data-2024
❌ Invalid: dataset.oxirs, my/data, data.ttl
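These rules are easy to pre-check in scripts before invoking the CLI. A minimal shell sketch (the validation oxirs itself performs may be stricter):

```shell
# Return 0 if the name satisfies the documented dataset-name rules:
# non-empty, at most 255 characters, only [A-Za-z0-9_-].
valid_dataset_name() {
  local name="$1"
  [[ -n "$name" ]] || return 1
  (( ${#name} <= 255 )) || return 1
  [[ "$name" =~ ^[A-Za-z0-9_-]+$ ]] || return 1
}

valid_dataset_name "my_dataset" && echo "my_dataset: ok"
valid_dataset_name "dataset.oxirs" || echo "dataset.oxirs: rejected"
```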
Basic Usage
# Initialize a new dataset
# Import RDF data into the dataset
# Query the data
# Start a SPARQL server
# Export to a different format
Interactive Mode
# Start interactive REPL
Commands
Data Management
Import Data
# Initialize dataset first
# Import single file (dataset name must be alphanumeric, _, - only)
# Import with named graph
# Import N-Triples
# Import RDF/XML
# Import JSON-LD
Export Data
# Export entire dataset
# Export specific graph
# Export to N-Triples
# Export to RDF/XML
Validate Data
# Validate RDF syntax
# SHACL validation
# ShEx validation
Query Operations
Execute Queries
# Run SPARQL query
# Run query from file
# Output formats: table, json, csv, tsv
# Advanced query with arq tool
Query Analysis
# Parse and validate SPARQL query
# Show query algebra
# Parse SPARQL update
# Query optimization analysis (PostgreSQL EXPLAIN-style)
# Shows: query structure, complexity score, optimization hints
Query Templates
# List all available query templates
# Filter by category
# Show template details
# Render query from template with parameters
# Available templates:
# Basic: select-all, select-by-type, select-with-filter, ask-exists
# Advanced: construct-graph
# Aggregation: count-instances, group-by-count
# PropertyPaths: transitive-closure
# Federation: federated-query
# Analytics: statistics-summary
Query History
# List recent queries (automatically tracked)
# Show full query details
# Re-execute a previous query
# Search query history
# View history statistics
# Clear history
# History tracks: dataset, query text, execution time,
# result count, success/failure, timestamps
# Stored in: ~/.local/share/oxirs/query_history.json
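Because the history is plain JSON, it can also be inspected with standard tools. A sketch against a fabricated file (the exact JSON key names here are an assumption; the documented contents are dataset, query text, execution time, result count, success/failure, and timestamps):

```shell
# Hypothetical history entries; the real file lives at
# ~/.local/share/oxirs/query_history.json and its keys may differ.
hist=$(mktemp)
cat > "$hist" <<'EOF'
[
  {"dataset": "mydata", "query": "SELECT * WHERE { ?s ?p ?o }", "elapsed_ms": 12, "results": 100, "success": true},
  {"dataset": "test", "query": "ASK { ?s a ?t }", "elapsed_ms": 3, "results": 1, "success": false}
]
EOF
# Count history entries for one dataset without the CLI.
matches=$(grep -c '"dataset": "mydata"' "$hist")
echo "mydata queries: $matches"
```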
Server Management
Start Server
# Start SPARQL server with configuration file
# With GraphQL support enabled
# Specify host and port
Server Administration
# Check server status
# Upload data to running server
# Backup dataset
# View server metrics
Development Tools
Schema Operations
# Generate schema from data
# Validate against schema
# Compare schemas
# Convert schema formats
Optimization
# Optimize dataset
# Analyze dataset statistics
# Generate indices
# Compress dataset
Benchmarking
Dataset Generation
# Generate test dataset
# Generate synthetic data
# Generate benchmark queries
Performance Testing
# Run benchmarks
# Compare performance
# Stress testing
Migration and Conversion
Format Conversion
# Convert between RDF formats
# Batch conversion
# Streaming conversion for large files
Data Migration
# Migrate from older OxiRS version
# Migrate from other triple stores
# Migrate with transformation
Configuration
Configuration Management
# Initialize configuration
# Validate configuration
# Show current configuration
# Set configuration values
Environment Setup
# Setup development environment
# Install dependencies
# Setup CI/CD templates
Configuration
Dataset Configuration
When you run oxirs init mydata, it creates a configuration file at mydata/oxirs.toml:
```toml
# OxiRS Configuration
# Generated by oxirs init

[general]
default_format = "turtle"

[server]
port = 3030
host = "localhost"
enable_graphql = true

[datasets.mydata]
name = "mydata"
location = "."
dataset_type = "tdb2"
read_only = false
enable_reasoning = false
enable_validation = false
enable_text_search = false
enable_vector_search = false
```
Configuration Fields
- `general.default_format`: Default RDF serialization format
- `server.port`: HTTP server port
- `server.host`: Server bind address
- `server.enable_graphql`: Enable the GraphQL endpoint
- `datasets.{name}.location`: Storage path (`.` means the dataset directory itself)
- `datasets.{name}.dataset_type`: Storage backend (`tdb2` or `memory`)
- `datasets.{name}.read_only`: Prevent modifications
- Feature flags: `enable_reasoning`, `enable_validation`, `enable_text_search`, `enable_vector_search`
Command-specific Configuration
# Use specific profile
# Override global settings
# Use configuration file
Advanced Features
Scripting and Automation
# Bash completion
# Pipeline operations
# Batch processing
Custom Extensions
# Install plugin
# List plugins
# Run plugin command
Integration with Other Tools
# Integration with Git
# Integration with Apache Jena
# Integration with RDFLib
Examples
Data Processing Pipeline
#!/bin/bash
# data-pipeline.sh
# Download and import multiple datasets
# Merge datasets
# Validate merged data
# Generate optimized indices
# Start production server
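The pipeline steps above depend on deployment-specific oxirs invocations, so only the script skeleton is sketched here; the placeholder comments mark where the real commands go:

```shell
#!/usr/bin/env bash
# Abort on errors, unset variables, and failures inside pipelines.
set -euo pipefail

step() { echo "==> $1"; }

step "download"   # fetch source datasets, e.g. with curl
step "import"     # load them, e.g. oxirs batch import --dataset merged --files *.nt --parallel 8
step "validate"   # syntax and SHACL checks on the merged data
step "serve"      # start the production server last
```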
Development Workflow
# Create new project
# Import development data
# Start development server with hot reload
# Run tests
# Deploy to staging
Performance
Benchmarks
| Operation | Dataset Size | Time | Memory |
|---|---|---|---|
| Import (Turtle) | 1M triples | 15s | 120MB |
| Export (JSON-LD) | 1M triples | 12s | 85MB |
| Query (simple) | 10M triples | 50ms | 45MB |
| Query (complex) | 10M triples | 300ms | 180MB |
| Server startup | 10M triples | 2s | 200MB |
Optimization Tips
# Use streaming for large files
# Enable parallel processing
# Use binary format for faster loading
# Compress datasets
Troubleshooting
Common Issues
# Debug mode
# Verbose output
# Check dataset integrity
# Memory profiling
Error Recovery
# Recover corrupted dataset
# Validate and repair
# Restore from backup
Related Tools
- oxirs-fuseki: SPARQL HTTP server
- oxirs-chat: AI-powered chat interface
- oxirs-workbench: Visual RDF editor
- Apache Jena: Java-based semantic web toolkit
- RDFLib: Python RDF processing library
Contributing
- Fork the repository
- Create a feature branch
- Add tests for new commands
- Update documentation
- Submit a pull request
License
Licensed under:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
Best Practices
Command Cheat Sheet
# Quick reference for common tasks
# Data Operations
# Server Operations
# Format Conversion
# Validation
# Analysis
Performance Tips
For Large Datasets (>1M triples):
- Use batch import with parallel processing: `oxirs batch import --dataset mydata --files *.nt --parallel 8`
- Use the TDB loader for bulk loading: `oxirs tdbloader mydata *.nt --progress --stats`
- Stream large exports: `oxirs export mydata output.nq --format nquads | gzip > output.nq.gz`
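The gzip streaming pattern can be exercised independently of oxirs; a self-contained round-trip sketch with fake data:

```shell
# Stream 1,000 fake N-Quads lines through gzip and back,
# using the same pipeline shape as the export example above.
tmp=$(mktemp)
printf '<s> <p> <o> .\n%.0s' {1..1000} > "$tmp"
gzip -c "$tmp" > "$tmp.gz"
lines=$(gunzip -c "$tmp.gz" | wc -l | tr -d ' ')
echo "round-tripped $lines lines"
```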
For Query Performance:
- Analyze queries before execution: `oxirs explain mydata query.sparql --file --mode full`
- Use the appropriate output format (JSON for programmatic use, table for humans)
- Enable query caching for repeated queries
Troubleshooting
| Issue | Solution |
|---|---|
| "Dataset not found" | Run oxirs init <name> first to create the dataset |
| "Format not recognized" | Specify format explicitly with --format flag |
| "Permission denied" | Check directory permissions with chmod 755 <dir> |
| "Port already in use" | Use different port with --port <num> |
| "Out of memory" | Use streaming operations or increase batch size |
| "Invalid SPARQL syntax" | Use oxirs qparse to validate query syntax |
Debug Mode:
```bash
# Enable verbose logging
RUST_LOG=debug oxirs <command>

# Debug specific modules
RUST_LOG=oxirs_core=debug,oxirs_arq=trace oxirs <command>
```
OxiRS CLI v0.2.2 - Production-ready command-line interface for semantic web operations