# OxiRS CLI

Command-line interface for OxiRS semantic web operations

**Status**: Alpha Release (v0.1.0-alpha.2) - Released October 4, 2025

⚠️ **Alpha Software**: This is an early alpha release. APIs may change without notice. Not recommended for production use.
## Overview

`oxirs` is the unified command-line tool for the OxiRS ecosystem, providing comprehensive functionality for RDF data management, SPARQL operations, server administration, and development workflows. It is designed to be the Swiss Army knife for semantic web developers and data engineers.
## Features
- Data Operations: Import, export, validate, and transform RDF data
- Query Execution: Run SPARQL queries against local and remote endpoints
- Server Management: Start, stop, and configure OxiRS servers
- Development Tools: Schema validation, query optimization, and debugging
- Benchmarking: Performance testing and dataset generation
- Migration Tools: Convert between RDF formats and upgrade datasets
- Configuration Management: Manage server and client configurations
- Interactive Mode: REPL for exploratory data analysis
## Installation

### From Crates.io

### From Source
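A minimal install sketch, assuming the crate is published on crates.io under the name `oxirs` (the source repository URL is not shown here, so it appears as a placeholder):

```shell
# From crates.io (crate name assumed to be `oxirs`)
cargo install oxirs

# From source (substitute the actual repository URL)
# git clone <repository-url>
# cd oxirs
# cargo build --release
```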
## Quick Start

### Dataset Name Rules

Dataset names must follow these rules:
- Only alphanumeric characters, underscores (_), and hyphens (-) are allowed
- No dots (`.`), slashes (`/`), or file extensions (e.g., `.oxirs`)
- Maximum length: 255 characters
- Cannot be empty
✅ Valid: `mydata`, `my_dataset`, `test-data-2024`

❌ Invalid: `dataset.oxirs`, `my/data`, `data.ttl`
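The rules above can be checked with a small shell function before calling the CLI. This helper is illustrative and not part of `oxirs` itself:

```shell
# Returns success (0) when the name satisfies the dataset-name rules:
# only alphanumerics, underscores, and hyphens; non-empty; at most 255 chars.
is_valid_dataset_name() {
  [[ "$1" =~ ^[A-Za-z0-9_-]+$ ]] && (( ${#1} <= 255 ))
}

is_valid_dataset_name "my_dataset" && echo "ok"           # prints "ok"
is_valid_dataset_name "dataset.oxirs" || echo "rejected"  # prints "rejected"
```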
### Basic Usage

```bash
# Initialize a new dataset
# Import RDF data into the dataset
# Query the data
# Start a SPARQL server
# Export to different format
```
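As a sketch of that flow: `oxirs init` and the positional `oxirs import mydata file.ttl` syntax are documented in this README, while the query, serve, and export invocations below are assumptions that may differ in this alpha:

```shell
oxirs init mydata                # initialize a new dataset
oxirs import mydata data.ttl     # import RDF data into the dataset

# Query the data (query-string syntax assumed)
oxirs query mydata "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"

# Start a SPARQL server (subcommand and flag assumed)
oxirs serve mydata --port 3030

# Export to a different format (subcommand assumed)
oxirs export mydata output.nt
```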
### Interactive Mode

```bash
# Start interactive REPL
oxirs interactive
```
## Commands

### Data Management

#### Import Data

```bash
# Initialize dataset first
# Import single file (dataset name must be alphanumeric, _, - only)
# Import with named graph
# Import N-Triples
# Import RDF/XML
# Import JSON-LD
```
#### Export Data

```bash
# Export entire dataset
# Export specific graph
# Export to N-Triples
# Export to RDF/XML
```
#### Validate Data

```bash
# Validate RDF syntax
# SHACL validation
# ShEx validation
```
### Query Operations

#### Execute Queries

```bash
# Run SPARQL query
# Run query from file
# Output formats: table, json, csv, tsv
# Advanced query with arq tool
```
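A sketch of query invocations: the `query` subcommand exists in this alpha, but the file and output-format flags shown are assumptions:

```shell
# Run an inline SPARQL query
oxirs query mydata "SELECT ?s WHERE { ?s ?p ?o } LIMIT 5"

# Run a query from a file (flag assumed)
oxirs query mydata --file queries/list-subjects.rq

# Choose an output format from table, json, csv, tsv (flag assumed)
oxirs query mydata "ASK { ?s ?p ?o }" --format json
```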
#### Query Analysis

```bash
# Parse and validate SPARQL query
# Show query algebra
# Parse SPARQL update
```
### Server Management

#### Start Server

```bash
# Start SPARQL server with configuration file
# With GraphQL support enabled
# Specify host and port
```
#### Server Administration

```bash
# Check server status
# Upload data to running server
# Backup dataset
# View server metrics
```
### Development Tools

#### Schema Operations

```bash
# Generate schema from data
# Validate against schema
# Compare schemas
# Convert schema formats
```
#### Optimization

```bash
# Optimize dataset
# Analyze dataset statistics
# Generate indices
# Compress dataset
```
### Benchmarking

#### Dataset Generation

```bash
# Generate test dataset
# Generate synthetic data
# Generate benchmark queries
```
#### Performance Testing

```bash
# Run benchmarks
# Compare performance
# Stress testing
```
### Migration and Conversion

#### Format Conversion

```bash
# Convert between RDF formats
# Batch conversion
# Streaming conversion for large files
```
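The batch pattern can be scripted with an ordinary shell loop. Here `convert_one` is a hypothetical stand-in for the actual `oxirs` conversion command, whose exact syntax is not documented in this alpha:

```shell
# Hypothetical converter: replace the echo with the real oxirs invocation.
convert_one() {
  local src="$1"
  local dst="${src%.ttl}.nt"          # swap the extension: file.ttl -> file.nt
  echo "would convert $src -> $dst"   # e.g. some oxirs conversion command
}

# Batch conversion: apply the converter to each Turtle file
for f in data1.ttl data2.ttl; do
  convert_one "$f"
done
```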
#### Data Migration

```bash
# Migrate from older OxiRS version
# Migrate from other triple stores
# Migrate with transformation
```
### Configuration

#### Configuration Management

```bash
# Initialize configuration
# Validate configuration
# Show current configuration
# Set configuration values
```
#### Environment Setup

```bash
# Setup development environment
# Install dependencies
# Setup CI/CD templates
```
## Configuration

### Dataset Configuration

When you run `oxirs init mydata`, it creates a configuration file at `mydata/oxirs.toml`:
```toml
# OxiRS Configuration
# Generated by oxirs init
# (key names below are reconstructed from the field list in this README)

[general]
default_format = "turtle"

[server]
port = 3030
host = "localhost"
enable_graphql = true
# one further server flag, `= false` in the original, whose key was lost

[datasets.mydata]
name = "mydata"
location = "."
dataset_type = "tdb2"
read_only = false
enable_reasoning = false
enable_validation = false
enable_text_search = false
enable_vector_search = false
```
### Configuration Fields

- `general.default_format`: Default RDF serialization format
- `server.port`: HTTP server port
- `server.host`: Server bind address
- `server.enable_graphql`: Enable GraphQL endpoint
- `datasets.{name}.location`: Storage path (`.` means the dataset directory itself)
- `datasets.{name}.dataset_type`: Storage backend (`tdb2` or `memory`)
- `datasets.{name}.read_only`: Prevent modifications
- Feature flags: `enable_reasoning`, `enable_validation`, `enable_text_search`, `enable_vector_search`
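For example, a second, in-memory dataset with text search enabled could be declared by adding another `[datasets.{name}]` section (illustrative; field names follow the list above):

```toml
[datasets.scratch]
location = "."
dataset_type = "memory"    # no on-disk storage
read_only = false
enable_text_search = true
```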
### Command-specific Configuration

```bash
# Use specific profile
# Override global settings
# Use configuration file
```
## Advanced Features

### Scripting and Automation

```bash
# Bash completion
# Pipeline operations
# Batch processing
```
### Custom Extensions

```bash
# Install plugin
# List plugins
# Run plugin command
```
### Integration with Other Tools

```bash
# Integration with Git
# Integration with Apache Jena
# Integration with RDFLib
```
## Examples

### Data Processing Pipeline

```bash
#!/bin/bash
# data-pipeline.sh

# Download and import multiple datasets
# Merge datasets
# Validate merged data
# Generate optimized indices
# Start production server
```
### Development Workflow

```bash
# Create new project
# Import development data
# Start development server with hot reload
# Run tests
# Deploy to staging
```
## Performance

### Benchmarks
| Operation | Dataset Size | Time | Memory |
|---|---|---|---|
| Import (Turtle) | 1M triples | 15s | 120MB |
| Export (JSON-LD) | 1M triples | 12s | 85MB |
| Query (simple) | 10M triples | 50ms | 45MB |
| Query (complex) | 10M triples | 300ms | 180MB |
| Server startup | 10M triples | 2s | 200MB |
### Optimization Tips

```bash
# Use streaming for large files
# Enable parallel processing
# Use binary format for faster loading
# Compress datasets
```
## Troubleshooting

### Common Issues

```bash
# Debug mode
# Verbose output
# Check dataset integrity
# Memory profiling
```
### Error Recovery

```bash
# Recover corrupted dataset
# Validate and repair
# Restore from backup
```
Related Tools
oxirs-fuseki: SPARQL HTTP serveroxirs-chat: AI-powered chat interfaceoxirs-workbench: Visual RDF editor- Apache Jena: Java-based semantic web toolkit
- RDFLib: Python RDF processing library
## Contributing

1. Fork the repository
2. Create a feature branch
3. Add tests for new commands
4. Update documentation
5. Submit a pull request
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT License (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
## Status

🚧 **Alpha Release (v0.1.0-alpha.2)** - October 1, 2025

Current alpha features:
- ✅ Dataset initialization and management
- ✅ Persistent RDF storage (N-Quads format, auto-save on import)
- ✅ Data import/export (Turtle, N-Triples, RDF/XML, JSON-LD, N-Quads, TriG)
- ✅ SPARQL query execution (SELECT, ASK, CONSTRUCT, DESCRIBE)
- ✅ Triple pattern matching with variable binding
- ✅ Automatic data persistence (save/load from disk)
- ✅ Server management with configuration
- ✅ Interactive REPL mode
- ✅ RDF validation tools (syntax, SHACL, ShEx)
- 🚧 PREFIX support (planned for next release)
- 🚧 Advanced SPARQL features (FILTER, OPTIONAL, UNION)
- 🚧 Benchmarking tools (in progress)
- 🚧 TDB storage tools (in progress)
- ⏳ Migration utilities (planned)
### Changes in v0.1.0-alpha.2

- Persistent storage: Data automatically saved to `<dataset>/data.nq` on import
- SPARQL queries: Basic SELECT, ASK, CONSTRUCT, DESCRIBE queries now working
- Dataset naming: Dataset names must be alphanumeric with `_` and `-` only (no file extensions)
- Command syntax: Simplified positional arguments (e.g., `oxirs import mydata file.ttl`)
- Configuration format: Unified TOML format with `[datasets.{name}]` sections
- Interactive mode: Added `oxirs interactive` command for REPL
- Auto-load: Query command automatically loads data from disk
### Working Example

```bash
# Initialize and use a dataset
# Data persists! Close terminal, reopen, and query again
```
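Concretely, the persistence round-trip might look like this. `oxirs init`, `oxirs import`, and the `data.nq` save location are documented in this README; the query syntax shown is an assumption:

```shell
# Initialize and use a dataset
oxirs init mydata
oxirs import mydata data.ttl   # triples are saved to mydata/data.nq

# Data persists! Close the terminal, reopen, and query again
oxirs query mydata "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"
```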
Note: This is an alpha release. Some features are incomplete and APIs may change.