xsv is a command line program for indexing, slicing, analyzing, splitting and joining CSV files. Commands should be simple, fast and composable:
- Simple tasks should be easy.
- Performance trade-offs should be exposed in the CLI interface.
- Composition should not come at the expense of performance.
This README contains information on how to install `xsv`, in addition to a quick tour of several commands.
Available commands
- cat - Concatenate CSV files by row or by column.
- count - Count the rows in a CSV file. (Instantaneous with an index.)
- fixlengths - Force a CSV file to have same-length records by either padding or truncating them.
- flatten - A flattened view of CSV records. Useful for viewing one record at a time, e.g., `xsv slice -i 5 data.csv | xsv flatten`.
- fmt - Reformat CSV data with different delimiters or record terminators. (Supports ASCII delimited data.)
- frequency - Build frequency tables of each column in CSV data. (Uses parallelism to go faster if an index is present.)
- headers - Show the headers of CSV data. Or show the intersection of all headers between many CSV files.
- index - Create an index for a CSV file. This is very quick and provides constant time indexing into the CSV file.
- join - Inner, outer and cross joins. Uses a simple hash index to make it fast.
- sample - Randomly draw rows from CSV data using reservoir sampling (i.e., use memory proportional to the size of the sample).
- search - Run a regex over CSV data. Applies the regex to each field individually and shows only matching rows.
- select - Select or re-order columns from CSV data.
- slice - Slice rows from any part of a CSV file. When an index is present, this only has to parse the rows in the slice (instead of all rows leading up to the start of the slice).
- sort - Sort CSV data.
- split - Split one CSV file into many CSV files of N chunks.
- stats - Show basic types and statistics of each column in the CSV file (e.g., mean, standard deviation, median, range, etc.).
- table - Show aligned output of any CSV data using elastic tabstops.
A whirlwind tour
Let's say you're playing with some of the data from the Data Science Toolkit, which contains several CSV files. Maybe you're interested in the population counts of each city in the world. So grab the data and start examining it:
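(A sketch of a first look; the file name `worldcitiespop.csv` and the column names used throughout this tour are assumptions about the downloaded data.)

```
# List the column names, then count the rows.
$ xsv headers worldcitiespop.csv
$ xsv count worldcitiespop.csv
```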
The next thing you might want to do is get an overview of the kind of data that appears in each column. The `stats` command will do this for you:
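(The exact invocation is a sketch; the `--everything` flag, which asks `stats` to also compute more expensive statistics such as the median, is an assumption.)

```
$ xsv stats worldcitiespop.csv --everything | xsv table
```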
The `xsv table` command takes any CSV data and formats it into aligned columns using elastic tabstops. You'll notice that it even gets alignment right with respect to Unicode characters.
So, this command takes about 12 seconds to run on my machine, but we can speed it up by creating an index and re-running the command:
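(Again a sketch, re-using the assumed invocation from above.)

```
# Build the index once; subsequent commands pick it up automatically.
$ xsv index worldcitiespop.csv
$ xsv stats worldcitiespop.csv --everything | xsv table
```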
Which cuts it down to about 8 seconds on my machine. (And creating the index takes less than 2 seconds.)
Notably, the same type of "statistics" command in another CSV command line toolkit takes about 2 minutes to produce similar statistics on the same data set.
Creating an index gives us more than just faster statistics gathering. It also makes slice operations extremely fast because only the sliced portion has to be parsed. For example, let's say you wanted to grab the last 10 records:
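(A sketch: count the rows first, then start the slice ten rows before the end. The `-s`/`--start` option is assumed to take the zero-based index of the first record in the slice.)

```
$ N=$(xsv count worldcitiespop.csv)
$ xsv slice worldcitiespop.csv -s $((N - 10)) | xsv table
```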
These commands are instantaneous because they run in time and memory proportional to the size of the slice (which means they will scale to arbitrarily large CSV data).
Switching gears a little bit, you might not always want to see every column in the CSV data. In this case, maybe we only care about the country, city and population. So let's take a look at 10 random rows:
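(Sketch; `Country`, `City` and `Population` are assumed to be the exact column names.)

```
$ xsv select Country,City,Population worldcitiespop.csv \
    | xsv sample 10 \
    | xsv table
```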
Whoops! It seems some cities don't have population counts. How pervasive is that?
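(A sketch of one way to find out.)

```
$ xsv frequency worldcitiespop.csv | xsv table
```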
(The `xsv frequency` command builds a frequency table for each column in the CSV data. This one only took 5 seconds.)
So it seems that most cities do not have a population count associated with them at all. No matter---we can adjust our previous command so that it only shows rows with a population count:
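(Sketch: the `-s`/`--select` option restricting the search to the `Population` column is an assumption, as is saving the result to an intermediate file named `sample.csv` so it can be re-used below.)

```
$ xsv search -s Population '[0-9]' worldcitiespop.csv \
    | xsv select Country,City,Population \
    | xsv sample 10 \
    > sample.csv
$ xsv table sample.csv
```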
Erk. Which countries do those abbreviations refer to? No clue, but the Data Science Toolkit has a CSV file called `countrynames.csv`. Let's grab it and do a join so we can see which countries these are:
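(Sketch: `countrynames.csv` is assumed to map an `Abbrev` column to a full `Country` name, and `--no-case` is assumed to make the abbreviation match case-insensitive.)

```
$ xsv headers countrynames.csv
$ xsv join --no-case Country sample.csv Abbrev countrynames.csv | xsv table
```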
Whoops, now we have two columns called `Country` and an `Abbrev` column that we no longer need. This is easy to fix by re-ordering columns with the `xsv select` command:
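(Sketch; `Country[1]` selects the second `Country` column, i.e. the expanded name that came from `countrynames.csv`.)

```
$ xsv join --no-case Country sample.csv Abbrev countrynames.csv \
    | xsv select 'Country[1],City,Population' \
    | xsv table
```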
Perhaps we can do this with the original CSV data? Indeed we can---because joins in `xsv` are fast.
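(Sketch: `countrynames.csv` is given as the first input so that the expanded country name comes first in the output, and the result is written to an assumed file name, `worldcitiespop_countrynames.csv`.)

```
$ xsv join --no-case Abbrev countrynames.csv Country worldcitiespop.csv \
    | xsv select '!Abbrev,Country[1]' \
    > worldcitiespop_countrynames.csv
$ xsv sample 10 worldcitiespop_countrynames.csv | xsv table
```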
The `!Abbrev,Country[1]` syntax means, "remove the `Abbrev` column and remove the second occurrence of the `Country` column." Since we joined with `countrynames.csv` first, the first `Country` name (fully expanded) is now included in the CSV data.
This `xsv join` command takes about 7 seconds on my machine. The performance comes from constructing a very simple hash index of one of the CSV data files given. The `join` command does an inner join by default, but it also supports left, right and full outer joins.
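(For example, a left outer join that keeps every row from the first input might look something like this; `--left` is assumed to be the relevant flag.)

```
$ xsv join --left Country sample.csv Abbrev countrynames.csv | xsv table
```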
Installation
Installing `xsv` is a bit hokey right now. Ideally, I could release binaries for Linux, Mac and Windows. Currently, I'm only able to release binaries for Linux because I don't know how to cross-compile Rust programs.
With that said, you can grab the latest release (a Linux x86_64 binary) from the GitHub releases page.
Alternatively, you can compile from source by installing Cargo (Rust's package manager) and building `xsv`:
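(Assuming the source lives at github.com/BurntSushi/xsv:)

```
$ git clone https://github.com/BurntSushi/xsv
$ cd xsv
$ cargo build --release
```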
Compilation will probably take 1-2 minutes depending on your machine. The binary will end up in `./target/release/xsv`.
Benchmarks
I've compiled some very rough benchmarks of various `xsv` commands.
Motivation
Here are several valid criticisms of this project:
- You shouldn't be working with CSV data because CSV is a terrible format.
- If your data is gigabytes in size, then CSV is the wrong storage type.
- Various SQL databases provide all of the operations available in `xsv` with more sophisticated indexing support. And the performance is a zillion times better.
I'm sure there are more criticisms, but the impetus for this project was a 40GB CSV file that was handed to me. I was tasked with figuring out the shape of the data inside of it and coming up with a way to integrate it into our existing system. It was then that I realized that every single CSV tool I knew about was woefully inadequate. They were just too slow or didn't provide enough flexibility. (Another project I had involved a few dozen CSV files. They were smaller than 40GB, but they were each supposed to represent the same kind of data, yet they all had different and unintuitive column names. Useful CSV inspection tools were critical here---and they had to be reasonably fast.)
The key ingredients for helping me with my task were indexing, random sampling, searching, slicing and selecting columns. All of these things made dealing with 40GB of CSV data (or dozens of CSV files) a bit more manageable.
Getting handed a large CSV file once was enough to launch me on this quest. From conversations I've had with others, CSV data files this large don't seem to be a rare event. Therefore, I believe there is room for a tool that has a hope of dealing with data that large.