Struct csv::Reader
pub struct Reader<R> { // some fields omitted }
A CSV reader.
This reader parses CSV data and exposes records via iterators.
Example
This example shows how to do type-based decoding for each record in the CSV data.
```rust
let data = "
sticker,mortals,7
bribed,personae,7
wobbling,poncing,4
interposed,emmett,9
chocolate,refile,7";
let mut rdr = csv::Reader::from_string(data).has_headers(false);
for row in rdr.decode() {
    let (n1, n2, dist): (String, String, u32) = row.unwrap();
    println!("{}, {}: {}", n1, n2, dist);
}
```
Here's another example that parses tab-delimited values with records of varying length:
```rust
let data = "
sticker\tmortals\t7
bribed\tpersonae\t7
wobbling
interposed\temmett\t9
chocolate\trefile\t7";
let mut rdr = csv::Reader::from_string(data)
    .has_headers(false)
    .delimiter(b'\t')
    .flexible(true);
for row in rdr.records() {
    let row = row.unwrap();
    println!("{:?}", row);
}
```
Methods
impl<R: Read> Reader<R>
fn from_reader(rdr: R) -> Reader<R>
Creates a new CSV reader from an arbitrary io::Read.
The reader is buffered for you automatically.
impl Reader<File>
fn from_file<P: AsRef<Path>>(path: P) -> Result<Reader<File>>
Creates a new CSV reader for the data at the file path given.
impl Reader<Cursor<Vec<u8>>>
fn from_string<'a, S>(s: S) -> Reader<Cursor<Vec<u8>>> where S: Into<String>
Creates a CSV reader for an in-memory string buffer.
fn from_bytes<'a, V>(bytes: V) -> Reader<Cursor<Vec<u8>>> where V: Into<Vec<u8>>
Creates a CSV reader for an in-memory buffer of bytes.
impl<R: Read> Reader<R>
fn decode<'a, D: Decodable>(&'a mut self) -> DecodedRecords<'a, R, D>
Uses type-based decoding to read a single record from CSV data.
The type that is being decoded into should correspond to one full CSV record. A tuple, struct or Vec fits this category. A tuple, struct or Vec should consist of primitive types like integers, floats, characters and strings, which map to single fields. If a field cannot be decoded into the type requested, an error is returned.

Enums are also supported in a limited way. Namely, each variant must have exactly one parameter. Each variant decodes based on its constituent type, and variants are tried in the order that they appear in their enum definition. See below for examples.
Examples
This example shows how to decode records into a struct. (Note that currently, the names of the struct members are irrelevant.)
```rust
extern crate rustc_serialize;

#[derive(RustcDecodable)]
struct Pair {
    name1: String,
    name2: String,
    dist: u32,
}

let mut rdr = csv::Reader::from_string("foo,bar,1\nfoo,baz,2")
                          .has_headers(false);
// Instantiating a specific type when decoding is usually necessary.
let rows = rdr.decode().collect::<csv::Result<Vec<Pair>>>().unwrap();

assert_eq!(rows[0].dist, 1);
assert_eq!(rows[1].dist, 2);
```
We can get a little crazier with custom enum types or Option types. An Option type in particular is useful when a column doesn't contain valid data in every record (whether it be empty or malformed).
```rust
extern crate rustc_serialize;

#[derive(RustcDecodable, PartialEq, Debug)]
struct MyUint(u32);

#[derive(RustcDecodable, PartialEq, Debug)]
enum Number { Integer(i64), Float(f64) }

#[derive(RustcDecodable)]
struct Row {
    name1: String,
    name2: String,
    dist: Option<MyUint>,
    dist2: Number,
}

let mut rdr = csv::Reader::from_string("foo,bar,1,1\nfoo,baz,,1.5")
                          .has_headers(false);
let rows = rdr.decode().collect::<csv::Result<Vec<Row>>>().unwrap();

assert_eq!(rows[0].dist, Some(MyUint(1)));
assert_eq!(rows[1].dist, None);
assert_eq!(rows[0].dist2, Number::Integer(1));
assert_eq!(rows[1].dist2, Number::Float(1.5));
```
Finally, as a special case, a tuple/struct/Vec can be used as the "tail" of another tuple/struct/Vec to capture all remaining fields:
```rust
extern crate rustc_serialize;

#[derive(RustcDecodable)]
struct Pair {
    name1: String,
    name2: String,
    attrs: Vec<u32>,
}

let mut rdr = csv::Reader::from_string("a,b,1,2,3,4\ny,z,5,6,7,8")
                          .has_headers(false);
let rows = rdr.decode().collect::<csv::Result<Vec<Pair>>>().unwrap();

assert_eq!(rows[0].attrs, vec![1,2,3,4]);
assert_eq!(rows[1].attrs, vec![5,6,7,8]);
```
If a tuple/struct/Vec appears anywhere other than the "tail" of a record, then the behavior is undefined. (You'll likely get a runtime error. I believe this is a limitation of the current decoding machinery in the serialize crate.)
fn records<'a>(&'a mut self) -> StringRecords<'a, R>
Returns an iterator of records in the CSV data where each field is a String.
Example
This is your standard CSV interface with no type decoding magic.
```rust
let data = "
sticker,mortals,7
bribed,personae,7
wobbling,poncing,4
interposed,emmett,9
chocolate,refile,7";
let mut rdr = csv::Reader::from_string(data).has_headers(false);
for row in rdr.records() {
    let row = row.unwrap();
    println!("{:?}", row);
}
```
fn headers(&mut self) -> Result<Vec<String>>
Returns a copy of the first record in the CSV data as strings.
This method may be called at any time, regardless of whether has_headers is set.
Example
```rust
let mut rdr = csv::Reader::from_string("a,b,c\n1,2,3");
let headers1 = rdr.headers().unwrap();
let rows = rdr.records().collect::<csv::Result<Vec<_>>>().unwrap();
let headers2 = rdr.headers().unwrap();

let s = |s: &'static str| s.to_string();
assert_eq!(headers1, headers2);
assert_eq!(headers1, vec![s("a"), s("b"), s("c")]);
assert_eq!(rows.len(), 1);
assert_eq!(rows[0], vec![s("1"), s("2"), s("3")]);
```
Note that if has_headers(false) is set on the CSV reader, the rows returned in this example include the first record:
```rust
let mut rdr = csv::Reader::from_string("a,b,c\n1,2,3")
                          .has_headers(false);
let headers1 = rdr.headers().unwrap();
let rows = rdr.records().collect::<csv::Result<Vec<_>>>().unwrap();
let headers2 = rdr.headers().unwrap();

let s = |s: &'static str| s.to_string();
assert_eq!(headers1, headers2);
assert_eq!(headers1, vec![s("a"), s("b"), s("c")]);

// The header rows are now part of the record iterators.
assert_eq!(rows.len(), 2);
assert_eq!(rows[0], headers1);
assert_eq!(rows[1], vec![s("1"), s("2"), s("3")]);
```
impl<R: Read> Reader<R>
fn delimiter(self, delimiter: u8) -> Reader<R>
The delimiter to use when reading CSV data.
Since the CSV reader is meant to be mostly encoding agnostic, you must specify the delimiter as a single ASCII byte. For example, to read tab-delimited data, you would use b'\t'.

The default value is b','.
fn has_headers(self, yes: bool) -> Reader<R>
Whether to treat the first row as a special header row.

By default, the first row is treated as a special header row, which means it is excluded from iterators returned by the decode, records or byte_records methods. When yes is set to false, the first row is included in those iterators.

Note that the headers method is unaffected by whether this is set.
fn flexible(self, yes: bool) -> Reader<R>
Whether to allow flexible length records when reading CSV data.
When this is set to true, records in the CSV data can have different lengths. By default, this is disabled, which will cause the CSV reader to return an error if it tries to read a record that has a different length than the first record that it read.
fn record_terminator(self, term: RecordTerminator) -> Reader<R>
Set the record terminator to use when reading CSV data.
In the vast majority of situations, you'll want to use the default value, RecordTerminator::CRLF, which automatically handles \r, \n or \r\n as record terminators. (Notably, this is a special case since two characters can correspond to a single terminator token.)

However, you may use RecordTerminator::Any to specify any ASCII character to use as the record terminator. For example, you could use RecordTerminator::Any(b'\n') to only accept line feeds as record terminators, or b'\x1e' for the ASCII record separator.
fn quote(self, quote: u8) -> Reader<R>
Set the quote character to use when reading CSV data.
Since the CSV reader is meant to be mostly encoding agnostic, you must specify the quote as a single ASCII byte. For example, to read single-quoted data, you would use b'\''.

The default value is b'"'.

If quote is None, then no quoting will be used.
fn escape(self, escape: Option<u8>) -> Reader<R>
Set the escape character to use when reading CSV data.
Since the CSV reader is meant to be mostly encoding agnostic, you must specify the escape as a single ASCII byte.
When set to None (which is the default), the "doubling" escape is used for the quote character.

When set to something other than None, it is used as the escape character for quotes (e.g., b'\\').
fn double_quote(self, yes: bool) -> Reader<R>
Enable double quote escapes.
When disabled, doubled quotes are not interpreted as escapes.
fn ascii(self) -> Reader<R>
A convenience method for reading ASCII delimited text.
This sets the delimiter and record terminator to the ASCII unit separator (\x1f) and record separator (\x1e), respectively.

Since ASCII delimited text is meant to be unquoted, this also sets quote to None.
impl<R: Read> Reader<R>
These are low level methods for dealing with the raw bytes of CSV records. You should only need to use these when you need the performance or if your CSV data isn't UTF-8 encoded.
fn byte_headers(&mut self) -> Result<Vec<ByteString>>
This is just like headers, except fields are ByteStrings instead of Strings.
fn byte_records<'a>(&'a mut self) -> ByteRecords<'a, R>
This is just like records, except fields are ByteStrings instead of Strings.
fn done(&self) -> bool
Returns true if the CSV parser has reached its final state. When this method returns true, all iterators will always return None.
This is not needed in typical usage since the record iterators will stop for you when the parser completes. This method is useful when you're accessing the parser's lowest-level iterator.
Example
This is the fastest way to compute the number of records in CSV data using this crate. (It is fast because it does not allocate space for every field.)
```rust
let data = "
sticker,mortals,7
bribed,personae,7
wobbling,poncing,4
interposed,emmett,9
chocolate,refile,7";
let mut rdr = csv::Reader::from_string(data);
let mut count = 0u64;
while !rdr.done() {
    loop {
        // This case analysis is necessary because we only want to
        // increment the count when `EndOfRecord` is seen. (If the
        // CSV data is empty, then it will never be emitted.)
        match rdr.next_bytes() {
            csv::NextField::EndOfCsv => break,
            csv::NextField::EndOfRecord => { count += 1; break; },
            csv::NextField::Error(err) => panic!(err),
            csv::NextField::Data(_) => {}
        }
    }
}
assert_eq!(count, 5);
```
fn next_bytes(&mut self) -> NextField<[u8]>
An iterator over fields in the current record.
This provides low-level access to CSV records as raw byte slices. Namely, no allocation is performed. Unlike other iterators in this crate, this yields fields instead of records. Notably, this cannot implement the Iterator trait safely. As such, it cannot be used with a for loop.

See the documentation for the NextField type on how the iterator works.
This iterator always returns all records (i.e., it won't skip the header row).
Example
This method is most useful when used in conjunction with the done method:
```rust
let data = "
sticker,mortals,7
bribed,personae,7
wobbling,poncing,4
interposed,emmett,9
chocolate,refile,7";
let mut rdr = csv::Reader::from_string(data);
while !rdr.done() {
    while let Some(r) = rdr.next_bytes().into_iter_result() {
        print!("{:?} ", r.unwrap());
    }
    println!("");
}
```
fn next_str(&mut self) -> NextField<str>
This is just like next_bytes, except it converts each field to a Unicode string in place.
fn byte_offset(&self) -> u64
Returns the byte offset at which the current record started.
impl<R: Read + Seek> Reader<R>
fn seek(&mut self, pos: u64) -> Result<()>
Seeks the underlying reader to the file cursor specified.
This comes with several caveats:
- The existing buffer is dropped and a new one is created.
- If you seek to a position other than the start of a record, you'll probably get an incorrect parse. (This is not unsafe.)
Mostly, this is intended for use with the index sub-module.

Note that if pos is equivalent to the current parsed byte offset, then no seeking is performed. (In this case, seek is a no-op.)