Struct sonnerie_api::Client
pub struct Client { /* fields omitted */ }
Sonnerie Client API
Methods
impl Client
pub fn from_streams<R: 'static + Read, W: 'static + Write>(
reader: R,
writer: W
) -> Result<Client>
Create a Sonnerie client from a reader/writer stream.
This is useful if you want to connect to Sonnerie via a Unix Domain Socket tunnelled through SSH.
Failure may be caused by Sonnerie not sending its protocol "Hello" on connection.
pub fn new(connection: TcpStream) -> Result<Client>
Create a Sonnerie client over an already-established TCP connection.
pub fn begin_read(&self) -> Result<()>
Start a read transaction.
End the transaction with commit() or rollback(), which are equivalent for a read transaction.
Read-only functions automatically open and close a transaction themselves, but calling this function explicitly gives you a consistent snapshot: you will not see changes made over the life of your transaction.
pub fn begin_write(&self) -> Result<()>
Start a write transaction.
You must call this function before calling any write functions. Write transactions are not started implicitly, which keeps you from accidentally making many small transactions; small transactions are relatively slow.
You must call commit() for the changes to be saved. You may also explicitly call rollback() to discard them.
Transactions may not be nested.
pub fn read_series(&mut self, name: &str) -> Result<Vec<(NaiveDateTime, f64)>>
Read all the values in a specific series.
Fails if the series does not exist, but returns an empty Vec if the series does exist and is simply empty.
pub fn read_series_range(
&mut self,
name: &str,
first_time: &NaiveDateTime,
last_time: &NaiveDateTime
) -> Result<Vec<(NaiveDateTime, f64)>>
Read values within a range of timestamps in a specific series.
Fails if the series does not exist, but returns an empty Vec if no samples were contained in that range.
first_time is the first timestamp to begin reading from.
last_time is the last timestamp to read (inclusive).
pub fn rollback(&self) -> Result<()>
Discard and end the current transaction.
Same as drop, except that you can see errors.
pub fn commit(&self) -> Result<()>
Save and end the current transaction.
This must be called for any changes made in a write transaction (one started by begin_write()) to be recorded.
In a read-only transaction, this is the same as rollback().
pub fn create_series(&mut self, name: &str) -> Result<()>
Ensures that a series with the given name exists.
Does not fail if the series already exists.
You must call begin_write() prior to calling this function.
pub fn add_value(
&mut self,
series_name: &str,
time: &NaiveDateTime,
value: f64
) -> Result<()>
Adds a single value to a series.
Fails if a value at the given timestamp already exists.
series_name is the name of the series, as created by create_series.
time is the point in time at which to add the sample; it must be unique, and until out-of-order insertion is implemented (planned soon) it must also fall after all other timestamps in the series.
value is the sample to insert at this timepoint.
You must call begin_write() prior to calling this function.
pub fn add_values_from<I>(&mut self, series_name: &str, src: I) -> Result<()> where
I: Iterator<Item = (NaiveDateTime, f64)>,
Efficiently add many samples into a timeseries.
The timestamps must be sorted ascending.
series_name is the series to insert the values into.
src is the iterator to read values from.
client.add_values_from("fibonacci", [(ts1, 1.0), (ts2, 1.0), (ts3, 2.0), (ts4, 3.0)].iter().cloned());
You must call begin_write() prior to calling this function.
pub fn dump<F>(&mut self, like: &str, results: F) -> Result<()> where
F: FnMut(&str, NaiveDateTime, f64) -> Result<(), String>,
Read all values from many series.
Selects many series with a SQL-like "LIKE" operator and dumps values from those series.
like is a string with % as a wildcard. For example, "192.168.%" selects all series whose names start with "192.168.". If the % appears at the end, the query is very efficient.
results is a function which receives each value.
The values are always generated in ascending order, first by series and then by timestamp. (In other words, each series gets its own group of samples before moving to the following series.)
pub fn dump_range<F>(
&mut self,
like: &str,
first_time: &NaiveDateTime,
last_time: &NaiveDateTime,
results: F
) -> Result<()> where
F: FnMut(&str, NaiveDateTime, f64) -> Result<(), String>,
Read many values from many series.
Selects many series with a SQL-like "LIKE" operator and dumps values from those series.
like is a string with % as a wildcard. For example, "192.168.%" selects all series whose names start with "192.168.". If the % appears at the end, the query is very efficient.
first_time is the first timestamp for which to print all values per series.
last_time is the last timestamp (inclusive) for which to print all values per series.
results is a function which receives each value.
The values are always generated in ascending order, first by series and then by timestamp. (In other words, each series gets its own group of samples before moving to the following series.)