Struct analytics::batcher::Batcher

pub struct Batcher { /* fields omitted */ }

A batcher can accept messages into an internal buffer, and report when messages must be flushed.

The recommended usage pattern looks something like this:

use analytics::batcher::Batcher;
use analytics::client::Client;
use analytics::http::HttpClient;
use analytics::message::{BatchMessage, Track, User};
use serde_json::json;

let mut batcher = Batcher::new(None);
let client = HttpClient::default();

for i in 0..100 {
    let msg = BatchMessage::Track(Track {
        user: User::UserId { user_id: format!("user-{}", i) },
        event: "Example".to_owned(),
        properties: json!({ "foo": "bar" }),
        ..Default::default()
    });

    // Batcher returns ownership of the message if the internal buffer
    // would overflow.
    //
    // When this occurs, we flush the batcher, create a new batcher, and add
    // the message into the new batcher.
    if let Some(msg) = batcher.push(msg).unwrap() {
        client.send("your_write_key", &batcher.into_message()).unwrap();
        batcher = Batcher::new(None);
        batcher.push(msg).unwrap();
    }
}

Batcher will attempt to fit messages into maximally-sized batches, reducing the number of round trips to Segment's tracking API. However, if you produce messages infrequently, this may significantly delay their delivery to Segment.

If this delay is a concern, it is recommended that you periodically flush the batcher on your own by calling into_message.
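
For example, a long-running producer might flush on a timer as well as on overflow. The sketch below reuses the batcher, client, and write key from the example above; the ten-second interval and the pushed_since_flush flag are illustrative choices, not part of the crate's API:

use std::time::{Duration, Instant};

let flush_interval = Duration::from_secs(10); // illustrative policy
let mut last_flush = Instant::now();
let mut pushed_since_flush = false;

// ... push incoming messages as they arrive, setting pushed_since_flush ...

if pushed_since_flush && last_flush.elapsed() >= flush_interval {
    // Swap in a fresh batcher, then send whatever had accumulated.
    let full = std::mem::replace(&mut batcher, Batcher::new(None));
    client.send("your_write_key", &full.into_message()).unwrap();
    last_flush = Instant::now();
    pushed_since_flush = false;
}

The flag avoids sending an empty batch when no messages arrived during the interval.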

Methods

impl Batcher

pub fn new(context: Option<Value>) -> Self

Construct a new, empty batcher.

Optionally, you may specify a context that should be set on every batch returned by into_message.
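
For instance, you might stamp every batch with application metadata. A minimal sketch, assuming the context is a serde_json::Value built with the json! macro; the shape of the object is illustrative, not prescribed by the crate:

use analytics::batcher::Batcher;
use serde_json::json;

// This context is attached to every batch produced by into_message.
let batcher = Batcher::new(Some(json!({
    "app": { "name": "my-service", "version": "1.2.3" }
})));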

pub fn push(&mut self, msg: BatchMessage) -> Result<Option<BatchMessage>, Error>

Push a message into the batcher.

Returns Ok(None) if the message was accepted and is now owned by the batcher.

Returns Ok(Some(msg)) if the message was rejected because accepting it would make the current batch oversized. The message is returned to the caller, and it is recommended that you flush the current batch before pushing msg again.

Returns an error if the message is too large to be sent to Segment's API.
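
Rather than unwrapping as in the usage example above, the three outcomes can be handled explicitly. A sketch reusing the batcher, client, and write key from that example; the error handling shown is illustrative:

match batcher.push(msg) {
    // Accepted: the batcher now owns the message.
    Ok(None) => {}
    // Rejected: the batch is full. Flush it, then retry the returned message.
    Ok(Some(msg)) => {
        let full = std::mem::replace(&mut batcher, Batcher::new(None));
        client.send("your_write_key", &full.into_message()).unwrap();
        batcher.push(msg).unwrap();
    }
    // The message on its own exceeds Segment's size limit and can never be
    // sent, so log it and move on.
    Err(e) => eprintln!("dropping oversized message: {}", e),
}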

pub fn into_message(self) -> Message

Consumes this batcher and converts it into a message that can be sent to Segment.

Auto Trait Implementations

impl Sync for Batcher

impl Send for Batcher

impl Unpin for Batcher

impl RefUnwindSafe for Batcher

impl UnwindSafe for Batcher

Blanket Implementations

impl<T, U> Into<U> for T where
    U: From<T>, 

impl<T> From<T> for T

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> Erased for T

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 

type Err = <U as TryFrom<T>>::Err