Crate voyager



With voyager you can easily extract structured data from websites.

Write your own crawler/scraper with voyager following a state machine model.


/// Declare your scraper, with all the selectors etc.
struct HackernewsScraper {
    post_selector: Selector,
    author_selector: Selector,
    title_selector: Selector,
    comment_selector: Selector,
    max_page: usize,
}

/// The state model
enum HackernewsState {
    Page(usize),
    Post,
}

/// The output the scraper should eventually produce
struct Entry {
    author: String,
    url: Url,
    link: Option<String>,
    title: String,
}
Implement the voyager::Scraper trait

A Scraper consists of two associated types:

  • Output, the type the scraper eventually produces
  • State, a type the scraper can carry along across several requests that eventually lead to an Output

and the scrape callback, which is invoked after each received response.

Based on the state attached to a response, you can supply the crawler with new urls to visit, with or without a state attached to them.
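The state machine model itself can be sketched without any crawling machinery: each response carries an optional state, and one scrape step either yields more (url, state) pairs to visit or a final output. A minimal stdlib-only illustration (the `State`, `Step`, and `scrape` names here are invented for this sketch, not voyager's API):

```rust
// Illustrative only: mirrors voyager's state-machine idea with plain types.
#[derive(Debug, Clone, PartialEq)]
enum State {
    Page(usize),
    Post,
}

/// What one scrape step may produce: more visits, or a final output.
enum Step {
    Visit(Vec<(String, State)>),
    Output(String),
}

fn scrape(state: State, max_page: usize) -> Step {
    match state {
        State::Page(page) => {
            // a listing page queues a post, and possibly the next page
            let mut next = vec![("post-1".to_string(), State::Post)];
            if page < max_page {
                next.push((format!("page-{}", page + 1), State::Page(page + 1)));
            }
            Step::Visit(next)
        }
        // a post visit terminates in an output
        State::Post => Step::Output("scraped entry".to_string()),
    }
}

fn main() {
    // Starting on page 1, the step queues a post and the next page...
    match scrape(State::Page(1), 2) {
        Step::Visit(next) => {
            assert_eq!(next.len(), 2);
            assert_eq!(next[1].1, State::Page(2));
        }
        _ => unreachable!(),
    }
    // ...and a post visit yields the final output.
    match scrape(State::Post, 2) {
        Step::Output(entry) => assert_eq!(entry, "scraped entry"),
        _ => unreachable!(),
    }
}
```

The driver (voyager's Crawler) only ever sees (url, state) pairs, which is what keeps multi-step scrapes composable.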

Scraping is done with causal-agent/scraper.

impl Scraper for HackernewsScraper {
    type Output = Entry;
    type State = HackernewsState;

    /// do your scraping
    fn scrape(
        &mut self,
        response: Response<Self::State>,
        crawler: &mut Crawler<Self>,
    ) -> Result<Option<Self::Output>> {
        let html = response.html();

        if let Some(state) = response.state {
            match state {
                HackernewsState::Page(page) => {
                    // find all entries
                    for id in html
                        .select(&self.post_selector)
                        .filter_map(|el| el.value().attr("id"))
                    {
                        // submit an url to a post
                        crawler.visit_with_state(
                            &format!("https://news.ycombinator.com/item?id={}", id),
                            HackernewsState::Post,
                        );
                    }
                    if page < self.max_page {
                        // queue in next page
                        crawler.visit_with_state(
                            &format!("https://news.ycombinator.com/news?p={}", page + 1),
                            HackernewsState::Page(page + 1),
                        );
                    }
                }
                HackernewsState::Post => {
                    // scrape the entry
                    let entry = Entry {
                        // ...
                    };
                    return Ok(Some(entry));
                }
            }
        }

        Ok(None)
    }
}

Setup and collect all the output

Configure the crawler via CrawlerConfig:

  • Allow/Block list of URLs
  • Delays between requests
  • Whether to respect the Robots.txt rules
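At crawl time these options reduce to a per-request decision: is this URL's domain allowed, and how long to wait before fetching it. A stdlib-only sketch of such a decision (illustrative only, not voyager's internals; `is_allowed` and its rules are invented here):

```rust
use std::time::Duration;

// Illustrative allow/block decision of the kind CrawlerConfig expresses.
fn is_allowed(host: &str, allowlist: &[&str], blocklist: &[&str]) -> bool {
    if blocklist.contains(&host) {
        return false;
    }
    // An empty allowlist means "everything not blocked is allowed".
    allowlist.is_empty() || allowlist.contains(&host)
}

fn main() {
    let allow = ["news.ycombinator.com"];
    let block = ["example.com"];
    assert!(is_allowed("news.ycombinator.com", &allow, &block));
    assert!(!is_allowed("example.com", &allow, &block));
    assert!(!is_allowed("other.org", &allow, &block));

    // A fixed pause between requests, of the kind configured per domain.
    let delay = Duration::from_millis(2_000);
    assert_eq!(delay.as_secs(), 2);
}
```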

Feed your config and an instance of your scraper to the Collector that drives the Crawler and forwards the responses to your Scraper.

    use futures::StreamExt;

    // only fulfill requests to `news.ycombinator.com`
    let config = CrawlerConfig::default().allow_domain_with_delay(
        "news.ycombinator.com",
        // add a delay between requests
        RequestDelay::Fixed(std::time::Duration::from_millis(2_000)),
    );

    let mut collector = Collector::new(HackernewsScraper::default(), config);

    while let Some(output) = collector.next().await {
        let post = output?;
        dbg!(post);
    }


pub use crate::response::Response;
pub use scraper;



Structs

Collector
Collector controls the Crawler, forwards the successful requests to the Scraper, and reports the Scraper’s Output back to the user.

Crawler
The crawler that is responsible for driving the requests to completion and providing the crawl response for the Scraper.

CrawlerConfig
Configure a Collector and its Crawler.

Stats
Stats about sent requests and received responses.

Enums

RequestDelay
How to delay a request.

Traits

Scraper
A trait that takes in successfully fetched responses, scrapes the valuable content from the response’s html document, and provides the crawler with additional requests to visit to drive the scraper’s model to completion.
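The shape of such a trait can be sketched in a few lines of plain Rust (simplified: no crawler handle, no error type; `MiniScraper` and `TitleScraper` are invented for this sketch, only the Output/State pairing mirrors voyager):

```rust
/// Simplified stand-in for a scraper trait: two associated types
/// plus a callback invoked per fetched response.
trait MiniScraper {
    type Output;
    type State;

    fn scrape(&mut self, body: &str, state: Option<Self::State>) -> Option<Self::Output>;
}

struct TitleScraper;

impl MiniScraper for TitleScraper {
    type Output = String;
    type State = ();

    fn scrape(&mut self, body: &str, _state: Option<()>) -> Option<String> {
        // Pretend-scrape: treat the first line of the body as the "title".
        body.lines().next().map(str::to_string)
    }
}

fn main() {
    let mut s = TitleScraper;
    assert_eq!(s.scrape("A title\nrest of body", None), Some("A title".to_string()));
}
```

The associated State type is what lets one scraper implementation distinguish, at compile time, which kind of page a response belongs to.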