# voyager
With voyager you can easily extract structured data from websites.
Write your own crawler/scraper with voyager following a state machine model.
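The state machine model can be sketched without voyager at all: every queued request carries a state, and each scrape step either produces an output or enqueues new requests with new states attached. All names below are illustrative, not voyager's API:

```rust
use std::collections::VecDeque;

/// The state dragged along with every queued request.
#[derive(Debug)]
enum State {
    /// an index page, carrying its page number
    Page(usize),
    /// a detail page that yields one output
    Post,
}

/// One queued request: a url plus the state attached to it.
struct Request {
    url: String,
    state: State,
}

/// One scrape step: index pages enqueue more requests,
/// detail pages produce an output.
fn scrape(req: &Request, queue: &mut VecDeque<Request>) -> Option<String> {
    match req.state {
        State::Page(n) => {
            // every index page links to one post...
            queue.push_back(Request {
                url: format!("/page/{}/post", n),
                state: State::Post,
            });
            // ...and to the next index page, up to page 2
            if n < 2 {
                queue.push_back(Request {
                    url: format!("/page/{}", n + 1),
                    state: State::Page(n + 1),
                });
            }
            None
        }
        State::Post => Some(format!("scraped {}", req.url)),
    }
}

/// Drive the state machine until the queue is exhausted.
fn crawl() -> Vec<String> {
    let mut queue = VecDeque::new();
    queue.push_back(Request {
        url: "/page/1".to_string(),
        state: State::Page(1),
    });

    let mut outputs = Vec::new();
    while let Some(req) = queue.pop_front() {
        if let Some(out) = scrape(&req, &mut queue) {
            outputs.push(out);
        }
    }
    outputs
}

fn main() {
    println!("{:?}", crawl());
}
```

voyager provides exactly this loop, except that requests are real http calls, the queue is driven asynchronously, and the scrape step is your `Scraper` implementation.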
## Example
The examples use [tokio](https://tokio.rs) as their runtime, so your Cargo.toml could look like this:

```toml
[dependencies]
voyager = { version = "0.1" }
tokio = { version = "1.0", features = ["full"] }
```
## Declare your own Scraper and model
For example, a scraper for Hacker News could be declared like this:

```rust
// Declare your scraper, with all the selectors etc.
struct HackernewsScraper {
    post_selector: Selector,
    author_selector: Selector,
    title_selector: Selector,
    max_page: usize,
}

/// The state model
#[derive(Debug)]
enum HackernewsState {
    Page(usize),
    Post,
}

/// The output the scraper should eventually produce
#[derive(Debug)]
struct Entry {
    author: String,
    title: String,
}
```
## Implement the `voyager::Scraper` trait

A `Scraper` consists of two associated types:

- `Output`, the type the scraper eventually produces
- `State`, the type the scraper can drag along over several requests that eventually lead to an `Output`

and the `scrape` callback, which is invoked after each received response.

Based on the state attached to the `response`, you can supply the crawler with new urls to visit, with or without a state attached.

Scraping itself is done with [causal-agent/scraper](https://github.com/causal-agent/scraper).
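A sketch of such an implementation, assuming a hypothetical `HackernewsScraper` with a `Page`/`Post` state model and an `Entry` output; the `Response` and `Crawler` signatures follow voyager `0.1` and should be checked against the current API:

```rust
impl Scraper for HackernewsScraper {
    type Output = Entry;
    type State = HackernewsState;

    /// Invoked after every received response.
    fn scrape(
        &mut self,
        response: Response<Self::State>,
        crawler: &mut Crawler<Self>,
    ) -> Result<Option<Self::Output>> {
        let html = response.html();

        if let Some(state) = response.state {
            match state {
                HackernewsState::Page(page) => {
                    // find all post urls on the index page and queue
                    // them with a state attached, e.g.
                    // crawler.visit_with_state(url, HackernewsState::Post);
                }
                HackernewsState::Post => {
                    // scrape the entry from `html` and return it
                    // as `Ok(Some(entry))`
                }
            }
        }

        // returning `Ok(None)` produces no output for this response
        Ok(None)
    }
}
```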
## Setup and collect all the output

Configure the crawler via `CrawlerConfig`:

- Allow/block list of domains
- Delays between requests
- Whether to respect the `robots.txt` rules
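A configuration along these lines could look as follows; the builder methods (`allow_domain_with_delay`, `respect_robots_txt`) and the `RequestDelay` type are taken from voyager `0.1` and may differ in your version:

```rust
use std::time::Duration;
use voyager::{CrawlerConfig, RequestDelay};

let config = CrawlerConfig::default()
    // only fulfill requests to `news.ycombinator.com`
    .allow_domain_with_delay(
        "news.ycombinator.com",
        // add a fixed two-second delay between requests
        RequestDelay::Fixed(Duration::from_secs(2)),
    )
    // honor the site's robots.txt rules
    .respect_robots_txt();
```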
Feed your config and an instance of your scraper to the `Collector`, which drives the `Crawler` and forwards the responses to your `Scraper`.
```rust
use voyager::scraper::Selector;
use voyager::*;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = CrawlerConfig::default();
    let mut collector = Collector::new(HackernewsScraper::default(), config);

    // seed the crawler with a starting url and an initial state
    collector.crawler_mut().visit_with_state(
        "https://news.ycombinator.com/",
        HackernewsState::Page(1),
    );

    // the collector implements Stream and yields the scraper's `Output`
    while let Some(output) = collector.next().await {
        if let Ok(post) = output {
            dbg!(post);
        }
    }

    Ok(())
}
```
See the examples for more.
## Inject async calls

Sometimes it might be helpful to execute some other calls first, e.g. to fetch a token. You can submit `async` closures to the crawler to manually get a response and inject a state, or to drive a state to completion.
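A sketch of both, assuming voyager `0.1`'s `crawl` and `complete` methods on the crawler; `MyState` and `MyOutput` are hypothetical types, and the exact closure signatures should be verified against the current API:

```rust
let crawler = collector.crawler_mut();

// manually fetch a response and inject it with a state attached;
// it will be delivered to your `scrape` callback like any other response
crawler.crawl(move |client| async move {
    let resp = client.get("https://example.com/login").send().await?;
    Ok((resp, Some(MyState::LoggedIn)))
});

// or drive a state directly to completion, producing an output
// without going through `scrape` at all
crawler.complete(move |client| async move {
    // e.g. fetch a token with the bare http client first
    let token = client
        .get("https://example.com/token")
        .send()
        .await?
        .text()
        .await?;
    Ok(Some(MyOutput { token }))
});
```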
## Recover a state that got lost

If the crawler encountered an error, e.g. due to a failed or disallowed http request, the error is reported as a `CrawlError`, which carries the last valid state. The error can then be downcast:
```rust
let mut collector = Collector::new(MyScraper::default(), config);

while let Some(output) = collector.next().await {
    match output {
        Ok(output) => { /* handle the scraped output */ }
        Err(err) => {
            // the reported error carries the last valid state
            if let Ok(crawl_err) =
                err.downcast::<CrawlError<<MyScraper as Scraper>::State>>()
            {
                // recover the attached state from `crawl_err` here
            }
        }
    }
}
```
## License

Licensed under either of these:
- Apache License, Version 2.0, (LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or https://opensource.org/licenses/MIT)