# Spider
Multithreaded web crawler/indexer using isolates and IPC channels for communication.
## Dependencies

On Linux:

- OpenSSL 1.0.1, 1.0.2, 1.1.0, or 1.1.1
## Example

This is a basic async example crawling a web page; add spider to your `Cargo.toml`:

```toml
[dependencies]
spider = "1.19"
```
And then the code:
```rust
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let url = "https://choosealicense.com";
    let mut website: Website = Website::new(&url);
    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link);
    }
}
```
You can use the `Configuration` object to configure your crawler:

```rust
// ..
let mut website: Website = Website::new("https://choosealicense.com");

website.configuration.blacklist_url.push("https://choosealicense.com/licenses/".to_string());
website.configuration.respect_robots_txt = true;
website.configuration.subdomains = true;
website.configuration.tld = false;
website.configuration.delay = 0; // Defaults to 0 ms due to concurrency handling
website.configuration.request_timeout = None; // Defaults to 15000 ms
website.configuration.channel_buffer = 100; // Defaults to 50 - tune this depending on on_link_find_callback
website.configuration.user_agent = "myapp/version".to_string(); // Defaults to spider/x.y.z, where x.y.z is the library version
website.on_link_find_callback = Some(|link| { println!("link found: {:?}", link); link }); // Callback to run on each link find

website.crawl().await;
```
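The link-find callback is just an ordinary function from a discovered link to the link the crawler should keep. As a sketch of what such a callback can do, independent of spider (the plain `String -> String` signature here is an assumption of this sketch; check the signature in the version you use):

```rust
// Hypothetical link-find callback: log each discovered link and
// return it unchanged so the crawler processes it as-is.
// The String -> String signature is assumed for illustration only.
fn link_find_callback(link: String) -> String {
    println!("- {}", link);
    link
}

fn main() {
    let link = link_find_callback("https://choosealicense.com/licenses/mit/".to_string());
    // The callback passes the link through untouched.
    assert_eq!(link, "https://choosealicense.com/licenses/mit/");
}
```

Returning the link unchanged keeps crawling behavior intact; the callback is a hook for side effects such as logging or metrics.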
## Regex Blacklisting

There is an optional "regex" crate that can be enabled:

```toml
[dependencies]
spider = { version = "1.19.41", features = ["regex"] }
```

```rust
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    // With the regex feature, blacklist entries are matched as patterns.
    website.configuration.blacklist_url.push("/licenses/".to_string());
    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link);
    }
}
```
## Features

Currently we have three optional feature flags: regex blacklisting, the jemalloc backend, and randomized User-Agents.

```toml
[dependencies]
spider = { version = "1.19.41", features = ["regex", "ua_generator"] }
```

Jemalloc performs better for concurrency and allows memory to be released more easily. This changes the global allocator of the program, so test accordingly to measure the impact.

```toml
[dependencies]
spider = { version = "1.19.41", features = ["jemalloc"] }
```
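The jemalloc feature works by swapping the program's global allocator, which is why it affects every allocation and warrants measurement. The mechanism looks roughly like this sketch, with the standard `System` allocator standing in for jemalloc:

```rust
use std::alloc::System;

// The jemalloc feature registers a #[global_allocator] much like this,
// with a jemalloc-backed allocator in place of System. Only one global
// allocator may be registered per program, so the change is global.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // Every heap allocation in the program now routes through GLOBAL.
    let buffer: Vec<u8> = vec![0; 1024];
    assert_eq!(buffer.len(), 1024);
}
```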
## Blocking

If you need a blocking sync implementation, use a version prior to `v1.12.0`.
## Pause, Resume, and Shutdown

If you are performing large workloads you may need to control the crawler using the following:

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    use spider::utils::{pause, resume};

    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(&url);

    tokio::spawn(async move {
        pause(url).await;
        // The crawl is paused; resume it after five seconds.
        sleep(Duration::from_millis(5000)).await;
        resume(url).await;
    });

    website.crawl().await;
}
```
### Shutdown crawls

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    use spider::utils::shutdown;

    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(&url);

    tokio::spawn(async move {
        // Force shutdown of a really long crawl after 30 seconds.
        sleep(Duration::from_secs(30)).await;
        shutdown(url).await;
    });

    website.crawl().await;
}
```
## Scrape/Gather HTML

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let url = "https://choosealicense.com";
    let mut website: Website = Website::new(&url);
    website.scrape().await;

    // `scrape` stores the HTML of each page for later processing.
    for page in website.get_pages() {
        println!("- {}", page.get_url());
    }
}
```