Spider
A multithreaded async crawler/indexer that uses isolates and IPC channels for communication, with the ability to run decentralized.
Dependencies
On Linux
- OpenSSL 1.0.1, 1.0.2, 1.1.0, or 1.1.1
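On Debian-based systems, for example, the development headers can usually be installed via apt (package names vary by distribution):

```sh
# Debian/Ubuntu example; package names vary by distribution
sudo apt install pkg-config libssl-dev
```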
Example
This is a basic async example crawling a web page. Add spider to your Cargo.toml:
```toml
[dependencies]
spider = "1.29.0"
```
And then the code:
```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
    }
}
```
You can use the `Configuration` object to configure your crawler:
```rust
// ..
let mut website: Website = Website::new("https://choosealicense.com");
website.configuration.respect_robots_txt = true;
website.configuration.subdomains = true;
website.configuration.tld = false;
website.configuration.delay = 0; // Defaults to 0 ms due to concurrency handling
website.configuration.request_timeout = None; // Defaults to 15000 ms
website.configuration.http2_prior_knowledge = false; // Enable if you know the webserver supports http2
website.configuration.channel_buffer = 100; // Defaults to 50 - tune this depending on on_link_find_callback
website.configuration.user_agent = Some("myapp/version".into()); // Defaults to using a random agent
website.on_link_find_callback = Some(|s| { println!("link target: {}", s); s }); // Callback to run on each link find
website.configuration.blacklist_url.get_or_insert(Default::default()).push("https://choosealicense.com/licenses/".into());
website.configuration.proxies.get_or_insert(Default::default()).push("socks5://10.1.1.1:12345".into()); // Defaults to none - proxy list.

website.crawl().await;
```
Features
We have several optional feature flags: regex blacklisting, the jemalloc backend, globbing, fs temp storage, decentralization, serde, gathering full assets, and randomizing user agents.
```toml
[dependencies]
spider = { version = "1.29.0", features = ["regex", "ua_generator"] }
```
- `ua_generator`: Enables auto generating a random real User-Agent. Enabled by default.
- `regex`: Enables blacklisting paths with regex.
- `jemalloc`: Enables the jemalloc memory backend.
- `decentralized`: Enables decentralized processing of IO; requires the [spider_worker] startup before crawls.
- `control`: Enables the ability to pause, start, and shutdown crawls on demand.
- `full_resources`: Enables gathering all content that relates to the domain, like css, js, and so on.
- `serde`: Enables serde serialization support.
- `socks`: Enables socks5 proxy support.
- `glob`: Enables url glob support.
- `fs`: Enables storing resources to disk for parsing (may greatly increase performance at the cost of temp storage). Enabled by default.
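Multiple flags can be combined in a single dependency entry; the particular combination below is just an illustration:

```toml
[dependencies]
spider = { version = "1.29.0", features = ["serde", "socks", "glob"] }
```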
Regex Blacklisting
Allow regex for blacklisting routes
```toml
[dependencies]
spider = { version = "1.29.0", features = ["regex"] }
```
```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.configuration.blacklist_url.get_or_insert(Default::default()).push("/licenses/".into());
    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
    }
}
```
Pause, Resume, and Shutdown
If you are performing large workloads, you may need to control the crawler by enabling the control feature flag:
```toml
[dependencies]
spider = { version = "1.29.0", features = ["control"] }
```
```rust
extern crate spider;

use spider::tokio;
use spider::tokio::time::sleep;
use spider::website::Website;
use std::time::Duration;

#[tokio::main]
async fn main() {
    use spider::utils::{pause, resume, shutdown};
    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(&url);

    tokio::spawn(async move {
        pause(url).await;
        sleep(Duration::from_millis(5000)).await;
        resume(url).await;
        // perform a shutdown if the crawl takes longer than 15s
        sleep(Duration::from_millis(15000)).await;
        shutdown(url).await;
    });

    website.crawl().await;
}
```
Scrape/Gather HTML
```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.scrape().await;

    for page in website.get_pages().unwrap() {
        println!("- {}", page.get_html());
    }
}
```
Decentralization
```toml
[dependencies]
spider = { version = "1.29.0", features = ["decentralized"] }
```
```sh
# install the worker
cargo install spider_worker
# start the worker [set the worker on another machine in prod]
RUST_LOG=info SPIDER_WORKER_PORT=3030 spider_worker
# start rust project as normal with SPIDER_WORKER env variable
SPIDER_WORKER=http://127.0.0.1:3030 cargo run --example example
```
The SPIDER_WORKER env variable takes a comma-separated list of URLs to set the workers.
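For instance, two workers might be registered like this (the addresses below are placeholders for your actual worker machines):

```sh
# placeholder addresses; point these at your real worker machines
SPIDER_WORKER=http://192.168.0.2:3030,http://192.168.0.3:3030 cargo run --example example
```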
The proxy needs to match the transport type for the request to fulfill correctly.
If the scrape feature flag is enabled, use the SPIDER_WORKER_SCRAPER env variable to determine the scraper worker.
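A rough sketch of wiring that up, assuming a second worker reserved for scraping (the address and port are placeholders):

```sh
# placeholder address for a worker dedicated to scrape/gather requests
SPIDER_WORKER_SCRAPER=http://127.0.0.1:3031 cargo run --example example
```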
Sequential
Perform crawls sequentially without any concurrency.
```toml
[dependencies]
spider = { version = "1.29.0", features = ["sequential"] }
```
Blocking
If you need a blocking sync implementation, use a version prior to v1.12.0.
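A minimal sketch of what that looks like, assuming a pre-1.12 release where crawl was a plain blocking call (check the docs for the exact version you pin, as the API shape may differ):

```rust
extern crate spider;

use spider::website::Website;

fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    // blocking call in versions prior to v1.12.0 (assumed API shape)
    website.crawl();
}
```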