# Spider

Multithreaded async crawler/indexer using isolates and IPC channels for communication with the ability to run decentralized.

## Dependencies

### On Linux

- OpenSSL 1.0.1, 1.0.2, 1.1.0, or 1.1.1
## Example

This is a basic async example crawling a web page. Add spider to your Cargo.toml:

```toml
[dependencies]
spider = "2"
```

And then the code:

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://choosealicense.com");

    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
    }
}
```
You can use the `Configuration` object to configure your crawler (example values shown; a sketch of a link-find callback follows after this block):

```rust
// ..
let mut website = Website::new("https://choosealicense.com");

website.configuration.respect_robots_txt = true;
website.configuration.subdomains = true;
website.configuration.tld = false;
website.configuration.delay = 0; // Defaults to 0 ms due to concurrency handling
website.configuration.request_timeout = None; // Defaults to 15000 ms
website.configuration.http2_prior_knowledge = false; // Enable if you know the webserver supports http2
website.configuration.user_agent = Some(Box::new("myapp/version".into())); // Defaults to using a random agent
website.on_link_find_callback = Some(on_link_find_callback); // Callback to run on each link find - useful for mutating the url, ex: convert the top level domain from `.fr` to `.es`.
website.configuration.blacklist_url.get_or_insert(Default::default()).push("https://choosealicense.com/licenses/".into());
website.configuration.proxies.get_or_insert(Default::default()).push("socks5://10.1.1.1:12345".into()); // Defaults to None - proxy list.
website.configuration.budget = Some(HashMap::from([("*".into(), 300), ("/licenses".into(), 10)])); // Defaults to None.
website.configuration.cron_str = "1/5 * * * * *".into(); // Defaults to empty string - Requires the `cron` feature flag
website.configuration.cron_type = CronType::Crawl; // Defaults to CronType::Crawl - Requires the `cron` feature flag
website.configuration.limit = 300; // The limit of pages crawled. By default there is no limit.
website.configuration.cache = false; // HTTP caching. Requires the `cache` or `chrome` feature flag.

website.crawl().await;
```
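For illustration, a hedged sketch of such a link-find callback that rewrites the top level domain. The `(link, html)` tuple signature below is an assumption; the exact type of `on_link_find_callback` varies by version, so check the field in your release:

```rust
use spider::CaseInsensitiveString;

// Hypothetical link-find callback: rewrite `.fr` hosts to `.es`.
// The (link, html) tuple signature is an assumption - verify it against
// the `on_link_find_callback` field in the spider version you use.
fn on_link_find_callback(
    link: CaseInsensitiveString,
    html: Option<Vec<u8>>,
) -> (CaseInsensitiveString, Option<Vec<u8>>) {
    let rewritten = link.as_ref().replacen(".fr/", ".es/", 1);
    (rewritten.into(), html)
}
```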
The builder pattern is also available, v1.33.0 and up (example values shown):

```rust
let mut website = Website::new("https://choosealicense.com");

website
    .with_respect_robots_txt(true)
    .with_subdomains(true)
    .with_tld(false)
    .with_delay(0)
    .with_request_timeout(None)
    .with_http2_prior_knowledge(false)
    .with_user_agent(Some("myapp/version"))
    .with_budget(Some(HashMap::from([("*", 300), ("/licenses", 10)])))
    .with_limit(300)
    .with_caching(false)
    .with_external_domains(None)
    .with_headers(None)
    .with_blacklist_url(Some(vec!["https://choosealicense.com/licenses/".into()]))
    .with_cron("1/5 * * * * *", Default::default())
    .with_proxies(None);
```
## Features

We have the following optional feature flags.

```toml
[dependencies]
spider = { version = "2", features = ["regex", "ua_generator"] }
```

- `ua_generator`: Enables auto generating a random real User-Agent.
- `regex`: Enables blacklisting and whitelisting paths with regex.
- `disk`: Enables SQLite hybrid disk storage to balance memory usage with no tls.
- `disk_native_tls`: Enables SQLite hybrid disk storage to balance memory usage with native tls.
- `disk_aws`: Enables SQLite hybrid disk storage to balance memory usage with aws_tls.
- `balance`: Enables balancing the CPU and memory to scale more efficiently.
- `decentralized`: Enables decentralized processing of IO, requires the spider_worker startup before crawls.
- `sync`: Subscribe to changes for Page data processing async. [Enabled by default]
- `control`: Enables the ability to pause, start, and shutdown crawls on demand.
- `full_resources`: Enables gathering all content that relates to the domain like CSS, JS, and etc.
- `serde`: Enables serde serialization support.
- `socks`: Enables socks5 proxy support.
- `glob`: Enables url glob support.
- `fs`: Enables storing resources to disk for parsing (may greatly increase performance at the cost of temp storage).
- `sitemap`: Include sitemap pages in results.
- `time`: Enables duration tracking per page.
- `cache`: Enables HTTP caching requests to disk.
- `cache_mem`: Enables HTTP caching requests to persist in memory.
- `cache_chrome_hybrid`: Enables hybrid chrome request caching between HTTP.
- `cache_openai`: Enables caching the OpenAI request. This can drastically save costs when developing AI workflows.
- `chrome`: Enables chrome headless rendering, use the env var `CHROME_URL` to connect remotely.
- `chrome_store_page`: Store the page object to perform other actions. The page may be closed.
- `chrome_screenshot`: Enables storing a screenshot of each page on crawl. Defaults the screenshots to the ./storage/ directory. Use the env variable `SCREENSHOT_DIRECTORY` to adjust the directory. To save the background set the env var `SCREENSHOT_OMIT_BACKGROUND` to false.
- `chrome_headed`: Enables headful chrome rendering.
- `chrome_headless_new`: Use headless=new to launch the browser.
- `chrome_cpu`: Disable gpu usage for chrome browser.
- `chrome_stealth`: Enables stealth mode to make it harder to be detected as a bot.
- `chrome_intercept`: Allows intercepting network requests to speed up processing.
- `chrome_remote_cache`: Use a remote cache for chrome and HTTP (hybrid) - view the chromey remote caching to learn more.
- `adblock`: Enables the ability to block ads when using chrome and chrome_intercept.
- `cookies`: Enables cookie storing and setting to use for requests.
- `real_browser`: Enables the ability to bypass protected pages.
- `cron`: Enables the ability to start cron jobs for the website.
- `spoof`: Spoof HTTP headers for the request.
- `openai`: Enables OpenAI to generate dynamic browser executable scripts. Make sure to use the env var `OPENAI_API_KEY`.
- `smart`: Enables smart mode. This runs requests as HTTP until JavaScript rendering is needed. This avoids sending multiple network requests by re-using the content.
- `encoding`: Enables handling the content with different encodings like Shift_JIS.
- `headers`: Enables the extraction of header information on each retrieved page. Adds a `headers` field to the page struct.
- `decentralized_headers`: Enables the extraction of suppressed header information of the decentralized processing of IO. This is needed if `headers` is set in both spider and spider_worker.
- `string_interner_buffer_backend`: Enables String interning using the buffer backend [default].
- `string_interner_string_backend`: Enables String interning using the string backend.
- `string_interner_bucket_backend`: Enables String interning using the bucket backend.
## Decentralization

Move processing to a worker; this drastically increases performance even if the worker is on the same machine, due to the efficient runtime split of IO work.

```toml
[dependencies]
spider = { version = "2", features = ["decentralized"] }
```

```sh
# install the worker
cargo install spider_worker
# start the worker [set the worker on another machine in prod]
RUST_LOG=info SPIDER_WORKER_PORT=3030 spider_worker
# start rust project as normal with SPIDER_WORKER env variable
SPIDER_WORKER=http://127.0.0.1:3030 cargo run --example example --features decentralized
```

The `SPIDER_WORKER` env variable takes a comma separated list of urls to set the workers, e.g. `SPIDER_WORKER=http://127.0.0.1:3030,http://127.0.0.1:3031`. If the `scrape` feature flag is enabled, use the `SPIDER_WORKER_SCRAPER` env variable to determine the scraper worker.
### Handling headers with decentralization

Without decentralization, the header values for a page are unmodified. When working with decentralized workers, each worker stores the headers retrieved for the original request with prefixed element names (`"zz-spider-r--"`). The `decentralized_headers` feature provides some useful tools to clean and extract the original header entries under `spider::features::decentralized_headers`; the prefix is exposed as `WORKER_SUPPRESSED_HEADER_PREFIX`.
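For illustration, a minimal sketch of recovering the original header names by stripping the prefix manually; the plain `(name, value)` string pairs are an assumption for brevity, and the crate's `decentralized_headers` helpers should be preferred:

```rust
// The worker prefix documented above.
const WORKER_SUPPRESSED_HEADER_PREFIX: &str = "zz-spider-r--";

/// Restore suppressed header names by stripping the worker prefix.
/// Headers without the prefix pass through untouched.
fn restore_header_names(headers: Vec<(String, String)>) -> Vec<(String, String)> {
    headers
        .into_iter()
        .map(|(name, value)| match name.strip_prefix(WORKER_SUPPRESSED_HEADER_PREFIX) {
            Some(original) => (original.to_string(), value),
            None => (name, value),
        })
        .collect()
}
```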
## Subscribe to changes

Use the subscribe method to get a broadcast channel.

```toml
[dependencies]
spider = { version = "2", features = ["sync"] }
```

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://choosealicense.com");
    let mut rx2 = website.subscribe(16).unwrap();

    // print each page url as it is crawled
    tokio::spawn(async move {
        while let Ok(page) = rx2.recv().await {
            println!("{:?}", page.get_url());
        }
    });

    website.crawl().await;
}
```
## Regex Blacklisting

Allow regex for blacklisting routes:

```toml
[dependencies]
spider = { version = "2", features = ["regex"] }
```

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://choosealicense.com");
    // blacklist any route matching the pattern
    website
        .configuration
        .blacklist_url
        .get_or_insert(Default::default())
        .push("/licenses/".into());

    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
    }
}
```
## Pause, Resume, and Shutdown

If you are performing large workloads you may need to control the crawler by enabling the `control` feature flag:

```toml
[dependencies]
spider = { version = "2", features = ["control"] }
```

```rust
extern crate spider;

use spider::tokio;
use spider::utils::{pause, resume, shutdown};
use spider::website::Website;

#[tokio::main]
async fn main() {
    let url = "https://choosealicense.com/";
    let mut website = Website::new(url);

    tokio::spawn(async move {
        pause(url).await;
        tokio::time::sleep(tokio::time::Duration::from_millis(5000)).await;
        resume(url).await;
        // shutdown the crawl if it takes longer than 15s
        tokio::time::sleep(tokio::time::Duration::from_millis(15000)).await;
        shutdown(url).await;
    });

    website.crawl().await;
}
```
## Scrape/Gather HTML

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://choosealicense.com");
    // scrape stores the HTML of each page visited
    website.scrape().await;

    for page in website.get_pages().unwrap().iter() {
        println!("{}", page.get_html());
    }
}
```
## Cron Jobs

Use cron jobs to run crawls continuously at any time.

```toml
[dependencies]
spider = { version = "2", features = ["sync", "cron"] }
```

```rust
extern crate spider;

use spider::tokio;
use spider::website::{run_cron, Website};

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://choosealicense.com");
    // set the cron schedule, or use the builder method `website.with_cron`
    website.configuration.cron_str = "1/5 * * * * *".into();

    // take ownership of the website and start the cron runner
    let runner = run_cron(website).await;

    // let the runner crawl on schedule for a while, then stop it
    tokio::time::sleep(tokio::time::Duration::from_secs(10)).await;
    let _ = runner.stop().await;
}
```
## Chrome

Connecting to Chrome can be done using the env variable `CHROME_URL`; if no connection is found, a new browser is launched on the system. You do not need a chrome installation if you are connecting remotely. If you are not scraping content for downloading, use the feature flag `chrome_intercept` to possibly speed up requests using Network Interception.

```toml
[dependencies]
spider = { version = "2", features = ["chrome", "chrome_intercept"] }
```

You can use `website.crawl_concurrent_raw` to perform a crawl without chromium when needed. Use the feature flag `chrome_headed` to enable headful browser usage if needed to debug.

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://choosealicense.com");
    // pages are rendered through chrome when the `chrome` feature flag is enabled
    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
    }
}
```
## Caching

Enabling HTTP cache can be done with the feature flag `cache` or `cache_mem`.

```toml
[dependencies]
spider = { version = "2.0.12", features = ["cache"] }
```

You need to set `website.configuration.cache` to `true` to enable it as well.

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://choosealicense.com");
    website.configuration.cache = true;

    website.crawl().await; // The first crawl performs the requests and caches them.
    website.crawl().await; // Subsequent crawls are served from the cache.
}
```
## Smart Mode

Intelligently run crawls using HTTP and JavaScript rendering when needed. The best of both worlds to maintain speed and extract every page. This requires a chrome connection or browser installed on the system.

```toml
[dependencies]
spider = { version = "2.0.12", features = ["smart"] }
```

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://choosealicense.com");
    // runs as plain HTTP and falls back to JavaScript rendering when needed
    website.crawl_smart().await;
}
```
## OpenAI

Use OpenAI to generate dynamic scripts to drive the browser with the feature flag `openai`.

```toml
[dependencies]
spider = { version = "2.0.12", features = ["openai"] }
```

```rust
extern crate spider;

use spider::configuration::GPTConfigs;
use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    // requires the env var OPENAI_API_KEY
    let mut website = Website::new("https://www.google.com")
        .with_openai(Some(GPTConfigs::new("gpt-4o", "Search for Movies", 500)))
        .build()
        .unwrap();

    website.crawl().await;
}
```
## Depth

Set a depth limit to prevent forwarding to deeply nested pages.

```toml
[dependencies]
spider = { version = "2.0.12" }
```

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    // limit the crawl to links at most 3 hops from the start url
    let mut website = Website::new("https://choosealicense.com")
        .with_depth(3)
        .build()
        .unwrap();

    website.crawl().await;
}
```
## Reusable Configuration

It is possible to re-use the same configuration for a crawl list.

```rust
extern crate spider;

use spider::configuration::Configuration;
use spider::{tokio, website::Website};
use std::io::Error;
use std::time::Instant;

const CAPACITY: usize = 5;
// placeholder targets - replace with your own crawl list
const CRAWL_LIST: [&str; CAPACITY] = [
    "https://example.com",
    "https://example.org",
    "https://example.net",
    "https://www.example.com",
    "https://www.example.org",
];

#[tokio::main]
async fn main() -> Result<(), Error> {
    // build one configuration shared across every crawl
    let config = Configuration::new()
        .with_user_agent(Some("SpiderBot"))
        .with_respect_robots_txt(true)
        .build();

    let mut handles = Vec::with_capacity(CAPACITY);

    for url in CRAWL_LIST {
        let mut website = Website::new(url)
            .with_config(config.to_owned())
            .build()
            .expect("valid website url");

        handles.push(tokio::spawn(async move {
            let start = Instant::now();
            website.crawl().await;
            println!(
                "{} - crawled {} pages in {:?}",
                url,
                website.get_links().len(),
                start.elapsed()
            );
        }));
    }

    for handle in handles {
        let _ = handle.await;
    }

    Ok(())
}
```
## Blocking

If you need a blocking sync implementation use a version prior to v1.12.0.
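Alternatively, a minimal sketch of driving the async crawl from synchronous code on current versions by blocking on a Tokio runtime (assuming the bundled `spider::tokio` re-export has the runtime features enabled):

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

fn main() {
    // build a runtime explicitly instead of using #[tokio::main]
    let rt = tokio::runtime::Runtime::new().expect("runtime to start");

    // block the current thread until the crawl completes
    rt.block_on(async {
        let mut website = Website::new("https://choosealicense.com");
        website.crawl().await;
    });
}
```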