# spider-lib
spider-lib is an async web scraping framework for Rust with a layout that will feel familiar if you have used Scrapy before: spiders define crawl logic, the runtime schedules and downloads requests, middleware can shape traffic, and pipelines handle extracted items.
The workspace is split into small crates, but the root crate is the easiest place to start. It re-exports the common pieces through `spider_lib::prelude::*`, so a normal application does not need to wire the lower-level crates by hand.
## Why this crate exists
spider-lib is meant for projects that need more structure than a one-off reqwest + scraper script:
- multiple follow-up requests from each page
- shared crawl state
- middleware for retries, rate limiting, cookies, robots, or proxying
- pipelines for validation, deduplication, and output
- typed item schemas that can drive validation and export mapping
- a runtime that keeps the crawling loop organized
If you only need to fetch one or two pages, the lower ceremony of plain reqwest may still be a better fit.
By default, the built-in reqwest downloader now sends a balanced set of browser-like headers when a request does not already define them. That helps HTML crawling behave more like a normal browser without taking control away from spiders or middleware that set headers explicitly.
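The merge behavior can be pictured as defaults that fill only the header slots a request leaves empty. This is a std-only sketch of that idea, not spider-lib's actual implementation; the function name and default values are illustrative:

```rust
use std::collections::HashMap;

/// Apply browser-like default headers, but only where the request
/// has not already set a value (illustrative, not the crate's code).
fn apply_default_headers(request_headers: &mut HashMap<String, String>) {
    let defaults = [
        ("User-Agent", "Mozilla/5.0 (compatible; spider-lib)"),
        ("Accept", "text/html,application/xhtml+xml"),
        ("Accept-Language", "en-US,en;q=0.9"),
    ];
    for (name, value) in defaults {
        // entry().or_insert_with() leaves explicitly set headers untouched.
        request_headers
            .entry(name.to_string())
            .or_insert_with(|| value.to_string());
    }
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert("User-Agent".to_string(), "my-custom-agent".to_string());
    apply_default_headers(&mut headers);
    // The explicit User-Agent wins; missing headers get defaults.
    assert_eq!(headers["User-Agent"], "my-custom-agent");
    assert_eq!(headers["Accept-Language"], "en-US,en;q=0.9");
    println!("headers after merge: {headers:?}");
}
```

This is why spiders and middleware that set headers explicitly keep full control: defaults never overwrite an existing value.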
## Installation
```toml
[dependencies]
spider-lib = "3.0.2"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
```
`serde` and `serde_json` are required when you use `#[scraped_item]`.
## Recommended path
For most projects, the smoothest path is:
- start with `use spider_lib::prelude::*;`
- implement `Spider`
- build a runtime with `CrawlerBuilder`
- add middleware for HTTP behavior
- add pipelines for item shaping and output
Only drop to the lower-level crates when you need deeper runtime control or want to publish reusable extensions.
If you are coming from Scrapy, start with the dedicated migration guide in `MIGRATION.md` before porting an existing spider.
## Typed data workflow
`#[scraped_item]` now generates typed schema metadata in addition to the existing `ScrapedItem` implementation. That schema can drive validation, export ordering, and schema-version tagging without forcing you to hand-maintain JSON field lists.
```rust
use spider_lib::prelude::*;

// Method arguments were lost in extraction; `/* … */` marks the gaps.
let crawler = CrawlerBuilder::new(/* spider */)
    .crawl_shape_preset(/* … */)
    .add_pipeline(/* … */)
    .add_pipeline(/* … */)
    .build()
    .await?;
```
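The idea behind the generated schema can be sketched without the macro: a typed item carries an ordered field list that export code consults, so output columns stay stable. This std-only illustration is not the metadata `#[scraped_item]` actually emits; all names here are hypothetical:

```rust
/// Hand-written stand-in for what a derived schema might carry
/// (field order plus a schema version); illustrative only.
struct ItemSchema {
    name: &'static str,
    version: u32,
    fields: &'static [&'static str],
}

struct Book {
    title: String,
    price: f64,
}

impl Book {
    const SCHEMA: ItemSchema = ItemSchema {
        name: "Book",
        version: 1,
        fields: &["title", "price"],
    };

    /// Export values in schema order so CSV/JSON columns never shuffle.
    fn export_row(&self) -> Vec<String> {
        Book::SCHEMA
            .fields
            .iter()
            .map(|f| match *f {
                "title" => self.title.clone(),
                "price" => self.price.to_string(),
                _ => unreachable!(),
            })
            .collect()
    }
}

fn main() {
    let book = Book { title: "Dune".into(), price: 9.99 };
    assert_eq!(Book::SCHEMA.fields, &["title", "price"]);
    assert_eq!(book.export_row(), vec!["Dune".to_string(), "9.99".to_string()]);
    println!("{} v{}: {:?}", Book::SCHEMA.name, Book::SCHEMA.version, book.export_row());
}
```

The macro removes exactly this kind of hand-maintained field list.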
If you are crawling APIs or want a minimal request shape, disable the default header profile with:
```rust
// The constructor argument and the exact signature of
// `browser_like_headers` were lost in extraction; the boolean below is
// an assumption, so check the crate docs for the real signature.
let crawler = CrawlerBuilder::new(/* spider */)
    .browser_like_headers(false)
    .build()
    .await?;
```
## Quick start
This is the smallest useful shape of a spider in the current API:
```rust
use spider_lib::prelude::*;

struct MySpider;

impl Spider for MySpider {
    // The body of the original example was lost in extraction. Per the
    // docs, `start_requests` seeds the crawl and the async `parse`
    // method turns a `Response` into `ParseOutput`.
}
```
## Run the examples
The repository ships with maintained examples that you can run as-is:
- `minimal` is the quickest smoke example.
- `sitemap` shows runtime-managed sitemap discovery and page metadata extraction. That example crawls books.toscrape.com and prints the final page and item counts.
- `request_priority` is a local scheduler demo that shows higher-priority requests being dequeued first without needing network access.

There are also several feature-gated showcase examples:

- `showcase_state` demonstrates facade usage plus shared state primitives.
- `showcase_middleware` focuses on request/response middleware composition.
- `showcase_pipelines` writes the same scraped item through the maintained output pipelines.
- `showcase_runtime` focuses on builder tuning, live stats, and checkpoint configuration.
- `books_live` writes CSV output to `output/books_live.csv`.
- `kusonime` writes streaming JSON output to `output/kusonime-stream.json`.
These examples depend on public sites being reachable, so they are good smoke runs but still network-dependent.
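The core idea behind the `request_priority` demo is a max-priority queue: requests carry a priority, and the scheduler dequeues the highest first. A std-only sketch with `BinaryHeap` (illustrative; this is not the crate's scheduler, and `QueuedRequest` is a made-up type):

```rust
use std::collections::BinaryHeap;

// Derived Ord compares fields in declaration order, so `priority`
// dominates and `url` only breaks ties deterministically.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct QueuedRequest {
    priority: i32,
    url: &'static str,
}

fn main() {
    let mut queue = BinaryHeap::new();
    queue.push(QueuedRequest { priority: 0, url: "https://example.com/page" });
    queue.push(QueuedRequest { priority: 10, url: "https://example.com/login" });
    queue.push(QueuedRequest { priority: 5, url: "https://example.com/api" });

    // BinaryHeap is a max-heap, so the highest priority pops first.
    let order: Vec<i32> = std::iter::from_fn(|| queue.pop())
        .map(|r| r.priority)
        .collect();
    assert_eq!(order, vec![10, 5, 0]);
    println!("dequeue order: {order:?}");
}
```

This is also why the demo needs no network access: priority ordering is purely a scheduler property.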
## How the crawl loop fits together
At a high level:
- `Spider::start_requests` seeds the crawl.
- The scheduler accepts and deduplicates requests.
- The downloader performs the HTTP work.
- Middleware can inspect or modify requests and responses.
- `Spider::parse` turns a `Response` into `ParseOutput`.
- Pipelines process emitted items.
That separation is what makes the workspace easier to extend than a single-file scraper.
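The separation can be sketched as a tiny synchronous loop with a dedup set. Every name below is an illustrative stand-in; the real runtime is async and each stage is pluggable:

```rust
use std::collections::{HashSet, VecDeque};

// Illustrative stand-ins for the runtime's request/response types.
type Url = String;
struct Response { url: Url, body: String }

/// Fake downloader: returns a canned body instead of doing HTTP.
fn download(url: &Url) -> Response {
    Response { url: url.clone(), body: format!("<html>{url}</html>") }
}

/// Fake parse step: emits one item per page and one follow-up link
/// from the seed page only.
fn parse(response: &Response) -> (Vec<String>, Vec<Url>) {
    let item = format!("item from {} ({} bytes)", response.url, response.body.len());
    let next = if response.url.ends_with("/a") {
        vec!["https://example.com/b".to_string()]
    } else {
        vec![]
    };
    (vec![item], next)
}

fn main() {
    let mut queue: VecDeque<Url> =
        VecDeque::from(vec!["https://example.com/a".to_string()]);
    let mut seen: HashSet<Url> = queue.iter().cloned().collect();
    let mut items = Vec::new();

    // Scheduler loop: dequeue, download, parse, dedup follow-ups.
    while let Some(url) = queue.pop_front() {
        let response = download(&url);
        let (new_items, links) = parse(&response);
        items.extend(new_items); // pipelines would process items here
        for link in links {
            if seen.insert(link.clone()) { // scheduler-side deduplication
                queue.push_back(link);
            }
        }
    }
    assert_eq!(items.len(), 2);
    println!("scraped {} items", items.len());
}
```

Each `fn` here corresponds to a swappable component in the workspace, which is exactly what makes the real crates easier to extend than a single-file scraper.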
## Where to add behavior
- Put page extraction logic in `Spider::parse`.
- Put shared crawl state in `Spider::State`.
- Put cross-cutting request/response behavior in middleware.
- Put item cleanup, validation, deduplication, and output in pipelines.
- Put transport-specific behavior in a custom downloader only when middleware is too high-level.
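Cross-cutting request behavior composes as an ordered chain that every request passes through. A std-only sketch of that shape (the trait, types, and middlewares here are hypothetical; spider-lib's real middleware API is async and richer):

```rust
// Illustrative request type and middleware trait.
struct Request { url: String, headers: Vec<(String, String)> }

trait Middleware {
    fn process_request(&self, request: &mut Request);
}

/// Stamps a User-Agent header onto every outgoing request.
struct UserAgentMiddleware { agent: String }
impl Middleware for UserAgentMiddleware {
    fn process_request(&self, request: &mut Request) {
        request.headers.push(("User-Agent".to_string(), self.agent.clone()));
    }
}

/// Upgrades plain-HTTP URLs to HTTPS before download.
struct HttpsUpgradeMiddleware;
impl Middleware for HttpsUpgradeMiddleware {
    fn process_request(&self, request: &mut Request) {
        if let Some(rest) = request.url.strip_prefix("http://") {
            request.url = format!("https://{rest}");
        }
    }
}

fn run_chain(chain: &[Box<dyn Middleware>], request: &mut Request) {
    for middleware in chain {
        middleware.process_request(request); // applied in registration order
    }
}

fn main() {
    let chain: Vec<Box<dyn Middleware>> = vec![
        Box::new(HttpsUpgradeMiddleware),
        Box::new(UserAgentMiddleware { agent: "spider-lib-demo".to_string() }),
    ];
    let mut request = Request { url: "http://example.com".to_string(), headers: vec![] };
    run_chain(&chain, &mut request);
    assert_eq!(request.url, "https://example.com");
    assert_eq!(request.headers[0].1, "spider-lib-demo");
    println!("final request: {} with {} header(s)", request.url, request.headers.len());
}
```

Because the chain runs in registration order, middleware ordering is itself a design decision (for example, retries usually wrap rate limiting, not the reverse).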
## Feature flags
Root crate features mirror the lower-level crates:
| Feature | What it enables |
|---|---|
| `core` | Base runtime support. Enabled by default. |
| `live-stats` | Live terminal crawl stats. |
| `middleware-cache` | HTTP response cache middleware. |
| `middleware-autothrottle` | Adaptive throttling middleware. |
| `middleware-proxy` | Proxy middleware. |
| `middleware-user-agent` | User-agent middleware. |
| `middleware-robots` | robots.txt middleware. |
| `middleware-cookies` | Cookie middleware. |
| `pipeline-csv` | CSV output pipeline. |
| `pipeline-json` | JSON array output pipeline. |
| `pipeline-jsonl` | JSON Lines output pipeline. |
| `pipeline-sqlite` | SQLite output pipeline. |
| `pipeline-stream-json` | Streaming JSON output pipeline. |
| `checkpoint` | Checkpoint and resume support. |
| `cookie-store` | Cookie store integration in core state. |
## Runtime discovery
The crawler can now add follow-up requests without manual spider boilerplate for common discovery flows:
- `DiscoveryMode::HtmlLinks` for same-site page links
- `DiscoveryMode::HtmlAndMetadata` for page links plus injected page metadata
- `DiscoveryMode::FullResources` for scripts, stylesheets, images, and other resources
- `DiscoveryMode::SitemapOnly` for sitemap-driven crawling
Example:
```rust
// Method arguments were lost in extraction; `/* … */` marks the gaps.
let crawler = CrawlerBuilder::new(/* spider */)
    .discovery_mode(/* DiscoveryMode::… */)
    .enable_sitemaps(/* … */)
    .extract_page_metadata(/* … */)
    .build()
    .await?;
```
Runtime discovery can also be filtered more aggressively when you want a rule-like crawl shape:
```rust
// Method arguments were lost in extraction; `/* … */` marks the gaps.
let crawler = CrawlerBuilder::new(/* spider */)
    .discovery_mode(/* DiscoveryMode::… */)
    .discover_allow_domains(/* … */)
    .discover_allow_path_prefixes(/* … */)
    .discover_deny_patterns(/* … */)
    .discover_allowed_tags(/* … */)
    .discover_allowed_attributes(/* … */)
    .build()
    .await?;
```
For more structured crawling, you can define named discovery rules and route parse logic without manual metadata matching:
```rust
// The rule constructor's type name and all arguments were lost in
// extraction; `DiscoveryRule` and `/* … */` are placeholders.
let listing_rule = DiscoveryRule::new(/* … */)
    .with_allow_path_prefixes(/* … */)
    .with_allowed_tags(/* … */)
    .with_allowed_attributes(/* … */)
    .with_follow_allow_path_prefixes(/* … */);

let crawler = CrawlerBuilder::new(/* spider */)
    .discovery_mode(/* DiscoveryMode::… */)
    .add_discovery_rule(listing_rule)
    .build()
    .await?;

// later in parse:
route_by_rule!(/* … */);
```
Example:
```toml
[dependencies]
spider-lib = { version = "3.0.2", features = ["live-stats", "pipeline-csv"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
```
## Workspace map
- `spider-core`: crawler runtime, builder, scheduler, state, and stats
- `spider-downloader`: downloader trait and the default reqwest implementation
- `spider-macro`: `#[scraped_item]`
- `spider-middleware`: built-in middleware implementations
- `spider-pipeline`: item pipelines and output backends
- `spider-util`: shared request, response, item, and error types
## When to use the lower-level crates directly
Stay on spider-lib if you are building an application spider.
Reach for individual crates when you are:
- publishing reusable middleware, pipeline, or downloader extensions
- composing the runtime more explicitly
- depending on shared types without pulling in the whole facade crate
The most common next step down is spider-core, which
keeps the runtime API but drops the facade re-exports.
## Status
The current workspace builds successfully. That is a useful baseline when updating docs or examples.
## License
MIT. See LICENSE.