spider 1.7.3

Multithreaded web crawler written in Rust.

Dependencies

On Debian and other DEB-based distributions:

$ sudo apt install openssl libssl-dev

On Fedora and other RPM-based distributions:

$ sudo dnf install openssl-devel

Usage

Add this dependency to your Cargo.toml file.

[dependencies]
spider = "1.7.3"

Then you'll be able to use the library. Here is a simple example:

use spider::website::Website;

fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.crawl();

    for page in website.get_pages() {
        println!("- {}", page.get_url());
    }
}

You can use the Configuration object to configure your crawler:

// ..
let mut website: Website = Website::new("https://choosealicense.com");
website.configuration.blacklist_url.push("https://choosealicense.com/licenses/".to_string());
website.configuration.respect_robots_txt = true;
website.configuration.verbose = true; // Defaults to false
website.configuration.delay = 2000; // Delay between requests in ms (defaults to 250)
website.configuration.concurrency = 10; // Defaults to the number of CPUs available
website.configuration.user_agent = "myapp/version".to_string(); // Defaults to spider/x.y.z, where x.y.z is the library version
website.on_link_find_callback = |s| { println!("link target: {}", s); s }; // Callback to run on each link found

website.crawl();
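The on_link_find_callback shown above is just a function from String to String: it receives each link as it is discovered and returns the (possibly rewritten) link to crawl. A minimal stand-alone sketch of such a callback, independent of the crawler itself (the trailing-slash normalization is only an illustrative rewrite, not something the library does for you):

```rust
// A callback with the String -> String shape used by
// on_link_find_callback above: log each discovered link and
// return the (possibly rewritten) link to be crawled.
fn log_and_normalize(link: String) -> String {
    println!("link target: {}", link);
    // Example rewrite: strip a trailing slash so duplicate URLs
    // collapse to a single entry.
    link.trim_end_matches('/').to_string()
}

fn main() {
    // Stand-alone demonstration, no crawling involved.
    let out = log_and_normalize("https://choosealicense.com/licenses/".to_string());
    println!("{}", out); // https://choosealicense.com/licenses
}
```

Because the closure in the configuration example captures nothing, it coerces to a plain function pointer, so a named function like the one above can be assigned the same way.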