# *story-dl*


*story-dl* is a program that lets you download stories from a number of different sites.

## Note


This scraper does not support scraping characters, pairings, tags, warnings, or anything other than the required story information.
At least, not yet.

## Supported Sites


- Archive Of Our Own
- FanFiction
- (yeah, I need to add more...)

## Dependencies


- Rust (build)
- C/C++ compiler (build)
- OpenSSL (Unix/Linux and macOS) (runtime)
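
With these installed, the command-line binary can be built from a source checkout with a standard Cargo release build. This is a sketch of the usual Cargo workflow; the binary path assumes Cargo's default output layout and binary name:

```
cargo build --release
# The resulting binary lives under Cargo's default output directory:
./target/release/story-dl -u <URL> -o epub
```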

## Command


### Usage


*story-dl* tries to keep its command line simple: enough options to change what you need, but not so many that it takes 20 man pages to use.

#### Examples


Download story as EPub:
```
story-dl -u <URL> -o epub
```

Download all in import file as EPub:
```
story-dl -f import.json -o epub
```

`import.json`:
```json
[
    "<URL>",
    {
        "url": "<URL>"
    }
]
```
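
Each entry in the import file is either a bare URL string or an object with a `url` key. If you want to read the same format yourself, here is a minimal sketch using `serde_json` (an assumption for illustration, not part of *story-dl*'s API) that collects the URLs from both entry shapes:

```rust
use serde_json::Value;

fn main() {
    // Read the import file shown above.
    let data = std::fs::read_to_string("import.json").expect("cannot read import.json");
    let entries: Vec<Value> = serde_json::from_str(&data).expect("import.json is not valid JSON");

    // Each entry is either a bare URL string or an object with a "url" key.
    let urls: Vec<&str> = entries
        .iter()
        .filter_map(|entry| match entry {
            Value::String(url) => Some(url.as_str()),
            Value::Object(map) => map.get("url").and_then(Value::as_str),
            _ => None,
        })
        .collect();

    for url in &urls {
        println!("{}", url);
    }
}
```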

## Library


### Usage


Add this to your `Cargo.toml`.

```toml
story_dl = { version = "0.2", default-features = false }
```

This adds *story-dl* while disabling the crates that are only required by the command-line version.

Choose which website to scrape; let's use `FanFiction` for this example.

```rust
// Import required structs.
use story_dl::FanFiction;

fn main() {
    // Create an instance of FanFiction.
    // This houses all the CSS selectors required for scraping,
    // so it's better to keep a single 'global' instance rather than recreating it.
    let ffn = FanFiction::new();

    // Convert story url string into the required Uri.
    let url = "<URL>".parse().expect("Not a valid URL");

    // Start the scraper, this will return the finished story.
    let story = ffn.scrape(&url).expect("Error scraping story");

    // Scraped information
    println!("Title: {}", story.name);
    println!("Authors: {}", story.authors.join(", "));
    println!("Chapters: {}", story.chapters);
}
```
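
If you would rather propagate errors than panic, the same flow can be wrapped in a `Result`. This is only a sketch: it assumes the URL-parse and scrape error types can be boxed as `std::error::Error`, which the snippet above does not guarantee, and `scrape_story` is a hypothetical helper name.

```rust
use std::error::Error;

use story_dl::FanFiction;

// Hypothetical helper; assumes both error types convert into Box<dyn Error>.
fn scrape_story(url_str: &str) -> Result<(), Box<dyn Error>> {
    let ffn = FanFiction::new();

    // Propagate an invalid URL instead of panicking.
    let url = url_str.parse()?;

    // Propagate scraping failures to the caller.
    let story = ffn.scrape(&url)?;

    println!("Title: {}", story.name);
    println!("Authors: {}", story.authors.join(", "));

    Ok(())
}

fn main() {
    if let Err(err) = scrape_story("<URL>") {
        eprintln!("failed to scrape: {}", err);
    }
}
```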