ArchWiki CLI 📖
A CLI tool to read pages from the ArchWiki
Table of contents
- Installation
- Usage
- File locations
Installation
Currently, you can only install this tool from crates.io or build it from source.
After you have finished the installation, you should run the `update-all` command.
crates.io
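Installation from crates.io is the usual Cargo one-liner; the crate name below is a placeholder, substitute the name the tool is actually published under:

```shell
# Install the latest release from crates.io
# (crate name "archwiki-cli" is a placeholder)
cargo install archwiki-cli
```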
Source
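Building from source follows the standard Cargo workflow; the repository URL below is a placeholder:

```shell
# Clone the repository (URL is a placeholder) and install from the checkout
git clone https://github.com/<user>/archwiki-cli
cd archwiki-cli
cargo install --path .
```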
Usage
Reading Pages
Basic request
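A basic request might look like the following; the binary and subcommand names (`archwiki-cli`, `read-page`) are placeholders, check the tool's `--help` output for the real ones:

```shell
# Fetch a page and print its contents to stdout
archwiki-cli read-page "Installation guide"
```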
Using a different format
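Assuming a format option exists (the binary, subcommand, and flag names here are all guesses), switching the output format could look like:

```shell
# Request the page rendered as markdown instead of the default format
# (all names below are placeholders)
archwiki-cli read-page --format markdown "Installation guide"
```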
Caching
By default, pages are cached in the file system after they are fetched, and subsequent requests for that page then use the cache. The cache is invalidated if the cache file hasn't been updated in the last 14 days.
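The 14-day rule can be sketched in plain shell; the cache path below is made up for illustration, the real tool resolves its own cache location:

```shell
# Decide whether a cached page is still fresh. A file counts as stale
# once it has not been modified for more than 14 days.
# The path is a made-up example, not the tool's real cache layout.
cache_file="$HOME/.cache/archwiki/Installation_guide"
if [ ! -f "$cache_file" ] || [ -n "$(find "$cache_file" -mtime +14)" ]; then
    echo "stale or missing: fetch the page again"
else
    echo "fresh: use the cached copy"
fi
```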
404 page not found (-̥̥̥n-̥̥̥ )
If the page you are searching for doesn't exist, a list of the pages whose names are most similar to the one you asked for is output instead of the page content. The categories are stored locally and can be fetched with the `update-all` command.
Unlike the output for a page that does exist, this list is written to stderr instead of stdout. This makes it possible to write a program that detects when no page was found and uses the stderr output to suggest to the user what they might have meant to type.
An example shell script that does something like this is available in the repository under the name example.sh.
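The core of such a script can be sketched like this; the `archwiki-cli read-page` invocation is a placeholder, and the `ARCHWIKI_BIN` variable is introduced here purely for illustration:

```shell
# Run the page lookup, capturing only stderr. On failure, the captured
# text is the suggestion list described above.
suggest_on_miss() {
    page="$1"
    # 2>&1 >/dev/null routes stderr into the captured stream and discards stdout
    if ! suggestions=$("${ARCHWIKI_BIN:-archwiki-cli}" read-page "$page" 2>&1 >/dev/null); then
        echo "Page '$page' not found. Did you mean:"
        echo "$suggestions"
    fi
}

# Example call (prints suggestions only when the lookup fails):
# suggest_on_miss "Instalation guide"
```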
Downloading page info
The page names used for suggestions are stored locally to avoid having to scrape the entire table of contents of the ArchWiki on every command.
Updating everything
Be warned: since this scrapes several thousand links, it is very slow (-, - )…zzzZZ
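The full refresh is a single command; only the `update-all` name comes from the text above, the binary name is a placeholder:

```shell
# Re-scrape all categories and their page names; expect this to take a while
archwiki-cli update-all
```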
Updating a specific category
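Assuming the category-scoped update takes the category name as an argument (the subcommand name and argument shape below are guesses, not confirmed by the text), it might look like:

```shell
# Refresh the stored page names for a single category only
# (binary and subcommand names are placeholders)
archwiki-cli update-category "Installation process"
```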
Listing pages and categories
This outputs a styled tree of categories and pages, but if you need an easily parseable list for another program to consume, you can use the -f flag to flatten the output into a newline-separated list that contains only the names of all pages.
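The flattened form combines well with standard line-oriented tools; only the -f flag comes from the text above, the binary and subcommand names are placeholders:

```shell
# Flatten the category tree into one page name per line and filter it
archwiki-cli list -f | grep -i 'install'
```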
File locations
All file paths are resolved using the directories crate.
The page file
See data_local_dir for more information.
Page cache files
See cache_dir for more information.