Crate fast_blurhash


Fast(er) BlurHash

Provides a faster implementation of the BlurHash algorithm for encoding and decoding BlurHashes. It minimizes the number of allocated arrays to reduce the memory footprint. The base83 encode and decode are also both very fast!

Example

Generating a blurhash from an image:

use fast_blurhash::compute_dct;

let (width, height) = todo!("Get image width and height");
let image: Vec<u32> = todo!("Load the image");
let blurhash = compute_dct(&image, width, height, 3, 4).into_blurhash();

Generating an image from a blurhash:

use fast_blurhash::decode;

let blurhash = "LlMF%n00%#MwS|WCWEM{R*bbWBbH";
let image: Vec<u32> = decode(&blurhash, 1.).unwrap().to_rgba(32, 32);

Custom color types

fast-blurhash provides an easy way to convert custom pixel types into the linear color space used by the algorithm. Simply implement the AsLinear trait on your type!

Example
use fast_blurhash::{convert::{AsLinear, Linear, srgb_to_linear}, compute_dct};

struct MyColor {
    r: u8,
    g: u8,
    b: u8
}

impl AsLinear for MyColor {
    fn as_linear(&self) -> Linear {
        [srgb_to_linear(self.r), srgb_to_linear(self.g), srgb_to_linear(self.b)]
    }
}

// And then compute the blurhash!
let (width, height) = todo!("Get image width and height");
let image: Vec<MyColor> = todo!("Load the image");
let blurhash = compute_dct(&image, width, height, 3, 4).into_blurhash();

Several conversion functions are available, such as sRGB to linear; check out the convert module.
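
For example, here is a minimal sketch of a single-channel round trip through these helpers (the printed values are an assumption, not documented behavior):

use fast_blurhash::convert::{srgb_to_linear, linear_to_srgb};

// Convert one 8-bit sRGB channel into linear space and back again.
let linear = srgb_to_linear(200);
let srgb = linear_to_srgb(linear);
// The round trip is expected to land on (or very close to) the original value.
println!("200 -> {linear} -> {srgb}");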

You can also generate an image using your custom type:

use fast_blurhash::{decode, convert::linear_to_srgb};

struct MyColor {
    r: u8,
    g: u8,
    b: u8
}

let blurhash = "LlMF%n00%#MwS|WCWEM{R*bbWBbH";
let image: Vec<MyColor> = decode(&blurhash, 1.).unwrap().to_image(32, 32, |l| MyColor {
    r: linear_to_srgb(l[0]),
    g: linear_to_srgb(l[1]),
    b: linear_to_srgb(l[2])
});

Using iterators

You can also use the iterator version of the compute_dct function, compute_dct_iter, to avoid allocating extra memory for the type conversion. This is especially useful with nested types, and it has no performance overhead. However, make sure the iterator yields enough items, or the result of the DCT will be incorrect.

Example
use fast_blurhash::{convert::{AsLinear, Linear, srgb_to_linear}, compute_dct_iter};

struct Color(u8, u8, u8);

impl AsLinear for &Color {
    fn as_linear(&self) -> Linear {
        [srgb_to_linear(self.0), srgb_to_linear(self.1), srgb_to_linear(self.2)]
    }
}

// And then compute the blurhash!
let (width, height) = todo!("Get image width and height");
let image: Vec<Vec<Color>> = todo!("Load the image");
let blurhash = compute_dct_iter(image.iter().flatten(), width, height, 3, 4).into_blurhash();

Modules

  • base83: Base83 encode and decode utilities
  • convert: Color conversion and BlurHash-specific encoding utilities

Structs

  • DCTResult is the result of a Discrete Cosine Transform performed on an image with a specific number of X and Y components. It stores the frequency and location of colors within the image.
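
For context, a minimal sketch (assuming u32 pixels are accepted as in the crate-level example above) showing DCTResult as the intermediate value on both the encoding and decoding paths:

use fast_blurhash::{compute_dct, decode, DCTResult};

// Encoding path: DCT of a tiny 4x4 white image, then its blurhash string.
let image: Vec<u32> = vec![0xFFFFFFFF; 16];
let dct: DCTResult = compute_dct(&image, 4, 4, 3, 3);
let blurhash = dct.into_blurhash();

// Decoding path: a blurhash back into a DCTResult, then into RGBA pixels.
let dct: DCTResult = decode(&blurhash, 1.).unwrap();
let pixels: Vec<u32> = dct.to_rgba(8, 8);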

Enums

Functions

  • Compute the Discrete Cosine Transform on an image in linear space. The slice must be long enough (it must have at least width * height items).
  • Compute the Discrete Cosine Transform on an image in linear space. The iterator must be long enough (it must have at least width * height items).
  • Decode a blurhash to retrieve the DCT results (containing the color frequencies and their placement) using the wolt/blurhash format. This function may allocate a vector of length up to 81, contained in the returned DCTResult struct.
  • Compute the blurhash string from the DCT result using the wolt/blurhash format. This function allocates a string of length (1 + 1 + 4 + 2 * components) where components is the total number of components (components_x * components_y).
  • Compute an iteration of the inverse DCT for every component on the pixel (x, y) and store the color of that pixel into col. Note that the currents slice must be long enough (at least x_comps * y_comps items).
  • Compute an iteration of the DCT for every component on the pixel (x, y) that has the color col in linear space. Note that the currents slice must be long enough (at least x_comps * y_comps items) and that the pixel coordinates (x, y) are between 0 and 1.
  • Normalize the currents in place by a predefined quantization table for the wolt/blurhash encoding algorithm (1 for DC and 2 for ACs) and return the absolute maximum value across every channel of every current. Note that currents must have one or more items and that len is the total number of pixels of the image (width * height).
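
As an end-to-end illustration of the decode and encode entries above, here is a minimal sketch that decodes an existing blurhash and re-encodes the recovered DCTResult via the into_blurhash convenience shown earlier (an identical output string is assumed likely, but not guaranteed here):

use fast_blurhash::decode;

// Decode an existing blurhash, then re-encode the recovered DCT coefficients.
let blurhash = "LlMF%n00%#MwS|WCWEM{R*bbWBbH";
let reencoded = decode(blurhash, 1.).unwrap().into_blurhash();
println!("{blurhash} -> {reencoded}");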