Crate concision_transformer


Transformers

Resources

Re-exports

pub use self::attention::scaled_dot_product_attention;
pub use self::attention::AttentionHead;
pub use self::params::*;
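The re-exported `scaled_dot_product_attention` implements the standard attention primitive, Attention(Q, K, V) = softmax(QKᵀ / √d_k)·V. A minimal standalone sketch of that computation on plain `Vec<Vec<f64>>` matrices follows; this is an illustration of the technique only, not the crate's implementation, whose function likely operates on its own tensor types and may differ in signature:

```rust
// Sketch of scaled dot-product attention:
//   Attention(Q, K, V) = softmax(Q Kᵀ / √d_k) V
// Matrices are row-major Vec<Vec<f64>>; illustrative only.

fn matmul_transpose(q: &[Vec<f64>], k: &[Vec<f64>]) -> Vec<Vec<f64>> {
    // Computes Q · Kᵀ: scores[i][j] = dot(q[i], k[j]).
    q.iter()
        .map(|qi| {
            k.iter()
                .map(|kj| qi.iter().zip(kj).map(|(a, b)| a * b).sum())
                .collect()
        })
        .collect()
}

fn softmax(row: &[f64]) -> Vec<f64> {
    // Numerically stable softmax over one row of scores.
    let max = row.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exp: Vec<f64> = row.iter().map(|x| (x - max).exp()).collect();
    let sum: f64 = exp.iter().sum();
    exp.iter().map(|x| x / sum).collect()
}

fn scaled_dot_product_attention(
    q: &[Vec<f64>],
    k: &[Vec<f64>],
    v: &[Vec<f64>],
) -> Vec<Vec<f64>> {
    let d_k = k[0].len() as f64;
    matmul_transpose(q, k)
        .iter()
        .map(|row| {
            // Scale by √d_k, then softmax to get attention weights.
            let scaled: Vec<f64> = row.iter().map(|s| s / d_k.sqrt()).collect();
            let weights = softmax(&scaled);
            // Output row is the weight-blended sum of the value rows.
            (0..v[0].len())
                .map(|c| weights.iter().zip(v).map(|(w, vr)| w * vr[c]).sum())
                .collect()
        })
        .collect()
}

fn main() {
    let q = vec![vec![1.0, 0.0]];
    let k = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let v = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    println!("{:?}", scaled_dot_product_attention(&q, &k, &v));
}
```

The √d_k scaling keeps the dot products from saturating the softmax as the key dimension grows, which is why the `DK` constant below matters to the attention math.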

Modules

attention
Attention
codec
Codec
consts
model
ops
params
prelude

Structs

Transformer

Constants

DK
The default dimension of the key and query vectors.
D_MODEL
The default dimension of the model, i.e., the number of inputs.
D_NETWORK
The default size of the network, i.e., the number of neurons it contains.
HEADS
The default number of attention heads.
N
The default number of layers in the encoder / decoder.
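These constants are the usual Transformer hyperparameters, which are linked: the per-head key/query dimension is conventionally derived as d_k = d_model / heads. The sketch below illustrates that relationship using the values from the original Transformer paper (d_model = 512, 8 heads, 6 layers) as stand-ins; the crate's actual default values may differ:

```rust
// Illustrative values only (original-paper defaults), not necessarily
// the crate's constants.
const D_MODEL: usize = 512; // model (embedding) dimension
const HEADS: usize = 8;     // number of attention heads
const DK: usize = D_MODEL / HEADS; // key/query dimension per head
const N: usize = 6;         // encoder / decoder layers

fn main() {
    // Each head attends in a DK-dimensional subspace; concatenating
    // HEADS heads recovers the full D_MODEL width.
    assert_eq!(DK * HEADS, D_MODEL);
    println!("d_k = {DK}, layers = {N}");
}
```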

Functions

outputs_from_ratio