Quantum Transformer Architectures
This module implements transformer-style models built from quantum components: quantum attention mechanisms, quantum position encoding, and quantum multi-head attention for processing both quantum and classical data.
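A minimal end-to-end sketch of how these pieces might be wired together is shown below. The config fields, constructor, and `forward` signature are assumptions chosen for illustration, not this module's confirmed API; consult the item docs listed below for the actual signatures.

```rust
// Hypothetical usage sketch: field names, constructor, and `forward`
// signature are assumptions, not this module's confirmed API.
use ndarray::Array3; // assumes the crate uses ndarray for tensors
// use <this crate>::transformer::{QuantumTransformer, QuantumTransformerConfig};

fn run() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed: a plain config struct with a Default implementation.
    let config = QuantumTransformerConfig {
        num_layers: 2,
        num_heads: 4,
        model_dim: 16,
        num_qubits: 8,
        ..Default::default()
    };

    // Assumed: construction validates the config and may fail.
    let model = QuantumTransformer::new(config)?;

    // (batch, sequence, feature) layout is an assumed convention.
    let input = Array3::<f64>::zeros((1, 10, 16));
    let output = model.forward(&input, None)?; // None: no attention mask
    println!("output shape: {:?}", output.shape());
    Ok(())
}
```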
Structs
- AttentionOutput - Attention computation result
- QuantumAttentionInfo - Quantum attention information
- QuantumFeedForward - Quantum feedforward network
- QuantumMultiHeadAttention - Quantum multi-head attention module
- QuantumPositionEncoding - Quantum position encoding module
- QuantumTransformer - Main quantum transformer model
- QuantumTransformerConfig - Quantum transformer model configuration
- QuantumTransformerLayer - Single quantum transformer layer
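The attention structs compose roughly as sketched below; the method and field names here are assumptions for illustration, not confirmed signatures.

```rust
// Hypothetical sketch of driving the attention sub-module directly;
// method and field names are assumptions.
use ndarray::Array3;

fn self_attend(
    attention: &QuantumMultiHeadAttention,
    x: &Array3<f64>,
) -> Result<(), Box<dyn std::error::Error>> {
    // Self-attention: queries, keys, and values all derive from `x`.
    // Assumed signature: forward(query, key, value, mask) -> AttentionOutput.
    let out: AttentionOutput = attention.forward(x, x, x, None)?;

    // Assumed fields: attended values plus quantum-specific diagnostics.
    let _values = &out.output;
    let _diagnostics: &QuantumAttentionInfo = &out.quantum_info;
    Ok(())
}
```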
Enums
- ActivationType - Activation function types for quantum networks
- PositionEncodingType - Position encoding types for quantum transformers
- QuantumAttentionType - Types of quantum attention mechanisms
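These enums would typically be selected through the config. The variant names in the sketch below are assumptions, not the crate's actual variants; see each enum's docs for the real set.

```rust
// Hypothetical sketch: the variant names are assumptions for illustration.
let config = QuantumTransformerConfig {
    attention_type: QuantumAttentionType::FullQuantum, // assumed variant
    position_encoding: PositionEncodingType::Quantum,  // assumed variant
    activation: ActivationType::GELU,                  // assumed variant
    ..Default::default()
};
```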
Functions
- create_causal_mask - Helper function to create causal attention mask
- create_padding_mask - Helper function to create padding mask
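A sketch of how the two mask helpers might be called; the argument and return types are assumptions based on common transformer conventions, not the documented signatures.

```rust
// Hypothetical usage; exact signatures are assumptions.
use ndarray::Array2;

let seq_len = 10;

// Causal mask: position i may attend only to positions j <= i.
let causal: Array2<bool> = create_causal_mask(seq_len);

// Padding mask from per-sequence lengths; the `lengths` argument is assumed.
let lengths = [7usize, 10];
let padding: Array2<bool> = create_padding_mask(&lengths, seq_len);
```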