pub struct LTC<B: Backend> { /* private fields */ }
LTC RNN Layer
A full RNN layer that processes sequences using LTC cells. Supports batching, state management, mixed memory (LSTM), and variable timespans.
Type Parameters
- B - The backend type
Implementations
impl<B: Backend> LTC<B>

pub fn new(input_size: usize, wiring: impl Wiring, device: &B::Device) -> Self
Create a new LTC RNN layer with the given wiring
Arguments
- input_size - Number of input features
- wiring - Wiring configuration defining the network structure
- device - Device to create the module on
pub fn with_batch_first(self, batch_first: bool) -> Self
Set whether input is batch-first (default: true)
When true: input shape is [batch, seq, features]
When false: input shape is [seq, batch, features]
pub fn with_return_sequences(self, return_sequences: bool) -> Self
Set whether to return full sequences (default: true)
When true: returns all timesteps [batch, seq, state_size]
When false: returns only the last timestep [batch, state_size]
pub fn with_mixed_memory(self, mixed_memory: bool, device: &B::Device) -> Self
Enable or disable mixed memory mode (LSTM augmentation)
When enabled, an LSTM cell is combined with the LTC cell for better long-term memory (see forward_mixed for the processing order).
The LSTM cell is initialized when this is called with true.
Arguments
- mixed_memory - Whether to enable mixed memory mode
- device - Device to create the LSTM cell on (required when enabling)
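Putting the constructor and the builder methods together, a typical configuration might look like the sketch below. This is illustrative only: it assumes a concrete burn backend `B`, a `device`, an `input_size`, and some value `wiring` implementing `Wiring` are already in scope, so it is not compilable on its own.

```rust
// Hypothetical usage sketch; `B`, `device`, `input_size`, and `wiring`
// are assumed to exist and are not defined here.
let ltc = LTC::<B>::new(input_size, wiring, &device)
    .with_batch_first(true)            // input as [batch, seq, features]
    .with_return_sequences(false)      // return only the last timestep
    .with_mixed_memory(true, &device); // enable the LSTM augmentation
```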
pub fn input_size(&self) -> usize
Get input size
pub fn state_size(&self) -> usize
Get state size (number of neurons)
pub fn motor_size(&self) -> usize
Get motor/output size
pub fn forward(
    &self,
    input: Tensor<B, 3>,
    state: Option<Tensor<B, 2>>,
    timespans: Option<Tensor<B, 2>>,
) -> (Tensor<B, 3>, Tensor<B, 2>)
Forward pass through the LTC RNN layer
Arguments
- input - Input tensor of shape:
  - 3D batched: [batch, seq, features] if batch_first=true
  - 3D batched: [seq, batch, features] if batch_first=false
  - 2D unbatched: [seq, features]
- state - Optional initial state tensor of shape [batch, state_size]
- timespans - Optional time intervals tensor of shape [batch, seq] or scalar
Returns
Tuple of (output, final_state) where:
- output: [batch, seq, motor_size] or [batch, motor_size] depending on return_sequences
- final_state: [batch, state_size]; when mixed_memory is enabled, use forward_mixed to obtain the ([batch, state_size], [batch, state_size]) state tuple
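The optional timespans feed the cell's continuous-time update. As a conceptual illustration only, the fused semi-implicit Euler step from the liquid time-constant networks literature (not necessarily this crate's exact solver; the names `f`, `a`, and `tau` are generic stand-ins for the synaptic activation, synaptic target, and time constant) can be sketched for a single neuron in plain Rust:

```rust
/// One fused semi-implicit Euler step for an LTC-style neuron:
/// h(t + dt) = (h(t) + dt * f * a) / (1 + dt * (1/tau + f))
/// Illustrative only; not this crate's internals.
fn ltc_fused_step(h: f64, f: f64, a: f64, tau: f64, dt: f64) -> f64 {
    (h + dt * f * a) / (1.0 + dt * (1.0 / tau + f))
}

fn main() {
    // Integrate a single neuron over variable intervals, analogous to
    // passing a `timespans` tensor to `forward`.
    let (f, a, tau) = (0.8, 1.0, 1.0);
    let mut h = 0.0;
    for dt in [0.1, 0.5, 1.0] {
        h = ltc_fused_step(h, f, a, tau, dt);
    }
    // The state relaxes toward the fixed point f * a / (1/tau + f).
    println!("h = {h:.4}, fixed point = {:.4}", f * a / (1.0 / tau + f));
}
```

Note how the step stays stable for any positive dt, which is why variable timespans can be consumed directly without a fixed integration grid.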
pub fn forward_mixed(
    &self,
    input: Tensor<B, 3>,
    state: Option<(Tensor<B, 2>, Tensor<B, 2>)>,
    timespans: Option<Tensor<B, 2>>,
) -> (Tensor<B, 3>, (Tensor<B, 2>, Tensor<B, 2>))
where
    B: Backend,
Forward pass with mixed memory (LSTM augmentation)
This follows the Python implementation order: LSTM first (for memory), then LTC (for continuous-time dynamics).
This method is only available when mixed_memory is enabled.
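The per-timestep order described above (LSTM first for memory, then LTC for continuous-time dynamics) can be sketched with scalar toy cells. This is a hypothetical illustration of the control flow only: `mixed_step` and the closures are stand-ins, not the crate's cells.

```rust
/// Illustrative per-timestep update in mixed-memory mode: the LSTM gates the
/// state first, then the LTC integrates over the timespan `dt`.
fn mixed_step<L, C>(x: f64, h: f64, c: f64, dt: f64, lstm: L, ltc: C) -> (f64, f64)
where
    L: Fn(f64, (f64, f64)) -> (f64, f64), // input, (h, c) -> (h, c)
    C: Fn(f64, f64, f64) -> f64,          // input, h, dt -> h
{
    // 1. LSTM consumes the input and its (h, c) state: long-term memory.
    let (h, c) = lstm(x, (h, c));
    // 2. LTC consumes the input and the LSTM-updated state: continuous-time dynamics.
    let h = ltc(x, h, dt);
    (h, c)
}

fn main() {
    // Toy stand-ins: LSTM as a leaky blend, LTC as a fused Euler step.
    let lstm = |x: f64, (h, c): (f64, f64)| {
        let c = 0.9 * c + 0.1 * x;
        (0.5 * h + 0.5 * c, c)
    };
    let ltc = |x: f64, h: f64, dt: f64| (h + dt * x) / (1.0 + dt * 1.5);
    let (mut h, mut c) = (0.0, 0.0);
    for (x, dt) in [(1.0, 0.1), (0.5, 0.2), (0.0, 1.0)] {
        let (nh, nc) = mixed_step(x, h, c, dt, ltc_input_first(&lstm), &ltc);
        h = nh;
        c = nc;
    }
    println!("final state: h = {h:.4}, c = {c:.4}");
}

// Small adapter kept trivial so the closure can be reused across iterations.
fn ltc_input_first<'a, L: Fn(f64, (f64, f64)) -> (f64, f64)>(l: &'a L) -> &'a L {
    l
}
```

The layer's `forward_mixed` plays the same role across a whole batch and sequence, threading the (h, c) tuple through each timestep.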
Trait Implementations
impl<B> AutodiffModule<B> for LTC<B>
type InnerModule = LTC<<B as AutodiffBackend>::InnerBackend>
fn valid(&self) -> Self::InnerModule
impl<B: Backend> Module<B> for LTC<B>
fn load_record(self, record: Self::Record) -> Self
fn into_record(self) -> Self::Record
fn num_params(&self) -> usize
fn visit<Visitor: ModuleVisitor<B>>(&self, visitor: &mut Visitor)
fn map<Mapper: ModuleMapper<B>>(self, mapper: &mut Mapper) -> Self
fn collect_devices(&self, devices: Devices<B>) -> Devices<B>
fn to_device(self, device: &B::Device) -> Self
fn fork(self, device: &B::Device) -> Self
fn devices(&self) -> Vec<<B as Backend>::Device>
fn save_file<FR, PB>(
    self,
    file_path: PB,
    recorder: &FR,
) -> Result<(), RecorderError>
fn load_file<FR, PB>(
    self,
    file_path: PB,
    recorder: &FR,
    device: &<B as Backend>::Device,
) -> Result<Self, RecorderError>
fn quantize_weights<C>(self, quantizer: &mut Quantizer<C>) -> Self
where
    C: Calibration,
impl<B: Backend> ModuleDisplay for LTC<B>
fn format(&self, passed_settings: DisplaySettings) -> String
fn custom_settings(&self) -> Option<DisplaySettings>
Auto Trait Implementations
impl<B> !Freeze for LTC<B>
impl<B> !RefUnwindSafe for LTC<B>
impl<B> Send for LTC<B>
impl<B> !Sync for LTC<B>
impl<B> Unpin for LTC<B>
where
    <B as Backend>::FloatTensorPrimitive: Unpin,
    <B as Backend>::QuantizedTensorPrimitive: Unpin,
    <B as Backend>::Device: Unpin,
impl<B> UnwindSafe for LTC<B>
where
    <B as Backend>::FloatTensorPrimitive: UnwindSafe,
    <B as Backend>::QuantizedTensorPrimitive: UnwindSafe,
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.