
Struct ExpertParallelismConfig 

pub struct ExpertParallelismConfig {
    pub num_experts: usize,
    pub num_experts_per_token: usize,
    pub capacity_factor: f32,
    pub load_balance_loss_coeff: f32,
    pub router_z_loss_coeff: f32,
    pub expert_dropout: f32,
    pub enable_load_balancing: bool,
    pub sharding_strategy: ExpertShardingStrategy,
    pub max_expert_batch_size: Option<usize>,
    pub enable_gradient_accumulation: bool,
    pub gradient_accumulation_steps: usize,
    pub initialization_strategy: ExpertInitStrategy,
    pub enable_expert_sync: bool,
    pub sync_frequency: usize,
    pub gate_network: Option<GateNetworkConfig>,
    pub load_balancing: Option<LoadBalancingConfig>,
    pub migration: Option<ExpertMigrationConfig>,
    pub enable_expert_migration: bool,
    pub migration_threshold: f32,
    pub memory_per_expert_mb: usize,
    pub communication_overlap: bool,
    pub gradient_compression: bool,
}

Expert parallelism configuration

This structure contains all the configuration parameters needed to set up and run a Mixture of Experts (MoE) model with distributed expert parallelism.

§Examples

use torsh_distributed::expert_parallelism::config::{ExpertParallelismConfig, ExpertShardingStrategy};

let config = ExpertParallelismConfig {
    num_experts: 16,
    num_experts_per_token: 2,
    capacity_factor: 1.5,
    sharding_strategy: ExpertShardingStrategy::ModelParallel,
    ..Default::default()
};

Fields§

§num_experts: usize

Number of experts in the MoE layer

This determines the total number of expert networks available for routing. Typical values range from 8 to 1024 depending on model size and requirements.

§num_experts_per_token: usize

Number of experts to activate per token (top-k)

Each token is routed to the top-k experts based on router scores. Common values are 1, 2, or 4. Higher values increase computational cost but may improve model quality.
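The selection step of top-k routing can be sketched as follows. This is only an illustration of how top-k selection over router scores works; the function name `top_k_experts` is hypothetical and this is not the crate's actual router implementation.

```rust
// Illustrative only: minimal top-k selection over router scores.
// `top_k_experts` is a hypothetical name, not a torsh_distributed API.
fn top_k_experts(scores: &[f32], k: usize) -> Vec<usize> {
    // Pair each expert index with its score, then sort by score descending.
    let mut indexed: Vec<(usize, f32)> = scores.iter().copied().enumerate().collect();
    indexed.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    indexed.into_iter().take(k).map(|(i, _)| i).collect()
}

fn main() {
    // Router scores for 4 experts; with num_experts_per_token = 2,
    // the token is routed to experts 2 and 0.
    let scores = [0.30, 0.05, 0.45, 0.20];
    let chosen = top_k_experts(&scores, 2);
    println!("{:?}", chosen); // [2, 0]
}
```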

§capacity_factor: f32

Expert capacity factor (capacity = tokens_per_expert * capacity_factor)

This factor determines how many tokens each expert can process. Values > 1.0 provide buffer capacity to handle load imbalance. Typical range: 1.0 to 2.0.

§load_balance_loss_coeff: f32

Load balancing loss coefficient

Weight for the auxiliary loss that encourages balanced expert utilization. Higher values enforce stronger load balancing but may hurt model quality. Typical range: 0.001 to 0.1.
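One common formulation of this auxiliary loss (the Switch-Transformer-style loss `coeff * N * Σ f_i * P_i`, where `f_i` is the fraction of tokens dispatched to expert `i` and `P_i` is the mean router probability for expert `i`) can be sketched as below. The crate's exact loss may differ; this is an illustration, and `load_balance_loss` is a hypothetical function name.

```rust
// A common load-balancing auxiliary loss: coeff * N * sum_i(f_i * P_i).
// Illustrative only; not necessarily the exact loss used by the crate.
fn load_balance_loss(frac_dispatched: &[f32], mean_probs: &[f32], coeff: f32) -> f32 {
    let n = frac_dispatched.len() as f32;
    let dot: f32 = frac_dispatched
        .iter()
        .zip(mean_probs)
        .map(|(f, p)| f * p)
        .sum();
    coeff * n * dot
}

fn main() {
    // Perfectly balanced routing over N = 4 experts attains the minimum,
    // coeff * N * N * (1/N)^2 = coeff.
    let f = [0.25; 4];
    let p = [0.25; 4];
    println!("{}", load_balance_loss(&f, &p, 0.01)); // 0.01
}
```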

§router_z_loss_coeff: f32

Router z-loss coefficient (for numerical stability)

Weight for the z-loss that encourages router logits to stay close to zero, improving numerical stability. Typical range: 0.0001 to 0.01.

§expert_dropout: f32

Expert dropout probability during training

Probability of randomly dropping experts during training to improve robustness and prevent overfitting. Range: 0.0 to 1.0.

§enable_load_balancing: bool

Enable load balancing across devices

When true, the system actively monitors and rebalances expert utilization across different devices to optimize resource usage.

§sharding_strategy: ExpertShardingStrategy

Expert sharding strategy

Determines how experts are distributed across devices and processes.

§max_expert_batch_size: Option<usize>

Maximum batch size for expert processing

Limits the number of tokens that can be processed by a single expert in one forward pass. Helps control memory usage.

§enable_gradient_accumulation: bool

Enable gradient accumulation across experts

When true, gradients are accumulated across multiple expert invocations before updating parameters, which can improve training stability.

§gradient_accumulation_steps: usize

Number of gradient accumulation steps

Only relevant when gradient accumulation is enabled.

§initialization_strategy: ExpertInitStrategy

Expert initialization strategy

Method used to initialize expert parameters.

§enable_expert_sync: bool

Enable expert synchronization

When true, experts synchronize their parameters periodically during training.

§sync_frequency: usize

Synchronization frequency (in steps)

How often to synchronize expert parameters when synchronization is enabled.
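A hypothetical sketch of how these two fields might gate a synchronization step; the crate's actual scheduling logic may differ.

```rust
// Illustrative sketch only: sync every `sync_frequency` steps when
// expert synchronization is enabled. `should_sync` is a hypothetical
// helper, not a torsh_distributed API.
fn should_sync(step: usize, enable_expert_sync: bool, sync_frequency: usize) -> bool {
    enable_expert_sync && sync_frequency > 0 && step % sync_frequency == 0
}

fn main() {
    // With sync_frequency = 50, steps 50, 100, 150, ... trigger a sync.
    println!("{}", should_sync(100, true, 50)); // true
    println!("{}", should_sync(101, true, 50)); // false
}
```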

§gate_network: Option<GateNetworkConfig>

Gate network configuration

Optional configuration for hierarchical or advanced gate networks.

§load_balancing: Option<LoadBalancingConfig>

Load balancing configuration

Configuration for expert load balancing and migration.

§migration: Option<ExpertMigrationConfig>

Migration configuration

Configuration for expert migration strategies and triggers.

§enable_expert_migration: bool

Enable expert migration (simplified flag)

§migration_threshold: f32

Migration threshold for triggering migrations

§memory_per_expert_mb: usize

Memory allocated per expert (in MB)

§communication_overlap: bool

Enable communication overlap

§gradient_compression: bool

Enable gradient compression

Implementations§

impl ExpertParallelismConfig

pub fn new() -> Self

Create a new configuration with default values

pub fn small_scale() -> Self

Create a configuration optimized for small-scale deployment

§Returns

A configuration suitable for models with 8-16 experts

pub fn large_scale() -> Self

Create a configuration optimized for large-scale deployment

§Returns

A configuration suitable for models with 64+ experts

pub fn inference() -> Self

Create a configuration optimized for inference

§Returns

A configuration with settings optimized for inference workloads

pub fn validate(&self) -> Result<(), String>

Validate the configuration parameters

§Returns

Result indicating whether the configuration is valid
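The kinds of invariants such a validation could plausibly enforce are sketched below. `validate_sketch` is a hypothetical stand-in operating on a few of the fields; the crate's real `validate` may check more or different rules.

```rust
// Hypothetical sketch of plausible configuration invariants; the real
// validate() implementation may be stricter or check other fields.
fn validate_sketch(
    num_experts: usize,
    num_experts_per_token: usize,
    capacity_factor: f32,
) -> Result<(), String> {
    if num_experts == 0 {
        return Err("num_experts must be > 0".to_string());
    }
    if num_experts_per_token == 0 || num_experts_per_token > num_experts {
        return Err("num_experts_per_token must be in 1..=num_experts".to_string());
    }
    if capacity_factor <= 0.0 {
        return Err("capacity_factor must be positive".to_string());
    }
    Ok(())
}

fn main() {
    // A 16-expert, top-2 configuration with a 1.5x capacity factor passes.
    println!("{:?}", validate_sketch(16, 2, 1.5)); // Ok(())
    // Routing each token to more experts than exist fails.
    println!("{:?}", validate_sketch(4, 8, 1.5).is_err()); // true
}
```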

pub fn calculate_expert_capacity(&self, total_tokens: usize) -> usize

Calculate the effective expert capacity

§Arguments
  • total_tokens - Total number of tokens in the batch
§Returns

The effective capacity per expert
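Based on the formula documented for capacity_factor (capacity = tokens_per_expert * capacity_factor, with tokens_per_expert = total_tokens / num_experts), the calculation can be sketched standalone. The actual method may round or clamp differently; `expert_capacity` here is a hypothetical reimplementation.

```rust
// Standalone sketch of the documented capacity formula:
// capacity = (total_tokens / num_experts) * capacity_factor, rounded up.
// The crate's calculate_expert_capacity may round or clamp differently.
fn expert_capacity(total_tokens: usize, num_experts: usize, capacity_factor: f32) -> usize {
    let tokens_per_expert = total_tokens as f32 / num_experts as f32;
    (tokens_per_expert * capacity_factor).ceil() as usize
}

fn main() {
    // 1024 tokens over 16 experts = 64 tokens/expert; with a 1.5x
    // capacity factor, each expert can buffer up to 96 tokens.
    println!("{}", expert_capacity(1024, 16, 1.5)); // 96
}
```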

pub fn recommended_num_devices(&self) -> usize

Get the recommended number of devices for this configuration

§Returns

Recommended number of devices based on the sharding strategy

Trait Implementations§

impl Clone for ExpertParallelismConfig

fn clone(&self) -> ExpertParallelismConfig

Returns a duplicate of the value. Read more

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
impl Debug for ExpertParallelismConfig

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
impl Default for ExpertParallelismConfig

fn default() -> Self

Returns the “default value” for a type. Read more
impl<'de> Deserialize<'de> for ExpertParallelismConfig

fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where __D: Deserializer<'de>,

Deserialize this value from the given Serde deserializer. Read more
impl Serialize for ExpertParallelismConfig

fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>
where __S: Serializer,

Serialize this value into the given Serde serializer. Read more

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
impl<T> CloneToUninit for T
where T: Clone,

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
impl<T> Pointable for T

const ALIGN: usize

The alignment of pointer.
type Init = T

The type for initializers.
unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a value with the given initializer. Read more
unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
impl<T> Same for T

type Output = T

Should always be Self
impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.
fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.
fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
impl<V, T> VZip<V> for T
where V: MultiLane<T>,

fn vzip(self) -> V

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more
impl<T> CommunicationMessage for T
where T: Serialize + for<'de> Deserialize<'de> + Send + Sync,

impl<T> DeserializeOwned for T
where T: for<'de> Deserialize<'de>,