Struct serenity::client::bridge::gateway::ShardManager

pub struct ShardManager {
    pub runners: Arc<Mutex<HashMap<ShardId, ShardRunnerInfo>>>,
    // some fields omitted
}

A manager for handling the status of shards by starting them, restarting them, and stopping them when required.

Note: The Client internally uses a shard manager. If you are using a Client, then you do not need to make one of these.

Examples

Initialize a shard manager with a framework responsible for shards 0 through 2, of 5 total shards:

use tokio::sync::{Mutex, RwLock};
use serenity::client::bridge::gateway::{ShardManager, ShardManagerOptions};
use serenity::client::{EventHandler, RawEventHandler};
use serenity::http::Http;
use serenity::CacheAndHttp;
use serenity::prelude::*;
use serenity::framework::{Framework, StandardFramework};
use std::sync::Arc;
use std::env;

struct Handler;

impl EventHandler for Handler { }
impl RawEventHandler for Handler { }

// Construct an Http client from the bot token to query the gateway URL.
let http = Http::new_with_token(&env::var("DISCORD_TOKEN")?);
let gateway_url = Arc::new(Mutex::new(http.get_gateway().await?.url));
let data = Arc::new(RwLock::new(TypeMap::new()));
let event_handler = Arc::new(Handler) as Arc<dyn EventHandler>;
let framework = Arc::new(Box::new(StandardFramework::new()) as Box<dyn Framework + 'static + Send + Sync>);

ShardManager::new(ShardManagerOptions {
    data: &data,
    event_handler: &Some(event_handler),
    raw_event_handler: &None,
    framework: &framework,
    // the shard index to start initiating from
    shard_index: 0,
    // the number of shards to initiate (this initiates 0, 1, and 2)
    shard_init: 3,
    // the total number of shards in use
    shard_total: 5,
    ws_url: &gateway_url,
    guild_subscriptions: true,
    intents: None,
});

Fields

runners: Arc<Mutex<HashMap<ShardId, ShardRunnerInfo>>>

The shard runners currently managed.

Note: It is strongly discouraged to mutate this yourself unless you absolutely must. Prefer the methods provided on this struct where possible.
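When read-only access is all you need, briefly locking the map is fine. A minimal sketch, assuming an async context and a `manager` of type `Arc<Mutex<ShardManager>>` as returned by `new`:

```rust
use std::sync::Arc;
use tokio::sync::Mutex;
use serenity::client::bridge::gateway::ShardManager;

// Inspect each managed runner without mutating anything.
async fn print_runner_info(manager: Arc<Mutex<ShardManager>>) {
    let manager = manager.lock().await;
    let runners = manager.runners.lock().await;
    for (id, info) in runners.iter() {
        // ShardRunnerInfo exposes the runner's connection stage and latest latency.
        println!("shard {}: stage {:?}, latency {:?}", id, info.stage, info.latency);
    }
}
```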

Implementations

impl ShardManager

pub async fn new(
    opt: ShardManagerOptions<'_>
) -> (Arc<Mutex<Self>>, ShardManagerMonitor)

Creates a new shard manager, returning both the manager and a monitor for usage in a separate thread.

pub async fn has(&self, shard_id: ShardId) -> bool

Returns whether the shard manager contains an active instance of a shard runner responsible for the given ID.

If a shard has been queued but has not yet been initiated, then this will return false. Consider double-checking is_responsible_for to determine whether this shard manager is responsible for the given shard.
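For example, has can guard a restart so that it only fires for a runner that actually exists (a sketch, assuming an async context and a mutable `manager` in scope):

```rust
use serenity::client::bridge::gateway::ShardId;

// Only restart shard 0 if its runner is active; a shard that has been
// queued but not yet initiated would return false here.
if manager.has(ShardId(0)).await {
    manager.restart(ShardId(0)).await;
}
```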

pub fn initialize(&mut self) -> Result<()>

Initializes all shards that the manager is responsible for.

This will communicate shard boots to the ShardQueuer so that they are properly queued.

pub async fn set_shards(&mut self, index: u64, init: u64, total: u64)

Sets the new sharding information for the manager.

This will shut down all existing shards.

This will not instantiate the new shards.
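A sketch of resharding (assumes an async context and a mutable `manager`); note that initialize must be called afterwards, since set_shards only records the new scheme:

```rust
// Shut down all current shards and adopt the new scheme:
// start at index 0, initiate 10 shards, out of 10 total.
manager.set_shards(0, 10, 10).await;

// set_shards does not boot anything; queue the new shards explicitly.
manager.initialize()?;
```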

pub async fn restart(&mut self, shard_id: ShardId)

Restarts a shard runner.

This sends a shutdown signal to the shard's associated ShardRunner, and then queues an initialization of a shard runner for the same shard via the ShardQueuer.

Examples

Creating a client and then restarting a shard by ID:

(note: in reality this precise code has no effect, since the shard would not yet have been initialized via initialize, but the concept is the same)

use serenity::client::bridge::gateway::ShardId;
use serenity::client::{Client, EventHandler};
use std::env;

struct Handler;

impl EventHandler for Handler { }

let token = env::var("DISCORD_TOKEN")?;
let mut client = Client::new(&token).event_handler(Handler).await?;

// restart shard ID 7
client.shard_manager.lock().await.restart(ShardId(7)).await;

pub async fn shards_instantiated(&self) -> Vec<ShardId>

Returns the ShardIds of the shards that have been instantiated and currently have a valid ShardRunner.
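A sketch (assumes an async context and a locked `manager`):

```rust
// List every shard that currently has a valid runner.
let instantiated = manager.shards_instantiated().await;
println!("{} shard(s) with a valid runner: {:?}", instantiated.len(), instantiated);
```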

pub async fn shutdown(&mut self, shard_id: ShardId, code: u16)

Attempts to shut down the shard runner by ID.

Returns a boolean indicating whether a shard runner was present. This is not necessarily an indicator of whether the shard runner was successfully shut down.

Note: If the receiving end of the mpsc channel (theoretically owned by the shard runner) no longer exists, then the shard runner will not know it should shut down. This should never happen; the runner may simply already be stopped.

pub async fn shutdown_all(&mut self)

Sends a shutdown message for all shards that the manager is responsible for that are still known to be running.

If you only need to shut down a select number of shards, prefer calling the shutdown method in a loop.
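A sketch of selectively shutting down shards by looping over shutdown (async context and mutable `manager` assumed; the shard IDs are illustrative):

```rust
use serenity::client::bridge::gateway::ShardId;

// Shut down shards 3 and 4 with a normal close code (1000),
// leaving the manager's other shards running.
for &id in &[3u64, 4] {
    manager.shutdown(ShardId(id), 1000).await;
}
```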

Trait Implementations

impl Debug for ShardManager

impl Drop for ShardManager

fn drop(&mut self)

A custom drop implementation to clean up after the manager.

This shuts down all active ShardRunners and attempts to tell the ShardQueuer to shutdown.
