Struct serenity::client::bridge::gateway::ShardManager[src]

pub struct ShardManager {
    pub runners: Arc<Mutex<HashMap<ShardId, ShardRunnerInfo>>>,
    // some fields omitted
}

A manager for handling the status of shards by starting them, restarting them, and stopping them when required.

Note: The Client internally uses a shard manager. If you are using a Client, then you do not need to make one of these.

Examples

Initialize a shard manager with a framework responsible for shards 0 through 2, of 5 total shards:

use parking_lot::{Mutex, RwLock};
use serenity::client::bridge::gateway::{ShardManager, ShardManagerOptions};
use serenity::client::{EventHandler, RawEventHandler};
// Of note, this imports `typemap`'s `ShareMap` type.
use serenity::prelude::*;
use serenity::http::Http;
use serenity::CacheAndHttp;
use std::sync::Arc;
use std::env;
use threadpool::ThreadPool;

struct Handler;

impl EventHandler for Handler { }
impl RawEventHandler for Handler { }

// NOTE (assumption): the original doc example constructs `http` in hidden
// lines; an `Http` client is built from the bot token here so the example
// is self-contained.
let token = env::var("DISCORD_TOKEN").unwrap();
let http = Http::new_with_token(&token);

let gateway_url = Arc::new(Mutex::new(http.get_gateway()?.url));
let data = Arc::new(RwLock::new(ShareMap::custom()));
let event_handler = Arc::new(Handler);
let framework = Arc::new(Mutex::new(None));
let threadpool = ThreadPool::with_name("my threadpool".to_owned(), 5);

ShardManager::new(ShardManagerOptions {
    data: &data,
    event_handler: &Some(event_handler),
    raw_event_handler: &None::<Arc<Handler>>,
    framework: &framework,
    // the shard index to start initiating from
    shard_index: 0,
    // the number of shards to initiate (this initiates 0, 1, and 2)
    shard_init: 3,
    // the total number of shards in use
    shard_total: 5,
    threadpool,
    ws_url: &gateway_url,
});

Fields

runners: Arc<Mutex<HashMap<ShardId, ShardRunnerInfo>>>

The shard runners currently managed.

Note: It is highly discouraged to mutate this yourself unless you absolutely need to. Instead, prefer the methods provided on this struct where possible.
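
For read-only inspection, the field can be locked directly. A minimal sketch, assuming a Client built as in the restart example below; the latency field on ShardRunnerInfo is an assumption here:

use serenity::client::{Client, EventHandler};
use std::env;

struct Handler;

impl EventHandler for Handler { }

let token = env::var("DISCORD_TOKEN").unwrap();
let client = Client::new(&token, Handler).unwrap();

let manager = client.shard_manager.lock();
let runners = manager.runners.lock();

for (id, runner) in runners.iter() {
    // `latency` (assumed field) is the last heartbeat latency the runner
    // reported, if one is known yet.
    println!("shard {:?}: latency {:?}", id, runner.latency);
}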

Methods

impl ShardManager[src]

pub fn new<H, RH>(
    opt: ShardManagerOptions<H, RH>
) -> (Arc<Mutex<Self>>, ShardManagerMonitor) where
    H: EventHandler + Send + Sync + 'static,
    RH: RawEventHandler + Send + Sync + 'static, 
[src]

Creates a new shard manager, returning both the manager and a monitor for usage in a separate thread.

pub fn has(&self, shard_id: ShardId) -> bool[src]

Returns whether the shard manager contains an active instance of a shard runner responsible for the given ID.

If a shard has been queued but has not yet been initiated, then this will return false. Consider double-checking is_responsible_for to determine whether this shard manager is responsible for the given shard.
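
A sketch of the distinction, assuming a Client built as in the restart example below:

use serenity::client::bridge::gateway::ShardId;
use serenity::client::{Client, EventHandler};
use std::env;

struct Handler;

impl EventHandler for Handler { }

let token = env::var("DISCORD_TOKEN").unwrap();
let client = Client::new(&token, Handler).unwrap();

// `true` only once a runner for shard 0 has actually been started; a shard
// that is merely queued still yields `false`.
let is_active = client.shard_manager.lock().has(ShardId(0));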

pub fn initialize(&mut self) -> Result<()>[src]

Initializes all shards that the manager is responsible for.

This will communicate shard boots with the ShardQueuer so that they are properly queued.
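
A minimal sketch of queueing the manager's shards by hand; when using a Client, this is normally driven internally when the client is started:

use serenity::client::{Client, EventHandler};
use std::env;

struct Handler;

impl EventHandler for Handler { }

let token = env::var("DISCORD_TOKEN").unwrap();
let client = Client::new(&token, Handler).unwrap();

// Queue a boot for every shard this manager is responsible for.
client.shard_manager.lock().initialize().expect("failed to queue shards");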

pub fn set_shards(&mut self, index: u64, init: u64, total: u64)[src]

Sets the new sharding information for the manager.

This will shutdown all existing shards.

This will not instantiate the new shards.
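
A sketch of re-sharding at runtime, assuming a Client built as in the restart example below; since set_shards does not instantiate anything, the new shards must be queued again explicitly:

use serenity::client::{Client, EventHandler};
use std::env;

struct Handler;

impl EventHandler for Handler { }

let token = env::var("DISCORD_TOKEN").unwrap();
let client = Client::new(&token, Handler).unwrap();

let mut manager = client.shard_manager.lock();

// Shut down the current shards and record the new scheme: start at shard 0,
// boot 3 shards, out of 5 total.
manager.set_shards(0, 3, 5);

// Queue boots for the new shards.
manager.initialize().expect("failed to queue new shards");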

pub fn restart(&mut self, shard_id: ShardId)[src]

Restarts a shard runner.

This sends a shutdown signal to a shard's associated ShardRunner, and then queues an initialization of a shard runner for the same shard via the ShardQueuer.

Examples

Creating a client and then restarting a shard by ID:

(note: in reality this precise code has no effect, since the shard would not yet have been initialized via initialize, but the concept is the same)

use serenity::client::bridge::gateway::ShardId;
use serenity::client::{Client, EventHandler};
use std::env;

struct Handler;

impl EventHandler for Handler { }

let token = env::var("DISCORD_TOKEN").unwrap();
let mut client = Client::new(&token, Handler).unwrap();

// restart shard ID 7
client.shard_manager.lock().restart(ShardId(7));

pub fn shards_instantiated(&self) -> Vec<ShardId>[src]

Returns the ShardIds of the shards that have been instantiated and currently have a valid ShardRunner.
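
A sketch, assuming a Client built as in the restart example above:

use serenity::client::{Client, EventHandler};
use std::env;

struct Handler;

impl EventHandler for Handler { }

let token = env::var("DISCORD_TOKEN").unwrap();
let client = Client::new(&token, Handler).unwrap();

for shard_id in client.shard_manager.lock().shards_instantiated() {
    // Each ID returned here has a live ShardRunner behind it.
    println!("shard {:?} is instantiated", shard_id);
}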

pub fn shutdown(&mut self, shard_id: ShardId) -> bool[src]

Attempts to shut down the shard runner by Id.

Returns a boolean indicating whether a shard runner was present. This is not necessarily an indicator of whether the shard runner was successfully shut down.

Note: If the receiving end of an mpsc channel (theoretically owned by the shard runner) no longer exists, then the shard runner will not know it should shut down. This should never happen; the runner may already be stopped.
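
A sketch, assuming a Client built as in the restart example above:

use serenity::client::bridge::gateway::ShardId;
use serenity::client::{Client, EventHandler};
use std::env;

struct Handler;

impl EventHandler for Handler { }

let token = env::var("DISCORD_TOKEN").unwrap();
let client = Client::new(&token, Handler).unwrap();

// `true` means a runner for shard 1 was present when the signal was sent,
// not that the shutdown itself has completed.
let was_present = client.shard_manager.lock().shutdown(ShardId(1));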

pub fn shutdown_all(&mut self)[src]

Sends a shutdown message for all shards that the manager is responsible for that are still known to be running.

If you only need to shut down a select number of shards, prefer looping over the shutdown method.
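
A common pattern (a sketch, not taken from this documentation) is to hand a clone of the manager to another thread and shut everything down from there, for example on a termination signal:

use serenity::client::{Client, EventHandler};
use std::env;
use std::sync::Arc;
use std::thread;

struct Handler;

impl EventHandler for Handler { }

let token = env::var("DISCORD_TOKEN").unwrap();
let mut client = Client::new(&token, Handler).unwrap();

// Clone the manager handle before the client takes over the current thread.
let shard_manager = Arc::clone(&client.shard_manager);

thread::spawn(move || {
    // ... wait for a shutdown signal here ...
    shard_manager.lock().shutdown_all();
});

client.start().unwrap();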

Trait Implementations

impl Drop for ShardManager[src]

fn drop(&mut self)[src]

A custom drop implementation to clean up after the manager.

This shuts down all active ShardRunners and attempts to tell the ShardQueuer to shutdown.

impl Debug for ShardManager[src]

Auto Trait Implementations

Blanket Implementations

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Same<T> for T

type Output = T

Should always be Self

impl<T> Erased for T

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 

type Err = <U as TryFrom<T>>::Err

impl<T> DebugAny for T where
    T: Any + Debug
[src]

impl<T> UnsafeAny for T where
    T: Any