pub struct PGNotifier {
pub client: Client,
/* private fields */
}
Forwards PostgreSQL NOTIFY and RAISE commands to subscribers.
Fields

client: Client
Implementations

impl PGNotifier
pub fn spawn<S, T>(client: PGClient, conn: PGConnection<S, T>) -> Self
where
    S: AsyncRead + AsyncWrite + Unpin + Send + Sync + 'static,
    T: AsyncRead + AsyncWrite + Unpin + Send + Sync + 'static,
Spawns a new postgres client/connection pair.
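A minimal usage sketch, assuming PGClient and PGConnection wrap the corresponding tokio_postgres types and that a local server is reachable (the connection string is illustrative):

```rust
use tokio_postgres::NoTls;

// Connect, then hand both halves to the notifier, which drives the
// connection on a background task.
let (client, conn) =
    tokio_postgres::connect("host=localhost user=postgres", NoTls).await?;
let mut notifier = PGNotifier::spawn(client, conn);
```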
pub async fn subscribe_notify<F>(
    &mut self,
    channel: impl Into<String>,
    callback: F,
) -> Result<(), Error>
Subscribes to notifications on a particular channel.
The call will issue the LISTEN command to PostgreSQL. There is currently no mechanism to unsubscribe, even though postgres supports UNLISTEN.
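A sketch of subscribing, assuming the callback receives the notification payload (the exact bounds on F, and hence the callback signature, are not shown above):

```rust
// Issues LISTEN "jobs" and invokes the callback for each NOTIFY
// delivered on that channel.
notifier
    .subscribe_notify("jobs", |payload| {
        println!("received on jobs: {payload:?}");
    })
    .await?;
```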
pub fn subscribe_raise(
    &mut self,
    callback: impl Fn(&PGRaise) + Send + Sync + 'static,
)
Subscribes to RAISE <level> <message> notifications.
There is currently no mechanism to unsubscribe; supporting one would only require returning some form of “token” that could later be used to unsubscribe.
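A sketch of a RAISE subscriber; the fields of PGRaise are assumed to include at least a level and a message, and Debug output is used for brevity:

```rust
// Forward every server-side RAISE to stderr.
notifier.subscribe_raise(|raise: &PGRaise| {
    eprintln!("postgres raised: {raise:?}");
});
```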
pub fn capture_log(&self) -> Option<Vec<PGRaise>>
Returns the accumulated log since the last capture.
If the code being called issues many RAISE commands and you never call capture_log, then eventually you might run out of memory. To ensure that this does not happen, you might consider using with_captured_log instead.
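A sketch of periodic draining, assuming capture_log returns None when nothing was captured; the DO block exists only to produce a NOTICE:

```rust
// Trigger a NOTICE, then drain whatever has accumulated.
notifier
    .client
    .batch_execute("DO $$ BEGIN RAISE NOTICE 'hello'; END $$")
    .await?;
if let Some(entries) = notifier.capture_log() {
    for raise in entries {
        println!("captured: {raise:?}");
    }
}
```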
pub async fn with_captured_log<F, T>(
    &self,
    f: F,
) -> Result<(T, Vec<PGRaise>), Error>
Given an async closure taking the postgres client, returns the result of said closure along with the accumulated log since the beginning of the closure.
If you use query pipelining, collect the logs for all queries in the pipeline within a single call; otherwise, the logs might not be what you expect.
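A sketch of scoped capture, assuming the closure receives the client and returns a future (the bounds on F are not shown above); report_progress() stands in for any server-side function that issues RAISE:

```rust
// Run a query and collect only the RAISE output produced while it ran.
let (rows, log) = notifier
    .with_captured_log(|client| async move {
        client.query("SELECT report_progress()", &[]).await
    })
    .await?;
println!("{} rows, {} log entries", rows.len(), log.len());
```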