pub struct AdminClient { /* private fields */ }
Kafka admin client for cluster administration.
Implementations§
impl AdminClient
pub fn builder() -> AdminClientBuilder
Create a new admin client builder.
pub async fn create_topics(
    &self,
    topics: Vec<NewTopic>,
    timeout: Duration,
) -> Result<Vec<CreateTopicResult>>
Create topics.
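A minimal sketch of a call; the NewTopic field names shown (name, num_partitions, replication_factor) are assumptions about the struct's shape, not confirmed by this page:

```rust
// Hypothetical NewTopic shape -- consult the NewTopic docs for the real fields.
let topic = NewTopic {
    name: "my-topic".to_string(),
    num_partitions: 3,
    replication_factor: 1,
};
let results = admin.create_topics(vec![topic], Duration::from_secs(30)).await?;
```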
pub async fn delete_topics(
    &self,
    topics: Vec<String>,
    timeout: Duration,
) -> Result<Vec<DeleteTopicResult>>
Delete topics.
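For example, matching the signature above:

```rust
let results = admin
    .delete_topics(vec!["my-topic".to_string()], Duration::from_secs(30))
    .await?;
```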
pub async fn create_partitions(
    &self,
    topic: impl Into<String>,
    new_total_count: i32,
    timeout: Duration,
) -> Result<CreatePartitionsResult>
Increase the number of partitions for a topic.
Note: Partition count can only be increased, never decreased.
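A minimal sketch; note the count is the new total, not an increment:

```rust
// Grow "my-topic" to 6 partitions total.
let result = admin
    .create_partitions("my-topic", 6, Duration::from_secs(30))
    .await?;
```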
pub async fn describe_configs(
    &self,
    request: DescribeConfigsRequest,
) -> Result<Vec<ConfigEntry>>
Describe configuration for one or more resources (topics, brokers, etc.).
Uses DescribeConfigs (API Key 32). Build a DescribeConfigsRequest
via its convenience constructors (for_topic, for_broker) or manually
populate the resources field for multi-resource queries.
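For instance, using the for_topic constructor mentioned above (the ConfigEntry fields are not shown on this page, so the result is only counted here):

```rust
let entries = admin
    .describe_configs(DescribeConfigsRequest::for_topic("my-topic"))
    .await?;
println!("{} config entries", entries.len());
```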
pub async fn alter_topic_config(
    &self,
    topic: &str,
    configs: HashMap<String, String>,
) -> Result<AlterConfigResult>
Alter configuration for a topic.
Uses IncrementalAlterConfigs (API Key 44) to set individual config keys without replacing the entire config. Each key-value pair is applied as a SET operation.
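For example, setting a retention override:

```rust
use std::collections::HashMap;

let mut configs = HashMap::new();
configs.insert("retention.ms".to_string(), "86400000".to_string()); // 1 day
let result = admin.alter_topic_config("my-topic", configs).await?;
```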
pub async fn list_topics(&self) -> Result<Vec<String>>
List all topics.
pub async fn describe_topics(&self, topics: &[String]) -> Result<Vec<TopicInfo>>
Describe topics.
pub async fn describe_cluster(&self) -> Result<DescribeClusterResult>
Describe the cluster using the DescribeCluster API (Key 60).
Returns cluster metadata including cluster ID, controller, brokers, and authorized operations.
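A minimal call (the result is debug-printed since the DescribeClusterResult fields are not shown on this page):

```rust
let cluster = admin.describe_cluster().await?;
println!("cluster: {:?}", cluster);
```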
pub async fn partition_count(&self, topic: &str) -> Result<Option<usize>>
Get partition count for a topic.
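A sketch, assuming None means the topic is unknown:

```rust
match admin.partition_count("my-topic").await? {
    Some(n) => println!("my-topic has {n} partitions"),
    None => println!("my-topic not found"),
}
```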
pub fn request_timeout(&self) -> Duration
Get the request timeout.
pub async fn describe_acls(
    &self,
    filter: AclFilter,
) -> Result<DescribeAclsResult>
Describe ACLs matching the given filter.
pub async fn create_acls(
    &self,
    acls: Vec<AclBinding>,
) -> Result<CreateAclsResult>
Create the given ACL bindings.
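A hedged sketch of creating an ACL; the AclBinding fields are assumptions that mirror the AclBindingFilter shape used in the delete_acls example:

```rust
// Field names are assumptions -- check the AclBinding docs.
let binding = AclBinding {
    resource_type: AclResourceType::Topic,
    resource_name: "my-topic".to_string(),
    pattern_type: AclPatternType::Literal,
    principal: "User:alice".to_string(),
    host: "*".to_string(),
    operation: AclOperation::Read,
    permission_type: AclPermissionType::Allow,
};
let result = admin.create_acls(vec![binding]).await?;
```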
pub async fn delete_acls(
    &self,
    filters: Vec<AclBindingFilter>,
) -> Result<DeleteAclsResult>
Delete ACLs matching the specified filters.
§Arguments
filters - List of ACL binding filters to match for deletion
§Example
// Delete all ACLs for a specific topic
let filter = AclBindingFilter {
resource_type: AclResourceType::Topic,
resource_name: Some("my-topic".to_string()),
pattern_type: AclPatternType::Literal,
principal: None,
host: None,
operation: AclOperation::Any,
permission_type: AclPermissionType::Any,
};
admin.delete_acls(vec![filter]).await?;
pub async fn describe_consumer_groups(
    &self,
    group_ids: Vec<String>,
) -> Result<Vec<ConsumerGroupDescription>>
Describe consumer groups.
Automatically detects whether each group uses the classic protocol or the new consumer protocol (KIP-848) and dispatches to the appropriate API:
- Classic groups → DescribeGroups (Key 15)
- Consumer groups → ConsumerGroupDescribe (Key 69)
The returned ConsumerGroupDescription is a unified type.
Fields specific to one protocol variant are Option-wrapped.
§Example
let groups = admin
.describe_consumer_groups(vec!["my-group".to_string()])
.await?;
for group in &groups {
println!("{}: type={}, state={}, members={}",
group.group_id, group.group_type, group.state, group.members.len());
}
pub async fn list_consumer_groups(&self) -> Result<Vec<ConsumerGroupListing>>
List the consumer groups known to the cluster.
pub async fn delete_records(
    &self,
    offsets: HashMap<(String, i32), i64>,
    timeout: Duration,
) -> Result<Vec<DeleteRecordResult>>
Delete records from topic partitions before the specified offsets.
Records with offsets less than the specified offset for each partition will be marked for deletion. This adjusts the log start offset.
§Arguments
offsets - Map of (topic, partition) to the offset before which to delete
timeout - Operation timeout
§Example
use std::collections::HashMap;
let mut offsets = HashMap::new();
offsets.insert(("my-topic".to_string(), 0), 100i64);
let results = admin.delete_records(offsets, Duration::from_secs(30)).await?;
pub async fn offset_for_leader_epoch(
    &self,
    partitions: Vec<(String, i32, i32)>,
) -> Result<Vec<LeaderEpochResult>>
Get the end offset for each partition at the given leader epoch.
This is used to detect log truncation after a leader change. For each topic-partition, the broker returns the end offset for the requested leader epoch. If the epoch is no longer valid, the broker returns the epoch and offset where the log was truncated.
§Arguments
partitions- List of (topic, partition, leader_epoch) tuples
§Example
let results = admin.offset_for_leader_epoch(
vec![("my-topic".to_string(), 0, 5)]
).await?;
for r in &results {
println!("{}:{} epoch={} end_offset={}", r.topic, r.partition, r.leader_epoch, r.end_offset);
}
pub async fn create_delegation_token(
    &self,
    renewers: &[(&str, &str)],
    max_lifetime: Option<Duration>,
) -> Result<CreateDelegationTokenResult>
Create a delegation token.
Delegation tokens allow a principal to delegate authentication to another principal without sharing credentials (KIP-48). The token HMAC can be used for SASL/SCRAM authentication.
§Arguments
renewers - Principals authorized to renew the token (type, name pairs). Pass an empty slice to allow only the token owner to renew.
max_lifetime - Maximum token lifetime. Use None for the server default (typically 7 days).
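For example (the "User" principal type shown is the usual Kafka convention, assumed here):

```rust
// Only the token owner may renew; server-default max lifetime.
let token = admin.create_delegation_token(&[], None).await?;

// With an explicit renewer and a one-day max lifetime.
let token = admin
    .create_delegation_token(&[("User", "alice")], Some(Duration::from_secs(86_400)))
    .await?;
```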
pub async fn renew_delegation_token(
    &self,
    hmac: &[u8],
    renew_period: Duration,
) -> Result<RenewDelegationTokenResult>
Renew a delegation token, extending its expiry time.
§Arguments
hmac - HMAC of the token to renew (from DelegationToken::hmac).
renew_period - How long to extend the token’s lifetime.
pub async fn expire_delegation_token(
    &self,
    hmac: &[u8],
    expiry_period: Option<Duration>,
) -> Result<ExpireDelegationTokenResult>
Expire a delegation token, revoking it before its natural expiry.
§Arguments
hmac - HMAC of the token to expire (from DelegationToken::hmac).
expiry_period - How long until the token expires. Pass None to expire the token immediately (sends -1 to the broker).
pub async fn describe_delegation_token(
    &self,
    owners: Option<&[(&str, &str)]>,
) -> Result<Vec<DelegationToken>>
Describe delegation tokens visible to the caller.
§Arguments
owners - Filter by token owners (type, name pairs). Pass None to return all tokens visible to the caller.
pub async fn describe_client_quotas(
    &self,
    components: &[(&str, i8, Option<&str>)],
    strict: bool,
) -> Result<DescribeClientQuotasResult>
Describe client quotas matching the given filter.
§Arguments
components - Filter components. Each component specifies an entity type and match criteria. The broker returns entities matching all components.
strict - If true, exclude entities with unspecified entity types (i.e., only return entities that exactly match all given component types).
§Filter Match Types
Each component has a match_type:
- 0 (exact): match the entity with the given name
- 1 (default): match the default entity for this type
- 2 (any specified): match any entity with a name (non-default)
§Example
// Describe all quotas for user "alice"
let results = admin.describe_client_quotas(
&[("user", 0, Some("alice"))],
false,
).await?;
pub async fn alter_client_quotas(
    &self,
    entries: &[QuotaAlteration<'_>],
    validate_only: bool,
) -> Result<Vec<AlterClientQuotaResult>>
Alter client quotas.
Each entry specifies an entity (user, client-id, ip) and a set of quota operations (set or remove). Results are returned per-entity.
§Arguments
entries - Quota alterations. Each entry has an entity and operations.
validate_only - If true, validate the request without applying changes.
§Example
use krafka::admin::QuotaAlteration;
// Set producer byte rate quota for user "alice"
let results = admin.alter_client_quotas(
&[QuotaAlteration {
entity: vec![("user", Some("alice"))],
ops: vec![("producer_byte_rate", Some(1_048_576.0))],
}],
false,
).await?;
pub async fn delete_consumer_groups(
    &self,
    group_ids: Vec<String>,
) -> Result<Vec<DeleteGroupResult>>
Delete consumer groups by ID.
Returns one DeleteGroupResult per group. Each result may contain
an error if that particular group could not be deleted (e.g., it has
active members).
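For example:

```rust
let results = admin
    .delete_consumer_groups(vec!["my-group".to_string()])
    .await?;
for r in &results {
    // Inspect each DeleteGroupResult for a per-group error.
    println!("{:?}", r);
}
```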
pub async fn describe_topic_partitions(
    &self,
    topics: Vec<String>,
) -> Result<DescribeTopicPartitionsResult>
Describe topic partitions using the DescribeTopicPartitions API (Key 75).
Returns detailed per-partition information including leader, replicas, ISR, eligible leader replicas (ELR), and offline replicas. Supports pagination for topics with many partitions.
§Example
let result = admin
.describe_topic_partitions(vec!["my-topic".to_string()])
.await?;
for topic in &result.topics {
println!("{}: {} partitions", topic.name.as_deref().unwrap_or("?"), topic.partitions.len());
for p in &topic.partitions {
println!(" partition {}: leader={}, isr={:?}", p.partition_index, p.leader_id, p.isr_nodes);
}
}
pub fn pool(&self) -> &Arc<ConnectionPool>
Get access to the connection pool.
pub fn update_seed_brokers(&self, servers: Vec<String>) -> Result<()>
Replace the bootstrap server list at runtime (KIP-899).
The new addresses are used on the next metadata refresh that falls back to bootstrap servers. Does not close existing connections.
§Errors
Returns an error if servers is empty.
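For example:

```rust
admin.update_seed_brokers(vec![
    "broker-1.example.com:9092".to_string(),
    "broker-2.example.com:9092".to_string(),
])?;
```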
pub async fn rebootstrap(&self)
Force a rebootstrap: close all connections, clear the metadata cache, and fall back to bootstrap servers (KIP-899).
pub async fn close(&self)
Close the admin client.
Sets the closed flag and tears down all broker connections.
In-flight RPCs that have not yet received a response will fail
with a network error. Callers should ensure long-running admin
operations have completed before calling close().
Calling close() more than once is a no-op.
pub async fn describe_features(&self) -> Result<DescribeFeaturesResult>
Describe broker-supported and cluster-finalized features (KIP-584).
Sends an ApiVersions request (v3+) to any broker and extracts the
feature information from the tagged fields. The response includes:
- Features supported by the responding broker (per-broker)
- Cluster-wide finalized features and their epoch (cluster-wide)
§Example
let features = admin.describe_features().await?;
for f in &features.supported_features {
println!("{}: v{}–v{}", f.name, f.min_version, f.max_version);
}
for f in &features.finalized_features {
println!("{}: v{}–v{} (finalized)", f.name, f.min_version_level, f.max_version_level);
}
pub async fn update_features(
    &self,
    feature_updates: Vec<FeatureUpdateKey>,
    validate_only: bool,
) -> Result<UpdateFeaturesResult>
Update cluster-wide finalized feature version levels (KIP-584).
This is a destructive operation — downgrades and deletions can be data-lossy. Only the controller broker serves this request; the client sends to any broker, which forwards to the controller.
Requires ALTER permission on the cluster.
§Example
use krafka::protocol::FeatureUpdateKey;
let results = admin.update_features(
vec![FeatureUpdateKey::upgrade("metadata.version", 17)],
false, // validate_only
).await?;
for result in &results.results {
if let Some(e) = &result.error {
eprintln!("Failed to update {}: {e}", result.feature);
}
}
pub async fn describe_log_dirs(
    &self,
    topics: Option<Vec<DescribableLogDirTopic>>,
) -> Result<Vec<LogDirInfo>>
Describe log directories on all known brokers.
Each broker maintains one or more log directories; this method queries every broker and returns per-directory information including sizes, partition assignments, and (v4+) volume capacity.
Pass None for topics to describe all partitions on every
broker, or pass a list of DescribableLogDirTopic to filter.
§Example
// Describe all log dirs on every broker
let dirs = admin.describe_log_dirs(None).await?;
for dir in &dirs {
println!("broker {} dir {}: {:?}", dir.broker_id, dir.log_dir, dir.error);
}
// Describe specific topic partitions
use krafka::protocol::DescribableLogDirTopic;
let filter = vec![DescribableLogDirTopic {
topic: "my-topic".into(),
partitions: vec![0, 1, 2],
}];
let dirs = admin.describe_log_dirs(Some(filter)).await?;
pub async fn elect_leaders(
    &self,
    election_type: ElectionType,
    topic_partitions: Option<Vec<ElectLeadersTopicPartitions>>,
    timeout: Duration,
) -> Result<Vec<ElectLeadersResult>>
Trigger leader election for the specified partitions.
When topic_partitions is None, leaders for all partitions are
elected. The election_type controls whether to perform a preferred
or unclean leader election (requires broker v1+; v0 always does
preferred election).
Returns per-partition results — individual partitions may fail even when the RPC succeeds.
§Example
use krafka::protocol::ElectionType;
use std::time::Duration;
// Preferred election for all partitions
let results = admin
.elect_leaders(ElectionType::Preferred, None, Duration::from_secs(60))
    .await?;
pub async fn alter_partition_reassignments(
    &self,
    topics: Vec<ReassignableTopic>,
    timeout: Duration,
) -> Result<AlterReassignmentsResult>
Alter partition reassignments.
Initiates or cancels partition reassignments. To cancel a pending
reassignment, set replicas to None for that partition.
This is a destructive operation — reassigning partitions moves data between brokers and can significantly impact cluster load.
Returns per-partition results — individual partitions may fail even when the RPC succeeds.
§Example
use krafka::protocol::{ReassignableTopic, ReassignablePartition};
use std::time::Duration;
let results = admin.alter_partition_reassignments(
vec![ReassignableTopic {
name: "my-topic".into(),
partitions: vec![ReassignablePartition {
partition_index: 0,
replicas: Some(vec![1, 2, 3]),
}],
}],
Duration::from_secs(60),
).await?;
pub async fn list_partition_reassignments(
    &self,
    topics: Option<Vec<ListPartitionReassignmentsTopic>>,
    timeout: Duration,
) -> Result<Vec<PartitionReassignmentInfo>>
List ongoing partition reassignments.
When topics is None, all ongoing reassignments are listed.
Otherwise, only the specified topic-partitions are checked.
§Example
// List all ongoing reassignments
let reassignments = admin
.list_partition_reassignments(None, Duration::from_secs(60))
.await?;
for topic in &reassignments {
for p in &topic.partitions {
println!("{} p{}: adding {:?}, removing {:?}",
topic.name, p.partition_index, p.adding_replicas, p.removing_replicas);
}
}
pub async fn alter_replica_log_dirs(
    &self,
    dirs: Vec<AlterReplicaLogDir>,
) -> Result<Vec<AlterReplicaLogDirsResult>>
Move partition replicas to a different log directory on the broker.
This is a destructive operation — moving replicas between log directories triggers data copying and can impact broker I/O.
This is a per-broker operation. The request is sent to every broker that currently hosts at least one replica for the specified topic-partitions.
Returns per-partition results — individual partitions may fail even when the RPC succeeds.
§Example
use krafka::protocol::{AlterReplicaLogDir, AlterReplicaLogDirTopic};
let results = admin.alter_replica_log_dirs(vec![
AlterReplicaLogDir {
path: "/data/kafka-logs-2".into(),
topics: vec![AlterReplicaLogDirTopic {
name: "my-topic".into(),
partitions: vec![0, 1],
}],
},
]).await?;
pub async fn delete_consumer_group_offsets(
    &self,
    group_id: &str,
    topic_partitions: &[(&str, &[i32])],
) -> Result<OffsetDeleteResult>
Delete committed offsets for a consumer group.
This is a destructive operation — deleted offsets cannot be
recovered. The consumer group must be in the Empty state.
The request is sent to the group coordinator.
§Example
let results = admin.delete_consumer_group_offsets(
    "my-group",
    &[("my-topic", &[0, 1, 2])],
).await?;
pub async fn describe_user_scram_credentials(
    &self,
    users: Option<Vec<String>>,
) -> Result<DescribeUserScramCredentialsResult>
Describe SCRAM credentials for the given users, or for all users when users is None.
pub async fn alter_user_scram_credentials(
    &self,
    deletions: Vec<ScramCredentialDeletion>,
    upsertions: Vec<ScramCredentialUpsertion>,
) -> Result<Vec<AlterScramCredentialResult>>
Alter (upsert or delete) SCRAM credentials for users.
This is a destructive operation — deleting a SCRAM credential removes the user’s ability to authenticate with that mechanism.
§Example
use krafka::protocol::{ScramCredentialDeletion, ScramCredentialUpsertion};
use krafka::auth::ScramMechanism;
use zeroize::Zeroizing;
let results = admin.alter_user_scram_credentials(
vec![ScramCredentialDeletion {
name: "alice".into(),
mechanism: ScramMechanism::Sha512,
}],
vec![ScramCredentialUpsertion {
name: "bob".into(),
mechanism: ScramMechanism::Sha256,
iterations: 8192,
salt: Zeroizing::new(vec![1, 2, 3]),
salted_password: Zeroizing::new(vec![4, 5, 6]),
}],
).await?;
pub async fn describe_producers(
    &self,
    topic_partitions: &[(&str, &[i32])],
) -> Result<Vec<DescribeProducersTopicResult>>
Describe active producers on the given topic-partitions.
Routes each topic-partition to its leader broker via cached metadata for optimal performance. Falls back to any broker if the leader is unknown.
Returns per-partition producer state useful for debugging transactional and idempotent producers.
§Example
let results = admin
.describe_producers(&[("my-topic", &[0, 1])])
    .await?;
pub async fn describe_transactions(
    &self,
    transactional_ids: &[&str],
) -> Result<Vec<TransactionDescription>>
Describe the transactions with the given transactional IDs.
pub async fn list_transactions(
    &self,
    state_filters: &[&str],
    producer_id_filters: &[i64],
    duration_filter: i64,
) -> Result<ListTransactionsResult>
List transactions matching the given filters.
Queries all brokers and merges results, because each broker only knows about transactions it coordinates.
Pass empty slices for state_filters and producer_id_filters to
list all transactions.
§Example
// List all ongoing transactions
let txns = admin
.list_transactions(&["Ongoing"], &[], -1)
    .await?;
pub async fn list_client_metrics_resources(&self) -> Result<Vec<String>>
List client metrics configuration resource names.
pub async fn write_txn_markers(
    &self,
    markers: &[WritableTxnMarker],
) -> Result<Vec<WriteTxnMarkersResult>>
Write transaction markers (COMMIT or ABORT) to the given topic-partitions.
This is an inter-broker API used to finalize transactions.
The admin client exposes it primarily for aborting stuck transactions
(abort_transaction).
Each marker is sent to all brokers since the partitions may be led by different brokers. Per-broker errors are logged and skipped so results from reachable brokers are still returned.
§Example
use krafka::protocol::{WritableTxnMarker, WritableTxnMarkerTopic};
let results = admin
.write_txn_markers(&[WritableTxnMarker {
producer_id: 42,
producer_epoch: 5,
transaction_result: false, // ABORT
topics: vec![WritableTxnMarkerTopic {
name: "my-topic".into(),
partition_indexes: vec![0, 1],
}],
coordinator_epoch: 10,
}])
    .await?;
pub async fn abort_transaction(
    &self,
    transactional_id: &str,
) -> Result<Vec<WriteTxnMarkersResult>>
Abort a stuck transaction by writing an ABORT marker.
This is the admin-friendly wrapper around write_txn_markers
that looks up the transaction coordinator, discovers the affected
partitions via describe_transactions,
and writes an ABORT marker.
§Example
admin.abort_transaction("my-transactional-id").await?;
pub async fn describe_metadata_quorum(
    &self,
    topic_partitions: &[(&str, &[i32])],
) -> Result<DescribeQuorumResult>
Describe the KRaft quorum for the given topic-partitions.
In a KRaft-mode cluster this returns the current voters, observers, leader, leader epoch, and high watermark for each quorum partition.
The primary use case is inspecting __cluster_metadata partition 0.
§Example
let result = admin
    .describe_metadata_quorum(&[("__cluster_metadata", &[0])])
    .await?;
Trait Implementations§
Auto Trait Implementations§
impl !Freeze for AdminClient
impl !RefUnwindSafe for AdminClient
impl Send for AdminClient
impl Sync for AdminClient
impl Unpin for AdminClient
impl UnsafeUnpin for AdminClient
impl !UnwindSafe for AdminClient
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.