pub struct DebertaConfig {
    pub hidden_act: Activation,
    pub attention_probs_dropout_prob: f64,
    pub hidden_dropout_prob: f64,
    pub hidden_size: i64,
    pub initializer_range: f64,
    pub intermediate_size: i64,
    pub max_position_embeddings: i64,
    pub num_attention_heads: i64,
    pub num_hidden_layers: i64,
    pub type_vocab_size: i64,
    pub vocab_size: i64,
    pub position_biased_input: Option<bool>,
    pub pos_att_type: Option<PositionAttentionTypes>,
    pub pooler_dropout: Option<f64>,
    pub pooler_hidden_act: Option<Activation>,
    pub pooler_hidden_size: Option<i64>,
    pub layer_norm_eps: Option<f64>,
    pub pad_token_id: Option<i64>,
    pub relative_attention: Option<bool>,
    pub max_relative_positions: Option<i64>,
    pub embedding_size: Option<i64>,
    pub talking_head: Option<bool>,
    pub output_hidden_states: Option<bool>,
    pub output_attentions: Option<bool>,
    pub classifier_dropout: Option<f64>,
    pub is_decoder: Option<bool>,
    pub id2label: Option<HashMap<i64, String>>,
    pub label2id: Option<HashMap<String, i64>>,
    pub share_att_key: Option<bool>,
    pub position_buckets: Option<i64>,
}

DeBERTa model configuration

Defines the DeBERTa model architecture (e.g. number of layers, hidden layer size, label mapping…)
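As a sketch of what this struct is parsed from, a minimal config.json fragment might look as follows. The values shown are illustrative (loosely modeled on common deberta-base settings), not authoritative defaults:

```json
{
  "hidden_act": "gelu",
  "attention_probs_dropout_prob": 0.1,
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "type_vocab_size": 0,
  "vocab_size": 50265,
  "relative_attention": true,
  "pos_att_type": ["c2p", "p2c"],
  "layer_norm_eps": 1e-7
}
```

Keys corresponding to `Option<...>` fields in the struct may be omitted; parsing fails only when a non-optional key is missing.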

Fields

hidden_act: Activation
attention_probs_dropout_prob: f64
hidden_dropout_prob: f64
hidden_size: i64
initializer_range: f64
intermediate_size: i64
max_position_embeddings: i64
num_attention_heads: i64
num_hidden_layers: i64
type_vocab_size: i64
vocab_size: i64
position_biased_input: Option<bool>
pos_att_type: Option<PositionAttentionTypes>
pooler_dropout: Option<f64>
pooler_hidden_act: Option<Activation>
pooler_hidden_size: Option<i64>
layer_norm_eps: Option<f64>
pad_token_id: Option<i64>
relative_attention: Option<bool>
max_relative_positions: Option<i64>
embedding_size: Option<i64>
talking_head: Option<bool>
output_hidden_states: Option<bool>
output_attentions: Option<bool>
classifier_dropout: Option<f64>
is_decoder: Option<bool>
id2label: Option<HashMap<i64, String>>
label2id: Option<HashMap<String, i64>>
share_att_key: Option<bool>
position_buckets: Option<i64>

Trait Implementations

Config::from_file: Loads a Config object from a JSON file. The format is expected to be aligned with the Transformers library configuration files for each model. Parsing fails if non-optional keys expected by the model are missing.
Debug::fmt: Formats the value using the given formatter.
Default::default: Returns the “default value” for the type.
Deserialize::deserialize: Deserializes this value from the given Serde deserializer.
From::from (two source types): Converts to this type from the input type.
Serialize::serialize: Serializes this value into the given Serde serializer.

Auto Trait Implementations

Blanket Implementations

Any::type_id: Gets the TypeId of self.
Borrow::borrow: Immutably borrows from an owned value.
BorrowMut::borrow_mut: Mutably borrows from an owned value.

From::from: Returns the argument unchanged.

Instrument::instrument: Instruments this type with the provided Span, returning an Instrumented wrapper.
Instrument::in_current_span: Instruments this type with the current Span, returning an Instrumented wrapper.

Into::into: Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Pointable::ALIGN: The alignment of pointer.
Pointable::Init: The type for initializers.
Pointable::init: Initializes a pointer with the given initializer.
Pointable::deref: Dereferences the given pointer.
Pointable::deref_mut: Mutably dereferences the given pointer.
Pointable::drop: Drops the object pointed to by the given pointer.

Same::Output: Should always be Self.

TryFrom::Error: The type returned in the event of a conversion error.
TryFrom::try_from: Performs the conversion.
TryInto::Error: The type returned in the event of a conversion error.
TryInto::try_into: Performs the conversion.

WithSubscriber::with_subscriber: Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
WithSubscriber::with_current_subscriber: Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.