Struct rust_bert::pipelines::summarization::SummarizationConfig
pub struct SummarizationConfig {
pub model_type: ModelType,
pub model_resource: Box<dyn ResourceProvider + Send>,
pub config_resource: Box<dyn ResourceProvider + Send>,
pub vocab_resource: Box<dyn ResourceProvider + Send>,
pub merges_resource: Box<dyn ResourceProvider + Send>,
pub min_length: i64,
pub max_length: i64,
pub do_sample: bool,
pub early_stopping: bool,
pub num_beams: i64,
pub temperature: f64,
pub top_k: i64,
pub top_p: f64,
pub repetition_penalty: f64,
pub length_penalty: f64,
pub no_repeat_ngram_size: i64,
pub num_return_sequences: i64,
pub num_beam_groups: Option<i64>,
pub diversity_penalty: Option<f64>,
pub device: Device,
}
Configuration for text summarization
Contains the information required to load the model. Mirrors GenerateConfig with a different set of default parameters, and sets the device to place the model on.
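As an illustrative sketch (not part of this page), a minimal end-to-end use of the default configuration with the companion SummarizationModel from the same pipeline module; exact signatures (in particular the return type of summarize) may vary across crate versions, and the defaults trigger a pretrained model download:

```rust
use rust_bert::pipelines::summarization::{SummarizationConfig, SummarizationModel};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Defaults load a pretrained BART model fine-tuned on CNN-DM.
    let config = SummarizationConfig::default();
    let model = SummarizationModel::new(config)?;

    let input = ["In a breakthrough study, researchers announced the detection of \
                  water vapour in the atmosphere of a distant exoplanet."];
    // Generates one summary per input text.
    let summaries = model.summarize(&input);
    println!("{summaries:?}");
    Ok(())
}
```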
Fields
model_type: ModelType - Model type
model_resource: Box<dyn ResourceProvider + Send> - Model weights resource (default: pretrained BART model on CNN-DM)
config_resource: Box<dyn ResourceProvider + Send> - Config resource (default: pretrained BART model on CNN-DM)
vocab_resource: Box<dyn ResourceProvider + Send> - Vocab resource (default: pretrained BART model on CNN-DM)
merges_resource: Box<dyn ResourceProvider + Send> - Merges resource (default: pretrained BART model on CNN-DM)
min_length: i64 - Minimum sequence length (default: 0)
max_length: i64 - Maximum sequence length (default: 20)
do_sample: bool - Sampling flag. If true, will perform top-k and/or nucleus sampling on generated tokens; otherwise greedy (deterministic) decoding (default: true)
early_stopping: bool - Early stopping flag indicating if the beam search should stop as soon as num_beams hypotheses have been generated (default: false)
num_beams: i64 - Number of beams for beam search (default: 5)
temperature: f64 - Temperature setting. Values higher than 1 will improve originality at the risk of reducing relevance (default: 1.0)
top_k: i64 - Top_k value for sampling tokens. Values higher than 0 will enable the feature (default: 0)
top_p: f64 - Top_p value for nucleus sampling (Holtzman et al.). Keep top tokens until cumulative probability reaches top_p (default: 0.9)
repetition_penalty: f64 - Repetition penalty (mostly useful for CTRL decoders). Values higher than 1 will penalize tokens that have already been generated (default: 1.0)
length_penalty: f64 - Exponential penalty based on the length of the hypotheses generated (default: 1.0)
no_repeat_ngram_size: i64 - Number of allowed repetitions of n-grams. Values higher than 0 turn on this feature (default: 3)
num_return_sequences: i64 - Number of sequences to return for each prompt text (default: 1)
num_beam_groups: Option<i64> - Number of beam groups for diverse beam generation. If provided and higher than 1, will split the beams into beam subgroups, leading to more diverse generation.
diversity_penalty: Option<f64> - Diversity penalty for diverse beam search. High values will enforce more difference between beam groups (default: 5.5)
device: Device - Device to place the model on (default: CUDA/GPU when available)
Implementations
impl SummarizationConfig

pub fn new<R>(
    model_type: ModelType,
    model_resource: R,
    config_resource: R,
    vocab_resource: R,
    merges_resource: R
) -> SummarizationConfig
where
    R: ResourceProvider + Send + 'static,
Instantiate a new summarization configuration of the supplied type.
Arguments
- model_type - ModelType indicating the model type to load (must match with the actual data to be loaded!)
- model_resource - The ResourceProvider pointing to the model to load (e.g. model.ot)
- config_resource - The ResourceProvider pointing to the model configuration to load (e.g. config.json)
- vocab_resource - The ResourceProvider pointing to the tokenizer's vocabulary to load (e.g. vocab.txt/vocab.json)
- merges_resource - The ResourceProvider pointing to the tokenizer's merge file or SentencePiece model to load (e.g. merges.txt)
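A hedged sketch of calling new() with remote resources; it assumes the BART resource constants and RemoteResource::from_pretrained from the same crate, and the DISTILBART_CNN_6_6 identifiers are an assumption that may differ by crate version:

```rust
use rust_bert::bart::{
    BartConfigResources, BartMergesResources, BartModelResources, BartVocabResources,
};
use rust_bert::pipelines::common::ModelType;
use rust_bert::pipelines::summarization::SummarizationConfig;
use rust_bert::resources::RemoteResource;

// All four resources share the same concrete type R = RemoteResource,
// as required by the single generic parameter of new().
let config = SummarizationConfig::new(
    ModelType::Bart,
    RemoteResource::from_pretrained(BartModelResources::DISTILBART_CNN_6_6),
    RemoteResource::from_pretrained(BartConfigResources::DISTILBART_CNN_6_6),
    RemoteResource::from_pretrained(BartVocabResources::DISTILBART_CNN_6_6),
    RemoteResource::from_pretrained(BartMergesResources::DISTILBART_CNN_6_6),
);
```

Because all four resources must be the same type R, mixing remote and local resources in a single new() call requires boxing them yourself or constructing the struct literally.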
Trait Implementations
impl Default for SummarizationConfig

fn default() -> SummarizationConfig
Returns the “default value” for a type. Read more
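Since Default is implemented, individual generation parameters can be overridden with struct update syntax while keeping the default pretrained resources; a minimal sketch (the specific values shown are arbitrary):

```rust
use rust_bert::pipelines::summarization::SummarizationConfig;

// Override only the generation parameters of interest; the remaining
// fields (including the resource providers) come from the default.
let config = SummarizationConfig {
    min_length: 56,
    max_length: 142,
    num_beams: 4,
    ..Default::default()
};
```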
impl From<SummarizationConfig> for GenerateConfig

fn from(config: SummarizationConfig) -> GenerateConfig
Converts to this type from the input type.
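This conversion makes a SummarizationConfig usable wherever a GenerateConfig is expected; a sketch, assuming GenerateConfig lives in the generation_utils pipeline module (its exact path may differ by crate version):

```rust
use rust_bert::pipelines::generation_utils::GenerateConfig;
use rust_bert::pipelines::summarization::SummarizationConfig;

let summarization_config = SummarizationConfig::default();
// The From impl carries the generation parameters and resources over.
let generate_config: GenerateConfig = summarization_config.into();
```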
Auto Trait Implementations
impl !RefUnwindSafe for SummarizationConfig
impl Send for SummarizationConfig
impl !Sync for SummarizationConfig
impl Unpin for SummarizationConfig
impl !UnwindSafe for SummarizationConfig
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided Span, returning an
Instrumented wrapper. Read more