Struct PyLogisticRegression 

pub struct PyLogisticRegression { /* private fields */ }

Logistic Regression (aka logit, MaxEnt) classifier.

In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the 'multi_class' option is set to 'ovr', and uses the cross-entropy loss if the 'multi_class' option is set to 'multinomial'. (Currently the 'multinomial' option is supported only by the 'lbfgs', 'sag', 'saga' and 'newton-cg' solvers.)

This class implements regularized logistic regression using the 'liblinear' library, 'newton-cg', 'sag', 'saga' and 'lbfgs' solvers. Note that regularization is applied by default. It can handle both dense and sparse input. Use C-ordered arrays or CSR matrices containing 64-bit floats for optimal performance; any other input format will be converted (and copied).
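
For instance, a minimal sketch of preparing input in the preferred formats (assuming NumPy and SciPy are available alongside this binding)::

    import numpy as np
    from scipy.sparse import csr_matrix

    # A Fortran-ordered float32 array would be converted (and copied) at
    # fit time; rebuilding it as a C-ordered float64 array avoids that.
    X_f32 = np.asfortranarray(np.random.rand(100, 4).astype(np.float32))
    X_fast = np.ascontiguousarray(X_f32, dtype=np.float64)

    # Sparse input is best supplied as CSR with float64 data.
    X_sparse = csr_matrix(X_fast)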

The 'newton-cg', 'sag', and 'lbfgs' solvers support only L2 regularization with primal formulation, or no regularization. The 'liblinear' solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. The Elastic-Net regularization is only supported by the 'saga' solver.
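
A hedged sketch of valid solver/penalty pairings at construction time (assuming the constructor accepts the parameters documented below)::

    from sklears_python import LogisticRegression

    # 'lbfgs' (the default) supports L2 regularization or none at all.
    clf_l2 = LogisticRegression(penalty="l2", solver="lbfgs")

    # 'liblinear' additionally supports L1.
    clf_l1 = LogisticRegression(penalty="l1", solver="liblinear")

    # Elastic-Net requires 'saga' and an explicit l1_ratio.
    clf_en = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5)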

Parameters

penalty : {'l1', 'l2', 'elasticnet', None}, default='l2'
    Specify the norm of the penalty:

    - None: no penalty is added;
    - 'l2': add an L2 penalty term (the default choice);
    - 'l1': add an L1 penalty term;
    - 'elasticnet': both L1 and L2 penalty terms are added.

dual : bool, default=False
    Dual or primal formulation. Dual formulation is only implemented for
    the l2 penalty with the liblinear solver. Prefer dual=False when
    n_samples > n_features.

tol : float, default=1e-4
    Tolerance for stopping criteria.

C : float, default=1.0
    Inverse of regularization strength; must be a positive float. Like in
    support vector machines, smaller values specify stronger regularization.
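
    For orientation, scikit-learn documents the elastic-net objective that
    this description mirrors; assuming this binding optimizes the same cost
    (with rho = l1_ratio), the binary case reads:

    .. math::

        \min_{w, b}\; \frac{1 - \rho}{2} \lVert w \rVert_2^2
        + \rho \lVert w \rVert_1
        + C \sum_{i=1}^{n} \log\left(1 + \exp\left(-y_i (x_i^\top w + b)\right)\right)

    Since C multiplies the data-fit term rather than the penalty, a smaller
    C gives the penalty relatively more weight, i.e. stronger regularization.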

fit_intercept : bool, default=True
    Specifies if a constant (a.k.a. bias or intercept) should be added to
    the decision function.

intercept_scaling : float, default=1
    Useful only when the solver 'liblinear' is used and self.fit_intercept
    is set to True. In this case, x becomes [x, self.intercept_scaling],
    i.e. a "synthetic" feature with constant value equal to
    intercept_scaling is appended to the instance vector. The intercept
    becomes intercept_scaling * synthetic_feature_weight.

    Note: the synthetic feature weight is subject to l1/l2 regularization
    like all other features. To lessen the effect of regularization on the
    synthetic feature weight (and therefore on the intercept),
    intercept_scaling has to be increased.
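
    A minimal NumPy sketch of that augmentation (illustrative only; the
    liblinear library performs this internally)::

        import numpy as np

        def augment_with_intercept(X, intercept_scaling=1.0):
            """Append the constant 'synthetic' feature used for the bias."""
            ones = np.full((X.shape[0], 1), intercept_scaling)
            return np.hstack([X, ones])

        X = np.array([[1.0, 2.0], [3.0, 4.0]])
        X_aug = augment_with_intercept(X, intercept_scaling=10.0)
        # The fitted weight on X_aug[:, -1], times intercept_scaling,
        # is the intercept.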

class_weight : dict or 'balanced', default=None
    Weights associated with classes in the form {class_label: weight}.
    If not given, all classes are supposed to have weight one.

    The "balanced" mode uses the values of y to automatically adjust
    weights inversely proportional to class frequencies in the input data
    as ``n_samples / (n_classes * np.bincount(y))``.

    Note that these weights will be multiplied with sample_weight (passed
    through the fit method) if sample_weight is specified.
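
    A short worked example of the "balanced" formula above::

        import numpy as np

        y = np.array([0, 0, 0, 1])  # class 0 is three times as frequent
        n_samples, n_classes = len(y), len(np.unique(y))

        weights = n_samples / (n_classes * np.bincount(y))
        # -> array([0.66..., 2.0]): the rare class gets three times the weight.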

random_state : int, RandomState instance, default=None
    Used when solver == 'sag', 'saga' or 'liblinear' to shuffle the data.
    See :term:`Glossary <random_state>` for details.

solver : {'lbfgs', 'liblinear', 'newton-cg', 'newton-cholesky', 'sag', 'saga'}, default='lbfgs'
    Algorithm to use in the optimization problem. Default is 'lbfgs'.
    To choose a solver, you might want to consider the following aspects
    (a quick-reference mapping follows this list):

    - For small datasets, 'liblinear' is a good choice, whereas 'sag'
      and 'saga' are faster for large ones;
    - For multiclass problems, only 'newton-cg', 'sag', 'saga' and
      'lbfgs' handle multinomial loss;
    - 'liblinear' is limited to one-versus-rest schemes.
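
    A quick-reference mapping of the penalty support described above (the
    'newton-cholesky' entry is an assumption carried over from upstream
    scikit-learn, since this page does not describe that solver)::

        SOLVER_PENALTIES = {
            "lbfgs":           {"l2", None},
            "newton-cg":       {"l2", None},
            "newton-cholesky": {"l2", None},  # assumed, mirroring scikit-learn
            "sag":             {"l2", None},
            "saga":            {"l1", "l2", "elasticnet", None},
            "liblinear":       {"l1", "l2"},
        }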

max_iter : int, default=100
    Maximum number of iterations taken for the solvers to converge.

multi_class : {'auto', 'ovr', 'multinomial'}, default='auto'
    If the option chosen is 'ovr', then a binary problem is fit for each
    label. For 'multinomial' the loss minimised is the multinomial loss
    fit across the entire probability distribution, even when the data is
    binary. 'multinomial' is unavailable when solver='liblinear'. 'auto'
    selects 'ovr' if the data is binary, or if solver='liblinear', and
    otherwise selects 'multinomial'.

verbose : int, default=0
    For the liblinear and lbfgs solvers, set verbose to any positive
    number for verbosity.

warm_start : bool, default=False
    When set to True, reuse the solution of the previous call to fit as
    initialization; otherwise, just erase the previous solution. Useless
    for the liblinear solver. See :term:`the Glossary <warm_start>`.

n_jobs : int, default=None
    Number of CPU cores used when parallelizing over classes if
    multi_class='ovr'. This parameter is ignored when the solver is set to
    'liblinear', regardless of whether 'multi_class' is specified or not.
    None means 1 unless in a :obj:`joblib.parallel_backend` context.
    -1 means using all processors. See :term:`Glossary <n_jobs>` for more
    details.

l1_ratio : float, default=None
    The Elastic-Net mixing parameter, with 0 <= l1_ratio <= 1. Only used
    if penalty='elasticnet'. Setting l1_ratio=0 is equivalent to using
    penalty='l2', while setting l1_ratio=1 is equivalent to using
    penalty='l1'. For 0 < l1_ratio < 1, the penalty is a combination of
    L1 and L2.
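
    A small sketch of the mixture, using the same convention as the
    objective shown under the C parameter (a hedged illustration, not this
    crate's internal code)::

        import numpy as np

        def elastic_net_penalty(w, l1_ratio):
            """r(w) = l1_ratio * ||w||_1 + (1 - l1_ratio)/2 * ||w||_2**2."""
            return l1_ratio * np.abs(w).sum() + 0.5 * (1.0 - l1_ratio) * np.dot(w, w)

        w = np.array([0.5, -2.0])
        elastic_net_penalty(w, l1_ratio=1.0)  # pure L1: 2.5
        elastic_net_penalty(w, l1_ratio=0.0)  # pure L2: 2.125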

Attributes

classes_ : ndarray of shape (n_classes,)
    A list of class labels known to the classifier.

coef_ : ndarray of shape (1, n_features) or (n_classes, n_features)
    Coefficient of the features in the decision function. (A worked sketch
    of how `coef_` and `intercept_` enter the decision function follows
    this attribute list.)

    `coef_` is of shape (1, n_features) when the given problem is binary.
    In particular, when `multi_class='multinomial'`, `coef_` corresponds
    to outcome 1 (True) and `-coef_` corresponds to outcome 0 (False).

intercept_ : ndarray of shape (1,) or (n_classes,)
    Intercept (a.k.a. bias) added to the decision function.

    If `fit_intercept` is set to False, the intercept is set to zero.
    `intercept_` is of shape (1,) when the given problem is binary.
    In particular, when `multi_class='multinomial'`, `intercept_`
    corresponds to outcome 1 (True) and `-intercept_` corresponds to
    outcome 0 (False).

n_features_in_ : int
    Number of features seen during :term:`fit`.

n_iter_ : ndarray of shape (n_classes,) or (1,)
    Actual number of iterations for all classes. If binary or multinomial,
    it returns only 1 element. For the liblinear solver, only the maximum
    number of iterations across all classes is given.
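
As the worked sketch promised above, here is how `coef_` and `intercept_` enter the decision function (assuming a fitted estimator `clf` and float64 input `X`; the names are illustrative)::

    import numpy as np

    # Raw decision scores: one column per class
    # (a single column in the binary case).
    scores = X @ clf.coef_.T + clf.intercept_

    # In the binary case, the probability of the positive class is the
    # logistic sigmoid of the single score column.
    proba_pos = 1.0 / (1.0 + np.exp(-scores[:, 0]))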

Examples

>>> from sklears_python import LogisticRegression
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(random_state=0).fit(X, y)
>>> clf.predict(X[:2, :])
array([0, 0])
>>> clf.predict_proba(X[:2, :])
array([[9.8...e-01, 1.8...e-02, 1.4...e-08],
       [9.7...e-01, 2.8...e-02, ...e-08]])
>>> clf.score(X, y)
0.97...

Notes

The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller tol parameter.

Predict output may not match that of standalone liblinear in certain cases. See :ref:`differences from liblinear <liblinear_differences>` in the narrative documentation.

References

L-BFGS-B – Software for Large-scale Bound-constrained Optimization Ciyou Zhu, Richard Byrd, Jorge Nocedal and Jose Luis Morales. http://users.iems.northwestern.edu/~nocedal/lbfgsb.html

LIBLINEAR – A Library for Large Linear Classification https://www.csie.ntu.edu.tw/~cjlin/liblinear/

SAG – Mark Schmidt, Nicolas Le Roux, and Francis Bach Minimizing Finite Sums with the Stochastic Average Gradient https://hal.inria.fr/hal-00860051/document

SAGA – Defazio, A., Bach F. & Lacoste-Julien S. (2014). SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives https://arxiv.org/abs/1407.0202

Hsiang-Fu Yu, Fang-Lan Huang, Chih-Jen Lin (2011). Dual coordinate descent methods for logistic regression and maximum entropy models. Machine Learning 85(1-2):41-75. https://www.csie.ntu.edu.tw/~cjlin/papers/maxent_dual.pdf

Trait Implementations

impl<'py> IntoPyObject<'py> for PyLogisticRegression

type Target = PyLogisticRegression

The Python output type

type Output = Bound<'py, <PyLogisticRegression as IntoPyObject<'py>>::Target>

The smart pointer type to use. Read more

type Error = PyErr

The type returned in the event of a conversion error.

fn into_pyobject(self, py: Python<'py>) -> Result<<Self as IntoPyObject<'_>>::Output, <Self as IntoPyObject<'_>>::Error>

Performs the conversion.

impl PyClass for PyLogisticRegression

type Frozen = False

Whether the pyclass is frozen. Read more

impl PyClassImpl for PyLogisticRegression

const IS_BASETYPE: bool = false

#[pyclass(subclass)]

const IS_SUBCLASS: bool = false

#[pyclass(extends=…)]

const IS_MAPPING: bool = false

#[pyclass(mapping)]

const IS_SEQUENCE: bool = false

#[pyclass(sequence)]

const IS_IMMUTABLE_TYPE: bool = false

#[pyclass(immutable_type)]

const RAW_DOC: &'static CStr

Docstring for the class provided on the struct or enum. Read more

const DOC: &'static CStr

Fully rendered class doc, including the text_signature if a constructor is defined. Read more

type BaseType = PyAny

Base class

type ThreadChecker = SendablePyClass<PyLogisticRegression>

This handles the following two situations: Read more

type PyClassMutability = <<PyAny as PyClassBaseType>::PyClassMutability as PyClassMutability>::MutableChild

Immutable or mutable

type Dict = PyClassDummySlot

Specify this class has #[pyclass(dict)] or not.

type WeakRef = PyClassDummySlot

Specify this class has #[pyclass(weakref)] or not.

type BaseNativeType = PyAny

The closest native ancestor. This is PyAny by default, and when you declare #[pyclass(extends=PyDict)], it’s PyDict.

fn items_iter() -> PyClassItemsIter

fn lazy_type_object() -> &'static LazyTypeObject<Self>

fn dict_offset() -> Option<isize>

fn weaklist_offset() -> Option<isize>

impl PyClassNewTextSignature for PyLogisticRegression

const TEXT_SIGNATURE: &'static str = "(penalty=\"l2\", dual=False, tol=1e-4, c=1.0, fit_intercept=True, intercept_scaling=1.0, class_weight=None, random_state=None, solver=\"lbfgs\", max_iter=100, multi_class=\"auto\", verbose=0, warm_start=False, n_jobs=None, l1_ratio=None)"
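
Per this text signature, the Python-side constructor exposes the inverse regularization strength as lowercase c rather than scikit-learn's C. A hedged construction sketch:

    from sklears_python import LogisticRegression

    # Keyword names follow the text signature above; note lowercase `c`.
    clf = LogisticRegression(
        penalty="l2",
        c=0.5,          # stronger regularization than the default c=1.0
        solver="lbfgs",
        max_iter=200,
    )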

impl<'a, 'holder, 'py> PyFunctionArgument<'a, 'holder, 'py, false> for &'holder PyLogisticRegression

type Holder = Option<PyClassGuard<'a, PyLogisticRegression>>

fn extract(obj: &'a Bound<'py, PyAny>, holder: &'holder mut Self::Holder) -> PyResult<Self>

impl<'a, 'holder, 'py> PyFunctionArgument<'a, 'holder, 'py, false> for &'holder mut PyLogisticRegression

type Holder = Option<PyClassGuardMut<'a, PyLogisticRegression>>

fn extract(obj: &'a Bound<'py, PyAny>, holder: &'holder mut Self::Holder) -> PyResult<Self>

impl PyMethods<PyLogisticRegression> for PyClassImplCollector<PyLogisticRegression>

fn py_methods(self) -> &'static PyClassItems

impl PyTypeInfo for PyLogisticRegression

const NAME: &'static str = "LogisticRegression"

Class name.

const MODULE: Option<&'static str> = ::core::option::Option::None

Module name, if any.

fn type_object_raw(py: Python<'_>) -> *mut PyTypeObject

Returns the PyTypeObject instance for this type.

fn type_object(py: Python<'_>) -> Bound<'_, PyType>

Returns the safe abstraction over the type object.

fn is_type_of(object: &Bound<'_, PyAny>) -> bool

Checks if object is an instance of this type or a subclass of this type.

fn is_exact_type_of(object: &Bound<'_, PyAny>) -> bool

Checks if object is an instance of this type.

impl DerefToPyAny for PyLogisticRegression

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

impl<'py, T> IntoPyObjectExt<'py> for T
where T: IntoPyObject<'py>,

fn into_bound_py_any(self, py: Python<'py>) -> Result<Bound<'py, PyAny>, PyErr>

Converts self into an owned Python object, dropping type information.

fn into_py_any(self, py: Python<'py>) -> Result<Py<PyAny>, PyErr>

Converts self into an owned Python object, dropping type information and unbinding it from the 'py lifetime.

fn into_pyobject_or_pyerr(self, py: Python<'py>) -> Result<Self::Output, PyErr>

Converts self into a Python object. Read more

impl<T> Pointable for T

const ALIGN: usize

The alignment of the pointer.

type Init = T

The type for initializers.

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a value with the given initializer. Read more

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more

impl<T> PyErrArguments for T
where T: for<'py> IntoPyObject<'py> + Send + Sync,

fn arguments(self, py: Python<'_>) -> Py<PyAny>

Arguments for exception

impl<T> PyTypeCheck for T
where T: PyTypeInfo,

const NAME: &'static str = <T as PyTypeInfo>::NAME

Name of self. This is used in error messages, for example.

fn type_check(object: &Bound<'_, PyAny>) -> bool

Checks if object is an instance of Self, which may include a subtype. Read more

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

fn vzip(self) -> V

impl<T> Ungil for T
where T: Send,