pub struct PyLogisticRegression { /* private fields */ }
Logistic Regression (aka logit, MaxEnt) classifier.
In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the ‘multi_class’ option is set to ‘ovr’, and uses the cross-entropy loss if the ‘multi_class’ option is set to ‘multinomial’. (Currently the ‘multinomial’ option is supported only by the ‘lbfgs’, ‘sag’, ‘saga’ and ‘newton-cg’ solvers.)
This class implements regularized logistic regression using the ‘liblinear’ library, ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ solvers. Note that regularization is applied by default. It can handle both dense and sparse input. Use C-ordered arrays or CSR matrices containing 64-bit floats for optimal performance; any other input format will be converted (and copied).
The ‘newton-cg’, ‘sag’, and ‘lbfgs’ solvers support only L2 regularization with primal formulation, or no regularization. The ‘liblinear’ solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. The Elastic-Net regularization is only supported by the ‘saga’ solver.
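For instance, these solver/penalty constraints translate into constructor choices like the following (a minimal sketch; it assumes the class is importable as shown under Examples and accepts the keyword arguments listed under Parameters):

>>> from sklears_python import LogisticRegression
>>> # Default: L2 penalty with the 'lbfgs' solver.
>>> clf_l2 = LogisticRegression(penalty="l2", solver="lbfgs")
>>> # L1 needs 'liblinear' or 'saga'.
>>> clf_l1 = LogisticRegression(penalty="l1", solver="liblinear")
>>> # Elastic-Net needs 'saga' plus an l1_ratio.
>>> clf_en = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5)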
§Parameters
penalty : {‘l1’, ‘l2’, ‘elasticnet’, None}, default=‘l2’ Specify the norm of the penalty:
- None: no penalty is added;
- 'l2': add an L2 penalty term (the default choice);
- 'l1': add an L1 penalty term;
- 'elasticnet': both L1 and L2 penalty terms are added.

dual : bool, default=False Dual or primal formulation. The dual formulation is only implemented for the L2 penalty with the liblinear solver. Prefer dual=False when n_samples > n_features.
tol : float, default=1e-4 Tolerance for stopping criteria.
C : float, default=1.0 Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization.
fit_intercept : bool, default=True Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.
intercept_scaling : float, default=1 Useful only when the solver ‘liblinear’ is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic_feature_weight.
Note: the synthetic feature weight is subject to L1/L2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased (see the sketch below).
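Concretely, the augmented instance vector can be pictured with plain numpy (an illustrative sketch of the [x, intercept_scaling] construction, not the library's internal code):

>>> import numpy as np
>>> X = np.array([[1.0, 2.0], [3.0, 4.0]])
>>> intercept_scaling = 10.0
>>> np.hstack([X, np.full((len(X), 1), intercept_scaling)])
array([[ 1.,  2., 10.],
       [ 3.,  4., 10.]])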
class_weight : dict or ‘balanced’, default=None Weights associated with classes in the form {class_label: weight}.
If not given, all classes are supposed to have weight one.
The "balanced" mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data
as ``n_samples / (n_classes * np.bincount(y))``.
Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
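A minimal sketch of what the "balanced" heuristic computes, using only numpy (this reproduces the formula above, not the library's internal code):

>>> import numpy as np
>>> y = np.array([0, 0, 0, 1])                 # 3 samples of class 0, 1 of class 1
>>> n_samples, n_classes = len(y), len(np.unique(y))
>>> n_samples / (n_classes * np.bincount(y))   # per-class weight
array([0.66666667, 2.        ])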
random_state : int, RandomState instance, default=None Used when solver is ‘sag’, ‘saga’ or ‘liblinear’ to shuffle the data. See the Glossary entry for random_state for details.
solver : {‘lbfgs’, ‘liblinear’, ‘newton-cg’, ‘newton-cholesky’, ‘sag’, ‘saga’}, default=‘lbfgs’ Algorithm to use in the optimization problem. To choose a solver, you might want to consider the following aspects:
- For small datasets, 'liblinear' is a good choice, whereas 'sag' and 'saga' are faster for large ones;
- For multiclass problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs' handle multinomial loss;
- 'liblinear' is limited to one-versus-rest schemes.

max_iter : int, default=100 Maximum number of iterations taken for the solvers to converge.
multi_class : {‘auto’, ‘ovr’, ‘multinomial’}, default=‘auto’ If the option chosen is ‘ovr’, then a binary problem is fit for each label. For ‘multinomial’ the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. ‘multinomial’ is unavailable when solver=‘liblinear’. ‘auto’ selects ‘ovr’ if the data is binary, or if solver=‘liblinear’, and otherwise selects ‘multinomial’.
verbose : int, default=0 For the liblinear and lbfgs solvers set verbose to any positive number for verbosity.
warm_start : bool, default=False
When set to True, reuse the solution of the previous call to fit as
initialization, otherwise, just erase the previous solution.
Useless for the liblinear solver. See the Glossary entry for warm_start.
n_jobs : int, default=None Number of CPU cores used when parallelizing over classes if multi_class=‘ovr’. This parameter is ignored when the solver is set to ‘liblinear’ regardless of whether ‘multi_class’ is specified or not. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See the Glossary entry for n_jobs for more details.
l1_ratio : float, default=None The Elastic-Net mixing parameter, with 0 <= l1_ratio <= 1. Only used if penalty='elasticnet'. Setting l1_ratio=0 is equivalent to using penalty='l2', while setting l1_ratio=1 is equivalent to using penalty='l1'. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2 (see the sketch below).
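For intuition, the mixing can be sketched directly (illustrative only; the exact objective scaling with C is the solver's concern):

>>> import numpy as np
>>> w, l1_ratio = np.array([0.5, -2.0]), 0.3
>>> l1, l2 = np.abs(w).sum(), 0.5 * (w ** 2).sum()
>>> float(l1_ratio * l1 + (1 - l1_ratio) * l2)  # elastic-net penalty term
2.2375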
§Attributes
classes_ : ndarray of shape (n_classes,) A list of class labels known to the classifier.

coef_ : ndarray of shape (1, n_features) or (n_classes, n_features) Coefficient of the features in the decision function.
`coef_` is of shape (1, n_features) when the given problem is binary. In particular, when `multi_class='multinomial'`, `coef_` corresponds to outcome 1 (True) and `-coef_` corresponds to outcome 0 (False).

intercept_ : ndarray of shape (1,) or (n_classes,) Intercept (a.k.a. bias) added to the decision function.
If `fit_intercept` is set to False, the intercept is set to zero. `intercept_` is of shape (1,) when the given problem is binary. In particular, when `multi_class='multinomial'`, `intercept_` corresponds to outcome 1 (True) and `-intercept_` corresponds to outcome 0 (False).

n_features_in_ : int Number of features seen during fit.
n_iter_ : ndarray of shape (n_classes,) or (1,) Actual number of iterations for all classes. If binary or multinomial, it returns only one element. For the liblinear solver, only the maximum number of iterations across all classes is given.
§Examples
>>> from sklears_python import LogisticRegression
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(random_state=0).fit(X, y)
>>> clf.predict(X[:2, :])
array([0, 0])
>>> clf.predict_proba(X[:2, :])
array([[9.8...e-01, 1.8...e-02, 1.4...e-08],
       [9.7...e-01, 2.8...e-02, ...e-08]])
>>> clf.score(X, y)
0.97...
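Continuing the session above, the fitted attributes take the shapes documented under Attributes (a sketch, assuming the attributes mirror the shapes stated there; iris has 3 classes and 4 features):

>>> clf.classes_
array([0, 1, 2])
>>> clf.coef_.shape, clf.intercept_.shape
((3, 4), (3,))
>>> clf.n_features_in_
4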
§Notes
The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller tol parameter.
Predict output may not match that of standalone liblinear in certain cases. See the section on differences from liblinear in the narrative documentation.
§References
L-BFGS-B – Software for Large-scale Bound-constrained Optimization Ciyou Zhu, Richard Byrd, Jorge Nocedal and Jose Luis Morales. http://users.iems.northwestern.edu/~nocedal/lbfgsb.html
LIBLINEAR – A Library for Large Linear Classification https://www.csie.ntu.edu.tw/~cjlin/liblinear/
SAG – Mark Schmidt, Nicolas Le Roux, and Francis Bach Minimizing Finite Sums with the Stochastic Average Gradient https://hal.inria.fr/hal-00860051/document
SAGA – Defazio, A., Bach F. & Lacoste-Julien S. (2014). SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives https://arxiv.org/abs/1407.0202
Hsiang-Fu Yu, Fang-Lan Huang, Chih-Jen Lin (2011). Dual coordinate descent methods for logistic regression and maximum entropy models. Machine Learning 85(1-2):41-75. https://www.csie.ntu.edu.tw/~cjlin/papers/maxent_dual.pdf
§Trait Implementations
impl<'py> IntoPyObject<'py> for PyLogisticRegression
    type Target = PyLogisticRegression
    type Output = Bound<'py, <PyLogisticRegression as IntoPyObject<'py>>::Target>
    fn into_pyobject(self, py: Python<'py>) -> Result<<Self as IntoPyObject<'_>>::Output, <Self as IntoPyObject<'_>>::Error>
impl PyClass for PyLogisticRegression

impl PyClassImpl for PyLogisticRegression
    const IS_BASETYPE: bool = false
    const IS_SUBCLASS: bool = false
    const IS_MAPPING: bool = false
    const IS_SEQUENCE: bool = false
    const IS_IMMUTABLE_TYPE: bool = false
    const RAW_DOC: &'static CStr = /* the class docstring, reproduced verbatim in the description above */
    const DOC: &'static CStr
    type ThreadChecker = SendablePyClass<PyLogisticRegression>
    type PyClassMutability = <<PyAny as PyClassBaseType>::PyClassMutability as PyClassMutability>::MutableChild
    type BaseNativeType = PyAny
        PyAny by default, and when you declare #[pyclass(extends=PyDict)], it's PyDict.
    fn items_iter() -> PyClassItemsIter
    fn lazy_type_object() -> &'static LazyTypeObject<Self>
    fn dict_offset() -> Option<isize>
    fn weaklist_offset() -> Option<isize>
impl PyClassNewTextSignature for PyLogisticRegression
    const TEXT_SIGNATURE: &'static str = "(penalty=\"l2\", dual=False, tol=1e-4, c=1.0, fit_intercept=True, intercept_scaling=1.0, class_weight=None, random_state=None, solver=\"lbfgs\", max_iter=100, multi_class=\"auto\", verbose=0, warm_start=False, n_jobs=None, l1_ratio=None)"
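Read back from Python, the signature above implies keyword construction such as the following (a sketch; note that this binding spells the inverse regularization strength c in lowercase, unlike scikit-learn's C):

>>> from sklears_python import LogisticRegression
>>> clf = LogisticRegression(penalty="l2", c=0.5, solver="lbfgs", max_iter=200)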
impl<'a, 'holder, 'py> PyFunctionArgument<'a, 'holder, 'py, false> for &'holder PyLogisticRegression

impl<'a, 'holder, 'py> PyFunctionArgument<'a, 'holder, 'py, false> for &'holder mut PyLogisticRegression
impl PyMethods<PyLogisticRegression> for PyClassImplCollector<PyLogisticRegression>
    fn py_methods(self) -> &'static PyClassItems
impl PyTypeInfo for PyLogisticRegression
    fn type_object_raw(py: Python<'_>) -> *mut PyTypeObject
    fn type_object(py: Python<'_>) -> Bound<'_, PyType>
impl DerefToPyAny for PyLogisticRegression
§Auto Trait Implementations
impl Freeze for PyLogisticRegression
impl RefUnwindSafe for PyLogisticRegression
impl Send for PyLogisticRegression
impl Sync for PyLogisticRegression
impl Unpin for PyLogisticRegression
impl UnwindSafe for PyLogisticRegression
§Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
    fn borrow_mut(&mut self) -> &mut T
impl<T> IntoEither for T
    fn into_either(self, into_left: bool) -> Either<Self, Self>
        Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
    fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
        Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<'py, T> IntoPyObjectExt<'py> for T
where
    T: IntoPyObject<'py>,
    fn into_bound_py_any(self, py: Python<'py>) -> Result<Bound<'py, PyAny>, PyErr>
        Converts self into an owned Python object, dropping type information.