celer.ElasticNet
- class celer.ElasticNet(alpha=1.0, l1_ratio=1.0, max_iter=100, max_epochs=50000, p0=10, verbose=0, tol=0.0001, prune=True, fit_intercept=True, weights=None, warm_start=False, positive=False)
ElasticNet scikit-learn estimator based on the Celer solver.
The optimization objective for ElasticNet is:
1 / (2 * n_samples) * ||y - X w||^2_2 + alpha * l1_ratio * \sum_j weights_j |w_j| + 0.5 * alpha * (1 - l1_ratio) * \sum_j weights_j w_j^2
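For reference, the objective can be checked numerically with NumPy. The helper below is an illustrative sketch, not part of the celer API; it simply transcribes the formula above, using unit weights when weights is None.

import numpy as np

def enet_objective(X, y, w, alpha, l1_ratio, weights=None):
    """Illustrative transcription of the objective above (not a celer function)."""
    n_samples, n_features = X.shape
    if weights is None:
        weights = np.ones(n_features)  # unit weights when weights=None
    datafit = np.sum((y - X @ w) ** 2) / (2 * n_samples)
    l1_term = alpha * l1_ratio * np.sum(weights * np.abs(w))
    l2_term = 0.5 * alpha * (1 - l1_ratio) * np.sum(weights * w ** 2)
    return datafit + l1_term + l2_term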
- Parameters:
- alpha : float, optional
  Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to ordinary least squares. For numerical reasons, using alpha = 0 with the Lasso object is not advised.
- l1_ratio : float, optional
  The ElasticNet mixing parameter, with 0 < l1_ratio <= 1. Defaults to 1.0, which corresponds to an L1 penalty (Lasso). l1_ratio = 0 (Ridge regression) is not supported.
- max_iter : int, optional
  The maximum number of iterations (subproblem definitions).
- max_epochs : int
  Maximum number of CD epochs on each subproblem.
- p0 : int
  First working set size.
- verbose : bool or integer
  Amount of verbosity.
- tol : float, optional
  Stopping criterion for the optimization: the solver runs until the duality gap is smaller than tol * norm(y) ** 2 / len(y) or the maximum number of iterations is reached.
- prune : 0 | 1, optional
  Whether or not to use pruning when growing working sets.
- fit_intercept : bool, optional (default=True)
  Whether or not to fit an intercept.
- weights : array, shape (n_features,), optional (default=None)
  Strictly positive weights used in the L1 penalty part of the Lasso objective. If None, weights equal to 1 are used.
- warm_start : bool, optional (default=False)
  When set to True, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution. See the sketch after this parameter list.
- positive : bool, optional (default=False)
  When set to True, forces the coefficients to be positive.
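As an illustration of how warm_start and tol interact, the sketch below fits the same estimator over a decreasing grid of alphas, reusing each solution as the starting point for the next fit. The data and the alpha grid are made up for the example.

import numpy as np
from celer import ElasticNet

rng = np.random.RandomState(0)
X = rng.randn(30, 50)
y = X @ rng.randn(50)

clf = ElasticNet(l1_ratio=0.8, warm_start=True, tol=1e-6)
for alpha in [1.0, 0.1, 0.01]:
    clf.alpha = alpha  # update the regularization strength in place
    clf.fit(X, y)      # stops once the duality gap < tol * norm(y) ** 2 / len(y)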
References
[1] M. Massias, A. Gramfort, J. Salmon, “Celer: a Fast Solver for the Lasso with Dual Extrapolation”, ICML 2018, http://proceedings.mlr.press/v80/massias18a.html
Examples
>>> from celer import ElasticNet
>>> clf = ElasticNet(l1_ratio=0.8, alpha=0.1)
>>> clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
ElasticNet(alpha=0.1, l1_ratio=0.8)
>>> print(clf.coef_)
[0.43470641 0.43232388]
>>> print(clf.intercept_)
0.13296971635785026
- Attributes:
- coef_ : array, shape (n_features,)
  Parameter vector (w in the cost function formula).
- sparse_coef_ : scipy.sparse matrix, shape (n_features, 1)
  Sparse representation of the fitted coef_.
- intercept_ : float
  Constant term in decision function.
- n_iter_ : int
  Number of subproblems solved by Celer to reach the specified tolerance.
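Continuing with a fitted estimator such as clf from the Examples section, the fitted attributes can be inspected as follows (outputs omitted; shapes follow the descriptions above).

>>> clf.coef_.shape      # (n_features,)
>>> clf.sparse_coef_     # sparse representation of coef_
>>> clf.intercept_       # constant term in the decision function
>>> clf.n_iter_          # number of subproblems solved by Celer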
- __init__(alpha=1.0, l1_ratio=1.0, max_iter=100, max_epochs=50000, p0=10, verbose=0, tol=0.0001, prune=True, fit_intercept=True, weights=None, warm_start=False, positive=False)
Methods
- __init__([alpha, l1_ratio, max_iter, ...])
- fit(X, y[, sample_weight, check_input])
  Fit model with coordinate descent.
- get_metadata_routing()
  Get metadata routing of this object.
- get_params([deep])
  Get parameters for this estimator.
- path(X, y, alphas[, coef_init, return_n_iter])
  Compute ElasticNet path with Celer.
- predict(X)
  Predict using the linear model.
- score(X, y[, sample_weight])
  Return the coefficient of determination of the prediction.
- set_fit_request(*[, check_input, sample_weight])
  Request metadata passed to the fit method.
- set_params(**params)
  Set the parameters of this estimator.
- set_score_request(*[, sample_weight])
  Request metadata passed to the score method.
Attributes
- sparse_coef_
  Sparse representation of the fitted coef_.
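To round out the method summaries, a minimal end-to-end sketch on synthetic data: predict and score follow the usual scikit-learn API, and the path call uses the signature from the table above; its return values are not asserted here, so consult the celer documentation before unpacking them.

import numpy as np
from celer import ElasticNet

rng = np.random.RandomState(42)
X = rng.randn(40, 60)
y = X @ rng.randn(60)

clf = ElasticNet(alpha=0.05, l1_ratio=0.9).fit(X, y)
y_pred = clf.predict(X)  # X @ coef_ + intercept_
r2 = clf.score(X, y)     # coefficient of determination on (X, y)

# Regularization path over a decreasing alpha grid, using the signature
# from the Methods table; the exact return values are left unspecified.
alphas = np.geomspace(1.0, 0.01, num=10)
path_output = clf.path(X, y, alphas)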