Whether to add an intercept term (default: false).
Create a model given the weights and intercept.
The dimension of training features.
In GeneralizedLinearModel, only a single linear predictor is allowed for both weights and intercept. However, for multinomial logistic regression with K possible outcomes, we train K - 1 independent binary logistic regression models, which requires K - 1 sets of linear predictors.

As a result, the workaround here is that when more than one set of linear predictors is needed, we construct a bigger weights vector which can hold both the weights and the intercepts. If the intercepts are added, the dimension of weights will be (numOfLinearPredictor) * (numFeatures + 1); if they are not, the dimension of weights will be (numOfLinearPredictor) * numFeatures. Thus, the intercepts are encapsulated into weights, and the value of intercept in GeneralizedLinearModel is left as zero.
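The packing scheme above can be sketched in plain Python. This is an illustration only, not Spark's actual code: the helper name `unpack` is hypothetical, and the assumption that each predictor's intercept is stored in the last slot of its block is labeled as such in the comments.

```python
import numpy as np

def unpack(weights, num_features, num_predictors, add_intercept):
    """Split one flat weights vector into per-predictor (weights, intercept)
    pairs, following the layout described above (hypothetical illustration).

    With intercepts, each predictor occupies numFeatures + 1 slots; the
    intercept is assumed to sit in the last slot of each block. Without
    intercepts, each predictor occupies numFeatures slots and the intercept
    is reported as zero, matching the zero intercept left in the model.
    """
    stride = num_features + (1 if add_intercept else 0)
    assert weights.size == num_predictors * stride
    pairs = []
    for k in range(num_predictors):
        block = weights[k * stride:(k + 1) * stride]
        if add_intercept:
            pairs.append((block[:num_features], float(block[num_features])))
        else:
            pairs.append((block, 0.0))
    return pairs

# 2 linear predictors, 3 features, intercepts included:
# flat vector of length 2 * (3 + 1) = 8.
pairs = unpack(np.arange(8.0), num_features=3, num_predictors=2,
               add_intercept=True)
```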
The optimizer to solve the problem.
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries, starting from the initial weights provided.
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries.
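The run-with-initial-weights pattern can be sketched without Spark. This is a minimal stand-in, not the library's implementation: plain NumPy gradient descent on a least-squares loss replaces the configurable optimizer, and the function name `run` and its parameters are hypothetical.

```python
import numpy as np

def run(data, labels, initial_weights, step=0.1, iters=500):
    """Hypothetical stand-in for run(input, initialWeights): iterate an
    optimizer (plain gradient descent on least-squares loss here) starting
    from the caller-supplied initial weights, and return the fitted weights.
    """
    w = initial_weights.astype(float).copy()
    n = len(labels)
    for _ in range(iters):
        # Gradient of 1/(2n) * ||data @ w - labels||^2 with respect to w.
        grad = data.T @ (data @ w - labels) / n
        w -= step * grad
    return w

# A tiny exactly-solvable system: the least-squares solution is w = [1, 2].
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = run(A, y, initial_weights=np.zeros(2))
```

Starting from a caller-supplied vector (rather than always zeros) is what allows warm-starting, e.g. resuming training from a previously fitted model.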
Set if the algorithm should add an intercept. Default false. We set the default to false because adding the intercept causes extra memory allocation.
Set if the algorithm should validate data before training. Default true.
Train a regression model with L1-regularization using Stochastic Gradient Descent. This solves the L1-regularized least squares regression formulation
  f(weights) = 1/(2n) ||A weights - y||^2_2 + regParam ||weights||_1
Here the data matrix has n rows, and the input RDD holds the set of rows of A, each with its corresponding right-hand-side label y. See also the documentation for the precise formulation.
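The objective above is straightforward to evaluate directly; a minimal NumPy sketch (the helper name `lasso_objective` is hypothetical, not part of the library):

```python
import numpy as np

def lasso_objective(A, y, weights, reg_param):
    """Evaluate f(weights) = 1/(2n) * ||A @ weights - y||^2_2
    + reg_param * ||weights||_1, where n is the number of rows of A."""
    n = A.shape[0]
    residual = A @ weights - y
    return 0.5 / n * (residual @ residual) + reg_param * np.abs(weights).sum()

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # n = 2 rows
y = np.array([1.0, 2.0])
w = np.array([1.0, -1.0])
val = lasso_objective(A, y, w, reg_param=0.1)
# residual = [-2, -3], so f = 13/4 + 0.1 * 2 = 3.45
```

The 1/(2n) scaling makes the loss term an average over rows, so regParam has a consistent meaning regardless of dataset size.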