PLS regression
Combining aspects of PCA (principal component analysis) and linear regression, the PLS regression algorithm extracts a set of latent components that explain as much covariance as possible between the predictor and response variables, and then performs a regression that predicts response values using the extracted components.
This technique is particularly useful when the number of predictor variables is greater than the number of observations or when the predictor variables are highly collinear. If either of these conditions is true of the input relation, ordinary linear regression cannot reliably estimate the model coefficients.
Use the following functions to train and make predictions with PLS regression models:
- PLS_REG: Creates and trains a PLS regression model
- PREDICT_PLS_REG: Applies a trained PLS model to an input relation and returns predicted values
The PLS_REG function supports PLS regression with only one response column, often referred to as PLS1. PLS regression with multiple response columns, known as PLS2, is not currently supported.
Example
This example uses a Monarch butterfly population dataset, which includes columns such as:
- Log_N (dependent variable): natural log of the western monarch population in the overwintering habitat for the respective year
- Log_PrevN: natural log of the western monarch population in the overwintering habitat for the previous year
- Coast_Dev: estimated proportion of developed lands in the overwintering habitat in coastal California
- Br_Temp: average monthly maximum temperature in degrees Celsius from June to August in the breeding habitat
- Gly_Ag: summed amount of glyphosate, in pounds, applied for agricultural purposes in California
- Coast_T: minimum monthly temperature in degrees Celsius averaged across the overwintering habitat in coastal California from December to February
As reported in Crone et al. (2019), the predictor variables in this dataset are highly collinear. Unlike ordinary linear regression techniques, which cannot disambiguate linearly dependent variables, the PLS regression algorithm is designed to handle collinearity.
After you have downloaded the Monarch data locally, you can load the data into Vertica with the following statements:
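A minimal sketch of the load, assuming the dataset is a CSV file with a header row and the columns described above; the table name `monarch_data` is used later in this example, but the `Year` column type, the other column types, and the file path are assumptions you should adjust to match your copy of the data:

```sql
-- Create the target table; column types are assumed from the descriptions above.
CREATE TABLE monarch_data (
    Year      INT,
    Log_N     FLOAT,
    Log_PrevN FLOAT,
    Coast_Dev FLOAT,
    Br_Temp   FLOAT,
    Gly_Ag    FLOAT,
    Coast_T   FLOAT
);

-- Load the downloaded CSV file (adjust the path to your local copy).
COPY monarch_data FROM LOCAL '/path/to/monarch.csv' DELIMITER ',' SKIP 1;
```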
You can then split the data into a training and test set:
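One way to split time-series data like this is by year, holding out the most recent observations for testing. A sketch, where the cutoff year and the table names `monarch_train` and `monarch_test` are illustrative choices, not part of the dataset:

```sql
-- Training set: observations before the (illustrative) cutoff year.
CREATE TABLE monarch_train AS
    SELECT * FROM monarch_data WHERE Year < 2014;

-- Test set: the remaining, most recent observations.
CREATE TABLE monarch_test AS
    SELECT * FROM monarch_data WHERE Year >= 2014;
```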
To train a PLS regression model on the training data, use the PLS_REG function. In this example, two models are trained, one with the default num_components of 2 and the other with num_components set to 3:
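A sketch of the two training calls, assuming a training relation named `monarch_train` and illustrative model names; consult the PLS_REG reference for the full parameter list:

```sql
-- Model with the default number of components (2).
SELECT PLS_REG('monarch_pls2', 'monarch_train', 'Log_N',
               'Log_PrevN, Coast_Dev, Br_Temp, Gly_Ag, Coast_T');

-- Model with three extracted components.
SELECT PLS_REG('monarch_pls3', 'monarch_train', 'Log_N',
               'Log_PrevN, Coast_Dev, Br_Temp, Gly_Ag, Coast_T'
               USING PARAMETERS num_components=3);
```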
You can use the GET_MODEL_SUMMARY function to view a summary of the model, including coefficient and parameter values:
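For example, assuming the illustrative model name `monarch_pls3`:

```sql
-- Return coefficients, parameters, and metadata for the named model.
SELECT GET_MODEL_SUMMARY(USING PARAMETERS model_name='monarch_pls3');
```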
After you train the PLS models, use the PREDICT_PLS_REG function to make predictions on an input relation, in this case the monarch_test data:
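A sketch of the prediction queries, assuming the illustrative model names from the training step and a `Year` column in the test relation:

```sql
-- Predict Log_N for each test-set year with both models.
SELECT Year,
       PREDICT_PLS_REG(Log_PrevN, Coast_Dev, Br_Temp, Gly_Ag, Coast_T
                       USING PARAMETERS model_name='monarch_pls2') AS pred_2comp,
       PREDICT_PLS_REG(Log_PrevN, Coast_Dev, Br_Temp, Gly_Ag, Coast_T
                       USING PARAMETERS model_name='monarch_pls3') AS pred_3comp
FROM monarch_test
ORDER BY Year;
```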
Query the monarch_data table to view the actual measured monarch population from the years for which values were predicted:
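A sketch, assuming a `Year` column and the illustrative test-set cutoff used earlier:

```sql
-- Actual measured values for the held-out years.
SELECT Year, Log_N
FROM monarch_data
WHERE Year >= 2014
ORDER BY Year;
```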
To compute the mean squared error (MSE) of the models' predictions, use the MSE function:
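A sketch of the evaluation for one model, assuming the illustrative names above; repeat with `model_name='monarch_pls3'` to compare the two models:

```sql
-- MSE of the 2-component model's predictions on the test set.
SELECT MSE(obs, pred) OVER()
FROM (SELECT t.Log_N AS obs,
             PREDICT_PLS_REG(t.Log_PrevN, t.Coast_Dev, t.Br_Temp, t.Gly_Ag, t.Coast_T
                             USING PARAMETERS model_name='monarch_pls2') AS pred
      FROM monarch_test t) AS q;
```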
Comparing the MSE of the models' predictions, the PLS model trained with 3 components yields a lower error and therefore performs better than the model with only 2 components.