
PLS for regression and binary classification: robustness and sparsity

Peter Filzmoser

 

Robust methods for multiple linear regression have been under development since the 1970s. Nowadays, methods are available that are robust to a high proportion of contamination while remaining highly efficient when the usual model assumptions hold. Moreover, fast algorithms have been developed and implemented in standard statistical software environments.
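
As a simple illustration (not part of the abstract), the following R sketch contrasts ordinary least squares with the MM-estimator from the robustbase package on data with a small fraction of gross outliers; the robust fit stays close to the true coefficients while the least-squares fit is pulled away.

    # Illustrative sketch: OLS vs. a robust MM-estimator on contaminated data.
    library(robustbase)
    set.seed(1)
    n <- 100
    x <- rnorm(n)
    y <- 2 + 3 * x + rnorm(n)
    y[1:10] <- y[1:10] + 20   # contaminate 10% of the responses
    coef(lm(y ~ x))           # least squares: distorted by the outliers
    coef(lmrob(y ~ x))        # MM-estimator: remains close to (2, 3)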

For high-dimensional data, particularly when the number of explanatory variables exceeds the number of observations, robust methods are mainly available in the context of Partial Least Squares (PLS) regression. However, these methods lose their predictive power if the high-dimensional data contain many noise variables that are unrelated to the response. It is desirable that the corresponding regression coefficients be exactly zero, so that the contribution of these variables to the prediction is suppressed. This can be achieved by adding an L1 penalty term to the objective function, as in LASSO regression, leading to so-called sparsity of the vector of regression coefficients.
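
To make the effect of the L1 penalty concrete, here is a small R sketch (not from the abstract) using the glmnet package, a standard LASSO implementation: with many noise variables and only a few relevant ones, the penalized fit sets most coefficients exactly to zero.

    # Illustrative sketch: the L1 (LASSO) penalty zeroes out noise variables.
    library(glmnet)
    set.seed(1)
    n <- 50; p <- 200                    # more variables than observations
    X <- matrix(rnorm(n * p), n, p)
    beta <- c(rep(2, 5), rep(0, p - 5))  # only the first 5 variables matter
    y <- X %*% beta + rnorm(n)
    fit <- cv.glmnet(X, y, alpha = 1)    # alpha = 1 selects the L1 penalty
    sum(coef(fit, s = "lambda.min") != 0)  # few nonzero coefficients remain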

We will present a robust and sparse PLS method and compare it to standard methods on simulated and real data sets. Moreover, an extension of the method to a sparse and robust two-group classification method will be outlined. The methods are available in the R package sprm.
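
A minimal usage sketch for the sprm package follows; the function and argument names (sprms with a for the number of latent components, eta for the sparsity parameter, and fun for the weight function, plus sprmsDA for the two-group classification variant) follow our reading of the package documentation and should be verified against the package help pages.

    # Sketch of sparse partial robust M regression with the sprm package.
    # Argument names follow the package documentation (see ?sprms).
    library(sprm)
    set.seed(1)
    n <- 50; p <- 200
    X <- matrix(rnorm(n * p), n, p)
    y <- drop(X[, 1:5] %*% rep(2, 5)) + rnorm(n)
    d <- data.frame(X, y = y)
    fit <- sprms(y ~ ., data = d, a = 2, eta = 0.5, fun = "Hampel")
    which(fit$coefficients != 0)  # indices of the retained variables
    # sprmsDA() provides the analogous sparse robust two-group classifier.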

 

Mini-CV:
Peter Filzmoser is a full professor in the statistics department of the Vienna University of Technology. His main research interests include robust statistics, methods for compositional data analysis, statistical computing, and R. For more information, see: http://www.statistik.tuwien.ac.at/public/filz/.