ONLINE COURSE – Model selection and model simplification (MSMS03)

Oliver Hooker
6 May '22

This course will be delivered live.

https://www.prstatistics.com/course/model-selection-and-model-simplification-msms03/

Please feel free to share!

ABOUT THIS COURSE

This two-day course covers the important and general topics of statistical model building, model evaluation, model selection, model comparison, model simplification, and model averaging. These topics are vitally important to almost every type of statistical analysis, yet they are often poorly or incompletely understood.

We begin with the fundamental issue of how to measure a model's fit and predictive performance, and discuss the major measures of model fit, such as likelihood, log likelihood, deviance, and residual sums of squares. We then turn to nested model comparison, particularly in general and generalized linear models and their mixed effects counterparts. We then consider the key concept of out-of-sample predictive performance, and discuss over-fitting, whereby an excellent fit to the observed data can lead to very poor generalization performance. As part of this discussion of out-of-sample generalization, we introduce leave-one-out cross-validation and the Akaike Information Criterion (AIC).

We then cover general concepts and methods of variable selection, including stepwise regression, ridge regression, the Lasso, and elastic nets. Following this, we turn to model averaging, an alternative that is arguably always preferable to model selection. Finally, we cover Bayesian methods of model comparison. Here, we describe how Bayesian methods allow us to easily compare completely distinct statistical models using a common metric. We also describe how Bayesian methods allow us to fit all the candidate models of potential interest, including cases where traditional methods fail.
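To give a flavour of two of the ideas mentioned above, the following is a minimal illustrative sketch (not course material, and not necessarily the tooling used on the course) of nested model comparison via log likelihood, AIC, and a likelihood-ratio test, and of leave-one-out cross-validation as an estimate of out-of-sample error. It uses Python with statsmodels and scikit-learn; the simulated data and variable names are assumptions for illustration only.

```python
# Illustrative sketch: nested model comparison and leave-one-out cross-validation.
# Simulated data; x2 is irrelevant by construction.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

# Nested models: the "full" model adds x2 to the "restricted" model.
X_restricted = sm.add_constant(np.column_stack([x1]))
X_full = sm.add_constant(np.column_stack([x1, x2]))
fit_restricted = sm.OLS(y, X_restricted).fit()
fit_full = sm.OLS(y, X_full).fit()

# Measures of model fit: log likelihood, residual sum of squares, AIC,
# plus a likelihood-ratio test for the nested comparison.
print("log-lik:", fit_restricted.llf, fit_full.llf)
print("RSS:    ", fit_restricted.ssr, fit_full.ssr)
print("AIC:    ", fit_restricted.aic, fit_full.aic)
lr_stat, p_value, df_diff = fit_full.compare_lr_test(fit_restricted)
print("LR test: stat=%.3f, p=%.3f" % (lr_stat, p_value))

# Leave-one-out cross-validation estimates out-of-sample predictive error.
loo_mse = -cross_val_score(LinearRegression(), X_full[:, 1:], y,
                           cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
print("LOO-CV mean squared error (full model):", loo_mse)
```

In this toy example the AIC and the LOO-CV error typically show little benefit from adding the irrelevant predictor, which is the kind of trade-off between in-sample fit and out-of-sample performance the course examines in depth.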

Please email oliverhooker@prstatistics.com with any questions.