Cambridge Series in Statistical and Probabilistic Mathematics
Volume Designation
48
INTERNAL BIBLIOGRAPHIES/INDEXES NOTE
Text of Note
Includes bibliographical references and indexes.
CONTENTS NOTE
Text of Note
Introduction -- Basic tail and concentration bounds -- Concentration of measure -- Uniform laws of large numbers -- Metric entropy and its uses -- Random matrices and covariance estimation -- Sparse linear models in high dimensions -- Principal component analysis in high dimensions -- Decomposability and restricted strong convexity -- Matrix estimation with rank constraints -- Graphical models for high-dimensional data -- Reproducing kernel Hilbert spaces -- Nonparametric least squares -- Localization and uniform laws -- Minimax lower bounds.
SUMMARY OR ABSTRACT
Text of Note
Recent years have witnessed an explosion in the volume and variety of data collected in all scientific disciplines and industrial settings. Such massive data sets present a number of challenges to researchers in statistics and machine learning. This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level. It includes chapters focused on core methodology and theory - including tail bounds, concentration inequalities, uniform laws and empirical processes, and random matrices - as well as chapters devoted to in-depth exploration of particular model classes - including sparse linear models, matrix models with rank constraints, graphical models, and various types of nonparametric models. With hundreds of worked examples and exercises, this text is intended both for courses and for self-study by graduate students and researchers in statistics, machine learning, and related fields who must understand, apply, and adapt modern statistical methods suited to large-scale data.