Title: Conquer: Convolution-type smoothed quantile regression
Speaker: Wenxin Zhou, Assistant Professor, University of California, San Diego
Host: Professor Jinyuan Chang, School of Statistics, Southwestern University of Finance and Economics
Time: Friday, April 24, 2020, 10:00-11:20
Live-streaming platform and meeting ID: Tencent Meeting, 822 749 036
Abstract:
Quantile regression is a powerful tool for learning the relationship between a scalar response and a multivariate predictor in the presence of outliers and under data heterogeneity. Despite its statistical importance, scalable algorithms for quantile regression remain scarce due to the non-differentiability and lack of strong convexity of the piecewise linear loss function. In this talk, we provide a comprehensive study of a convolution-type smoothing approach for quantile regression, for which we coin the term 'conquer' as a tribute to Roger Koenker. Unlike Horowitz's kernel smoothing method, which yields a non-convex loss function, the conquer loss is locally strongly convex and twice differentiable. We provide a scalable gradient-based algorithm with the Barzilai-Borwein stepsize to solve the corresponding optimization problem. For statistical inference, we propose a multiplier bootstrap method for constructing confidence intervals for the conquer estimates. Theoretically, we study the conquer estimator under the regime in which the parametric dimension is allowed to increase with the sample size. We characterize the bias induced by convolution-based smoothing, provide upper bounds on both the estimation and Bahadur-Kiefer linearization errors, and establish a Berry-Esseen bound for linear functionals of the conquer estimator. The validity of the multiplier bootstrap is also established. As evidenced by extensive numerical studies, the proposed algorithm scales to problems with very large sample sizes and dimensions. This talk is based on joint work with Xuming He, Xiaoou Pan, and Kean Ming Tan. The R package 'conquer' can be found at http://cran.r-project.org/web/packages/conquer/index.html.
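For readers who wish to try the CRAN package mentioned above, the following R snippet is a minimal usage sketch. The simulated data, the quantile level tau = 0.9, and the output field coeff accessed below are illustrative assumptions rather than part of the talk; consult the package documentation for the authoritative interface.

# install.packages("conquer")   # once, from CRAN
library(conquer)

set.seed(2020)
n <- 5000; p <- 20
X <- matrix(rnorm(n * p), n, p)         # covariate matrix (the intercept is handled by the package)
beta <- rep(1, p)
Y <- drop(X %*% beta) + rt(n, df = 2)   # heavy-tailed errors to mimic outliers

# Convolution-smoothed quantile regression at the 0.9 quantile
fit <- conquer(X, Y, tau = 0.9)
fit$coeff                               # estimated intercept and slope coefficients (field name per the package docs)
# See ?conquer for kernel/bandwidth choices and multiplier-bootstrap confidence intervals.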
About the speaker:
Wenxin Zhou is an Assistant Professor in the Department of Mathematics at the University of California, San Diego. Prior to joining UCSD, Dr. Zhou was a postdoctoral researcher at the University of Melbourne and then at Princeton University, working with Aurore Delaigle and Jianqing Fan. Dr. Zhou's research uses tools and ideas from probability theory, functional and geometric analysis, and numerical optimization to understand high-dimensional and large-scale estimation and inference problems, with a particular focus on issues such as robustness, heterogeneity, model uncertainty, and statistical and computational trade-offs.