Statistics and Data Science Seminar
A Stability Principle for Learning under Non-Stationarity

Abstract: In this talk, I will present a versatile framework for statistical learning in non-stationary environments. In each time period, our approach applies a stability principle to select a look-back window that maximizes the use of historical data while keeping the cumulative bias within an acceptable range relative to the stochastic error. Our theory demonstrates the adaptability of this approach to unknown degrees of non-stationarity: the regret bound is minimax optimal up to logarithmic factors when the population losses are either strongly convex or merely Lipschitz. At the heart of our analysis lie two novel components: a measure of similarity between functions and a segmentation technique for dividing the non-stationary data sequence into quasi-stationary pieces. This talk is based on joint work with Chengpiao Huang; the preprint is available at https://arxiv.org/abs/2310.18304.
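
To make the window-selection rule concrete, below is a minimal sketch of the stability principle in its simplest instance: estimating a drifting mean from the most recent observations. The test statistic, the constant c, and the noise scale sigma are illustrative assumptions for this toy setting, not the exact procedure of Huang and Wang. The look-back window keeps growing as long as the averages over all nested sub-windows agree with the full-window average up to the stochastic-error scale, and stops once the accumulated drift (bias) outweighs the noise.

    import numpy as np

    def select_window(data, sigma, c=2.0):
        # Illustrative stability rule (toy mean-estimation setting, not the
        # paper's exact procedure): grow the look-back window k while every
        # nested sub-window average agrees with the full-window average up
        # to c * sigma * (1/sqrt(j) + 1/sqrt(k)), the stochastic-error scale.
        T = len(data)
        best_k = 1
        for k in range(1, T + 1):
            window = data[T - k:]          # most recent k observations
            mean_k = window.mean()
            stable = all(
                abs(window[-j:].mean() - mean_k)
                <= c * sigma * (1.0 / np.sqrt(j) + 1.0 / np.sqrt(k))
                for j in range(1, k + 1)
            )
            if stable:
                best_k = k                 # bias still dominated by noise
            else:
                break                      # drift detected: stop enlarging
        return best_k

    # Toy run: a mean shift occurs 40 steps before the end of the sequence.
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(0.0, 1.0, 80), rng.normal(2.0, 1.0, 40)])
    print(select_window(data, sigma=1.0))  # ideally close to 40

In the toy run, the selected window should roughly match the length of the final quasi-stationary stretch; the actual framework applies the same principle to general population losses via the function-similarity measure mentioned in the abstract.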

About the Speaker: Kaizheng Wang is an assistant professor in the Department of Industrial Engineering and Operations Research and a member of the Data Science Institute at Columbia University. He works at the intersection of statistics, machine learning, and optimization. He received his PhD from Princeton University in 2020 and his BS from Peking University in 2015.

  
