
Robust Generalization Requires Exponentially Large Models
【2023.5.12 3:00pm, N109】


   2023-5-4 

  Colloquia & Seminars 

  

  Speaker

Prof. Liwei Wang, School of Intelligence Science and Technology, Peking University

  Title

Robust Generalization Requires Exponentially Large Models

  Time

May 12, 2023, 15:00

  Venue

N109

  Abstract

  It is well known that modern neural networks are vulnerable to adversarial examples. To mitigate this problem, a series of robust learning algorithms have been proposed. However, although the robust training error can be driven to near zero by some of these methods, all existing algorithms lead to a high robust generalization error. In this talk, I will provide a theoretical understanding of this puzzling phenomenon from the perspective of the expressive power of deep neural networks. Specifically, for binary classification problems with well-separated data, we show that for ReLU networks, while mild over-parameterization is sufficient for high robust training accuracy, there exists a constant robust generalization gap unless the size of the neural network is exponential in the data dimension d. This result holds even if the data is linearly separable.
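  For context on the setup the abstract refers to, the following is a minimal sketch, in PyTorch, of standard PGD adversarial training and of how robust training and robust test accuracy are measured. The two-layer ReLU network, the synthetic well-separated data, and all hyperparameters below are illustrative assumptions for exposition only; they are not the construction or the algorithms analyzed in the talk.

    # Minimal PGD adversarial-training sketch (PyTorch), for illustration only.
    # The network, data, and hyperparameters are assumptions, not the talk's setup.
    import torch
    import torch.nn as nn

    d, n_train, n_test, eps, alpha, steps = 100, 512, 512, 0.1, 0.02, 10

    # Well-separated synthetic binary data: label is the sign of the first coordinate,
    # with a margin larger than the perturbation radius eps.
    def make_data(n):
        x = torch.randn(n, d)
        y = (x[:, 0] > 0).float()
        x[:, 0] = x[:, 0].sign() * (x[:, 0].abs() + 2 * eps)
        return x, y

    model = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    def pgd_attack(x, y):
        # L-infinity PGD: gradient ascent on the loss, clipped to the eps-ball around x.
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = loss_fn(model(x + delta).squeeze(1), y)
            loss.backward()
            delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
            delta.grad.zero_()
        return (x + delta).detach()

    def robust_accuracy(x, y):
        # Accuracy on worst-case perturbed inputs (robust error = 1 - robust accuracy).
        x_adv = pgd_attack(x, y)
        with torch.no_grad():
            pred = (model(x_adv).squeeze(1) > 0).float()
        return (pred == y).float().mean().item()

    x_tr, y_tr = make_data(n_train)
    x_te, y_te = make_data(n_test)
    for epoch in range(200):
        x_adv = pgd_attack(x_tr, y_tr)                  # inner maximization
        opt.zero_grad()
        loss_fn(model(x_adv).squeeze(1), y_tr).backward()  # outer minimization
        opt.step()

    print("robust train acc:", robust_accuracy(x_tr, y_tr))
    print("robust test  acc:", robust_accuracy(x_te, y_te))

  The gap between the two printed quantities is the robust generalization gap discussed in the abstract; the talk's result concerns how large the network must be before this gap can vanish.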

  Affiliation

  Liwei Wang is a professor at the School of Intelligence Science and Technology, Peking University. He has long worked on machine learning research and has obtained a series of results in machine learning theory. He has published more than 200 papers in leading international journals and conferences on machine learning, and serves on the editorial board of TPAMI, a leading journal in artificial intelligence. He received the ICLR 2023 Outstanding Paper Award and was named to AI's 10 to Watch, the first scholar from Asia to receive that honor since the award was established.

  

  
