Deep learning has changed the way we do artificial intelligence (AI) and is poised to change the way we do science. At the same time, it is generally perceived to be a collection of techniques, or even tricks, without a solid theoretical foundation. In this talk, we will try to address three questions: What is the magic behind neural network-based machine learning? How can we use deep learning to solve challenging problems in science and scientific computing? Can we formulate more general and perhaps mathematically more natural models of machine learning? The main message is that (deep) neural networks provide an effective tool for approximating high-dimensional functions. This allows us to attack many difficult problems that are known to suffer from the curse of dimensionality. We will discuss the theoretical progress that has been made so far along these lines, and highlight the most pressing unsolved mathematical and practical issues.
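
To make the main message concrete, one representative result in this direction (a Barron-type approximation bound; the precise function class and constants are not spelled out in this abstract) compares a two-layer network with classical approximation schemes:

\[
f_m(x) \;=\; \frac{1}{m}\sum_{k=1}^{m} a_k\,\sigma(w_k \cdot x + b_k),
\qquad
\|f - f_m\|_{L^2(\mu)}^2 \;\lesssim\; \frac{C_f^2}{m},
\]

where \(C_f\) is a norm measuring the complexity of the target function \(f\) (for instance, its Barron norm). The rate \(1/m\) in the number of neurons does not depend on the input dimension \(d\), whereas classical schemes such as piecewise polynomials on a mesh typically achieve only \(m^{-s/d}\) for functions with \(s\) derivatives; this deterioration with \(d\) is the curse of dimensionality referred to above.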