An Acceleration Strategy for Randomize-Then-Optimize Sampling via Deep Neural Networks

Authors

  • Liang Yan School of Mathematics, Southeast University, Nanjing, 210096, China
  • Tao Zhou CMIS & LSEC, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China

DOI:

https://doi.org/10.4208/jcm.2102-m2020-0339

Keywords:

Bayesian inverse problems, deep neural networks, Markov chain Monte Carlo.

Abstract

Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel goal-oriented deep neural network (DNN) surrogate approach that substantially reduces the computational burden of RTO. In particular, we propose to draw the training points for the DNN surrogate from a locally approximated posterior distribution, yielding a flexible and efficient sampling algorithm that converges to the direct RTO approach. We demonstrate the computational accuracy and efficiency of the DNN-RTO approach on a Bayesian inverse problem governed by elliptic PDEs, where it significantly outperforms the traditional RTO.
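
The following is a minimal, hypothetical Python sketch of the idea summarized in the abstract: train a DNN surrogate of the expensive forward model on parameter-output pairs (which, per the abstract, would be drawn from a locally approximated posterior), then run a standard randomize-then-optimize loop against the cheap surrogate. All function names, network sizes, and the least-squares formulation below are illustrative assumptions, not the authors' implementation; the Metropolization/reweighting step of full RTO is omitted.

```python
# Hypothetical DNN-RTO sketch (not the paper's code): surrogate-accelerated
# randomize-then-optimize for a Gaussian prior / Gaussian noise model.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import least_squares


def build_surrogate(x_train, y_train, epochs=2000, lr=1e-3):
    """Fit a small fully connected network to (parameter, forward-output) pairs."""
    model = nn.Sequential(
        nn.Linear(x_train.shape[1], 64), nn.Tanh(),
        nn.Linear(64, 64), nn.Tanh(),
        nn.Linear(64, y_train.shape[1]),
    )
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    X = torch.tensor(x_train, dtype=torch.float32)
    Y = torch.tensor(y_train, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), Y)
        loss.backward()
        opt.step()
    return model


def rto_sample(surrogate, data, noise_std, prior_std, dim, n_samples, rng):
    """Randomize-then-optimize with the surrogate in place of the true forward map.

    Each sample solves a perturbed least-squares problem
        min_x ||(F_nn(x) - (data + eps)) / noise_std||^2 + ||(x - xi) / prior_std||^2
    with eps ~ N(0, noise_std^2 I) and xi ~ N(0, prior_std^2 I).
    """
    def F(x):
        with torch.no_grad():
            xt = torch.tensor(x, dtype=torch.float32)[None, :]
            return surrogate(xt).numpy().ravel()

    samples = []
    for _ in range(n_samples):
        eps = rng.normal(0.0, noise_std, size=data.shape)   # perturb the data
        xi = rng.normal(0.0, prior_std, size=dim)           # perturb the prior mean

        def residual(x):
            return np.concatenate([
                (F(x) - (data + eps)) / noise_std,
                (x - xi) / prior_std,
            ])

        sol = least_squares(residual, x0=np.zeros(dim))
        samples.append(sol.x)
    return np.array(samples)
```

In this sketch the expensive PDE solves are confined to generating the training pairs for `build_surrogate`; every optimization inside the sampling loop only evaluates the cheap network, which is the source of the speedup claimed for DNN-RTO.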

Published

2021-11-19

Issue

Section

Articles