DGM

DGM: a deep learning algorithm for solving partial differential equations. High-dimensional PDEs have been a longstanding computational challenge. We propose to solve high-dimensional PDEs by approximating the solution with a deep neural network which is trained to satisfy the differential operator, initial condition, and boundary conditions. Our algorithm is meshfree, which is key since meshes become infeasible in higher dimensions. Instead of forming a mesh, the neural network is trained on batches of randomly sampled time and space points. The algorithm is tested on a class of high-dimensional free boundary PDEs, which we are able to accurately solve in up to 200 dimensions. The algorithm is also tested on a high-dimensional Hamilton-Jacobi-Bellman PDE and Burgers’ equation. The deep learning algorithm approximates the general solution to the Burgers’ equation for a continuum of different boundary conditions and physical conditions (which can be viewed as a high-dimensional space). We call the algorithm a “Deep Galerkin method (DGM)” since it is similar in spirit to Galerkin methods, with the solution approximated by a neural network instead of a linear combination of basis functions. In addition, we prove a theorem regarding the approximation power of neural networks for a class of quasilinear parabolic PDEs.
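The training recipe described above — penalize the differential-operator residual and the initial/boundary mismatch at randomly sampled points, with no mesh — can be sketched on a toy problem. The snippet below is a minimal illustration, not the paper's implementation: it fits a tiny one-hidden-layer network to the ODE u'(t) = -u(t), u(0) = 1 (exact solution e^{-t}), standing in for a PDE, and uses finite-difference gradients with a hand-rolled Adam update instead of backpropagation purely for self-containedness. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for DGM (assumption, for illustration only): train u(t; theta)
# to satisfy u'(t) = -u(t) with u(0) = 1; the exact solution is exp(-t).
H = 6  # hidden units; theta packs [w1, b1, w2, b2]

def unpack(theta):
    return theta[:H], theta[H:2*H], theta[2*H:3*H], theta[3*H]

def u(theta, t):
    w1, b1, w2, b2 = unpack(theta)
    return np.tanh(np.outer(t, w1) + b1) @ w2 + b2

def dudt(theta, t):
    # analytic derivative of the network output with respect to t
    w1, b1, w2, _ = unpack(theta)
    return (1.0 - np.tanh(np.outer(t, w1) + b1) ** 2) @ (w1 * w2)

def loss(theta, t):
    res = dudt(theta, t) + u(theta, t)        # differential-operator residual
    ic = u(theta, np.array([0.0]))[0] - 1.0   # initial-condition mismatch
    return np.mean(res ** 2) + ic ** 2

theta = 0.5 * rng.standard_normal(3 * H + 1)
m = np.zeros_like(theta)
v = np.zeros_like(theta)  # Adam moment estimates
for step in range(1, 2501):
    # fresh randomly sampled time points each step: meshfree training
    t = rng.uniform(0.0, 2.0, size=32)
    g = np.empty_like(theta)
    for i in range(theta.size):  # finite-difference gradient, for clarity
        d = np.zeros_like(theta)
        d[i] = 1e-5
        g[i] = (loss(theta + d, t) - loss(theta - d, t)) / 2e-5
    m = 0.9 * m + 0.1 * g
    v = 0.999 * v + 0.001 * g ** 2
    theta -= 0.02 * (m / (1 - 0.9 ** step)) / (np.sqrt(v / (1 - 0.999 ** step)) + 1e-8)

print(u(theta, np.array([1.0]))[0])  # should approach exp(-1) ≈ 0.368
```

A real DGM implementation replaces the toy ODE with a PDE operator over sampled space-time points, uses a deeper architecture, and obtains derivatives of the network via automatic differentiation rather than finite differences.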


References in zbMATH (referenced in 54 articles, 1 standard article)

Showing results 1 to 20 of 54, sorted by year (citations).


  1. Carmona, René; Laurière, Mathieu: Convergence analysis of machine learning algorithms for the numerical solution of mean field control and games. I: The ergodic case (2021)
  2. Dolgov, Sergey; Kalise, Dante; Kunisch, Karl K.: Tensor decomposition methods for high-dimensional Hamilton-Jacobi-Bellman equations (2021)
  3. Ito, Kazufumi; Reisinger, Christoph; Zhang, Yufei: A neural network-based policy iteration algorithm with global (H^2)-superlinear convergence for stochastic games on domains (2021)
  4. Kharazmi, Ehsan; Zhang, Zhongqiang; Karniadakis, George Em: hp-VPINNs: variational physics-informed neural networks with domain decomposition (2021)
  5. Laakmann, Fabian; Petersen, Philipp: Efficient approximation of solutions of parametric linear transport equations by ReLU DNNs (2021)
  6. Lu, Lu; Meng, Xuhui; Mao, Zhiping; Karniadakis, George Em: DeepXDE: a deep learning library for solving differential equations (2021)
  7. Nakamura-Zimmerer, Tenavi; Gong, Qi; Kang, Wei: Adaptive deep learning for high-dimensional Hamilton-Jacobi-Bellman equations (2021)
  8. Parand, K.; Aghaei, A. A.; Jani, M.; Ghodsi, A.: A new approach to the numerical solution of Fredholm integral equations using least squares-support vector regression (2021)
  9. Pham, Huyên; Warin, Xavier; Germain, Maximilien: Neural networks-based backward scheme for fully nonlinear PDEs (2021)
  10. Wang, Rui-Qi; Ling, Liming; Zeng, Delu; Feng, Bao-Feng: A deep learning improved numerical method for the simulation of rogue waves of nonlinear Schrödinger equation (2021)
  11. Zhang, Lei; Cheng, Lin; Li, Hengyang; Gao, Jiaying; Yu, Cheng; Domel, Reno; Yang, Yang; Tang, Shaoqiang; Liu, Wing Kam: Hierarchical deep-learning neural networks: finite elements and beyond (2021)
  12. Zhuang, Xiaoying; Guo, Hongwei; Alajlan, Naif; Zhu, Hehua; Rabczuk, Timon: Deep autoencoder based energy method for the bending, vibration, and buckling analysis of Kirchhoff plates with transfer learning (2021)
  13. Arridge, S.; Hauptmann, A.: Networks for nonlinear diffusion problems in imaging (2020)
  14. Beck, Christian; Hornung, Fabian; Hutzenthaler, Martin; Jentzen, Arnulf; Kruse, Thomas: Overcoming the curse of dimensionality in the numerical approximation of Allen-Cahn partial differential equations via truncated full-history recursive multilevel Picard approximations (2020)
  15. Benzakour Amine, M.: Linearized implicit methods based on a single-layer neural network: application to Keller-Segel models (2020)
  16. Darbon, Jérôme; Langlois, Gabriel P.; Meng, Tingwei: Overcoming the curse of dimensionality for some Hamilton-Jacobi partial differential equations via neural network architectures (2020)
  17. E, Weinan; Ma, Chao; Wu, Lei: Machine learning from a continuous viewpoint. I (2020)
  18. Geneva, Nicholas; Zabaras, Nicholas: Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks (2020)
  19. Geng, Zhenglin; Johnson, Daniel; Fedkiw, Ronald: Coercing machine learning to output physically accurate results (2020)
  20. Gühring, Ingo; Kutyniok, Gitta; Petersen, Philipp: Error bounds for approximations with deep ReLU neural networks in (W^{s,p}) norms (2020)
