DeepONet
DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. While it is widely known that neural networks are universal approximators of continuous functions, a less known and perhaps more powerful result is that a neural network with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem is suggestive of the potential application of neural networks in learning nonlinear operators from data. However, the theorem guarantees only a small approximation error for a sufficiently large network, and does not consider the important optimization and generalization errors. To realize this theorem in practice, we propose deep operator networks (DeepONets) to learn operators accurately and efficiently from a relatively small dataset. A DeepONet consists of two sub-networks: one encoding the input function at a fixed number of sensors x_i, i = 1, …, m (the branch net), and another encoding the locations at which the output function is evaluated (the trunk net). We perform systematic simulations for identifying two types of operators, i.e., dynamical systems and partial differential equations, and demonstrate that DeepONet significantly reduces the generalization error compared to fully connected networks. We also derive theoretically the dependence of the approximation error on the number of sensors (where the input function is defined) as well as on the input function type, and we verify the theorem with computational results. More importantly, we observe high-order error convergence in our computational tests, namely polynomial rates (from half order to fourth order) and even exponential convergence with respect to the training dataset size.
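The branch/trunk decomposition described above can be illustrated with a minimal, untrained sketch: the branch net maps the m sensor values of the input function u to p latent features, the trunk net maps an output location y to p features, and the operator value G(u)(y) is approximated by their dot product. This is only an illustrative assumption-laden toy (the helper names `mlp_init`/`mlp_apply` and the layer sizes are made up here), not the authors' implementation or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes, rng):
    # Hypothetical helper: random weights/biases for a small fully connected net.
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_apply(params, x):
    # Forward pass: tanh on hidden layers, linear output layer.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m, p = 20, 10                         # m sensors, p latent features (illustrative sizes)
branch = mlp_init([m, 40, p], rng)    # encodes u(x_1), ..., u(x_m)
trunk = mlp_init([1, 40, p], rng)     # encodes the output location y

def deeponet(u_sensors, y):
    # G(u)(y) ~ sum_k b_k(u) * t_k(y): dot product of branch and trunk features.
    b = mlp_apply(branch, u_sensors)        # shape (p,)
    t = mlp_apply(trunk, np.atleast_1d(y))  # shape (p,)
    return float(b @ t)

# Evaluate the (untrained) surrogate on one sample input function.
xs = np.linspace(0.0, 1.0, m)
u = np.sin(2 * np.pi * xs)   # input function sampled at the m sensors
value = deeponet(u, 0.5)     # prediction of G(u)(0.5); meaningless before training
```

In practice both sub-networks would be trained jointly by regressing `deeponet(u, y)` against observed operator values over many (u, y) pairs; the sketch only shows the forward structure.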
References in zbMATH (referenced in 50 articles)
Showing results 1 to 20 of 50, sorted by year.
- Brunton, Steven L.; Budišić, Marko; Kaiser, Eurika; Kutz, J. Nathan: Modern Koopman theory for dynamical systems (2022)
- Burkovska, Olena; Glusa, Christian; D’Elia, Marta: An optimization-based approach to parameter learning for fractional type nonlocal models (2022)
- Chen, Zhen; Churchill, Victor; Wu, Kailiang; Xiu, Dongbin: Deep neural network modeling of unknown partial differential equations in nodal space (2022)
- Cui, Tao; Wang, Ziming; Xiang, Xueshuang: An efficient neural network method with plane wave activation functions for solving Helmholtz equation (2022)
- Gao, Yihang; Ng, Michael K.: Wasserstein generative adversarial uncertainty quantification in physics-informed neural networks (2022)
- Goswami, Somdatta; Yin, Minglang; Yu, Yue; Karniadakis, George Em: A physics-informed variational DeepONet for predicting crack path in quasi-brittle materials (2022)
- Gupta, Rachit; Jaiman, Rajeev: A hybrid partitioned deep learning methodology for moving interface and fluid-structure interaction (2022)
- Henkes, Alexander; Wessels, Henning; Mahnken, Rolf: Physics informed neural networks for continuum micromechanics (2022)
- Herrmann, Lukas; Opschoor, Joost A. A.; Schwab, Christoph: Constructive deep ReLU neural network approximation (2022)
- Hu, C.; Martin, S.; Dingreville, R.: Accelerating phase-field predictions via recurrent neural networks learning the microstructure evolution in latent space (2022)
- Hu, Jia-Wei; Zhang, Wei-Wei: Mesh-Conv: convolution operator with mesh resolution independence for flow field modeling (2022)
- Hu, Pipi; Yang, Wuyue; Zhu, Yi; Hong, Liu: Revealing hidden dynamics from time-series data by ODENet (2022)
- Hutter, Clemens; Gül, Recep; Bölcskei, Helmut: Metric entropy limits on recurrent neural network learning of linear dynamical systems (2022)
- Kim, Youngkyu; Choi, Youngsoo; Widemann, David; Zohdi, Tarek: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder (2022)
- Kontolati, Katiana; Loukrezis, Dimitrios; Giovanis, Dimitrios G.; Vandanapu, Lohit; Shields, Michael D.: A survey of unsupervised learning methods for high-dimensional uncertainty quantification in black-box-type problems (2022)
- Kovacs, Alexander; Exl, Lukas; Kornell, Alexander; Fischbacher, Johann; Hovorka, Markus; Gusenbauer, Markus; Breth, Leoni; Oezelt, Harald; Yano, Masao; Sakuma, Noritsugu; Kinoshita, Akihito; Shoji, Tetsuya; Kato, Akira; Schrefl, Thomas: Conditional physics informed neural networks (2022)
- Lu, Lu; Meng, Xuhui; Cai, Shengze; Mao, Zhiping; Goswami, Somdatta; Zhang, Zhongqiang; Karniadakis, George Em: A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data (2022)
- Margenberg, Nils; Lessig, Christian; Richter, Thomas: Structure preservation for the deep neural network multigrid solver (2022)
- Mattey, Revanth; Ghosh, Susanta: A novel sequential method to train physics informed neural networks for Allen Cahn and Cahn Hilliard equations (2022)
- Meng, Xuhui; Yang, Liu; Mao, Zhiping; del Águila Ferrandis, José; Karniadakis, George Em: Learning functional priors and posteriors from data and physics (2022)