Novel and general discontinuity-removing PINNs for elliptic interface problems
This is a PINN paper on solving elliptic interface problems with variable (piecewise-discontinuous) coefficients.
Abstract
This paper proposes a novel and general framework of discontinuity-removing physics-informed neural networks (DR-PINNs) for addressing elliptic interface problems. In DR-PINNs, the solution is split into a smooth component and a non-smooth component, each represented by a separate network surrogate that can be trained either independently or together. The decoupling strategy involves training the two components sequentially. The first network handles the non-smooth part and pre-learns partial or full jumps to assist the second network in learning the complementary PDE conditions. Three decoupling strategies for handling interface problems are built by removing some jumps and incorporating cusp-capturing techniques. On the other hand, the decoupled approaches rely heavily on the cusp-enforced level-set function and are less efficient due to the need for two separate training stages. To overcome these limitations, a novel DR-PINN coupled approach is proposed in this work, where both components learn complementary conditions simultaneously in an integrated single network, eliminating the need for cusp-enforced level-set functions. Furthermore, the stability and accuracy of training are enhanced by an innovative lightweight feedforward neural network (FNN) architecture and a powerful geodesic acceleration Levenberg-Marquardt (gd-LM) optimizer. Several numerical experiments illustrate the effectiveness and great potential of the proposed method, with accuracy outperforming most deep neural network approaches and achieving state-of-the-art results.
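For context, the elliptic interface problem the paper targets is usually posed as follows (generic notation assumed here, not copied from the paper): a domain Ω is split by an interface Γ into Ω⁻ and Ω⁺, the coefficient β is discontinuous across Γ, and the solution must satisfy jump conditions on Γ in addition to the PDE and boundary data:

```latex
% Standard elliptic interface problem; [\,\cdot\,] denotes the jump across \Gamma.
\begin{aligned}
  \nabla \cdot \bigl( \beta(\mathbf{x}) \, \nabla u(\mathbf{x}) \bigr) &= f(\mathbf{x})
      && \text{in } \Omega \setminus \Gamma, \\
  [u] &= g, \qquad [\beta \, \partial_{\mathbf{n}} u] = h
      && \text{on } \Gamma, \\
  u &= u_b && \text{on } \partial\Omega .
\end{aligned}
```

The split described in the abstract then writes the surrogate as u(x) = u_s(x; θ₁) + u_n(x; θ₂), with the non-smooth term u_n responsible for reproducing the jumps [u] and [β ∂ₙu].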
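To make the splitting concrete, below is a minimal JAX sketch of a two-network ansatz with the cusp-capturing level-set feature the abstract refers to. Everything here (the network sizes, the circular level-set function phi, the initialization) is a hypothetical stand-in rather than the authors' implementation, which additionally relies on the lightweight FNN architecture and the gd-LM optimizer not shown here.

```python
# Minimal sketch (assumed reconstruction, not the authors' code) of the
# DR-PINN solution split: u = smooth net + non-smooth net, where the
# non-smooth net sees |phi(x)| as an extra input. |phi| is continuous but
# has a derivative jump across the interface {phi = 0}, which lets the
# network represent a cusp there (the "cusp-capturing" idea).
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Small feedforward net with Glorot-style initialization."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (din, dout)) * jnp.sqrt(2.0 / (din + dout))
        params.append((w, jnp.zeros(dout)))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze()

def phi(x):
    # Hypothetical level-set function: a circle of radius 0.5 as interface.
    return jnp.sum(x**2) - 0.25

def u(params_s, params_j, x):
    # DR-PINN-style ansatz: smooth component + non-smooth component.
    x_aug = jnp.concatenate([x, jnp.abs(phi(x))[None]])
    return mlp(params_s, x) + mlp(params_j, x_aug)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params_s = init_mlp(k1, [2, 20, 20, 1])  # smooth component
params_j = init_mlp(k2, [3, 20, 20, 1])  # non-smooth component (+|phi| input)

x = jnp.array([0.3, 0.4])
print(u(params_s, params_j, x))                       # surrogate value at x
print(jax.grad(u, argnums=2)(params_s, params_j, x))  # du/dx via autodiff
```

In the decoupled strategies, params_j would be trained first on the jump conditions and then frozen while params_s learns the remaining PDE and boundary conditions; the coupled variant optimizes both parameter sets jointly against the full residual loss.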