
6 posts tagged with "Thesis Study Notes"


An efficient neural-network and finite-difference hybrid method for elliptic interface problems with applications

· 5 min read
Tanger
Academic rubbish | CV Engineers | Visual bubble | compute math | PINN | Mathematical model

This paper develops a neural-network and finite-difference hybrid method for elliptic interface problems.

Abstract

Original abstract:

A new and efficient neural-network and finite-difference hybrid method is developed for solving Poisson equation in a regular domain with jump discontinuities on embedded irregular interfaces. Since the solution has low regularity across the interface, when applying finite difference discretization to this problem, an additional treatment accounting for the jump discontinuities must be employed. Here, we aim to elevate such an extra effort to ease our implementation by machine learning methodology. The key idea is to decompose the solution into singular and regular parts. The neural network learning machinery incorporating the given jump conditions finds the singular solution, while the standard five-point Laplacian discretization is used to obtain the regular solution with associated boundary conditions. Regardless of the interface geometry, these two tasks only require supervised learning for function approximation and a fast direct solver for Poisson equation, making the hybrid method easy to implement and efficient. The two- and three-dimensional numerical results show that the present hybrid method preserves second-order accuracy for the solution and its derivatives, and it is comparable with the traditional immersed interface method in the literature. As an application, we solve the Stokes equations with singular forces to demonstrate the robustness of the present method.

Translated abstract:

A new and efficient neural-network and finite-difference hybrid method is developed for solving the Poisson equation on a regular domain with jump discontinuities across embedded irregular interfaces. Because the solution has low regularity across the interface, a finite-difference discretization of this problem requires additional treatment to account for the jump discontinuities. Here, that extra effort is offloaded to a machine-learning component to simplify the implementation.

The key idea is to decompose the solution into a singular part and a regular part. A neural network trained with the given jump conditions learns the singular part, while the standard five-point Laplacian discretization, together with the associated boundary conditions, yields the regular part. Regardless of the interface geometry, these two tasks only require supervised function approximation and a fast direct Poisson solver, which makes the hybrid method easy to implement and efficient. Two- and three-dimensional numerical results show that the hybrid method retains second-order accuracy for the solution and its derivatives and is comparable to the traditional immersed interface method in the literature. As an application, the Stokes equations with singular forces are solved to demonstrate the robustness of the method.
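
Because the regular part reduces to a standard Poisson solve, the only stencil involved is the classical five-point Laplacian. Below is a minimal NumPy sketch of that stencil, not the authors' code; the grid layout, the spacing h, and the function name are illustrative assumptions.

```python
import numpy as np

def five_point_laplacian(u, h):
    """Apply the standard five-point Laplacian to the interior of a 2D grid.

    u : 2D array of grid values, h : uniform grid spacing.
    Boundary entries of the result are left at zero.
    """
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    ) / h**2
    return lap
```

Since the network absorbs the singular part, this stencil never needs the extra interface corrections that a pure finite-difference scheme would require, which is where the simplicity of the hybrid method comes from.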

A new analytical formula for the wave equations with variable coefficients

· 14 min read
Tanger
Academic rubbish | CV Engineers | Visual bubble | compute math | PINN | Mathematical model

This paper studies the wave equation with variable coefficients and proposes a new analytical formula.

Abstract

Original abstract:

This article presents a new analytical formula for the Cauchy problem of the wave equation with variable coefficients, which is a much simpler solution than that given by the Poisson formula. The derivation is based on the variation-of-constants formula and the theory of pseudodifferential operator. The formula is applied to an example to illustrate the feasibility.

Translated abstract:

This paper presents a new analytical formula for the Cauchy problem of the wave equation with variable coefficients, which is much simpler than the solution given by the Poisson formula. The derivation is based on the variation-of-constants formula and the theory of pseudodifferential operators. The formula is applied to an example to illustrate its feasibility.
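
For context only: the block below is the classical d'Alembert formula for the constant-coefficient 1D Cauchy problem, not the paper's new variable-coefficient formula. The symbols c, f, and g (wave speed, initial displacement, initial velocity) are standard notation introduced here for illustration.

```latex
% Constant-coefficient 1D Cauchy problem (background only):
%   u_{tt} = c^2 u_{xx}, \quad u(x,0) = f(x), \quad u_t(x,0) = g(x)
u(x,t) = \frac{f(x+ct) + f(x-ct)}{2}
       + \frac{1}{2c}\int_{x-ct}^{x+ct} g(s)\,\mathrm{d}s
```

The paper's contribution is an analogous closed-form expression when the coefficient is allowed to vary, derived via the variation-of-constants formula and pseudodifferential operators.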

Novel and general discontinuity-removing PINNs for elliptic interface problems

· 25 min read
Tanger
Academic rubbish | CV Engineers | Visual bubble | compute math | PINN | Mathematical model

This is a PINN paper on handling elliptic interface problems.

Abstract

Original abstract:

This paper proposes a novel and general framework of the discontinuity-removing physics-informed neural networks (DR-PINNs) for addressing elliptic interface problems. In the DR-PINNs, the solution is split into a smooth component and a non-smooth component, each represented by a separate network surrogate that can be trained either independently or together. The decoupling strategy involves training the two components sequentially. The first network handles the non-smooth part and pre-learns partial or full jumps to assist the second network in learning the complementary PDE conditions. Three decoupling strategies of handling interface problems are built by removing some jumps and incorporating cusp-capturing techniques. On the other hand, the decoupled approaches rely heavily on the cusp-enforced level-set function and are less efficient due to the need for two separate training stages. To overcome these limitations, a novel DR-PINN coupled approach is proposed in this work, where both components learn complementary conditions simultaneously in an integrated single network, eliminating the need for cusp-enforced level-set functions. Furthermore, the stability and accuracy of training are enhanced by an innovative architecture of the lightweight feedforward neural network (FNN) and a powerful geodesic acceleration Levenberg-Marquardt (gd-LM) optimizer. Several numerical experiments illustrate the effectiveness and great potential of the proposed method, with accuracy outperforming most deep neural network approaches and achieving the state-of-the-art results.

Translated abstract:

This paper proposes a novel and general framework of discontinuity-removing physics-informed neural networks (DR-PINNs) for solving elliptic interface problems. In DR-PINNs, the solution is split into a smooth component and a non-smooth component, each represented by a separate network surrogate that can be trained independently or jointly. The decoupling strategy trains the two components sequentially: the first network handles the non-smooth part and pre-learns some or all of the jumps, which helps the second network learn the complementary PDE conditions. By removing some of the jumps and incorporating cusp-capturing techniques, three decoupled strategies for handling interface problems are built. On the other hand, the decoupled approaches rely heavily on a cusp-enforced level-set function and are less efficient because they require two separate training stages. To overcome these limitations, this work proposes a novel coupled DR-PINN approach in which both components learn the complementary conditions simultaneously within a single integrated network, eliminating the need for cusp-enforced level-set functions. In addition, training stability and accuracy are enhanced by an innovative lightweight feedforward neural network (FNN) architecture and a powerful geodesic acceleration Levenberg-Marquardt (gd-LM) optimizer. Several numerical experiments illustrate the effectiveness and great potential of the proposed method, with accuracy outperforming most deep neural network approaches and reaching state-of-the-art results.
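
To make the smooth/non-smooth split concrete, here is a minimal conceptual sketch assuming PyTorch: two small MLP surrogates combined through a level-set indicator. The network sizes, the function dr_pinn_solution, and the level-set phi are illustrative assumptions, not the paper's lightweight FNN architecture or its gd-LM training.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small fully connected surrogate for one solution component (illustrative)."""
    def __init__(self, in_dim=2, width=32, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def dr_pinn_solution(x, phi, smooth_net, nonsmooth_net):
    """Compose u(x) = u_smooth(x) + 1_{phi(x) > 0} * u_nonsmooth(x).

    phi is a level-set function whose sign identifies the two sides of the
    interface; the non-smooth component is only active on the side carrying the jump.
    """
    indicator = (phi(x) > 0).float().unsqueeze(-1)
    return smooth_net(x) + indicator * nonsmooth_net(x)

# Example: 2D points and a circular interface of radius 0.5 encoded by a level set.
x = torch.rand(16, 2)
phi = lambda p: p.pow(2).sum(dim=-1) - 0.25
u = dr_pinn_solution(x, phi, MLP(), MLP())  # shape (16, 1)
```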

DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators

· 16 min read
Tanger
Academic rubbish | CV Engineers | Visual bubble | compute math | PINN | Mathematical model

This is a seminal paper that proposes DeepONet, a deep-learning framework for learning operators and solving differential equations; the paper explains the underlying principles.

Abstract

Original abstract:

While it is widely known that neural networks are universal approximators of continuous functions, a less known and perhaps more powerful result is that a neural network with a single hidden layer can approximate accurately any nonlinear continuous operator [5]. This universal approximation theorem is suggestive of the potential application of neural networks in learning nonlinear operators from data. However, the theorem guarantees only a small approximation error for a sufficiently large network, and does not consider the important optimization and generalization errors. To realize this theorem in practice, we propose deep operator networks (DeepONets) to learn operators accurately and efficiently from a relatively small dataset. A DeepONet consists of two sub-networks, one for encoding the input function at a fixed number of sensors x_i, i = 1, ..., m (branch net), and another for encoding the locations for the output functions (trunk net). We perform systematic simulations for identifying two types of operators, i.e., dynamic systems and partial differential equations, and demonstrate that DeepONet significantly reduces the generalization error compared to the fully-connected networks. We also derive theoretically the dependence of the approximation error in terms of the number of sensors (where the input function is defined) as well as the input function type, and we verify the theorem with computational results. More importantly, we observe high-order error convergence in our computational tests, namely polynomial rates (from half order to fourth order) and even exponential convergence with respect to the training dataset size.

Translated abstract:

Although it is widely known that neural networks are universal approximators of continuous functions, a less well-known and perhaps more powerful result is that a neural network with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem suggests the potential of neural networks for learning nonlinear operators from data. However, the theorem only guarantees a small approximation error for a sufficiently large network and does not consider the important optimization and generalization errors. To realize this theorem in practice, we propose deep operator networks (DeepONets) to learn operators accurately and efficiently from relatively small datasets. A DeepONet consists of two sub-networks: one that encodes the input function at a fixed number of sensors x_i, i = 1, ..., m (the branch net), and one that encodes the locations of the output function (the trunk net). We perform systematic simulations to identify two types of operators, namely dynamical systems and partial differential equations, and show that DeepONet significantly reduces the generalization error compared with fully connected networks. We also derive theoretically how the approximation error depends on the number of sensors (where the input function is defined) and on the type of input function, and we verify the theorem with computational results. More importantly, we observe high-order error convergence in our computational tests, namely polynomial rates (from half order to fourth order) and even exponential convergence with respect to the size of the training dataset.
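
The branch/trunk structure described above is easy to sketch. Below is a minimal DeepONet forward pass assuming PyTorch; the layer widths, the number of sensors m, the embedding size p, and the dot-product-plus-bias combination are standard illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Branch net encodes u(x_1), ..., u(x_m); trunk net encodes the query location y.

    The operator output is the inner product of the two p-dimensional embeddings
    plus a trainable bias: G(u)(y) ~ sum_k b_k(u) * t_k(y) + b0.
    """
    def __init__(self, m=100, p=40, y_dim=1):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m, 64), nn.ReLU(), nn.Linear(64, p))
        self.trunk = nn.Sequential(nn.Linear(y_dim, 64), nn.ReLU(), nn.Linear(64, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):
        b = self.branch(u_sensors)                       # (batch, p)
        t = self.trunk(y)                                # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias

# Example shapes: 8 input functions sampled at m=100 sensors, one query point each.
model = DeepONet()
u = torch.randn(8, 100)
y = torch.randn(8, 1)
out = model(u, y)                                        # (8, 1)
```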

Automated and Context-Aware Repair of Color-Related Accessibility Issues for Android Apps

· 12 min read
zqqqj
super bug engineer 4 nlp,robot,cv,ml and ds

1. Abstract

Roughly 15% of the world's population lives with some form of disability or visual impairment, yet many mobile UX designers and developers pay little attention to accessibility when building apps. This means that about one in seven users faces an unequal experience when using an app, which not only hurts those users but may also violate relevant regulations. In fact, building accessibility into an app improves the overall user experience and adds business value. Accordingly, a number of studies and detection tools have been proposed to identify accessibility issues.

However, repair lags far behind detection, especially for color-related accessibility issues such as insufficient text contrast and poor image contrast. These issues severely affect low-vision and elderly users, and current repair approaches cannot handle them.

To address this, we propose Iris, an automated and context-aware method for repairing color-related accessibility issues. By combining a consistency-preserving color replacement strategy with an attribute localization algorithm, Iris fixes the issues while keeping the UI style consistent. Experiments show that Iris achieves a 91.38% repair success rate with good efficiency. A user study also indicates satisfactory results, and developer feedback has been positive. Of the 40 pull requests we submitted on GitHub, 9 have already been merged and 4 more are under active follow-up discussion. The Iris tool is open source, and we hope it will drive further research on mobile accessibility repair.
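
For context on what "text contrast" means here: accessibility checkers typically score a foreground/background pair with the WCAG contrast ratio. The sketch below shows that standard WCAG 2.x computation; it is background for the problem Iris targets, not Iris's internal repair logic.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an sRGB color given as 0-255 integers."""
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; WCAG AA requires >= 4.5 for normal text."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Example: dark gray text on white passes AA (ratio about 9.7).
print(round(contrast_ratio((68, 68, 68), (255, 255, 255)), 1))
```

Iris goes beyond flagging such violations: it picks replacement colors that restore the ratio while staying consistent with the app's existing design.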

Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations

· 11 min read
Tanger
Academic rubbish | CV Engineers | Visual bubble | compute math | PINN | Mathematical model

This is a classic paper on Physics-Informed Deep Learning (PINN) using a data-driven approach.

Origin of the paper

    While searching through many PINN papers, I noticed that a large number of them cite this one, so out of curiosity I looked it up on Google Scholar. Two versions show up: the titles look roughly the same and the authors are unchanged. At our group meeting, Prof. Qin explained that the earlier version is essentially an unpublished draft, while the later one was polished and published in a strong journal. The difference can be seen in the citation counts (marked with the thicker red line), the tags added by easyScholar (the thinner red line), and the authors' own citation preference (they would rather we cite the version formally published in 2019). None of this detracts from the quality of these papers; overall, the work of M. Raissi et al. is excellent.