8:00-8:15 AM ET | Optimize Hydraulic Conductivity Estimation using Fourier Neural Operator | 2B1G
8:15-8:30 AM ET | Physics-Informed Fourier Neural Operators for Photonic Device Simulation and Optimization | CompOpt
8:30-8:45 AM ET | Predicting Ground Temperatures and the Active Layer Thickness for Permafrost | The Natural Disasters
8:45-9:00 AM ET | Modeling the FitzHugh-Nagumo System with Scientific Machine Learning | Bio Team
9:00-9:15 AM ET | Experimental Investigation of Water Wave Scattering by a Vertical Plate | Magdalini Koukouraki (ESPCI)
I will discuss the experimental methods and data analysis techniques used to determine the reflection and transmission coefficients for a plane wave incident on a vertical submerged plate in a water channel. First, I will present Fourier Transform Profilometry, a technique that recovers the full two-dimensional profile of the deformed water surface at each time instant. Then, by fitting a mathematical solution to our experimental data, the reflected and transmitted waves are extracted and compared with our theoretical predictions.
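The abstract describes two analysis steps: recovering the free-surface deformation with Fourier Transform Profilometry (FTP), then fitting a wave model to extract the reflected and transmitted components. As a rough illustration of the first step only, the sketch below shows how an FTP-style phase demodulation might look; the function name, the calibration factor `phase_to_height`, and the single-sideband filtering details are illustrative assumptions, not the speaker's actual pipeline.

```python
import numpy as np

def ftp_height_map(deformed, reference, carrier_freq_px, band_halfwidth_px, phase_to_height):
    """Recover a surface height map from fringe images via Fourier Transform Profilometry.

    deformed, reference : 2D fringe images with and without the wave field.
    carrier_freq_px     : carrier fringe frequency (cycles per pixel) along axis 1.
    band_halfwidth_px   : half-width of the spectral band kept around the carrier.
    phase_to_height     : calibration factor converting unwrapped phase to height.
    """
    _, nx = deformed.shape
    fx = np.fft.fftfreq(nx)                      # spatial frequencies along the fringe axis

    def carrier_sideband(img):
        spec = np.fft.fft(img, axis=1)           # 1D FFT along the fringe direction
        mask = np.abs(fx - carrier_freq_px) < band_halfwidth_px
        return np.fft.ifft(spec * mask, axis=1)  # keep only the carrier sideband

    # Phase difference between deformed and flat-surface reference images
    dphi = np.angle(carrier_sideband(deformed) * np.conj(carrier_sideband(reference)))
    dphi = np.unwrap(np.unwrap(dphi, axis=1), axis=0)   # unwrap along rows, then columns
    return phase_to_height * dphi                # height field eta(x, y)
```

The recovered height maps could then be fitted with incident, reflected, and transmitted plane-wave components on either side of the plate to estimate the scattering coefficients, as described in the abstract.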
8:00-8:15 AM ET | Towards discovery of minimal structure of Physics-Informed Neural Networks | Physics GP
8:15-8:30 AM ET | Aeroacoustic Noise Predictions Using Physics-Informed Neural Networks | AIRO
8:30-8:45 AM ET | Scientific Machine Learning for Cardiac Electrophysiology Simulations | Physics Team 3
8:45-9:00 AM ET | EP-PINN Reduced FHN modeling in Detailed Cardiac Potentials | Team 8
9:00-9:15 AM ET | The Importance of Being Scalable: Improving the Speed and Accuracy of Neural Network Interatomic Potentials Across Chemical Domains | Eric Qu (UC Berkeley)
Scaling has been a critical factor in improving model performance and generalization across various fields of machine learning. Despite successes in scaling other types of machine learning models, the study of scaling in Neural Network Interatomic Potentials (NNIPs) remains limited. The dominant paradigm in this field is to incorporate numerous physical domain constraints into the model, such as symmetry constraints like rotational equivariance. We contend that these increasingly complex domain constraints inhibit the scaling ability of NNIPs, and such strategies are likely to cause model performance to plateau in the long run. In this work, we take an alternative approach and start by systematically studying NNIP scaling properties and strategies. Our findings indicate that scaling the model through attention mechanisms is both efficient and improves model expressivity. These insights motivate us to develop an NNIP architecture designed for scalability: the Efficiently Scaled Attention Interatomic Potential (EScAIP). EScAIP leverages a novel multi-head self-attention formulation within graph neural networks, applying attention at the neighbor-level representations. Implemented with highly optimized attention GPU kernels, EScAIP achieves substantial gains in efficiency compared to existing NNIP models: at least a 10x speedup in inference time and 5x lower memory usage. EScAIP also achieves state-of-the-art performance on a wide range of datasets including catalysts (OC20 and OC22), molecules (SPICE), and materials (MPTrj). Link to paper.
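The abstract's key architectural idea is multi-head self-attention applied to neighbor-level representations inside a graph neural network. The snippet below is a minimal PyTorch sketch of that general idea, built from the standard `nn.MultiheadAttention` layer; it is a hypothetical illustration, not the EScAIP implementation, which relies on custom, highly optimized attention GPU kernels.

```python
import torch
import torch.nn as nn

class NeighborSelfAttention(nn.Module):
    """Illustrative multi-head self-attention over each atom's neighbor list.

    Attention is applied locally, per neighborhood, rather than globally over
    the whole structure. Hypothetical layer, not the EScAIP reference code.
    """

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, neighbor_feats, pad_mask=None):
        # neighbor_feats: (num_atoms, max_neighbors, dim) neighbor-level representations
        # pad_mask:       (num_atoms, max_neighbors), True where a neighbor slot is padding
        out, _ = self.attn(neighbor_feats, neighbor_feats, neighbor_feats,
                           key_padding_mask=pad_mask)
        return self.norm(neighbor_feats + out)   # residual connection + normalization

# Example: 32 atoms, up to 20 neighbors each, 128-dimensional neighbor features
x = torch.randn(32, 20, 128)
layer = NeighborSelfAttention(dim=128, num_heads=8)
y = layer(x)   # (32, 20, 128); per-atom messages can then be pooled downstream
```

In a full interatomic potential, a layer like this would sit inside the message-passing stack, with the attended neighbor features aggregated per atom before predicting energies and forces.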