Han Liu,*a Zijie Huang,b Samuel S. Schoenholz,c Ekin D. Cubuk,c Morten M. Smedskjaer,d Yizhou Sun,b Wei Wang b and Mathieu Bauchy*e
a SOlids inFormaTics AI-Laboratory (SOFT-AI-Lab), College of Polymer Science and Engineering, Sichuan University, Chengdu 610065, China. E-mail: happylife@ucla.edu
b Department of Computer Science, University of California, Los Angeles, California 90095, USA
c Brain Team, Google Research, Mountain View, California 94043, USA
d Department of Chemistry and Bioscience, Aalborg University, Aalborg 9220, Denmark
e Physics of AmoRphous and Inorganic Solids Laboratory (PARISlab), Department of Civil and Environmental Engineering, University of California, Los Angeles, California 90095, USA. E-mail: bauchy@ucla.edu
First published on 23rd June 2023
The many-body dynamics of atoms, such as glass dynamics, is generally governed by complex (and sometimes unknown) physics laws. This challenges the construction of atom dynamics simulations that both (i) capture the physics laws and (ii) run at low computational cost. Here, based on graph neural networks (GNNs), we introduce an observation-based graph network (OGN) framework that “bypasses all physics laws” to simulate complex glass dynamics solely from the static structure. Taking molecular dynamics (MD) simulations as an example, we successfully apply the OGN to predict atom trajectories evolving up to a few hundred timesteps across different families of complex atomistic systems. This implies that atom dynamics is largely encoded in the static structure of disordered phases and, furthermore, allows us to explore the capacity of OGN simulations, which is potentially generic to many-body dynamics. Importantly, unlike traditional numerical simulations, OGN simulations bypass the numerical constraint of a small integration timestep by a factor of ≥5 while conserving energy and momentum over hundreds of timesteps, thus leapfrogging the execution speed of MD simulations over a modest timescale.
New concepts
Modeling atom dynamics is key to facilitating the discovery of new glasses with tailored dynamical and transport properties. However, the complexity of the underlying physics often renders it challenging to simulate the dynamics of realistic glasses by traditional molecular dynamics (MD) simulations. Here, we introduce an observation-based graph network (OGN) framework to address this issue by learning, purely from observed atom motions, to simulate glass dynamics solely from the static structure, namely, by “bypassing all physics laws”. As a major outcome of this work, our results establish the OGN simulation as an efficient paradigm to emulate many-body simulations featuring complex dynamics (and complex physics) over a modest timescale, which, in turn, unveils the predictive power of the static structure in the dynamical evolution of disordered phases.
To mitigate these issues, the graph neural network (GNN) has recently been proposed as an attractive ML model for dynamics prediction.11,15 Unlike traditional ML models requiring human-defined structural descriptors,13,14 the GNN model directly takes the static structure as input and passes messages between atoms, so as to (i) keep the structural information inherently relational during propagation15,20 and (ii) automatically identify key structural features (if any) relevant to the dynamics.11,21 Despite its predictive power in structural dynamics, as recently revealed in a few toy models,20–26 the potential of GNNs remains largely untapped in simulating materials or complex interacting systems (e.g., glass dynamics),11,21,27 which echoes a long-standing debate about whether particle dynamics is in some way encoded in the static structure of disordered phases.10,11,28 As such, it remains elusive whether a GNN could learn, simply by watching atoms move, to simulate complex atom dynamics solely from their static structure. This question is a manifestation of a more general, grand challenge of ML: “learning complex physics from pure observations”.6,29,30 Indeed, the underlying physics laws, no matter how complex, are encoded in the observations,6,31 from which ML may decode them into a surrogate ML model,6,7 which, in turn, may reduce the computational expense of evaluating the physics laws32–34 (i.e., here, the entire formula set of atomic force-fields and Newton's laws of motion35). However, little is known about the capacity of GNNs to “bypass all physics laws” to simulate complex atom dynamics,15,20 let alone their capacity to accelerate the simulations.4,23
Here, based on an archetypal category of GNN termed the message-passing neural network (MPNN),36,37 we introduce an observation-based graph network (OGN) framework that “bypasses all physics laws” to simulate the complex dynamics of realistic glasses over a modest timescale. We exemplify the approach with molecular dynamics (MD) simulations spanning different families of complex atomistic systems that exhibit distinct types of bonds,38 including (i) a binary Lennard-Jones (LJ) liquid and its melt-quenched glass,39 (ii) an ionocovalent silica liquid,40 (iii) a covalent silicon liquid,41 and (iv) a metallic Cu64.5Zr35.5 liquid.42 This unveils the predictive power of the static structure in microscopic-timescale atom trajectories (e.g., ≥5 timesteps per prediction for the LJ liquid, or potentially much longer timescales using a giant OGN architecture) and, iteratively, in the short-term dynamical evolution of disordered phases up to a few hundred timesteps (e.g., ≥100 timesteps for the LJ liquid), and it allows us to explore the capacity of OGN simulations, which is potentially generic to complex many-body dynamics. Importantly, by predicting a ≥5 times longer timestep per prediction, we demonstrate that the OGN engine can efficiently simulate, over a short timescale of a few hundred MD steps, complex systems that are otherwise computationally expensive (or even prohibitive), and is thus ready to accelerate and enrich the traditional simulation toolkit built upon physics laws within the scope of a modest timescale.
Fig. 1 Graph analogy to molecular dynamics (MD) simulation. (A) Schematic illustrating a molecular dynamics (MD) simulation that computes atomic motions by a numerical algorithm obeying Newton's law of motion5,35 (see text for details), wherein the trajectory of each atom is governed by its interaction with its neighbor atoms within a cutoff distance (i.e., neighbor-list35,45). (B) Illustration of constructing a surrogate graph network simulation engine to predict atomic motions, wherein the neighbor-list of each atom is converted into an atomic graph built with nodes and edges representing the atoms and the interactions, respectively. Relying on message-passing neural network (MPNN)36,37—i.e., a graph network that takes as inputs the atomic graphs and is trained by observed atomic motions (see text for details), the model learns to update the input graphs (i.e., edge update followed by node update) to predict the graph dynamics and the atomic motions thereof. |
In analogy to the MD simulator, we build herein a surrogate graph network simulation engine driven solely by the observed structural evolution (i.e., the time-dependent atom positions and velocities) to replace the entire four-step loop of the MD algorithm, as illustrated in Fig. 1B, termed the observation-based graph network (OGN). Similar to the MD simulator, the OGN is likewise driven by a four-step computation loop to predict atom dynamics, including (i) converting the neighbor-list of each atom i into an atomic graph Gi with nodes {ni} and edges {eij} representing the atoms and their interactions, respectively (see Fig. 1B), (ii) updating the edges {eij}, (iii) subsequently updating the nodes {ni}, and, finally, (iv) decoding the nodes {ni} to update the atom positions and velocities. Fig. 2A shows the architecture of the OGN built to watch atoms dance and to simulate glass dynamics, where the OGN simulation engine yields the next-step configuration through 4 consecutive component layers.11,27 Details about the four-component OGN architecture are provided in the Methods section. Notably, the OGN entirely bypasses the MD algorithm that follows Newton's laws of motion5,35 and, consequently, offers a physics-blind, closed-loop simulation engine between the present input configuration and the next-step output configuration, allowing the iterative prediction of atom positions and velocities as a function of time (i.e., atom dynamics), as sketched below.
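For concreteness, the closed-loop cycle above can be sketched as follows (a minimal illustration, not the authors' exact code; `build_graph` and `ogn_forward` are placeholder names for the graph construction and trained network detailed in the Methods section):

```python
import jax.numpy as jnp

# Minimal sketch of the closed-loop OGN rollout (assumed names, not the authors' API):
# `build_graph` converts positions/velocities into an atomic graph, and `ogn_forward`
# applies the trained network to predict the per-atom changes of position and velocity.
def ogn_rollout(params, positions, velocities, build_graph, ogn_forward,
                n_steps, box_size, cutoff):
    trajectory = [positions]
    for _ in range(n_steps):
        graph = build_graph(positions, velocities, box_size, cutoff)   # step (i)
        d_pos, d_vel = ogn_forward(params, graph)                      # steps (ii)-(iv)
        positions = (positions + d_pos) % box_size                     # periodic wrap
        velocities = velocities + d_vel
        trajectory.append(positions)
    return jnp.stack(trajectory), velocities
```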
Fig. 2 Observation-based graph network (OGN). (A) Schematic illustrating the architecture of the observation-based graph network (OGN), which predicts the next-step change of atom positions and velocities in an input atomistic configuration, by taking the example of a binary Lennard-Jones (LJ) A80B20 liquid.39 The OGN model consists of 4 consecutive component layers,11,27 namely, (i) the input graph layer that takes the input configuration to build atomic graphs, (ii) the encoder layer that encodes the graphs, (iii) the message-passing neural network (MPNN) layers that update the graphs (10 successive MPNN layers herein), and, finally, (iv) the decoder layer that decodes the graphs to obtain the next-step configuration (see text for details). (B) True (left panel) versus predicted (right panel) 100-step atomic trajectories for randomly selected atoms in a test 265-atom A80B20 configuration under the NVE ensemble. LJ units are used. The box side length is 6.038, the neighbor-list cutoff is set to 3.0, and the timestep is set to 0.005.39,46 The configuration has been relaxed to an equilibrium liquid temperature T ≈ 3.0. (C) Density scatter plot of the predicted versus true atom positions (left panel) and velocities (right panel) (along the x-, y-, and z-axes) in the test configuration at the last step. The y = x line (grey dashed) is added as a reference.
Unlike the MD simulator, which computes particle-based dynamics, the OGN predictions rely purely on graph transformations that embed the information of atom motions. This graph-based dynamics reflects a key advantage of the message-passing neural network36,37 (MPNN) architecture adopted by the OGN (see Fig. 2A), which excels at relationally updating the graph geometry through message-passing between each interconnected edge and node, and at automatically identifying the pivotal, hidden structural patterns relevant to the graph dynamics11,27, making the OGN potentially a graph analogy to the MD simulation, but one informed solely by the static structure, namely, bypassing all physics laws to simulate atom dynamics.
We then use the well-trained OGN to simulate atomic motions over time (or steps) in a test configuration, as compared to the ground-truth MD simulation (see Movie S1 in ESI†). Fig. 2B shows the predicted versus true 100-step atomic trajectories of randomly selected atoms in the test configuration. Notably, the predicted trajectories exhibit excellent agreement with those computed by the ground-truth simulation. Further, Fig. 2C shows the density scatter plot of the predicted versus true atom positions and velocities (along the x-, y-, and z-axes) in the test configuration at the last step. We find that both the position and velocity data points lie in the vicinity of the y = x identity line. The root mean square error (RMSE) of position per atom is computed as 0.03, significantly smaller than the length scale of the cage effect47 in the LJ system (∼0.4, see Section S2 in ESI†), which suggests that OGN simulations offer accurate predictions of not only the long-range atom migrations between vacancies48 but also the short-range atom vibrations within a vacancy known as the cage effect.47,48 Similarly, the RMSE of velocity per atom is calculated as 0.32, an order of magnitude smaller than the velocity scale of the LJ system (i.e., √3.0 herein).39 Note that the velocity scale of an atomistic system is defined herein as the standard deviation of the atom velocities, considering that the distribution of atom velocities is approximately Gaussian with a zero mean and a standard deviation of √(kBT/m) along the x-, y-, and z-axes,49 where kB is the Boltzmann constant, T is the system temperature, and m is the average atom mass. Note that, since the error accumulates over prediction steps and leads to spurious effects in long-term dynamics21,26,43 (up to a few hundred timesteps herein, see Section S3 in ESI†), we restrict herein the scope of the OGN to predicting near-future atomic trajectories. Although the error accumulation surges at the particle level within hundreds of timesteps, we nevertheless find that the OGN model exhibits some degree of error tolerance up to thousands of timesteps for certain system-level quantities, such as the mean square displacement (MSD) and the system energy (see Section S4 in ESI†). Overall, these results demonstrate that, without any prior physics knowledge, the OGN can learn complex atom dynamics from pure observations of the structural evolution and enables accurate predictions of near-future atomic trajectories in the LJ system.
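For reference, the error metrics quoted above can be computed as follows (a minimal sketch in LJ reduced units, where kB = 1 and m = 1; the function names are illustrative only):

```python
import jax.numpy as jnp

# Sketch of the per-atom error metrics and velocity scale discussed above.
def rmse_per_atom(pred, true):
    # Root mean square error over all atoms and Cartesian components.
    return jnp.sqrt(jnp.mean((pred - true) ** 2))

def velocity_scale(temperature, mass=1.0):
    # Standard deviation of each velocity component, sqrt(k_B T / m);
    # equals sqrt(3.0) for the LJ liquid at T = 3.0 discussed above.
    return jnp.sqrt(temperature / mass)
```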
Fig. 3 Simulating complex atom dynamics by OGN. (A) Snapshots of three complex atomistic systems exhibiting distinct types of bonds, including (i) an ionocovalent silica (SiO2) liquid governed by radial 2-body interactions,40 (ii) a covalent silicon (Si) liquid governed by both angular and radial interactions,41 and (iii) a metallic Cu64.5Zr35.5 liquid governed by many-body interactions42 (see text for details). The configurations built for SiO2, Si, and Cu64.5Zr35.5 contain 363, 128, and 245 atoms, respectively, and the box side length is set to match their experimental density. (B) True (left panel) versus predicted (right panel) 100-step atomic trajectories for randomly selected atoms in a test configuration under the NVE ensemble for SiO2, Si, and Cu64.5Zr35.5, respectively. The SiO2, Si, and Cu64.5Zr35.5 liquids have been relaxed to equilibrium temperatures around 3600 K, 2000 K, and 1500 K, respectively, and the timestep is set to 1 fs. Note that, due to its low atom diffusivity, we extend the trajectory of Cu64.5Zr35.5 to 400 steps for visibility. (C) Density scatter plot of the predicted versus true atom positions (left panel) and velocities (right panel) (along the x-, y-, and z-axes) in the test configuration at the last step for SiO2, Si, and Cu64.5Zr35.5, respectively. The y = x line (grey dashed) is added as a reference.
Using the observations of these complex atom dynamics, we now examine the learning capacity of the OGN to simulate these systems. Similar to the LJ system, the loss function L quickly decreases to a minuscule level during training (see Section S1 in ESI†), so that the OGN offers an accurate prediction of the next-step atomic motions for each of these complex systems. We then use the well-trained OGN to predict atomic motions in these complex systems as a function of time (see Movies S2–S4 in ESI,† for SiO2, Si, and Cu64.5Zr35.5, respectively). Fig. 3B provides the true versus predicted 100-step atomic trajectories for randomly selected atoms in a test configuration for SiO2, Si, and Cu64.5Zr35.5, respectively, wherein the settings of the MD simulations and OGN architectures remain the same as those for the LJ system (see Methods section). Notably, we find that, regardless of the nature of the interatomic interactions, the OGN is able to offer an accurate prediction of near-future atomic trajectories in excellent agreement with those computed by the ground-truth simulations. Moreover, Fig. 3C shows the density scatter plot of the predicted versus true atom positions and velocities (along the x-, y-, and z-axes) in the test configuration at the last step for SiO2, Si, and Cu64.5Zr35.5, respectively, wherein all the data points lie in the vicinity of the y = x identity line, illustrating the high accuracy of the OGN predictions. Further, we compute the RMSE of position and velocity for each of these systems (see Fig. 3C), which turn out to be 1-to-2 orders of magnitude smaller than, respectively, the length scale associated with their cage effect47,48 (see Section S2 in ESI†) and the velocity scale associated with their system temperature (i.e., √(kBT/m), see Section 2.2),49 suggesting that the OGN simulation is able to capture the fine details of complex atom vibration modes.48 Overall, these results establish that the OGN is a powerful framework to simulate different systems exhibiting distinct types of bonds and is potentially generic to complex many-body dynamics. Besides, we have also demonstrated that the OGN is versatile: it can be trained efficiently on small configurations yet readily generalizes to simulate very large, complex systems, including systems of different size (see Section S5 in ESI†), temperature (see Section S6 in ESI†), and density (see Section S7 in ESI†).
Fig. 4 Accelerating MD simulations by OGN. (A) Schematic illustrating the runtime acceleration of molecular dynamics (MD) simulation by the observation-based graph network (OGN), wherein one OGN prediction step can span k MD steps to enable a speedup of the MD execution. (B) The evolution of the kinetic, potential, and total energy (upper panel) and of the momentum along the x-, y-, and z-axes (lower panel) with respect to MD steps for a test 265-atom LJ configuration using a “Fast-OGN” with k = 5 MD steps per prediction (red). The MD simulation results obtained with k = 1 (black) and k = 5 (orange) are added for comparison. (C) Runtime comparison between the MD simulation (black squares) and the Fast-OGN (red circles) after a rollout of 100 MD steps, as a function of the system size N for LJ, SiO2, Si, and Cu64.5Zr35.5, respectively. All computations are performed on an Nvidia P100 GPU using the float32 data format in the Google Colab environment.53 The lines are guides for the eyes. Note that, due to the absence of certain neighbor-list packages in the state-of-the-art version of JAX-MD,51 the computation cost of the MD simulation for Si and Cu64.5Zr35.5 shows a quadratic scaling with respect to N51,55 (rather than a linear scaling45). (D) The rollout runtime of the MD simulation (squares) and the Fast-OGN (circles) at N = 10 000 atoms, as a function of the interaction complexity index (see text for details).
Fig. 4B provides an example of the evolution of the system energy and momentum with respect to MD steps for a test LJ configuration using a “Fast-OGN” with k = 5 MD steps per prediction, where (i) the kinetic, potential, and total energy and (ii) the momentum along the x-, y-, and z-axes are computed separately and compared with their MD counterparts. Notably, despite its long timestep, the Fast-OGN maintains energy and momentum conservation during a rollout of 100 MD steps, while, in contrast, an MD simulation using the same long timestep (i.e., k = 5 MD steps) destabilizes the energy and momentum and exhibits spurious effects after only a few MD steps (see Fig. 4B). Note that we restrict herein the scope of prediction to near-future atomic trajectories to avoid the spurious effect of error accumulation over iterations (see Section S3 in ESI†). It is worth mentioning that, for each type of atomistic system, we finely tune the Fast-OGN to best balance its prediction accuracy and execution speed (see Section S1 in ESI†), by (i) minimizing the number of MPNN layers until the model accuracy deteriorates severely (herein we select 2 MPNN layers, see Section S8 in ESI†) and, concurrently, (ii) maximizing the number k of MD steps per prediction before the input configuration loses its predictivity (herein we select k = 5 for LJ and 10 for the other systems, see Section S9 in ESI†).
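As an illustration, the conservation diagnostics plotted in Fig. 4B can be computed as follows (a minimal sketch; `potential_energy` is a placeholder for the system's empirical potential energy function, and masses default to 1 in LJ reduced units):

```python
import jax.numpy as jnp

# Sketch of the energy and momentum diagnostics compared in Fig. 4B.
# `masses` may be a scalar (LJ reduced units) or an (N, 1) array of atomic masses.
def conservation_diagnostics(positions, velocities, potential_energy, masses=1.0):
    kinetic = 0.5 * jnp.sum(masses * jnp.sum(velocities ** 2, axis=-1))
    potential = potential_energy(positions)
    momentum = jnp.sum(masses * velocities, axis=0)  # total momentum along x, y, z
    return {"kinetic": kinetic,
            "potential": potential,
            "total": kinetic + potential,
            "momentum": momentum}
```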
We now apply the Fast-OGN to make a runtime comparison with the MD simulation. Fig. 4C provides the runtime comparison between the MD simulation and the Fast-OGN after a rollout of 100 MD steps, as a function of the system size N for the LJ, SiO2, Si, and Cu64.5Zr35.5 systems, respectively. As expected, the runtime cost tc is linearly proportional to N (i.e., tc ∝ N),45,51 where the slope represents the intrinsic runtime cost of computing all pairwise distances within a neighbor-list, and the positive intercept may arise from the unavoidable overhead of code execution in the programming platform.52,53 We find that, except for the LJ system, the Fast-OGN yields a smaller slope than the MD simulation, thereby enabling simulation acceleration when extrapolated to large systems. Notably, when the system size increases up to N = 10 000 atoms, it becomes evident that the Fast-OGN can outperform the MD simulation with a 2–10 times faster runtime for the different systems (except for the LJ system,39 whose interactions are too simple to benefit).
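The scaling analysis above can be sketched as follows (an illustrative outline only; `measure_rollout_runtime` is a placeholder for timing a 100-MD-step rollout at a given system size, and the listed sizes are merely examples):

```python
import numpy as np

# Fit runtime ≈ slope * N + intercept, where the slope reflects the per-atom
# neighbor-list cost and the intercept the fixed platform overhead.
def fit_runtime_scaling(measure_rollout_runtime, sizes=(265, 1000, 4000, 10_000)):
    runtimes = np.array([measure_rollout_runtime(n) for n in sizes])
    slope, intercept = np.polyfit(np.array(sizes), runtimes, deg=1)
    return slope, intercept
```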
Moreover, Fig. 4D shows the rollout runtime of the MD simulation and the Fast-OGN at N = 10 000 atoms, as a function of the interaction complexity index, defined herein as the ratio of the time used to compute the empirical potential energy of a 100-atom configuration to the time used for a reference 100-atom LJ configuration. Since the Fast-OGN is purely driven by the observed atomic motions, its runtime cost is independent of the underlying complexity of the interatomic interactions. In contrast, the execution speed of the MD simulation depends strongly on the computational complexity of the empirical potential: from the simple LJ interaction to more complex many-body interactions (see Methods section), finer interaction descriptions are added empirically, which increases the computational burden of the MD simulation.54,55 Overall, these results highlight the ultrafast execution speed of OGN simulations, which bypass all physics laws, including (i) the complexity of the interatomic interactions and (ii) the numerical constraint of a small integration timestep, thereby readily accelerating interaction-complex and large-scale simulations that are otherwise computationally expensive (or even prohibitive).
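A minimal sketch of this index, assuming placeholder energy functions and a 100-atom test frame, reads:

```python
import time

# Interaction complexity index: wall-clock time to evaluate a potential on a
# 100-atom configuration, normalized by the time for the reference LJ potential.
# `energy_fn`, `lj_energy_fn`, and `positions_100` are placeholders; for JIT-compiled
# JAX functions, block on the result (e.g., .block_until_ready()) for accurate timing.
def timed_energy(energy_fn, positions, n_repeats=100):
    start = time.perf_counter()
    for _ in range(n_repeats):
        energy_fn(positions)
    return (time.perf_counter() - start) / n_repeats

def complexity_index(energy_fn, lj_energy_fn, positions_100):
    return timed_energy(energy_fn, positions_100) / timed_energy(lj_energy_fn, positions_100)
```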
Overall, by leveraging auto-diff programming,56 we pioneer the building, integration, and comparison of a physics simulator and its surrogate ML counterpart (i.e., MD simulation51 versus OGN) on the same platform, “JAX”,50 which benefits us in several ways. First, compared to traditional programming platforms that rely on handwritten derivatives,57 auto-diff platforms excel at computing on-the-fly the backward gradient of any quantity (e.g., the force calculation in the MD algorithm) with no additional computational burden associated with differentiation,50 an operation that is ubiquitous in ML and simulations,35,58 so as to accelerate the execution speed of both.51 Second, the shared programming language removes communication barriers between ML and simulations, facilitating their seamless integration.5 Third, the auto-diff JAX platform enables native “just-in-time (JIT)” compilation of ML and simulations on high-performance hardware accelerators,50,51 and moreover, by following the same JIT compilation mode and parallelization scheme,52,55 ML and simulations accelerate their code execution in the same fashion. Finally, this allows us to make a “fair” runtime comparison between the OGN and the MD simulation, the latter being essentially a computationally efficient reference. As such, it is remarkable that the OGN exhibits the “genuine” power to leapfrog the execution speed of MD simulations.
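As a minimal illustration of this point (a toy pair potential for demonstration only, not one of the force-fields used in this work), a force routine follows directly from an energy function via automatic differentiation:

```python
import jax
import jax.numpy as jnp

# Once a (possibly complex) potential energy function is written, the force routine
# is obtained as its negative gradient and compiled just-in-time, with no handwritten
# derivatives. The harmonic pair potential below is purely illustrative.
def harmonic_energy(positions, k=1.0, r0=1.0):
    dr = positions[:, None, :] - positions[None, :, :]
    # Add the identity to the squared distances to avoid sqrt(0) on the diagonal.
    r = jnp.sqrt(jnp.sum(dr ** 2, axis=-1) + jnp.eye(positions.shape[0]))
    return 0.5 * k * jnp.sum(jnp.triu((r - r0) ** 2, k=1))  # sum over unique pairs

force_fn = jax.jit(jax.grad(lambda pos: -harmonic_energy(pos)))  # F = -dE/dR
```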
Fig. 5 One-step predictivity of Fast-OGN using liquid- versus glassy-state static structure. (A) Comparison of root mean square displacement between liquid- and glassy-state dynamics as a function of MD time, by taking the example of binary Lennard-Jones (LJ) A80B20 liquid and its melt-quenched glass.39 (B) Test set loss L as a function of the number of training epochs for liquid- and glassy-state dynamics, respectively. The Fast-OGN timestep (denoted as k (dt) herein) is set as k = 20 MD steps per prediction. (C) Final loss L with respect to the Fast-OGN timestep k (dt) for liquid- and glassy-state dynamics, respectively. The lines are guides for the eyes. |
Moreover, we evaluate the one-step predictivity limit of the static structure for both the LJ liquid and glass by training Fast-OGN models over a wide range of timesteps (see Sections S9 and S10 in ESI†). Fig. 5C shows the prediction loss L of the liquid- and glassy-state static structure, respectively, in the test set as a function of the Fast-OGN timestep. Compared to the liquid-state dynamics model, we find that the glassy-state model can predict a roughly 2× longer timestep per prediction, that is, from k = 5–10 MD steps (liquid state) to k = 10–20 MD steps (glassy state) per prediction, well before the prediction error increases exponentially with longer timesteps and becomes clearly unsatisfactory. Note that, as the one-step prediction error accumulates over iterations (see Section S3 in ESI†), the Fast-OGN is restricted to predicting short-term atom trajectories, that is, up to ∼100 MD steps for the LJ liquid and ∼200 MD steps for the LJ glass. It is worth pointing out that the timescale reached by the iterative OGN prediction depends entirely on the magnitude of the one-step prediction error, which can be reduced by (i) increasing the model complexity, such as the number of message-passing layers, and (ii) simplifying the functional mapping, such as incorporating a larger neighbor list relevant to the central atom's motion during the prediction step. These model settings have been optimized to minimize the one-step prediction error (see Methods section), and we expect further efforts in this direction to extend the prediction timescale. Overall, these results demonstrate the enhanced predictive power of the static structure for glassy-state atom trajectories, up to tens of MD steps per prediction and, iteratively, up to hundreds of MD steps, that is, roughly twice the timestep and timescale achievable for liquid-state atom trajectories.
It is worth mentioning that, since the OGN is essentially a mathematical operation that transforms graph patterns, the theoretical implication of the OGN is not simply to replace Newton's equations, but to infer the pivotal structural patterns that govern atom dynamics.11,59 Those hidden patterns synthesized in the OGN validate a posteriori that the atom dynamics is largely encoded in the static structure, which echoes the recent finding that the topography of the local energy landscape is largely encoded in the static structure.10,28 The next question is then: what timescale of atom dynamics can be reached by the predictive power of the static structure? Ideally, this reachable timescale covers all timescales associated with atom reorganization in the local energy landscape of the static structure, that is, a wide spectrum of relaxation times between liquid- and glassy-state atom dynamics.11 However, without using a giant model architecture (e.g., hundreds of deep and wide MPNN layers), the present OGN is still far from fully harnessing the predictive power of the static structure. If computational resources were unlimited, a giant-OGN architecture with considerably deep and wide MPNN layers would transform the initial graph in a very flexible and serialized manner and would theoretically be able to emulate much longer dynamics in one prediction step. We expect further efforts in that direction to extend the prediction timescale. Overall, it is remarkable that, regardless of physics laws, the OGN simulation can predict near- (and potentially far-) future dynamics in one prediction step using solely the information of the initial static structure, which makes the OGN fundamentally different from physics-driven toolkits using an infinitesimal timestep and presents a new paradigm of dynamics modeling.
Note, however, that the present shallow OGN architecture is designed to balance model accuracy and execution speed, so that the limited one-step predictivity restricts the OGN to short-term dynamics applications. In that regard, by sacrificing execution speed, a giant-OGN architecture with deep layers of graph transformation theoretically holds the promise of predicting a much longer timestep per prediction step and extending to longer-term dynamics. Taking the present OGN as a basis, it remains a largely unexplored opportunity to develop more advanced, sophisticated OGN architectures into a machine learning simulation engine that reaches the targeted longer-term dynamics at a reasonable computational cost, such as coupling the shallow OGN module with a deep OGN module aimed at denoising the particle-level error accumulation. Although it seems unlikely that the propagation of errors can be fully eliminated, a careful design of the OGN architecture and the machine learning strategy (e.g., reinforcement learning to train multiple particle-level agents that denoise particle-level errors) is likely to extend the OGN to the targeted longer-term dynamics. We expect that the present work will stimulate new developments in that direction. Moreover, despite the requirement of long timescales in most dynamics studies, practical applications of the OGN to short-term dynamics can still yield impactful outcomes. For instance, when integrated with interpretable machine learning techniques,60,61 the OGN model is likely to offer insights into the physics laws that govern atom dynamics (which are generally independent of timescale), such as developing an empirical force-field from the numerous short-timescale atomic trajectories.
The initial configuration adopts a cubic box with periodic boundary conditions, and the side length is set to 2 × rc so as to build small-size configurations that accelerate the training of graph networks,26 where rc is the neighbor-list cutoff, defined as the sum of the empirical potential cutoff and the neighbor-list bin size,35,45 i.e., rc = 2.5 + 0.5 (bin). The number of atoms in the configuration is set to match a preset number density of atoms ρ0 = 1.2, with a deduced glass transition temperature Tg ≈ 0.3,39,46 i.e., a system size of N = 265 atoms. The atoms are randomly placed in the cubic box without any overlap. The atom velocities along the x-, y-, and z-axes in the initial configuration are initialized from a normal distribution with a zero mean and a standard deviation of √(kBT/m) = √3.0 so as to set the system temperature to T = 3.0,49 where kB is the Boltzmann constant and m is the average atom mass. All simulations are conducted under the NVE ensemble. The timestep is set to 0.005 to satisfy the numerical constraint of a small integration timestep for energy conservation.39,43 The initial configuration is relaxed to an equilibrium liquid temperature of around 3.0 by iteratively rescaling the distribution of atom velocities to T = 3.0 at each timestep until convergence, that is, multiplying each velocity by √(EK0/EK) at each timestep until EK ≈ EK0,49 where EK and EK0 are the system's current and target (initial) average kinetic energy per atom, respectively. This equilibrium liquid is then relaxed at T ≈ 3.0 under the NVE ensemble for 10 000 steps to obtain the atomic trajectories. Finally, the melt-quenched glass is prepared by quenching the equilibrium liquid to a low temperature T = 0.5 over 10 000 steps under the NVT ensemble, with a fictive temperature Tf > 0.5 (see Section S11 in ESI†). The glass is then relaxed at T ≈ 0.5 under the NVE ensemble for 1 million steps to obtain the atomic trajectories. All simulations are conducted using the JAX-MD package.51
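For illustration, a minimal sketch of this protocol (not the authors' exact JAX-MD script; `force_fn` stands for any force routine, e.g., a gradient of the LJ potential as illustrated earlier) reads:

```python
import jax
import jax.numpy as jnp

# Sketch of the LJ protocol above in reduced units: Maxwell-Boltzmann velocity
# initialization at T = 3.0, velocity rescaling toward the target kinetic energy,
# and one NVE velocity-Verlet step. Constants follow the values quoted in the text.
L_BOX, DT, T_TARGET = 6.038, 0.005, 3.0

def init_velocities(key, n_atoms, temperature=T_TARGET, mass=1.0):
    # Each component is Gaussian with zero mean and std sqrt(k_B T / m).
    v = jnp.sqrt(temperature / mass) * jax.random.normal(key, (n_atoms, 3))
    return v - jnp.mean(v, axis=0)  # remove center-of-mass drift

def rescale_velocities(v, temperature=T_TARGET):
    # Multiply by sqrt(EK0 / EK) so the kinetic energy matches the target temperature.
    ek = 0.5 * jnp.mean(jnp.sum(v ** 2, axis=-1))
    return v * jnp.sqrt(1.5 * temperature / ek)

def nve_step(pos, vel, force_fn, dt=DT):
    # One velocity-Verlet step with unit mass and periodic boundary conditions.
    vel_half = vel + 0.5 * dt * force_fn(pos)
    pos_new = (pos + dt * vel_half) % L_BOX
    return pos_new, vel_half + 0.5 * dt * force_fn(pos_new)
```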
As illustrated in Fig. 2A, the OGN model consists of 4 consecutive component layers,11,27 namely:
(i) the input graph layer, which builds atomic graphs {Gi} by converting the neighbor-list of each atom i into a geometric graph Gi comprising nodes {ni} and edges {eij}, where the node representation of atom i is ni = [Ai, vi] (i.e., the atom type and velocity) and the edge representation between atoms i and j is eij = [rj − ri] (i.e., the relative position vector pointing from atom i to atom j).
(ii) the encoder layer that encodes graphs, where the encoder contains a node-MLP (i.e., multilayer perceptron58) function fn,encoder and an edge-MLP function fe,encoder that compute, respectively, the embedding n0i of each node ni (i.e., n0i = fn,encoder(ni)) and the embedding e0ij of each edge eij (i.e., e0ij = fe,encoder(eij)).
(iii) the successive MPNN layers, which update the graphs, where the l-th MPNN layer (l = 0, 1, 2, …) updates the edges {elij} and nodes {nli} from the previous layer by a sequential operation of edge update followed by node update,11,27 namely, first using an edge-MLP function fle to compute the edge update el+1ij, that is,
el+1ij = fle(elij, nli, nlj)    (10)
where the information of the two end nodes nli and nlj is passed into the edge elij, and then using a node-MLP function fln to compute the node update nl+1i, that is,
nl+1i = fln(nli, Σj el+1ij)    (11)
(iv) the decoder layer that decodes graphs, where the decoder is a node-MLP function fn,decoder that transforms the updated nodes {ni} into the next-step change of atom positions {dri} and velocities {dvi}, i.e., [dri, dvi] = fn,decoder(ni), so as to yield the next-step configuration. More details about the model settings are described in the following section.
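As a rough illustration of the four components above, a schematic forward pass may be sketched as follows (layer sizes, parameter handling, and feature dimensions are simplified placeholders, not the trained model). Here, edges are assumed to point from a sender atom j to a receiver atom i, and incoming messages are aggregated by summation, consistent with eqn (10) and (11):

```python
import jax
import jax.numpy as jnp

# Schematic encode -> message-passing -> decode pass of the OGN; `params` is a nested
# dict of MLP weights, and `senders`/`receivers` index the two end atoms of each edge.
def mlp(params, x):
    # A small multilayer perceptron; `params` is a list of (weights, bias) tuples.
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

def ogn_forward(params, nodes, edges, senders, receivers):
    # (ii) encode node features [A_i, v_i] and edge features [r_j - r_i]
    n = mlp(params["node_encoder"], nodes)
    e = mlp(params["edge_encoder"], edges)
    # (iii) successive MPNN layers: edge update (eqn (10)) then node update (eqn (11))
    for layer in params["mpnn"]:
        e = mlp(layer["edge"], jnp.concatenate([e, n[senders], n[receivers]], axis=-1))
        agg = jax.ops.segment_sum(e, receivers, num_segments=n.shape[0])  # sum incoming messages
        n = mlp(layer["node"], jnp.concatenate([n, agg], axis=-1))
    # (iv) decode nodes into the next-step change of positions and velocities
    out = mlp(params["node_decoder"], n)
    return out[..., :3], out[..., 3:6]  # d_pos, d_vel
```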
Footnote
† Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d3mh00028a
This journal is © The Royal Society of Chemistry 2023 |