Past Computational Research:

Molecular Dynamics

Parallel Interactive MD Simulation

Molecular dynamics uses classical equations of motion to study atomic and molecular systems and their properties. The basic idea is to choose a mathematical model for the interactions between the system's constituents and solve the equations of motion to generate particle trajectories. Many system properties can then be studied, such as the structure, equation of state, transport coefficients, and non-equilibrium response (e.g., due to deformation).
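For concreteness, here is a minimal sketch of that workflow (not the project code) for a 2D Lennard-Jones system in reduced units: compute pairwise forces from the potential, then advance the particles with a velocity Verlet step.

```cpp
// Minimal sketch (assumed, not the project code): pairwise Lennard-Jones forces
// and one velocity Verlet step in reduced units (epsilon = sigma = m = 1).
#include <vector>
#include <cstddef>

struct Particle { double x, y, vx, vy, fx, fy; };

// Lennard-Jones force on particle i from j: 24 * r^-2 * (2 r^-12 - r^-6) * r_vec.
void computeForces(std::vector<Particle>& p) {
    for (auto& a : p) { a.fx = 0.0; a.fy = 0.0; }
    for (std::size_t i = 0; i < p.size(); ++i) {
        for (std::size_t j = i + 1; j < p.size(); ++j) {
            double dx = p[i].x - p[j].x, dy = p[i].y - p[j].y;
            double inv2 = 1.0 / (dx * dx + dy * dy);
            double inv6 = inv2 * inv2 * inv2;
            double fOverR = 24.0 * inv6 * (2.0 * inv6 - 1.0) * inv2;
            p[i].fx += fOverR * dx; p[i].fy += fOverR * dy;
            p[j].fx -= fOverR * dx; p[j].fy -= fOverR * dy;
        }
    }
}

// Velocity Verlet: half kick, drift, recompute forces, second half kick.
void step(std::vector<Particle>& p, double dt) {
    for (auto& a : p) {
        a.vx += 0.5 * dt * a.fx; a.vy += 0.5 * dt * a.fy;
        a.x  += dt * a.vx;       a.y  += dt * a.vy;
    }
    computeForces(p);
    for (auto& a : p) {
        a.vx += 0.5 * dt * a.fx; a.vy += 0.5 * dt * a.fy;
    }
}
```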

In a group project for my Particle Based Simulations course, we developed a parallelized (using the Message Passing Interface) interactive, real-time molecular dynamics simulation based on the Lennard-Jones potential, meant to run on a Raspberry Pi supercomputer with an Xbox One controller. I wrote most of the molecular dynamics calculation portion of the code, including the MPI parallelization. As a group, we linked the MD calculations with OpenGL for real-time visualization and with an Xbox One controller, which controlled the position, sign, and magnitude of an electric potential. We demonstrated the full code running on a Raspberry Pi supercomputer (eight Raspberry Pis connected together). Our code is open source and available at the link below.

See the code

Technical Details about the Implementation

The model was implemented in object-oriented C++. All equations were scaled to Lennard-Jones units to simplify the computations. For testing, the system was initialized as a simple 2D square lattice of 2700 particles with initial velocities drawn from the Maxwell-Boltzmann distribution; the velocities were then adjusted so that the initial total momentum of the system was zero. The system's temperature was controlled by velocity rescaling, with the temperature computed from the equipartition theorem. Since the purpose of this project was a science demonstration, we chose reflecting boundary conditions to avoid the confusion that periodic boundary conditions can cause for someone unfamiliar with them.

For the parallel implementation, one processor acts as the master and only gathers and prepares for output the simulation results from the rest of the nodes; the system geometry is automatically split in 1D among the remaining processors. Benchmark runs were performed on one to six nodes, showing a speedup of almost 15x between the one-processor and six-processor runs. Temperature conservation and energy stability were also confirmed.
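To illustrate two of these ingredients, the sketch below shows a velocity-rescaling thermostat derived from the equipartition theorem and reflecting walls, in reduced Lennard-Jones units. The particle layout and function names are assumptions for illustration, not the project's actual code.

```cpp
// Illustrative sketch (assumed, not the project code) of velocity rescaling
// and reflecting boundaries in reduced units (m = k_B = 1).
#include <vector>
#include <cmath>

struct Particle { double x, y, vx, vy; };

// Equipartition in 2D: <KE> = N * T in reduced units, so T = sum(v^2) / (2N).
// Rescale all velocities by sqrt(T_target / T_current).
void rescaleVelocities(std::vector<Particle>& p, double targetT) {
    double sumV2 = 0.0;
    for (const auto& a : p) sumV2 += a.vx * a.vx + a.vy * a.vy;
    double currentT = sumV2 / (2.0 * p.size());
    double lambda = std::sqrt(targetT / currentT);
    for (auto& a : p) { a.vx *= lambda; a.vy *= lambda; }
}

// Reflecting boundaries on a box [0, L] x [0, L]: fold the position back
// inside the box and flip the corresponding velocity component.
void reflect(Particle& a, double L) {
    if (a.x < 0.0) { a.x = -a.x;          a.vx = -a.vx; }
    if (a.x > L)   { a.x = 2.0 * L - a.x; a.vx = -a.vx; }
    if (a.y < 0.0) { a.y = -a.y;          a.vy = -a.vy; }
    if (a.y > L)   { a.y = 2.0 * L - a.y; a.vy = -a.vy; }
}
```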

We used a domain decomposition approach for the MPI parallelization. In particular, we partitioned the rectangular simulation box into several smaller rectangular domains, and each domain and its particles were assigned to a processor. In each iteration there are two sets of communication between adjacent domains/processors: particle migration and ghost particles. If a particle moves into a neighboring domain after its position is updated, its information must be migrated to the corresponding processor. To correctly calculate the forces on particles close to a domain boundary, the positions of particles on the other side of the boundary (ghost particles) are also needed from the neighboring domain. Our procedure for these two communications is similar (a sketch follows the list):

1. Pack the outgoing information into a vector.
2. Use non-blocking Isend/Irecv to send/receive the number of particles in the vector.
3. In each domain, resize the vectors for receiving particles.
4. Use Isend/Irecv to send out the packed information.
5. Unpack the incoming vector.
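The sketch below illustrates this count-then-data exchange pattern for a single neighbor using non-blocking MPI calls. The buffer layout and function names are assumptions for illustration rather than the project's actual code.

```cpp
// Hedged sketch of the exchange pattern above (names and buffer layout are
// assumptions, not the project code): exchange counts first, resize the
// receive buffer, then exchange the packed particle data.
#include <mpi.h>
#include <vector>

// Exchange one packed buffer with a single neighbor rank; returns the received data.
std::vector<double> exchangeWithNeighbor(const std::vector<double>& outgoing,
                                         int neighborRank, MPI_Comm comm) {
    // Steps 1-2: send/receive the size of the packed buffer.
    int sendCount = static_cast<int>(outgoing.size());
    int recvCount = 0;
    MPI_Request countReq[2];
    MPI_Isend(&sendCount, 1, MPI_INT, neighborRank, 0, comm, &countReq[0]);
    MPI_Irecv(&recvCount, 1, MPI_INT, neighborRank, 0, comm, &countReq[1]);
    MPI_Waitall(2, countReq, MPI_STATUSES_IGNORE);

    // Step 3: resize the receive buffer now that the incoming size is known.
    std::vector<double> incoming(recvCount);

    // Step 4: exchange the packed particle data itself.
    MPI_Request dataReq[2];
    MPI_Isend(outgoing.data(), sendCount, MPI_DOUBLE, neighborRank, 1, comm, &dataReq[0]);
    MPI_Irecv(incoming.data(), recvCount, MPI_DOUBLE, neighborRank, 1, comm, &dataReq[1]);
    MPI_Waitall(2, dataReq, MPI_STATUSES_IGNORE);

    // Step 5: the caller unpacks 'incoming' back into its particle structures.
    return incoming;
}
```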