Commit 6584df4b authored by Kurt A. O'Hearn

Merge branch 'master' of https://gitlab.msu.edu/SParTA/PuReMD.

Parents: 12a107bf, 5c8e4d4b
[Introduction](https://gitlab.msu.edu/SParTA/PuReMD#introduction) |
[Documentation](https://gitlab.msu.edu/SParTA/PuReMD/doc) |
[Wiki](https://gitlab.msu.edu/SParTA/PuReMD/wikis/home)

# Introduction

This repository contains the development version of the
[Purdue Reactive Molecular Dynamics](https://www.cs.purdue.edu/puremd) (PuReMD) project.
# Build Instructions
## Developer
To build, the following software versions are required:
- git
- Autoconf v2.69+
- Automake v1.15+
- OpenMP v4.0+ compliant compiler (OpenMP versions only)
- MPI v2+ compliant library (MPI versions only)
- CUDA v6.0+ (CUDA versions only)
Instructions:
```bash
git clone https://gitlab.msu.edu/SParTA/PuReMD.git
cd PuReMD
git submodule init
git submodule update
autoreconf -ivf
./configure
make
```
To build tarball releases after configuring a specific build target, run the following:
```bash
make dist
```
## User
```bash
# Download release tarball
tar -xvf puremd-1.0.tar.gz
cd puremd-1.0
./configure
make
```
By default, the shared memory version with OpenMP support is built. For other build targets,
run `./configure --help` and consult the documentation. An example of configuring the MPI+CUDA
version is given below.
```bash
./configure --enable-openmp=no --enable-mpi-gpu=yes
```
# References

Relevant papers, listed roughly by target platform:

Shared Memory:
- [Serial](https://www.cs.purdue.edu/puremd/docs/80859.pdf)
- [CUDA (single GPU)](http://dx.doi.org/10.1016/j.jcp.2014.04.035)
- [Charge Method Optimizations with OpenMP](https://doi.org/10.1109/ScalA.2016.006)

Distributed Memory:
- [MPI (message passing interface)](https://www.cs.purdue.edu/puremd/docs/Parallel-Reactive-Molecular-Dynamics.pdf)
- [CUDA+MPI (multi-GPU)](https://www.cs.purdue.edu/puremd/docs/pgpuremd.pdf)
```diff
@@ -488,11 +488,11 @@ static void Init_Charge_Matrix_Remaining_Entries( reax_system *system,
     for ( i = 0; i < system->N; ++i )
     {
         H->j[*Htop] = i;
-        H->val[*Htop] = -1.0;
+        H->val[*Htop] = 1.0;
         *Htop = *Htop + 1;
         H_sp->j[*H_sp_top] = i;
-        H_sp->val[*H_sp_top] = -1.0;
+        H_sp->val[*H_sp_top] = 1.0;
         *H_sp_top = *H_sp_top + 1;
     }
@@ -510,11 +510,11 @@ static void Init_Charge_Matrix_Remaining_Entries( reax_system *system,
         H_sp->start[system->N + i + 1] = *H_sp_top;
         H->j[*Htop] = i;
-        H->val[*Htop] = -1.0;
+        H->val[*Htop] = 1.0;
         *Htop = *Htop + 1;
         H_sp->j[*H_sp_top] = i;
-        H_sp->val[*H_sp_top] = -1.0;
+        H_sp->val[*H_sp_top] = 1.0;
         *H_sp_top = *H_sp_top + 1;
         for ( pj = Start_Index(i, far_nbrs); pj < End_Index(i, far_nbrs); ++pj )
@@ -583,11 +583,11 @@ static void Init_Charge_Matrix_Remaining_Entries( reax_system *system,
     for ( i = system->N + 1; i < system->N_cm - 1; ++i )
     {
         H->j[*Htop] = i;
-        H->val[*Htop] = -1.0;
+        H->val[*Htop] = 1.0;
         *Htop = *Htop + 1;
         H_sp->j[*H_sp_top] = i;
-        H_sp->val[*H_sp_top] = -1.0;
+        H_sp->val[*H_sp_top] = 1.0;
         *H_sp_top = *H_sp_top + 1;
     }
```
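The hunks above change the charge-constraint entries appended to the full charge matrix `H` and to its sparse counterpart `H_sp` in `Init_Charge_Matrix_Remaining_Entries` from -1.0 to 1.0. Each entry is appended with the same three-step pattern: write the column index into `j`, write the value into `val`, and advance the fill counter. The sketch below illustrates that pattern in isolation; the `sparse_matrix` layout is only inferred from the fields used above (`start`, `j`, `val`) and is not claimed to match PuReMD's actual type definition.

```c
#include <stdio.h>

/* Assumed CSR-like layout, inferred from the fields referenced in the diff
 * above (start, j, val); not necessarily PuReMD's real sparse_matrix type. */
typedef struct
{
    int *start;    /* start[r]: index of the first stored entry of row r (unused here) */
    int *j;        /* column index of each stored entry */
    double *val;   /* value of each stored entry */
} sparse_matrix;

/* Append one entry (column col, value v) at fill position *top and advance
 * the counter -- the same three-statement pattern used at each call site. */
static void append_entry( sparse_matrix *H, int *top, int col, double v )
{
    H->j[*top] = col;
    H->val[*top] = v;
    *top = *top + 1;
}

int main( void )
{
    int cols[4];
    double vals[4];
    int top = 0;
    sparse_matrix H = { NULL, cols, vals };

    /* With the merged change, constraint entries are stored as +1.0. */
    for ( int i = 0; i < 3; ++i )
    {
        append_entry( &H, &top, i, 1.0 );
    }

    for ( int k = 0; k < top; ++k )
    {
        printf( "entry %d: column %d, value %.1f\n", k, H.j[k], H.val[k] );
    }

    return 0;
}
```

Under these assumptions, the merged change amounts to passing 1.0 instead of -1.0 at each of the call sites shown in the diff.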