Select Git revision
  • dummy-atom-support
  • energy_minim
  • main (default, protected)
  • new_setup
  • pg-puremd-charge-solver-opt-sdsc-hackaton
  • qeq-nonzero-net-charge
  • restraint_support
  • tensorflow_update
  • tensorflow_update_v2
  • v1.0-rc1
[Commit graph omitted]

PG-PuReMD: small clean-ups to QEq dual solver code.

PG-PuReMD: change CUDA kernel error checking to only make calls to cudaDeviceSynchronize when in debugging mode (non-debug builds thus have better performance with kernel asynchronous launch behavior in the single default stream). Change file I/O to only flush when in debugging mode (non-debug builds allow the I/O operations to be grouped for better performance). A sketch of this pattern follows below.

PG-PuReMD: add dual QEq BiCGStab solver.

PG-PuReMD: add support for the symmetric, half-stored format (SYM_HALF_MATRIX) of the sparse matrix for the charge model (add initialization routines and fix up the solver). See the SpMV sketch below.
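The debug-only kernel error-checking entry above lends itself to a macro like the following. This is a minimal sketch, assuming a DEBUG compile-time flag; the macro name and error handling are illustrative, not PG-PuReMD's actual identifiers.

```c
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Hypothetical post-launch check: synchronize and test for kernel
 * errors only in debug builds; release builds skip the synchronization
 * and retain asynchronous launch behavior in the default stream. */
#if defined(DEBUG)
  #define CHECK_KERNEL() \
      do { \
          cudaError_t err_ = cudaDeviceSynchronize(); \
          if ( err_ == cudaSuccess ) { \
              err_ = cudaGetLastError(); \
          } \
          if ( err_ != cudaSuccess ) { \
              fprintf( stderr, "CUDA kernel failure: %s\n", \
                      cudaGetErrorString( err_ ) ); \
              exit( EXIT_FAILURE ); \
          } \
      } while (0)
#else
  #define CHECK_KERNEL() ((void) 0)
#endif

/* usage after any launch:
 *   my_kernel<<<blocks, threads>>>( ... );
 *   CHECK_KERNEL();
 */
```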
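The SYM_HALF_MATRIX entry above implies that the sparse matrix-vector product must apply each stored off-diagonal entry to both rows it couples, since the lower half is not stored. A minimal serial CSR-style sketch, with illustrative array names rather than PuReMD's actual sparse_matrix fields:

```c
/* y = A * x for a symmetric matrix in which only the upper triangular
 * half (including the diagonal) is stored. start[i]/end[i] delimit
 * row i's entries; col[]/val[] hold column indices and values. */
void sym_half_spmv( int n, const int *start, const int *end,
        const int *col, const double *val, const double *x, double *y )
{
    int i, pj, j;

    for ( i = 0; i < n; ++i )
        y[i] = 0.0;

    for ( i = 0; i < n; ++i ) {
        for ( pj = start[i]; pj < end[i]; ++pj ) {
            j = col[pj];
            y[i] += val[pj] * x[j];

            /* mirrored contribution for the unstored lower half */
            if ( j != i )
                y[j] += val[pj] * x[i];
        }
    }
}
```

On a GPU the mirrored update to y[j] is a scattered write across threads, which is presumably why the commit mentions fixing up the solver rather than reusing the full-format SpMV unchanged.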
PG-PuReMD: rework custom reduction functions to use less shared memory and to have the correct thread and block counts. Fix a bug in charge matrix initialization (under-allocated space previously). Split the charge matrix and bonds/hydrogen bonds memory management routines to mirror the similar splitting of the initialization routines. Code clean-up related to half vs. full list and sparse matrix formats.

PG-PuReMD: change NaN floating-point checks on energies to also include infinity values (see the sketch at the end of this log).

PG-PuReMD: improve performance by conditionally re-running initialization kernels within Cuda_Init_Forces based on out-of-memory conditions (for the charge matrix, bonds list, and hydrogen bonds list).

Tools: fix issue with merge for parse_results command (total line count error).

Tools: merge updated parse_results command with support for different run types.

Tools: merge updated parse_results command with support for different run types.

Tools: add PuReMD custom geometry replication to geo_tool.py. Remove older awk scripts. Fix issue with the silica 6000-atom PDB file (CRYST1 lines not 70 characters).

Update .gitignore (restart files). Add missing updates to m4 files (CUDA).

Tools: update run_sim.py to generate more robust Slurm and Torque/PBS job scripts with the MPI+CUDA code (added an extra option for specifying additional flags for the command used to invoke the MPI+X code).

PG-PuReMD: fix performance logging code around GPU code. Rearrange header files to allow some preprocessor definitions to be defined via options in the configure script. Rework Autoconf and Automake code to allow passing flags directly to the nvcc compiler wrapper (for CUDA code). Enable C11 and C++11 standard targets during compilation.

PG-PuReMD: fix compilation error (remove variable). Fix issue with the dual CG solver for QEq (local arithmetic in SpMV was incorrect). Revert the CG solver convergence criterion to use the norm of the preconditioned residual vector.

PG-PuReMD: small fix for initialization routine performance logging (timers not reset after outputting to the log file).

PG-PuReMD: corrections for utilizing a SAI preconditioner in MPI code (errors introduced during previous code merges).

PG-PuReMD: revert CG solver convergence criterion to use the preconditioned residual norm. Re-enable the dual charge solver for QEq.

PG-PuReMD: small corrections to performance logging code.

PG-PuReMD: small corrections to performance logging code.

PG-PuReMD: ensure that all processors log performance timings. Fix timing reduction.

PG-PuReMD: rework performance logging code in linear solver routines to avoid excessive MPI communications.

PG-PuReMD: adjust performance logging to compute mean timings across all processors. Other code clean-up.

PG-PuReMD: fix issue with certain data structures not being reallocated when the local number of atoms owned by a processor increases after exchanging messages with neighbor processors. Fix an issue where the MPI send message buffer may be overwritten by a received message (Coll). Small fix to not overwrite the program status return value with local function return values. Other code clean-up.

Tools: fix geometry file extension auto-detection for run_md_custom. Add missing restart-related control file keywords.

PG-PuReMD: fix out-of-bounds memory accesses and uninitialized data usage in energy and force tabulation routines for van der Waals and Coulomb interactions.

PG-PuReMD: fix type mismatches in SAI preconditioner code.

PG-PuReMD: fix issue with some interaction lists not being initialized on the first step of simulations from restarted runs. Fix typos in C++ code for utilizing C code (__cplusplus). Clean up C code for utilizing C++ code (unconditional extern declarations for unmangled naming). Ensure that CUDA thread and block sizes are correctly set. Tweaks to memory transfers and allocation logic in integration code (GPU). Fix logic error with charge matrix allocation in GPU code. Other code clean-up.

Tools: fix geo_format for run_md_custom (the geo file is mandatory, so file type extension detection should always happen).

PG-PuReMD: add BiCGStab solver for GPU code. Corrections to CUDA block and thread sizes for some kernels. Fix some variables being used with uninitialized values.

Build: default to -O2 optimization.

PG-PuReMD: fix issue where the charge solver preconditioner refactoring rate caused issues with reneighboring actions (the preconditioning rate was previously coupled with the reneighboring rate for SAI, but this causes issues for Jacobi, etc.). Be more greedy with memory allocation sizes to decrease reallocation frequency (MPI buffers, etc.). More GPU code clean-up.

PG-PuReMD: fix MPI buffer allocation sizes. Ensure that nonblocking MPI messages have completed for each dimension before continuing. Rework reallocation checks in integration routines. Temporarily disable CUDA-aware MPI code paths (need to perform packing/unpacking first on the device before handing off pointers). Other code clean-up.

PG-PuReMD: fix host-device transfers for charge solver code (SpMV data transfer sizes). Remove unused code. Other general code clean-up.

PuReMD: fix compile errors with newer compilers due to function prototype mismatches.

PG-PuReMD: re-write MPI code to dynamically allocate buffer sizes. Other general code refactoring.

PG-PuReMD: fix issue with divergent MPI_Reduce calls by multiple MPI processes (timing logging code). Add run-time MPI routine error checking (a sketch follows). Fix issue with the upper limit of allowed hydrogen atoms being hard-coded in GPU code (use dynamic memory allocation instead). Other code clean-up.
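Run-time MPI error checking, as added in the entry above, is commonly done with a wrapper macro. A sketch only; the macro name is hypothetical and not the project's actual helper:

```c
#include <stdio.h>
#include <mpi.h>

/* Hypothetical wrapper: print a readable message and abort if an MPI
 * routine returns anything other than MPI_SUCCESS. */
#define MPI_CHECK(call) \
    do { \
        int ret_ = (call); \
        if ( ret_ != MPI_SUCCESS ) { \
            char msg_[MPI_MAX_ERROR_STRING]; \
            int len_; \
            MPI_Error_string( ret_, msg_, &len_ ); \
            fprintf( stderr, "MPI error at %s:%d: %s\n", \
                    __FILE__, __LINE__, msg_ ); \
            MPI_Abort( MPI_COMM_WORLD, 1 ); \
        } \
    } while (0)

/* usage:
 *   MPI_CHECK( MPI_Reduce( sendbuf, recvbuf, n, MPI_DOUBLE,
 *           MPI_SUM, 0, MPI_COMM_WORLD ) );
 */
```

Note that MPI's default error handler (MPI_ERRORS_ARE_FATAL) aborts before a return code is ever seen, so checks like this only fire after calling MPI_Comm_set_errhandler with MPI_ERRORS_RETURN.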
PG-PuReMD: clean up MPI custom datatype initialization and error reporting. Avoid truncations in buffer size calculations. Other code clean-up.

sPuReMD: finalize corrections for pressure calculations. Change output units from GPa to atm. Other formatting changes.

PG-PuReMD: fix issues with MPI_Reduce calls using MPI_IN_PLACE on non-source processors (see the sketch below).
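The MPI_IN_PLACE fix in the last entry reflects standard MPI semantics: for rooted collectives such as MPI_Reduce, MPI_IN_PLACE may only be passed as the send buffer on the root; every other rank must pass its real send buffer. A sketch with illustrative variable names:

```c
/* Sum per-process timing arrays onto rank `root` without a separate
 * receive buffer on the root; `timings`, `n`, and `comm` are
 * illustrative names, not PG-PuReMD's. */
if ( rank == root ) {
    /* root only: send buffer is MPI_IN_PLACE, result lands in timings */
    MPI_Reduce( MPI_IN_PLACE, timings, n, MPI_DOUBLE, MPI_SUM, root, comm );
} else {
    /* non-source ranks: pass the real send buffer; recvbuf is ignored */
    MPI_Reduce( timings, NULL, n, MPI_DOUBLE, MPI_SUM, root, comm );
}
```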
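For the earlier entry that extended the NaN checks on energies to infinity values: in C99 a single isfinite() test covers both cases. A minimal sketch (function name hypothetical):

```c
#include <math.h>

/* Returns non-zero if an energy is unusable: isfinite() is false for
 * both NaN and +/-infinity, so one test replaces separate
 * isnan()/isinf() checks. */
static int invalid_energy( double e )
{
    return !isfinite( e );
}
```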