Select Git revision
  • dummy-atom-support
  • energy_minim
  • main (default, protected)
  • new_setup
  • pg-puremd-charge-solver-opt-sdsc-hackaton
  • qeq-nonzero-net-charge
  • restraint_support
  • tensorflow_update
  • tensorflow_update_v2
  • v1.0-rc1
Commit history (most recent first):
  • PG-PuReMD: standardize and clean up CUB usage. Add wrapper for non-CUB reductions (rvec, rvec2). Other code clean-up.
  • PG-PuReMD: add support for using cuBLAS routines for dense linear algebra (mainly level-1 routines in the sparse linear solver). Clean up solver code for using streams. (See the cuBLAS sketch after this log.)
  • PG-PuReMD: split van der Waals and Coulomb force and energy computation kernels and execute them in separate streams. Change the stream of Coulomb-related kernels (init, charge solver, Coulomb). Use a math function for cubic root calculations. Other code clean-up.
  • PG-PuReMD: default to using the bundled CUB library with recent CUDA SDK versions (>= v11.x), while retaining the older CUB submodule for older CUDA capabilities (use NVCCFLAGS to manually include CUB in this case). Refactor timing code to more accurately measure kernel timing. Refactor stream logic to expose more parallelism (valence + torsion branch) and to perform fewer synchronizations. Other code clean-up and refactoring.
  • Tools: update Python example driver code.
  • sPuReMD: refactor contiguous and custom charge constraint code. Make applicable API functions available to all interfaces.
  • sPuReMD: fix issue with API functions (setup2 / reset2 -> cleanup) regarding the new contiguous and custom charge constraint implementation (identified and suggested fix by Cagri Kaymak).
  • sPuReMD: fixes for custom charge constraint specification for QM/MM simulations.
  • sPuReMD: fix uninitialized variable compilation warning.
  • sPuReMD: do not compute hydrogen bond interactions when there are no valid force field parameters available for the given triplet of atom types. Allow arbitrary numbers of H-bond interactions per atom.
  • sPuReMD: fix issue with the sparse charge matrix (for preconditioning) not being reallocated upon an out-of-memory condition. Fix issues where charge matrix and bond/H-bond list entries were reading from and potentially writing to invalid memory locations. Remove unused code for the grid. Other code clean-up.
  • sPuReMD: fix uninitialized variable issue with API functions (charge constraints => setup2, reset2).
  • sPuReMD: add support for custom charge constraints with EEM for QM/MM simulations.
  • sPuReMD: add support for setting the charge computation frequency (charge_freq).
  • sPuReMD: add error checks around I/O routines to silence warnings regarding disregarded return values.
  • PG-PuReMD: fix potential MPI message collision issue in the SAI preconditioner (same tags to same source rank => use different tags). Reorder communications to allow message size detection and buffer reallocation if needed. (See the MPI probe sketch after this log.)
  • PG-PuReMD: fix queue usage in the SAI preconditioner (memory leak, previously fixed size).
  • Merge branch 'master' of https://gitlab.msu.edu/SParTA/PuReMD.
  • Merge branch 'fix-issue-9' into 'master'.
  • Fix memory allocation issues related to API usage (setup => cleanup but no simulate) as detailed in Issue #9.
  • PG-PuReMD: port memory allocation checks. Begin fixing SAI code.
  • sPuReMD: fix hbond list initialization when empty.
  • sPuReMD: fix another reallocation issue. Fix list index initialization issue. Other code clean-up.
  • sPuReMD: fix reallocation issue.
  • sPuReMD: rework memory management logic to better match the PG-PuReMD code. Adapt the new memory management logic for multiple simulations (as motivated by memory issues observed with QM/MM code for AMBER integration work). Other code clean-up.
  • PG-PuReMD: fix bugs in the SAI preconditioner for the charge solver (memory leak, incorrect row numbers).
  • PG-PuReMD: adjust energy conversion constant to match the sPuReMD code.
  • PG-PuReMD: port the SAI preconditioner from the MPI codebase. Rework charge solver preconditioner code. Other code clean-up.
  • sPuReMD: change dummy atom behavior to participate only in Coulomb interactions (exclude from van der Waals interactions in this changeset).
  • PG-PuReMD: fix compilation issue with MPI code (from the SDSC hackathon branch merge).
  • Merge branch 'master' into pg-puremd-charge-solver-opt-sdsc-hackaton.
  • PG-PuReMD: fix cross-stream race conditions on intermediary force calculation variables (CdDelta, Cdbo, etc.). Rework valency and torsion calculations to minimize the number of atomic operations performed. Split lone pair and over-/under-coordination kernels. Fix small size mismatch in allocation routines (over-allocation). Other code clean-up.
  • PG-PuReMD: fix adding keyword for polarization energy.
  • sPuReMD, PG-PuReMD: add control file keyword (include_polarization_energy) to toggle polarization energy calculation in ReaxFF.
  • Merge branch 'master' into pg-puremd-charge-solver-opt-sdsc-hackaton.
  • sPuReMD: ensure matrix rows are sorted.
  • sPuReMD: fix charge matrix storage estimate for molecular charge constraints.
  • sPuReMD: rework the charge solver for QM/MM to not require explicit masking (MM atom fixed charges are not respected). Other code clean-up.
  • PG-PuReMD: add missing dual QEq solver implementations for MPI.
  • PG-PuReMD: refactor MPI charge solver code to better align with the shared-memory and MPI+CUDA codes.
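To illustrate the cuBLAS entry above, here is a minimal sketch, not PG-PuReMD code, of routing a level-1 cuBLAS call to a dedicated CUDA stream the way a charge-solver iteration might. The helper name dot_on_stream and the identifiers n, d_r, d_p, and solver_stream are hypothetical; error checking of cuBLAS status codes is omitted.

```c
#include <cuda_runtime.h>
#include <cublas_v2.h>

/* Hypothetical helper: dot product of two device vectors on the
 * charge-solver stream.  d_r and d_p (length n) are assumed to be
 * allocated and populated by the caller. */
double dot_on_stream( cublasHandle_t handle, cudaStream_t solver_stream,
        int n, const double *d_r, const double *d_p )
{
    double result;

    /* route subsequent cuBLAS calls on this handle to the solver stream */
    cublasSetStream( handle, solver_stream );

    /* level-1 BLAS: result = d_r . d_p */
    cublasDdot( handle, n, d_r, 1, d_p, 1, &result );

    /* with the default (host) pointer mode, cublasDdot blocks until the
     * stream has produced the result, so it is safe to return it here */
    return result;
}
```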
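The SAI preconditioner entry above mentions reordering communications so that message sizes can be detected and receive buffers reallocated. The following is a minimal sketch of that general MPI pattern (MPI_Probe and MPI_Get_count before MPI_Recv), not the PuReMD implementation; the function recv_resized and its arguments are hypothetical, and the actual fix additionally uses distinct tags per message between the same rank pair.

```c
#include <stdlib.h>
#include <mpi.h>

/* Hypothetical helper: probe the pending message from rank src with the
 * given tag, grow the receive buffer if needed, then receive.  Error
 * checking and realloc failure handling are omitted for brevity. */
void recv_resized( int src, int tag, MPI_Comm comm,
        double **buf, int *buf_cap )
{
    MPI_Status status;
    int incoming;

    /* detect the incoming message size without receiving it yet */
    MPI_Probe( src, tag, comm, &status );
    MPI_Get_count( &status, MPI_DOUBLE, &incoming );

    /* reallocate the buffer if the sender is shipping more than it holds */
    if ( incoming > *buf_cap )
    {
        *buf = (double *) realloc( *buf, sizeof(double) * incoming );
        *buf_cap = incoming;
    }

    MPI_Recv( *buf, incoming, MPI_DOUBLE, src, tag, comm, &status );
}
```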