DIY
A block-parallel library for writing scalable parallel algorithms
Area: Data and visualization
CASS member: RAPIDS
Description
DIY is a block-parallel library for writing scalable distributed- and shared-memory parallel algorithms that can run both in- and out-of-core. The same program can be executed with one or more threads per MPI process and with one or more data blocks resident in main memory. The abstraction enabling these capabilities is block parallelism: blocks and their message queues are mapped onto processing elements (MPI processes or threads) and are migrated between memory and storage by the DIY runtime. DIY also implements complex communication patterns, including neighbor exchange, merge reduction, swap reduction, and all-to-all exchange.
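For orientation, below is a minimal sketch of the block-parallel pattern DIY encourages: decompose a domain into blocks, run a callback on each local block, and exchange data over block links. The block struct, domain extents, and block count are illustrative choices, and the calls follow the patterns in DIY's public examples (github.com/diatomic/diy); exact signatures may differ between DIY versions.

```cpp
// Minimal DIY sketch: decompose, per-block work, neighbor exchange.
// Assumptions: a trivial Block payload, a 3D continuous domain [0,100]^3,
// and 8 blocks per MPI rank (all illustrative).
#include <diy/mpi.hpp>
#include <diy/master.hpp>
#include <diy/assigner.hpp>
#include <diy/decomposition.hpp>

using Bounds = diy::ContinuousBounds;
using Link   = diy::RegularContinuousLink;

struct Block
{
    float value = 0.0f;                       // illustrative per-block data
};

int main(int argc, char** argv)
{
    diy::mpi::environment  env(argc, argv);   // wraps MPI_Init/MPI_Finalize
    diy::mpi::communicator world;             // MPI_COMM_WORLD by default

    const int dim     = 3;
    const int nblocks = 8 * world.size();     // over-decompose: more blocks than ranks

    // Master owns this process's blocks; its constructor also accepts a thread
    // count, an in-memory block limit, and block create/destroy callbacks,
    // which is how threading and out-of-core execution are configured.
    diy::Master             master(world);
    diy::ContiguousAssigner assigner(world.size(), nblocks);

    // Describe the global domain and cut it into regular blocks; the lambda
    // creates each local block and registers it, with its neighbor link,
    // in the master.
    Bounds domain(dim);
    for (int i = 0; i < dim; ++i) { domain.min[i] = 0.0f; domain.max[i] = 100.0f; }

    diy::RegularDecomposer<Bounds> decomposer(dim, domain, nblocks);
    decomposer.decompose(world.rank(), assigner,
        [&](int gid, const Bounds&, const Bounds&, const Bounds&, const Link& link)
        {
            master.add(gid, new Block, new Link(link));
        });

    // Per-block work: enqueue this block's value to every neighbor block.
    master.foreach([](Block* b, const diy::Master::ProxyWithLink& cp)
    {
        for (int i = 0; i < cp.link()->size(); ++i)
            cp.enqueue(cp.link()->target(i), b->value);
    });

    master.exchange();                        // move the queued messages

    // Per-block work: dequeue neighbors' values and accumulate them.
    master.foreach([](Block* b, const diy::Master::ProxyWithLink& cp)
    {
        for (int i = 0; i < cp.link()->size(); ++i)
        {
            float v;
            cp.dequeue(cp.link()->target(i).gid, v);
            b->value += v;
        }
    });

    return 0;
}
```

Because blocks, not processes, are the unit of work, the same code can over-decompose the domain and let the DIY runtime schedule blocks across threads or move them between memory and storage.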
Target audience
DIY is intended for developers who want to quickly implement block-parallel analysis algorithms that scale on HPC platforms.
Package links
- Spack: diy