Autotuning HPC Codes and ML Pipelines with GPTune
GPTune is a Gaussian process-based Bayesian optimization framework for performance autotuning of HPC codes and ML pipelines. Originally developed under the Exascale Computing Project (ECP), it was later leveraged by several FASTMath and RAPIDS teams for performance autotuning and uncertainty quantification. GPTune supports several advanced tuning capabilities, including multitask learning, transfer learning, multi-fidelity and multi-objective tuning, and scalability to large sample counts via sparse kernels, low-rank kernels, and distributed-memory parallelization. This BoF session will provide a high-level introduction to GPTune with interactive code demos, and will engage the audience to collect application needs and user feedback.
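To make the core idea concrete ahead of the demos, below is a minimal, self-contained sketch of the Gaussian process-based Bayesian optimization loop that a framework like GPTune automates, here for a single task and a single objective. It uses scikit-learn rather than GPTune's own API; the toy objective `runtime`, the parameter range, and the expected-improvement acquisition are illustrative assumptions, not GPTune code.

```python
# Conceptual sketch of GP-based Bayesian optimization for autotuning.
# All names here (e.g., `runtime`) are hypothetical stand-ins, not GPTune's API.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def runtime(x):
    # Hypothetical tuning objective: noisy runtime as a function of one
    # normalized parameter (e.g., a block size), minimized near x = 0.3.
    return (x - 0.3) ** 2 + 0.01 * rng.standard_normal()

# Seed the surrogate model with a few random samples in [0, 1].
X = rng.uniform(0, 1, size=(5, 1))
y = np.array([runtime(x[0]) for x in X])

for _ in range(20):
    # Fit a GP surrogate to all evaluations so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    # Expected improvement (for minimization) over a dense candidate grid.
    cand = np.linspace(0, 1, 200).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    best = y.min()
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # Evaluate the most promising candidate and update the sample set.
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, runtime(x_next[0]))

print("best parameter:", X[np.argmin(y)][0], "best runtime:", y.min())
```

GPTune extends this basic loop with the multitask, transfer-learning, multi-fidelity, and multi-objective capabilities listed above, and with scalable GP kernels for large sample counts.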
Presenters
- Yang Liu (Lawrence Berkeley National Laboratory)
- Xiaoye Sherry Li (Lawrence Berkeley National Laboratory)