Autotuning of OpenCL Kernels with Global Optimizations


Authors

FILIPOVIČ Jiří, PETROVIČ Filip, BENKNER Siegfried

Year of publication 2017
Type Article in Proceedings
Conference 1st Workshop on Autotuning and Adaptivity Approaches for Energy Efficient HPC Systems (ANDARE'2017)
MU Faculty or unit

Institute of Computer Science

Citation
Web https://dl.acm.org/citation.cfm?doid=3152821.3152877
DOI http://dx.doi.org/10.1145/3152821.3152877
Field Informatics
Keywords autotuning; OpenCL; CUDA; global optimization
Description Autotuning is an important method for automatically exploring code optimizations. It may target low-level code optimizations, such as memory blocking, loop unrolling or memory prefetching, as well as high-level optimizations, such as placing computation kernels on suitable hardware devices or optimizing memory transfers between nodes or between accelerators and main memory. In this paper, we introduce an autotuning method that extends state-of-the-art low-level tuning of OpenCL or CUDA kernels towards more complex optimizations. More precisely, we introduce a Kernel Tuning Toolkit (KTT), which implements inter-kernel global optimizations, allowing tuning of parameters that affect multiple kernels or the host code. We demonstrate on practical examples that with global kernel optimizations we are able to explore tuning options that are not available when kernels are tuned separately. Moreover, our tuning strategies can take numerical accuracy across multiple kernel invocations into account and search for implementations within specific numerical error bounds.
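The idea of global, inter-kernel tuning with an accuracy constraint can be illustrated with a minimal sketch. The configuration space, the two kernel cost functions and the error model below are entirely synthetic stand-ins (they are not KTT's API): a shared parameter such as precision affects both kernels at once, so the search must evaluate the combined runtime of all kernels and reject configurations that violate the numerical error bound.

```python
import itertools

# Hypothetical tuning space: 'block' is a work-group size parameter,
# 'precision' is a global parameter shared by both kernels.
space = {
    "block": [64, 128, 256],
    "precision": ["single", "double"],
}

def kernel_a_time(cfg):
    # Synthetic cost model for the first kernel (prefers large blocks).
    return 100.0 / cfg["block"] + (0.5 if cfg["precision"] == "double" else 0.0)

def kernel_b_time(cfg):
    # Synthetic cost model for the second kernel (prefers small blocks).
    return cfg["block"] / 100.0 + (0.3 if cfg["precision"] == "double" else 0.0)

def numerical_error(cfg):
    # Synthetic error accumulated across both kernel invocations.
    return 1e-3 if cfg["precision"] == "single" else 1e-7

def tune(space, error_bound):
    """Exhaustively search the global configuration space, keeping only
    configurations whose accumulated numerical error stays in bound."""
    best_cfg, best_time = None, float("inf")
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        if numerical_error(cfg) > error_bound:
            continue  # reject configurations violating the accuracy requirement
        total = kernel_a_time(cfg) + kernel_b_time(cfg)
        if total < best_time:
            best_cfg, best_time = cfg, total
    return best_cfg, best_time

best, t = tune(space, error_bound=1e-6)
print(best, t)
```

Tuning each kernel in isolation would pick conflicting block sizes and could not enforce the shared precision constraint; searching the joint space finds the compromise that minimizes total runtime within the error bound, which is the effect the paper's global optimizations aim for.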
