GPU Concurrency: Weak Behaviours and Programming Assumptions

Abstract

Concurrency is pervasive and perplexing, particularly on graphics processing units (GPUs). Current specifications of languages and hardware are inconclusive; thus programmers often rely on folklore assumptions when writing software. To remedy this state of affairs, we conducted a large empirical study of the concurrent behaviour of deployed GPUs. Armed with litmus tests (i.e. short concurrent programs), we questioned the assumptions in programming guides and vendor documentation about the guarantees provided by hardware. We developed a tool to generate thousands of litmus tests and run them under stressful workloads. We observed a litany of previously elusive weak behaviours, and exposed folklore beliefs about GPU programming, often supported by official tutorials, as false.
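
For intuition, the sketch below shows the shape of the classic message-passing (MP) litmus test on a GPU: the weak outcome is observing the flag set but the payload still stale. The CUDA kernel, variable names, and launch configuration here are illustrative assumptions, not the generated tests themselves, which target PTX/OpenCL and add memory-stress machinery to provoke weak outcomes.

```cuda
// Illustrative message-passing (MP) litmus test sketch in CUDA. Names,
// launch configuration, and the use of managed memory are assumptions made
// for exposition only.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void mp_litmus(volatile int *data, volatile int *flag,
                          int *r0, int *r1) {
    if (blockIdx.x == 0) {          // producer thread (in its own block)
        *data = 42;                 // write the payload ...
        *flag = 1;                  // ... then signal, with no fence between
    } else {                        // consumer thread (in another block)
        *r0 = *flag;                // read the signal ...
        *r1 = *data;                // ... then read the payload
    }
}

int main() {
    int *m;                         // layout: data, flag, r0, r1
    cudaMallocManaged(&m, 4 * sizeof(int));
    m[0] = m[1] = m[2] = m[3] = 0;
    mp_litmus<<<2, 1>>>(&m[0], &m[1], &m[2], &m[3]);
    cudaDeviceSynchronize();
    // Weak behaviour: r0 == 1 (flag seen) together with r1 == 0 (stale data),
    // i.e. the consumer observed the producer's writes out of program order.
    printf("r0 (flag) = %d, r1 (data) = %d\n", m[2], m[3]);
    cudaFree(m);
    return 0;
}
```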

As a way forward, we propose a model of Nvidia GPU hardware, which correctly models every behaviour witnessed in our experiments. The model is a variant of SPARC Relaxed Memory Order (RMO), structured following the GPU concurrency hierarchy.