GPU computing
Using Graphics Processing Units for general-purpose computation, exploiting massive parallelism for data-parallel workloads. GPUs contain thousands of simple cores, enabling high-throughput computation when the same operation is applied independently to many data elements. CUDA (NVIDIA-specific) and OpenCL (vendor-neutral) are the main frameworks for GPU programming.
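To make the idea concrete, here is a minimal sketch of a CUDA kernel for SAXPY (y = a·x + y), a classic data-parallel operation: the loop a CPU would run sequentially is replaced by thousands of GPU threads, each handling one element. This is an illustrative sketch, not from the original card; it assumes an NVIDIA GPU and the CUDA toolkit (compile with `nvcc`).

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Each thread computes one element: y[i] = a * x[i] + y[i].
// Because every element is independent, the work is data-parallel
// and maps directly onto the GPU's thousands of simple cores.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard: grid may overshoot n
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);
    float *x, *y;
    cudaMallocManaged(&x, bytes);          // unified memory: visible to CPU and GPU
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;                           // threads per block
    int blocks = (n + threads - 1) / threads;    // enough blocks to cover n
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);   // launch kernel on the GPU
    cudaDeviceSynchronize();                     // wait for the kernel to finish

    printf("y[0] = %f\n", y[0]);           // 2*1 + 2 = 4
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Note the contrast with CPU code: there is no loop over elements in the kernel; parallelism comes from launching one thread per element, which is exactly the pattern that makes matrix multiplication in neural network training so well suited to GPUs.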
Real World
OpenAI trained GPT-4 using clusters of NVIDIA A100 GPUs, each containing 6,912 CUDA cores, because the matrix multiplications in neural network training are massively data-parallel tasks perfectly suited to GPU architecture.
Exam Focus
Explain why GPUs suit specific workloads — marks come from linking thousands of simple cores to data-parallel tasks, not just stating GPUs are fast.