In his keynote at AFDS (the AMD Fusion Developer Summit), Herb Sutter, Microsoft’s principal architect for Native Languages, announced C++ AMP (Accelerated Massive Parallelism), a minimal extension to C++ that enables software developers to implement data-parallel algorithms in C++. C++ AMP is fully integrated with Visual Studio.
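To give a flavor of how small the extension is, here is a minimal sketch of a C++ AMP element-wise vector addition, based on the published C++ AMP API (`concurrency::array_view`, `parallel_for_each`, and the `restrict(amp)` qualifier). It requires Visual Studio and the `<amp.h>` header; the function name `vector_add` is my own for illustration.

```cpp
#include <amp.h>      // C++ AMP: ships with Visual Studio
#include <vector>
using namespace concurrency;

// Adds two equal-length vectors on the accelerator (GPU), writing into c.
void vector_add(const std::vector<int>& a,
                const std::vector<int>& b,
                std::vector<int>& c)
{
    // array_views wrap existing host data for use on the accelerator.
    array_view<const int, 1> av(static_cast<int>(a.size()), a);
    array_view<const int, 1> bv(static_cast<int>(b.size()), b);
    array_view<int, 1>       cv(static_cast<int>(c.size()), c);
    cv.discard_data();  // output only: skip copying c's contents to the GPU

    // One logical thread per element; restrict(amp) marks GPU-compatible code.
    parallel_for_each(cv.extent, [=](index<1> i) restrict(amp) {
        cv[i] = av[i] + bv[i];
    });

    cv.synchronize();   // copy results back to the host vector
}
```

The notable point is how little ceremony is involved compared with setting up buffers, kernels, and command queues by hand: the data-parallel loop body is an ordinary C++ lambda.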
To me this is a significant step forward in the era of accelerated parallel processing, yet I am repeatedly asked, “But why is that a good thing, Mark? Don’t you want everyone to use OpenCL? Surely that’s the best way to access the power of AMD’s GPUs?”
While OpenCL™ is an excellent choice for GPU compute, my goal is to ensure that as many developers as possible are creating as many applications as possible that leverage the awesome potential for compute provided by a contemporary GPU.
OpenCL has many benefits: it is an open standard, and applications that use OpenCL have the potential to run across a variety of operating systems and devices. Using OpenCL, it is also possible to make highly effective use of both GPU and CPU capabilities. While these attributes are very important, we also need to understand that they are not the only attributes that matter to many developers.
On a CPU, the best performance for a short piece of code executing on a single core can be obtained by writing in assembler, and there are developers who still do this. Many more, however, use C++. Programmer productivity is higher with languages like C++; performance may be somewhat lower, but without such languages many applications would simply not be feasible. A similar argument can be made for high-level languages such as Java or the .NET family.
AMD GPUs support both OpenCL™ and DirectCompute, and C++ AMP is an important new capability bringing the power of GPU compute to a broad array of potential applications. I foresee a world where many current popular programming paradigms, perhaps with extensions, will leverage GPU compute, but in a way that minimizes programming complexity, and yet still obtains a useful boost from the GPU. We already see this in Aparapi, an API for expressing data parallel workloads in Java, and others are developing additional novel approaches for other popular programming languages/environments.
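As an illustration of the Aparapi approach mentioned above, here is a minimal sketch of a data-parallel kernel expressed in ordinary Java. It assumes the Aparapi library is on the classpath (the package was `com.amd.aparapi` at the time); the class and array names are my own for illustration.

```java
import com.amd.aparapi.Kernel;
import com.amd.aparapi.Range;

public class SquareExample {
    public static void main(String[] args) {
        final int n = 1024;
        final float[] in = new float[n];
        final float[] out = new float[n];
        for (int i = 0; i < n; i++) {
            in[i] = i;
        }

        // The run() body is translated to OpenCL and executed on the GPU
        // where possible; Aparapi falls back to a Java thread pool otherwise.
        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int i = getGlobalId();   // this work-item's index
                out[i] = in[i] * in[i];
            }
        };

        kernel.execute(Range.create(n)); // launch n work-items
        kernel.dispose();                // release native resources
    }
}
```

As with C++ AMP, the developer writes in the host language throughout; the data-parallel intent is expressed by the API rather than by a separate kernel language.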
With C++ AMP, Microsoft is illuminating a path to GPU compute for everyone.
Please also see “Another tool in the parallelism toolbox – Microsoft C++ AMP” by Robin Maffeo.
Mark Ireton is the Product Manager for Compute Solutions at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites, and references to third party trademarks, are provided for convenience and illustrative purposes only. Unless explicitly stated, AMD is not responsible for the contents of such links, and no third party endorsement of AMD or any of its products is implied.