Graphics Processing Units (GPUs) have been used in High-Performance Computing for a decade now. On paper, they offer great advantages in terms of energy efficiency, but the problem of porting applications to these kinds of nodes has not yet been fully resolved. It typically takes a company around ten years to develop the code for a simulation program, with a view to using that code on its chosen architecture for anything between ten and twenty years. Some organizations, such as MSC Software with Nastran, are making the effort to port their code, but the sheer size of the task is limiting the number of large-scale porting exercises of this kind.
Today, around 95% of all computation is still carried out on CPUs, admittedly with some GPUs being used for testing. That said, every bid Bull submits offers both a pure CPU option and a hybrid CPU/GPU configuration. Personally, I believe that by around 2016-2017, the vast majority of installed nodes (probably around 80%) will still consist of generic x86 set-ups.
It’s possible that we will see ARM 64-bit nodes capturing some market share, but that will depend on their efficiency and the available development environments. I think, however, that the future will belong to many-core processors such as the Intel® Xeon® Phi. With around a hundred cores per socket (and, eventually, even more), this is clearly the most likely way forward. Which does nothing to alleviate the difficulties of porting applications to these new architectures…
Article published in HighPerformanceComputing, OCTOBER 2013