The modern graphics processing unit (GPU) started out as an accelerator for Windows video games, but over the last 20 years has morphed into an enterprise server processor for high-performance computing and artificial-intelligence applications.

Now GPUs are at the tip of the performance spear used in supercomputing, AI training and inference, drug research, financial modeling, and medical imaging. They have also been applied to more mainstream tasks for situations when CPUs just aren’t fast enough, as in GPU-powered relational databases.

As the demand for GPUs grows, so will the competition among vendors making GPUs for servers, and there are just three: Nvidia, AMD, and (soon) Intel. Intel has tried and failed twice to come up with an alternative to the others’ GPUs but is taking another run at it.

The importance of GPUs in data centers

These three vendors recognize the demand for GPUs in data centers as a growing opportunity. That’s because GPUs are better suited than CPUs for handling many of the calculations required by AI and machine learning in enterprise data centers and hyperscaler networks. CPUs can handle the work; it just takes them longer.

Because GPUs are designed to solve complex mathematical problems in parallel by breaking them into separate tasks that they work on simultaneously, they can solve those problems more quickly. To accomplish this, they have many more cores than a general-purpose CPU. For example, Intel’s Xeon server CPUs have up to 28 cores, while AMD’s Epyc server CPUs have up to 64. By contrast, Nvidia’s current GPU generation, Ampere, has 6,912 cores, all operating in parallel to do one thing: math processing, specifically floating-point math.
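The data-parallel model described above can be sketched in a few lines. This is an illustration only, using NumPy on a CPU to mimic how a GPU applies the same floating-point operation to every element of an array independently, so the work can be spread across thousands of cores:

```python
import numpy as np

def scale_and_add(a, b, factor):
    # One "kernel": each element is computed independently of the
    # others, which is what lets a GPU split the array across cores.
    return a * factor + b

a = np.arange(4, dtype=np.float64)  # [0., 1., 2., 3.]
b = np.ones(4, dtype=np.float64)
print(scale_and_add(a, b, 2.0))     # [1. 3. 5. 7.]
```

Each output element depends only on the matching input elements, so there is no coordination between cores; that independence is what makes this kind of math such a good fit for a many-core GPU.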

GPU performance is measured by how many of these floating-point math operations the chip can perform per second, or FLOPS. The figure often specifies the standardized floating-point precision used for the measurement, such as FP64 (double precision).
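A peak FLOPS figure can be estimated with simple arithmetic: core count times clock speed times the floating-point operations each core retires per cycle. The sketch below uses illustrative numbers (a roughly Ampere-like core count and clock, with a fused multiply-add counted as 2 FLOPs), not a vendor spec sheet:

```python
def peak_tflops(cores, clock_ghz, flops_per_cycle):
    # cores x GHz gives billions of cycles of work per second (GFLOPS
    # per FLOP-per-cycle); divide by 1000 to convert GFLOPS to TFLOPS.
    return cores * clock_ghz * flops_per_cycle / 1000

# e.g. 6,912 cores at ~1.4 GHz, each doing a fused multiply-add
# (2 FLOPs) every cycle:
print(round(peak_tflops(6912, 1.4, 2), 1))  # ~19.4 TFLOPS
```

Real-world throughput is lower than this theoretical peak, and the number changes with precision: FP64 math typically runs at a fraction of the FP32 rate on the same silicon.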

So what does the year hold for server GPUs? Quite a bit as it turns out. Nvidia, AMD, and Intel have laid their cards on the table about their immediate plans, and it looks like this will be a stiff competition. Here’s a look at what Nvidia, AMD, and Intel have in store.

Nvidia

Nvidia laid out its GPU roadmap for the year in March with the announcement of its Hopper GPU architecture, claiming that, depending on use, it can deliver three to six times the performance of its previous architecture, Ampere, which weighs in at 9.7 TFLOPS of FP64. Nvidia says the Hopper H100 will top out at 60 TFLOPS of FP64 performance.
