What coefficients should I use to select a server CPU?

Here’s how good it used to be: for servers there was a single Pentium Pro processor, which you didn’t even need to choose. With the advent of multi-core processors it became clear that the more compute blocks, the better, but only until the marketers stepped in and split one CPU into ten sockets with ten cache options and ten frequency variants. Today, choosing a CPU for a server is hell even for a trained technician. In this article we will compare the comparison coefficients themselves, the ones marketers like to quote in their documentation.

1. Price per core

The simplest and most understandable coefficient: just divide the processor price by the number of physical cores. Because it ignores parameters such as processor architecture, cache size, and memory controller type, it is also the most useless metric, and right now it flatters only AMD, which flaunts its 64-core processors in front of Intel.

CPU                                        Price, $   Price per core, $
Intel Xeon Phi 7290, 72C, 1.5-1.7 GHz      3213       44.6
Intel Xeon Gold 5320H, 20C, 2.4-4.2 GHz    1555       77.75
Intel Xeon Bronze 3204, 6C, 1.9 GHz        220        36.6
AMD EPYC 7662, 64C, 2.0-3.3 GHz            7300       114
AMD EPYC 7272, 12C, 2.9 GHz                690        57.5
AMD EPYC 7401P, 24C, 2.0-3.0 GHz           1048       43.6
AMD EPYC 7251, 8C, 2.1-2.9 GHz             478        59.75
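The arithmetic behind the table is trivial; a quick sketch in Python, using a few of the figures from the table above:

```python
def price_per_core(price_usd: float, physical_cores: int) -> float:
    """Price-per-core coefficient: processor price divided by physical core count."""
    return price_usd / physical_cores

# Figures taken from the table above
cpus = [
    ("Intel Xeon Phi 7290", 3213, 72),
    ("Intel Xeon Gold 5320H", 1555, 20),
    ("AMD EPYC 7662", 7300, 64),
    ("AMD EPYC 7401P", 1048, 24),
]

for name, price, cores in cpus:
    print(f"{name}: ${price_per_core(price, cores):.2f} per core")
```

As the table shows, the coefficient alone puts a 72-core Phi and a 6-core Bronze in the same league, which is exactly the problem.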

Of course, this brings to mind the Intel Xeon Phi series: x86-64 processors with up to 72 cores at an extremely low frequency of 1 to 1.7 GHz. In general, Intel likes such low frequencies; even in the entry-level line of general-purpose Xeon Bronze processors you can buy 6-core models running at 1.6 to 1.8 GHz, depending on the generation, and the price per core there is also very low. Look at a random sample of server processors and the price per core tells you nothing at all!

In general, by the time you are choosing a processor for a server, you should already know the load the machine will carry; and if a 64-core processor offers the best price per core but a 16-core machine is more than enough for your cloud, you will agree that you don’t want to pay extra for unnecessary cores.

2. Price per megahertz

A better method is to multiply the number of cores by their base frequency, since modern operating systems have long since learned to move tasks between cores without losing performance. The idea of evaluating your entire server, or your entire cluster, in terms of total megahertz is actively promoted by VMware, and it looks very reasonable, especially when you compare the load of virtual machines against the frequency capacity of the server or cluster. For example, if the average virtual machine consumes 500 MHz with peaks up to 1.7 GHz, you can roughly say that an 8-core processor at 3 GHz will carry about 30 virtual machines, depending on how synchronously their consumption changes.
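That capacity estimate can be sketched in a few lines. The 0.625 utilization factor below is my own assumption, chosen so the 8-core/3-GHz example lands on roughly 30 VMs with headroom left for their 1.7 GHz peaks; it is not a VMware recommendation:

```python
def vm_estimate(cores: int, base_ghz: float, avg_vm_mhz: float,
                utilization: float = 0.625) -> int:
    """Rough VM count: usable MHz capacity divided by average per-VM demand.

    `utilization` reserves headroom for VMs peaking above their average;
    0.625 is an illustrative assumption, not a vendor figure.
    """
    total_mhz = cores * base_ghz * 1000
    return int(total_mhz * utilization / avg_vm_mhz)

# 8 cores at 3 GHz, VMs averaging 500 MHz -> about 30 VMs
print(vm_estimate(8, 3.0, 500))
```

How synchronously the VMs peak decides whether that headroom is enough, which is why this is an estimate and not a guarantee.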

This method has enough disadvantages of its own. First, there is the question of Hyper-Threading, which presents the operating system with two to four logical cores per physical one, depending on the CPU. As a rule, these are useless in general-purpose tasks, and VMware even recommends disabling the technology in the BIOS; but if you count the frequency of logical cores, the comparison stops being fair.


The second drawback is Turbo Boost, which may raise the frequency of 1 core out of 32 or of all 32 at once, while the processor documentation may not spell out the boost restrictions. Comparing two generations of EPYC and one Threadripper, we found that one processor boosts only 2 cores, another 4, and the third all 32. Given this state of affairs, it is logical to count the nominal (base) frequency of the cores in a system with good cooling, assuming the processor will not go into throttling.

And yet, for all its disadvantages, this is today’s best coefficient for choosing a processor for a private cloud and virtualization.
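Putting the section’s caveats together, a price-per-megahertz sketch counts only physical cores at their base frequency, with prices and frequencies taken from the table in section 1:

```python
def price_per_mhz(price_usd: float, physical_cores: int, base_ghz: float) -> float:
    """Dollars per nominal megahertz of the processor.

    Logical (Hyper-Threading) cores and opportunistic Turbo Boost
    frequencies are deliberately excluded, per the caveats above.
    """
    return price_usd / (physical_cores * base_ghz * 1000)

# Figures from the price-per-core table in section 1
print(f"AMD EPYC 7662:         ${price_per_mhz(7300, 64, 2.0):.4f}/MHz")
print(f"Intel Xeon Gold 5320H: ${price_per_mhz(1555, 20, 2.4):.4f}/MHz")
```

Note how the ranking differs from plain price per core: once frequency enters the denominator, the low-clocked many-core parts lose some of their shine.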

3. Cores per watt

In European countries with high electricity prices, IT companies sometimes use the ratio between total TDP and the number of cores. This metric is not suitable for comparing AMD vs Intel, because AMD’s processor may be a SoC, while on the Intel side you also have to account for the chipset’s power consumption. Nor is it suitable for comparing “1 socket vs 2 sockets”, because an additional processor socket usually demands more serious cooling with powerful fans, which kills all the benefit.

This metric fits only one rare case with AMD processors, when CPUs of different architectures can be installed in the same server: a 32-core EPYC Naples or a 64-core EPYC Rome, with all other components unchanged. Intel does not spoil us with such gifts and, on the contrary, likes to change sockets with or without a reason, so I recommend not bothering with the ratio of power consumption per core.
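For that one same-platform case, the ratio is a one-liner. The core counts and TDP figures below are illustrative stand-ins for a Naples-vs-Rome upgrade; verify the exact SKUs against AMD’s datasheets:

```python
def cores_per_watt(physical_cores: int, tdp_watts: float) -> float:
    """Cores divided by TDP; only meaningful within one platform."""
    return physical_cores / tdp_watts

# Illustrative figures for a same-socket upgrade scenario (assumed, not quoted)
naples = cores_per_watt(32, 180)   # a hypothetical 32-core Naples part at 180 W
rome = cores_per_watt(64, 200)     # a hypothetical 64-core Rome part at 200 W
print(f"Naples: {naples:.3f} cores/W, Rome: {rome:.3f} cores/W")
```

The comparison only means something because everything except the CPU stays the same; across vendors or socket counts it collapses for the reasons given above.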

4. Benchmark points per dollar

The most logical ratio would seem to be the performance some benchmark delivers per dollar invested, but there are drawbacks here too. First, you need to clearly understand how your loads parallelize: does it make sense to test a 64-core CPU on a task that cannot run more than 8 threads?

Second, the more diverse the load in your cloud, the harder it is to pick a test that simultaneously accounts for CPU load, memory read intensity, and the load on disk and network subsystems. In the end, you will be comparing servers, not processors.

Third, there are typical cases where cheap processors sitting in the bottom rows of the performance table win such coefficients purely on their low price.

Fourth, different processors may carry different technologies for accelerating typical tasks: an interconnect built directly into the CPU, hardware encryption offloading, or various security-hardening algorithms. These may be entirely unavailable in the current version of your software yet appear in the next update, or, vice versa, get disabled, which is also not uncommon.

Of course, if your main task is rendering on the CPU, then Cinebench points per dollar is your main metric, but again, change the renderer and it may let you down.
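The third drawback above is easy to demonstrate with numbers. The benchmark scores below are purely hypothetical, invented only to show how a slow, cheap part “wins” the coefficient:

```python
def points_per_dollar(score: float, price_usd: float) -> float:
    """Benchmark score divided by processor price."""
    return score / price_usd

# Hypothetical scores and prices, for illustration only
budget = points_per_dollar(2500, 220)      # cheap entry-level CPU
flagship = points_per_dollar(30000, 7300)  # expensive flagship CPU

# The budget part wins the coefficient despite a far lower absolute score
print(f"budget: {budget:.2f} pts/$, flagship: {flagship:.2f} pts/$")
```

If your workload actually needs the flagship’s absolute throughput, the coefficient has just pointed you at the wrong CPU.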

Why aren’t complex coefficients like “W/core/channel/memory/cache” suitable?

In principle, if you are required to compare processors to justify a contract price, no one will be offended if you start introducing your own coefficients for frequency, cache, core count, and the number of memory channels. Any parameter with a quantitative characteristic, even the number of pins, can be placed in either the numerator or the denominator of the ratio. The main thing is to look thoroughly convinced of the importance of that ratio.
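To see why such home-made coefficients prove nothing, consider this sketch with two entirely hypothetical CPUs: moving a single parameter (cache size) between numerator and denominator flips which one “wins”:

```python
def coefficient(price: float, cores: int, ghz: float, channels: int,
                cache_mb: int, cache_in_denominator: bool) -> float:
    """A made-up 'complex coefficient' (lower = 'better').

    Whether cache sits in the numerator or the denominator is an
    arbitrary choice, and it decides the outcome of the comparison.
    """
    base = price / (cores * ghz * channels)
    return base / cache_mb if cache_in_denominator else base * cache_mb

# Two hypothetical CPUs: (price $, cores, base GHz, memory channels, cache MB)
a = (1000, 16, 3.0, 8, 64)
b = (1500, 32, 2.0, 8, 128)

for flag in (True, False):
    winner = "A" if coefficient(*a, flag) < coefficient(*b, flag) else "B"
    print(f"cache in denominator={flag}: CPU {winner} wins")
```

Either CPU can be made to win; the coefficient measures the author’s intent, not the hardware.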

In fact, neither the number of memory channels, nor the number of cores, nor their frequency, nor the cache size will tell you how well a particular processor fits your needs.


For a company’s private cloud, use the megahertz-per-dollar ratio, but count only the physical cores and be sure to take into account the actual behavior of Turbo Boost: how many cores can run at what frequency. This information can be found in reviews and tests. For render farms and similar predefined tasks that servers are allocated for, you can use the ratio of benchmark points per price.

All other coefficients, such as “core price”, “core/TDP ratio”, or “core frequency”, are so specific and applicable in such rare cases that you may as well invent your own and flaunt hitherto unknown numbers in your presentations and technical documents.