
A guide to migrating from Intel Xeon to AMD EPYC: debunking myths, avoiding pitfalls

Consider a typical situation: your company needs to expand its data center or replace outdated equipment, and your supplier suggests a server based on an AMD EPYC (Rome) processor. Whether you have already made the decision or are still thinking it over, this article is for you.

A friend of mine, a stock trader, likes to repeat that the biggest money is made when a trend changes, and that a successful trader must be able not only to see that change (which is actually easy), but also to accept it and adapt their way of working to the new reality (which is actually hard).

AMD fever in full swing

Today it is safe to say that the astrologers have proclaimed 2019 the year of AMD. The company celebrates one success after another: Ryzen desktop processors top the sales charts among enthusiasts, the chiplet layout makes it possible to ship a single-socket server with 64 cores, and EPYC server processors based on the Rome core set one world performance record after another, the most recent being in the Cinebench rendering benchmark; more than a hundred such records have already accumulated (a topic for a separate article, by the way). Here are some curious comparisons from AMD's marketing materials:

Even VMware in its blog confirms that 1-socket configurations have become powerful enough for certain tasks.

AMD shares have doubled since the beginning of the year, and flipping through the headlines you feel as if you are in another reality: nothing but good news about AMD. But what about Intel?

Yesterday's IT trendsetter is now forced to fend off security concerns, slash prices on its top-end processors, and invest a staggering $3 billion in discounts and marketing to stand up to AMD. Intel shares are not rising; the market is gripped by AMD fever.

What does AMD offer?

First of all, AMD entered the market with a serious proposition: "a single-socket server instead of a dual-socket one". For an ordinary customer this is essentially a promise: AMD processors have so many cores and are so much cheaper that if you previously bought a two-processor machine to get 64 threads, today a single-processor machine gives you 128 computational threads (64 physical cores with SMT), plus plenty of memory slots, disks and expansion cards. Even with the first generation of EPYC this means tens of thousands of dollars in savings on each server. Let's use last year's calculation of HPE ProLiant server configurations as an example.

At the configuration stage, the difference between single-processor and dual-processor configurations is about 30%, and it grows the simpler your server is. For example, if you do not install 2 TB SSDs in each server, the price difference between the Intel and AMD configurations becomes 1.5-fold. The most interesting part, however, is the cost of software licensing for the servers we have configured: today most server software platforms are licensed per processor socket.

Comparison of software licensing costs for selected servers

Product                                                   1 x AMD EPYC (HPE ProLiant DL325 Gen10 4LFF)   2 x Xeon Platinum (HPE ProLiant DL360 Gen10 4LFF)
Citrix Hypervisor Standard Edition                        $763                                           $1,526
Citrix XenServer Enterprise Edition                       $1,525                                         $3,050
Red Hat Virtualization with standard support              $2,997                                         $2,997
Red Hat Virtualization with advanced support              $4,467                                         $4,467
SUSE Linux Enterprise Virtual Machine Driver, Unlimited   $1,890                                         $3,780
VMware vSphere Standard with standard support             $1,847                                         $3,694
VMware vSphere Standard with advanced support             $5,968                                         $11,937

Furthermore: with the 64-core EPYC Rome you can get 256 threads in a single AMD server with two CPUs, versus 112 threads in an Intel server with a pair of Xeon Platinum 8276-8280 processors. We leave aside the 56-core Xeon Platinum 9200 series for the reasons discussed in this article. If, for example, your workload calls for 2,000 threads, you will get by with 8 dual-processor machines on the AMD EPYC 7742, versus 18 comparable servers each carrying a pair of Xeon Platinum 8280s. With fewer physical servers you start saving on host licenses, power consumption and even maintenance staff. A reasonable question arises: what about performance? First, let's look at AMD's marketing slides.
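Before the slides, a quick back-of-the-envelope check of the consolidation numbers above (a rough sketch only: it assumes SMT is enabled on both platforms and ignores per-core performance differences):

# 2 sockets x 64 cores x 2 threads (EPYC 7742)          = 256 threads per host
# 2 sockets x 28 cores x 2 threads (Xeon Platinum 8280) = 112 threads per host
echo $(( (2000 + 256 - 1) / 256 ))   # ceiling(2000 / 256) = 8 EPYC hosts
echo $(( (2000 + 112 - 1) / 112 ))   # ceiling(2000 / 112) = 18 Xeon hosts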

Looking at the presentation slides, keep in mind that in your particular case both Intel's and AMD's speed may differ, and the picture for EPYC (Rome) will be less rosy in single-threaded applications that depend on high clock frequency. The data-security criteria for both platforms, however, are exactly the same, and AMD has not only avoided discrediting itself in this area, it offers technologies such as SEV for physically isolating virtual machines and containers within a single host without recompiling applications. We have covered this technology in detail and recommend that you read our article. And yet, despite all of the above, a sense of understatement remains. Ordinary sysadmins and senior IT directors, when it comes to AMD, sometimes ask completely childish questions, and we answer them below.

Question No. 1: is my existing stack compatible with AMD?

Needless to say, all modern operating systems, including Windows Server, VMware ESXi, Red Hat Enterprise Linux and even FreeBSD, support AMD EPYC processors both for running applications and for virtualization. But some IT professionals have a question: will virtual machines created on a server with an Intel Xeon work under AMD EPYC? Let's just say that if everything were simple and smooth, this article would never have been written; even though the virtual machine talks to the processor and I/O components through the hypervisor layer, there are plenty of questions. At the time of writing, compatibility of the major operating systems installed on bare metal looks like this:

Operating system compatibility with AMD EPYC

OS / version               Compatibility with AMD EPYC 1 (Naples)                                                              Compatibility with AMD EPYC 2 (Rome)
Red Hat Enterprise Linux   7.4.6 (kernel 3.10), 8.0 (kernel 3.10)                                                              7.6.6, 8.02
Ubuntu Linux               16.04 (kernel 4.5), 17.04 (kernel 4.10), 17.10 (kernel 4.12), 18.04 (kernel 4.15), 19.10 (kernel 5.3)   18.04 (kernel 4.21), 19.10 (kernel 5.3)
Microsoft Windows Server   2012 R2 (2013-11), 2016 (2016-10), 2019                                                             2012 R2*, 2016*, 2019
VMware ESXi                6.7                                                                                                  6.5 U3, 6.7 U3
FreeBSD                    —                                                                                                    —

*With Microsoft, we can say it straight, everything is as usual: the newest Windows Server 2019 (builds released after October 2019) supports the new processors, as they say, out of the box. First-generation AMD EPYC is supported by all three versions of Windows Server without restrictions, but to install Windows Server 2016 on the 64-core AMD EPYC 77x2 you must first disable SMT in the BIOS, and if you decide to install Windows Server 2012 R2, you need to disable X2APIC in the BIOS. With EPYC processors of 48 cores or fewer everything is much simpler: nothing needs to be disabled in the BIOS. But if you suddenly decide to roll out Windows Server 2012 R2 on a machine with two 64-core EPYCs, remember that back in 2012 nobody imagined one server could be stuffed with 256 logical cores: this operating system supports only 255 logical processors, so one core goes unused.

Things are truly sad only with FreeBSD: in the most recent stable release, 11.3 from July 2019, there is not a word about AMD EPYC, not even the first generation. I have a very biased attitude towards FreeBSD: an operating system that does not support many network cards, that is updated every six months, and whose publishers cannot even produce a normal bootable ISO image that can simply be written with Rufus or Balena Etcher, tells me that in 2019 you must have very good reasons to run FreeBSD on bare metal. I would write this operating system off altogether, but such popular security gateways as pfSense and OPNsense, as well as ZFS-based storage systems (Nexenta, FreeNAS), run on FreeBSD. Let's check whether this means you will not be able to use these distributions on bare metal.

Testing FreeBSD compatibility with AMD EPYC

OS / version        Compatibility with AMD EPYC 1 (Naples)    Compatibility with AMD EPYC 2 (Rome)
pfSense 2.4.4 P3    Works fine                                Works fine
OPNsense 19.7       Works fine                                Works fine
FreeBSD 11.3        Does not work at all                      Does not work at all
FreeNAS 11.2 U6     Works fine                                Works fine

What does "works" even mean with respect to FreeBSD? Only FreeNAS saw the Intel X550 network card integrated into the motherboard right away and configured it via DHCP; with the network gateways, some dancing with a tambourine lay ahead.

A Nexenta representative recommended in a letter that we not waste our time and instead run their virtual product, Nexenta VSA, in a virtual machine under ESXi, especially since they even have a vCenter plug-in that simplifies storage monitoring. In general, FreeBSD since version 11 works fine under ESXi, KVM and Hyper-V. Before we move on to virtualization, let me record an intermediate result that will matter for our further research: when installing an operating system on AMD EPYC 2 (Rome), use the latest distributions of your operating systems, built after September 2019.

Question No. 2: what about compatibility with VMware vSphere?

Since AMD EPYC's native environment is the cloud, all cloud operating systems support these processors without complaint and without limitations. But it is not enough to simply say that it "starts and works". Unlike Xeons, EPYC processors use a chiplet layout. In the first generation (7001 series) there are four separate dies on a common CPU package, each with its own cores and memory controller, and a situation can arise where a virtual machine uses computing cores belonging to one NUMA domain while its data sits in memory DIMMs attached to the NUMA domain of another die, which creates extra load on the fabric inside the CPU. Software vendors therefore have to optimize their code for the peculiarities of the architecture. In particular, VMware has learned to avoid such skew when allocating resources to virtual machines; if you are interested in the details, I recommend reading this article. Fortunately, EPYC 2 on the Rome core does not have these subtleties thanks to its layout, and each physical processor can be presented as a single NUMA domain.
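If you want to see this difference for yourself, the quickest way is to look at the NUMA topology from a Linux shell on the host (a minimal sketch; it assumes the numactl package is installed and that the BIOS is left at its default NUMA-per-socket setting on Rome):

numactl --hardware       # EPYC 7001 (Naples): typically 4 NUMA nodes per socket
lscpu | grep -i numa     # EPYC 7002 (Rome) at default settings: 1 NUMA node per socket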

Those who are just starting to take an interest in AMD processors often ask: how will EPYC get along with competitors' products in the virtualization field? After all, in machine learning Nvidia still reigns supreme, and in network communications Intel and Mellanox (now part of Nvidia) do. I want to show one screenshot listing the devices available for passthrough into a virtual machine, bypassing the hypervisor. Given that AMD EPYC Rome has 128 PCI Express 4.0 lanes, you can install 8 graphics cards in a server and pass them through into 8 virtual machines to speed up TensorFlow or other machine learning packages.

Let's allow ourselves a small lyrical digression and set up our mini data center for machine learning using Nvidia P106-090 video cards, which have no video outputs and are designed specifically for GPU computing. Let evil tongues call it a "mining stub"; for me it is a "mini Tesla" that copes perfectly with small models in TensorFlow. When assembling a small machine learning workstation with desktop video cards, you may notice that a VM with one video card runs fine, but to make the whole setup work with two or more GPUs not designed for the data center, you need to change the initialization method of the PCI-E device in the VMware ESXi configuration file. Enable SSH access to the host, connect under the root account:
vi /etc/vmware/passthru.map

and in the opened file find the section

# NVIDIA

and make sure it contains the following line (instead of ffff there will be the IDs of your devices):

10de ffff d3d0 false

Then reboot the host, add the video cards to the guest operating system and power it on. Install and run Jupyter for remote access, à la Google Colab, and make sure that training of the new model runs on both GPUs.
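A quick way to confirm that both passed-through GPUs actually reached the guest is to check from the Ubuntu shell before launching Jupyter (a sketch; it assumes the Nvidia driver and a GPU build of TensorFlow 2.x are already installed in the guest):

nvidia-smi -L    # should list both P106-090 cards
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"    # should print two GPU devices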

Once I needed to quickly train 3 models, so I ran 3 Ubuntu VMs and passed one GPU into each; one physical server was thus computing three models simultaneously, which cannot be done with desktop graphics cards without virtualization. Just imagine: for one task you can use a single virtual machine with 8 GPUs, and for another, 8 VMs with 1 GPU each. But do not pick gaming video cards over professional ones: after we changed the initialization method in passthru.map, as soon as you shut down an Ubuntu guest OS with passed-through video cards, it will not start again until the hypervisor is rebooted. For a home or office this is still tolerable, but for a cloud data center with strict uptime requirements it no longer is.

But that is not all of the pleasant surprises: since AMD EPYC is an SoC, it does not need a south bridge, and the manufacturer delegates to the processor such pleasant functions as passing the SATA controller through to a virtual machine. Better still, there are two of them, so you can leave one to the hypervisor and hand the other to a virtual software-defined storage appliance.

Unfortunately, I cannot show SR-IOV working in a live example, and there is a reason for that; I will leave this pain for later and pour out my soul further down the text. This feature lets a single physical device, such as a network card, be shared across several virtual machines at once: for example, the Intel X520-DA2 allows one network port to be split into 16 virtual devices, and the Intel X550 into 64. You can also pass one adapter into the same VM several times to experiment with several VLANs. Yet somehow this feature does not see much use even in cloud environments.
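For reference, on ESXi the usual route to SR-IOV with Intel cards looks roughly like this (a hedged sketch only, since I could not get it working on my hardware; the parameter name follows VMware's documentation for the ixgben driver, and the number of VFs per port is just an example):

esxcli network sriovnic list                                      # which NICs report SR-IOV capability
esxcli system module parameters set -m ixgben -p max_vfs=16,16    # request 16 VFs per port, then reboot the host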

Question No. 3: what about virtual machines created on Intel?

I do not like simple answers to complex questions, so instead of just writing "it will work" I will complicate my task as much as possible. First, I will take a host with an Intel Xeon D-2143IT and install ESXi 6.7 U1 on it, not the most recent version of the hypervisor, while the AMD machine will run ESXi 6.7 U3. Second, as the guest operating system I will use Windows Server 2016, towards which I am even more biased than towards FreeBSD (you remember, I voiced my "fie" above). Third, under Windows Server 2016 I will run Hyper-V using nested virtualization and install another Windows Server 2016 inside it. In effect, I am emulating a multi-tenant architecture in which a cloud provider leases part of its server to a hypervisor that can itself be subleased or used for a VDI environment. The difficulty here is that VMware ESXi has to pass processor virtualization features through to Windows Server. It somewhat resembles GPU passthrough into a VM, only instead of a PCI board you pass through any number of CPU cores, and you do not even have to reserve memory: sheer delight. Of course, somewhere behind the scenes I have already moved a pfSense gateway (FreeBSD) and several Linux VMs from the Intel host to EPYC, but I want to be able to tell myself: "if this construction works, then everything will work". Now the most important part: I install all updates on Windows Server 2016 and shut the VM down from VMware VCSA. This is very important: the inner VM must be shut down properly, because otherwise the nested VM in Hyper-V will remain in the "Saved" state when Windows Server shuts down; on the AMD server it will not start, and you will have to discard it with "Delete saved state", which can lead to data loss.

I shut down the virtual machine and go to VCSA for the migration. I prefer cloning to migrating, and copy the Windows Server 2016 VM to the host with the EPYC processor. After the compatibility validation passes, you can add a few virtual processor cores and make sure the check boxes for nested virtualization are enabled. The process takes a few minutes; I power on the virtual machine, now on the AMD server, wait while Windows slowly boots, and in Hyper-V Manager start the nested guest operating system. Everything works (HPE explains in detail why it works in its document on the transition from Intel to AMD, available at the link).
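As far as I can tell, the "check boxes for nested virtualization" in the vSphere client correspond to a single flag in the VM's .vmx file (the GUI wording is "Expose hardware assisted virtualization to the guest OS"), so if you script your clones you can simply verify that the file contains:

vhv.enable = "TRUE"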

And if you leave aside some edge cases of container virtualization, everything is the same with Docker and Kubernetes. For clarity, I will transfer the Ubuntu 18.04 VM on which this website runs inside a Docker container. As soon as the transfer is finished, I start the new virtual machine with its network disconnected and wait for everything to boot. Once the Redis caches have loaded on the forum clone, I quickly disconnect the network on the old virtual machine and enable it on the new one. The offline window is therefore about 10-15 seconds, but there is also the gap from the start of the copy to the start on the new host, which even between two SSDs comes to about 5 minutes. Anyone who created a topic or wrote a reply during those 5 minutes remains in the past, and it is futile to explain to an anonymous user that we have moved from an 8-core Xeon to a 48-core EPYC: our forum, like the whole Docker stack, runs on one core, and users will not forgive us a 5-minute gap.

Question No. 4: what about live migration?

Both VMware and Red Hat have technologies for live VM migration between hosts. In this process the virtual machine does not stop for a second; during the transfer the hosts synchronize data until they are sure there is no gap between the state of the original and the clone and the data is consistent. This technology lets you balance load between nodes of the same cluster, but however hard you try, virtual machines cannot be transferred "live" between Intel and AMD.

  • VMware vSphere contains functionality that allows you to mask (in effect, make inaccessible to virtual machines) extended instruction sets, bringing all servers in a vSphere cluster to a common denominator, which makes it possible to vMotion workloads between servers with processors of different generations. In theory, you could mask all instructions not included in the AMD64 specification, which would allow virtual machines to be transferred "live" between any servers with AMD64-compatible processors. The result of such a scenario is processors with maximally reduced capabilities: all additional instruction sets (AES-NI, SSE3 and above, AVX in any form, FMA, INVPCID, RDRAND, etc.) become unavailable to virtual machines. This leads to a very noticeable drop in performance, or even to applications in the virtual machines on such servers not working at all.

  • For this reason, VMware vSphere does not support vMotion between servers with processors from different manufacturers. Although technically it can be done in certain configurations, vMotion is officially supported only between servers with processors from the same manufacturer (different generations are allowed).

To put it bluntly, the point is that the CPU driver exposes extended processor instructions to the virtual machine, and Intel and AMD have different instruction sets. A running virtual machine counts on those instructions even if it does not use them: once started with, say, AES-NI (Intel) support, you cannot swap that feature for AMD's implementation on the fly. For the same reason, live migration from new processors to old ones may be unsupported even between Intel and Intel, or between AMD and AMD.

  • For vMotion to work between servers with different generations of processors from the same manufacturer, the EVC (Enhanced vMotion Compatibility) feature is used. EVC is enabled at the cluster level: all servers in the cluster are automatically configured so that only the instruction sets corresponding to the processor type selected when EVC was enabled are presented to the virtual machines.

So yes, it is possible to disable advanced instructions and limit the virtual machine's access to CPU capabilities, but first, this can greatly reduce the VM's performance, and second, the virtual machine itself has to be stopped, its configuration file changed, and then started again. And if we can stop it anyway, why bother with these config dances: just transfer it in the powered-off state and be done with it.

If you dig through the KVM documentation, you may be surprised to find that migration between Intel and AMD is declared possible, although it is not explicitly stated that it happens "live", and this question periodically pops up online. Here, for example, a Reddit user claims that under Proxmox (a Debian-based hypervisor) his virtual machines did migrate "hot". Red Hat, however, is categorical in its vision: "there is no live migration, period".

  • On the Red Hat Virtualization (RHV) platform this operation is technically impossible, because the Intel and AMD extended instruction sets differ, and servers on Intel and AMD cannot be added to the same RHV cluster. This limitation exists in order to maintain high performance: a virtual machine runs fast precisely because it uses features of a particular processor that do not overlap between Intel and AMD.

  • On other virtualization platforms, migration within a cluster is theoretically possible, but it is usually not officially supported, since it has a serious impact on performance and stability…
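Returning to the KVM note above: what QEMU/KVM actually lets you do is pin the guest to a lowest-common-denominator virtual CPU model in the libvirt domain XML, which is why cross-vendor migration is at least declared possible there. A minimal sketch (qemu64 is the generic model; exactly as in the VMware case, it hides AVX, AES-NI and friends from the guest):

  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>qemu64</model>
  </cpu>

With a vendor-specific model such as EPYC or Skylake-Server in that element the guest stays fast, but migration is once again locked to one manufacturer.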

Of course, there are crutches such as Carbonite. Judging by the description, the software takes snapshots followed by synchronization, which lets it migrate a VM not just between hosts with different processors, but even between clouds. The documentation puts it mildly: the migration achieves "near-zero downtime", that is, the machine still gets shut down. And do you know what surprises me personally? Around this absence of live migration there is a complete information vacuum: nobody shouts at VMware demanding, "Give us vMotion compatibility between Intel and AMD, and do not forget ARM!", and that is because "five nines" uptime is needed by only one category of consumers.

Question No. 5: why is nobody bothered by question No. 4?

Suppose we have a bank or an airline that has seriously decided to reduce OPEX by saving on licenses and moving from Xeon E5 to EPYC 2. In such companies fault tolerance is critical, and it is achieved not only through hardware redundancy and clustering, but also at the application level. In the simplest case this means some MySQL runs on two hosts in master/slave mode, while a distributed NoSQL database generally tolerates the loss of one of its nodes without interrupting its work. So there is no problem stopping the standby service, moving it wherever you like and starting it again. And the larger the company and the more important fault tolerance is to it, the more flexibility its IT resources allow. Wherever IT is treated as a service, the service software itself is made redundant, no matter where or on what platform it runs: in Chicago on VMware or in Nicaragua on Windows.

Cloud providers are a different matter entirely. For them the product is a running virtual machine with 99.99999% uptime, and shutting down a client VM for the sake of moving to EPYC is out of the question. Yet they are not tearing their hair out over the lack of live migration between Intel and AMD either, and the reason lies in the philosophy of building a data center.

Data centers are built on the principle of "cubes", or "islands". Say one "cube" is a computing cluster of absolutely identical servers plus the storage that serves it. Inside this cube there is dynamic migration of virtual machines, but the cluster never expands and VMs never migrate from it to another cluster. Naturally, a cluster built on Intel will always remain Intel; at some point it will either be upgraded (all servers replaced with new ones) or decommissioned, with all VMs moving to another cluster. But upgrades or configuration changes inside the "cube" do not happen.

  • I consider it a competent approach to base sizing on the definition of a "cube" for a compute node and a storage node. A hardware platform with a certain number of CPU cores and a given amount of memory is chosen as the compute cube. Based on the characteristics of the virtualization platform, it is estimated how many "standard" VMs can run on one "cube". Clusters/pools, in which VMs with certain performance requirements can be placed, are then assembled from such "cubes".

  • The basic principle of sizing is that all servers within a resource pool have the same capacity and, accordingly, the same cost. Virtual machines do not migrate outside the cluster, at least not when running in automatic mode.

And if a cloud provider wants to save on licenses, or simply on hardware per CPU core, it will just deploy a separate "cube" of a dozen or two servers on AMD EPYC, configure the storage and put it into operation. So even cloud users do not care that a client virtual machine cannot be transferred from Intel to AMD: they simply do not have such tasks.

Bonus question: is there a fly in the ointment? We have a whole bucket

Let's be honest: all the time AMD was absent from the server market, Intel was earning the status of a supplier of bulletproof solutions that work no matter what, while AMD was neither resting on its laurels nor holding a defensive line: it simply skipped all the lessons the market was handing out. When managers in the smoking room cursed the old Opterons, AMD had nothing to answer with, and the fears that took root back then still push us to blame any problem on an AMD server on AMD itself: even if you just pulled an SSD, the power went out, or a new update hung the host, the verdict is the same: "should have bought Intel". It would take a Moses to lead the people through the desert for 40 years to rid us of these prejudices, and it is too early for him to come.

Let's start with the good. AMD always tries to keep processor sockets alive for a long time; unlike Intel, it does not force you to throw out the motherboard with every new generation of processor. And although processor upgrades are not customary in the enterprise segment, this makes it possible to unify the server fleet across planned purchases spread out over time. While Dell EMC and Lenovo launched their new servers for EPYC Rome without support for the first-generation EPYC 7001 and will apparently support the next generation, EPYC Milan, HPE, albeit with restrictions, already allows both of the first two generations of EPYC to be installed in its DL325 and DL385 Gen10 servers.

From the circuit-design point of view, a motherboard for AMD EPYC is just a socket (or two) and slots with traces: there is no south bridge, there are no PCI Express switches, and all the extra chips are peripheral controllers for Ethernet, USB 3.x and the BMC. There is nothing on the motherboard to break, it is simple as a drum, and the concept of a powerful single-processor AMD server is especially impressive. But if you look at the market of self-assembled solutions, it is as if AMD could not step around one particular puddle. The fact is that self-assembly boards for EPYC are not always compatible with EPYC 2, and it is not about DDR4-3200 memory support or the new PCIe standard: supporting EPYC Rome requires a BIOS chip of at least 32 MB, while platforms for the first generation of EPYC very often carry 16 MB chips. Fortunately, swapping a flash chip is not resoldering a socket and is easy to do, but the situation itself is still unpleasant. So, if you look closely, in addition to BIOS-chip replacement procedures, Supermicro and second-tier manufacturers are quietly launching either new boards and platforms or updated revisions of previously released boards.

In addition, opening the Known Issues section of the vSphere 6.7 U3 release notes, we see two problems with EPYC Rome. Of course, Intel is not infallible either: the SR-IOV function with the ixgben driver (our test Intel X520-DA2 network card) can fail, leading to a host reboot. Bravo! And this is not some week-old palm-sized processor: it is a 10-year-old card that sits in 4 out of 5 servers with 10-gigabit networking.

For me, all of the above means that if we look at the trinity of Intel, AMD and VMware, there is no "good guy" here, and nobody can give a 100% guarantee that the stack working today will keep working after the next update, whether on Intel, AMD or ARM. Well, if we live in a world where any question of reliability is solved by redundancy at the application level, then what difference does it make whether a company spent 10 years grinding it out in the server market, or built the image of a mega-supplier that collapsed with the first announcement of Meltdown/Spectre and keeps flying into the abyss with only the wind whistling in its ears?

Conclusion: Recommendations to IT Directors

For a long time yet, experienced professionals with certificates and fully filled-out profiles on forums, blogs and social networks will be saying that it is too early to switch to AMD, and that whatever failures pursue Intel, behind it stand two decades of dominance in the data center, an overwhelming market share and absolute compatibility. Conveniently, most people follow this imposed model of behavior and do not see that the market has turned and the strategy needs to change. My stock-trader friend says that the more people bet on a decline, the harder the asset shoots up, so all a competent specialist has to do is pick up a calculator and count.

AMD is attractive in a simple comparison of server cost: its performance in modern multithreaded applications lets you significantly consolidate servers and even move to single-socket machines, reducing the overall power consumption of your racks. The OPEX picture, with annual fees for the licenses of many applications, is also well worth analyzing.

AMD's marketing is inefficient, and in the cloud market the card of increased virtual machine security through VM memory isolation, which we wrote about earlier, has not been played at all. In other words, there are already opportunities to gain advantages with EPYC in both price and functionality, whether you are consolidating a data center or building a cloud from scratch.

After you have counted the prices, you will of course want to test things hands-on. You do not need to bring a server in for testing: you can rent a physical machine from a cloud provider and deploy your applications on bare metal. When testing non-compute applications, note that on AMD all cores run at turbo frequencies above 3 GHz. And if you do have the hardware at hand, try setting raised or lowered CPU power-consumption thresholds in the BIOS; this is a useful feature for large data centers.
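To see for yourself whether all cores really hold more than 3 GHz under full load on a rented bare-metal machine, something along these lines is enough (a sketch; it assumes a Linux host with stress-ng installed):

stress-ng --cpu $(nproc) --timeout 120s &                    # load every logical core
sleep 60
grep "^cpu MHz" /proc/cpuinfo | sort -t: -k2 -n | head -5    # the slowest cores under load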