You have probably seen epic photos of data centers many times: racks of servers stretching to the horizon, powerful generators, and industrial air-conditioning systems. Many people like to show off such shots, but I have always wondered how data center operators choose and configure their equipment. Why, for example, do they install server “A” instead of “B”? What do they optimize for: VM speed, or the number of VMs per rack? And how do they plan computing capacity, especially in today's world of hybrid clouds and artificial intelligence?
What comes first: VM speed, or the number of VMs per host?
The speed of a VM, without any specifics, is an abstract concept that cannot be evaluated until you define the criteria by which it will be measured. Typical criteria are the response time of an application running on the VM, or the response time of the disk subsystem that hosts the virtual machine's data. If a customer wants to deploy everyone's favorite SAP on a virtual machine, they need gigahertz on the principle of “the more, the better.” Such workloads get clusters with high-frequency processors and a small number of cores, since multithreading and the performance of the memory and disk subsystems are not as important for SAP as they are for a heavily loaded DBMS.
The “right” service provider separates different hardware platforms and disk resources into distinct clusters and pools, and places “typical” customer virtual machines in the appropriate pools.
It is impossible to predict in advance how many virtual machines will run SAP, or where disk resources will be used so intensively that flash storage needs to be planned for. This understanding comes with time, as statistics accumulate during the operation of the cloud, which is why providers with a long track record in the market are valued more highly than startups. Each provider has its own platform for VM deployment and its own approach to expansion planning and procurement.
In my view, a competent approach to sizing starts with defining a “cube” for the compute node and the storage node. A hardware platform with a certain number of CPU cores and a specified amount of memory is chosen as the compute cube. Based on the specifics of the virtualization platform, you estimate how many “standard” VMs can run on one cube. These cubes are then assembled into clusters/pools where VMs with particular performance requirements can be placed.
A “standard” virtual machine is a VM of a fixed configuration, for example 2 vCPU + 4 GB RAM or 4 vCPU + 32 GB RAM. Sizing also accounts for a reserve of RAM on the hypervisor (say, 25%) and the ratio of allocated vCPUs to the total number of physical CPU cores in the cube (CPU over-provisioning). Once the reserve boundary within a pool is reached, planning for the next equipment purchase begins.
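The capacity arithmetic described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any provider's actual tool: the function name, the cube dimensions, and the over-provisioning ratio of 4:1 are all assumptions for the example; the 25% RAM reserve and the VM profiles come from the text.

```python
def vms_per_cube(cube_cores, cube_ram_gb,
                 vm_vcpu, vm_ram_gb,
                 ram_reserve=0.25, cpu_overprovision=4.0):
    """Estimate how many identical 'standard' VMs one compute cube can host."""
    # RAM left for VMs after the hypervisor reserve (25% in the text's example)
    usable_ram = cube_ram_gb * (1 - ram_reserve)
    # vCPUs available given the assumed vCPU:pCPU over-provisioning ratio
    usable_vcpus = cube_cores * cpu_overprovision
    by_ram = int(usable_ram // vm_ram_gb)
    by_cpu = int(usable_vcpus // vm_vcpu)
    # The tighter of the two resources determines capacity
    return min(by_ram, by_cpu)

# Hypothetical cube: 32 cores, 512 GB RAM, hosting 4 vCPU + 32 GB "standard" VMs
print(vms_per_cube(cube_cores=32, cube_ram_gb=512, vm_vcpu=4, vm_ram_gb=32))  # → 12
```

Here RAM is the limiting resource (384 GB usable after the reserve fits 12 such VMs, while the over-provisioned CPU would allow 32), which is exactly the kind of imbalance this sizing exercise is meant to expose before the hardware is purchased.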