
Huawei OceanStor 2200 V3 review - an entry-level storage system with block virtualization

Huawei data storage systems have become widespread relatively recently, yet today the Chinese manufacturer competes successfully with brands such as Dell EMC, NetApp and HPE. Following the principle of "more for less money", Huawei brings technologies to its entry-level storage that competitors reserve for their top models.


The OceanStor series is designed to store large volumes of enterprise data, and even the youngest model, the OceanStor 2200 V3, already offers scalability up to 300 drives with a total capacity of 2.4 PB, both file and block access, FC, FCoE and Ethernet interfaces with iSCSI support, SAS 12 Gbps links between the head unit and expansion shelves, a dual-controller design without a single point of failure, and block-level virtualization of hard drives, a technology that Huawei proudly calls RAID 2.0+.

RAID 2.0+ - block HDD virtualization

A traditional RAID array can be compared to a wall in which every brick is a hard drive or SSD. The bricks should all be of one type and size; pull out one or two and the wall shakes, pull out three and even RAID 6 collapses, taking all the stored data with it. In a traditional RAID array it makes no sense to mix 7200 and 10,000 RPM disks; after a "brick" is replaced, rebuilding the array can take days; all disks inside the array are equal, so if you want to split data into "cold", "warm" and "hot" tiers, you must dedicate SSDs to the "hot" data and build a separate RAID array on 10/15K SAS HDDs for the "warm" data. On top of that you still need to reserve two or three disks of each type for hot spares. These are the realities of traditional RAID, and they have to be tolerated even in very expensive storage systems.

It is quite another thing when you take completely different hard drives and/or SSDs and divide their space into equal blocks of, say, 128 kilobytes. From these blocks you then assemble a RAID array using any of the traditional schemes, from a simple RAID 1 mirror to RAID 60. In this case the controller operates not on physical media but on the space inside them, and that opens up truly vast possibilities. The simplest option is to create several RAID arrays of different types simultaneously on a single pool of hard disks. That alone is useful for carving out a small fast disk group, but the technology goes further: the Huawei OceanStor 2200 V3 controller itself can divide data into "hot" and "cold" and move it within a single RAID array to faster media, such as 15K HDDs or SSDs. In the developer's terminology, RAID 2.0+ divides disk group space into chunks, chunk groups and extents. These terms do not appear in the storage system's settings, so if you are curious how it works, have a look at the slides from the manufacturer's presentation.
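To make the chunk and chunk-group idea more concrete, here is a toy Python sketch of block-level virtualization. The 128 KB chunk size follows the example above; the pool layout, function names and stripe-picking logic are our own illustration, not Huawei's internals.

```python
# Toy model of block-level virtualization (RAID 2.0+ style):
# every disk, whatever its size, is sliced into equal chunks,
# and RAID stripes are assembled from chunks, not whole disks.

CHUNK_SIZE = 128 * 1024  # bytes; the example figure from the text

def slice_into_chunks(disks):
    """disks: {disk_id: capacity_bytes} -> list of (disk_id, chunk_index)."""
    pool = []
    for disk_id, capacity in disks.items():
        for idx in range(capacity // CHUNK_SIZE):
            pool.append((disk_id, idx))
    return pool

def make_chunk_group(pool, width):
    """Pick `width` free chunks, each from a different disk,
    to form one RAID stripe (a chunk group)."""
    group, used_disks = [], set()
    for chunk in pool:
        if chunk[0] not in used_disks:
            group.append(chunk)
            used_disks.add(chunk[0])
        if len(group) == width:
            break
    if len(group) < width:
        raise RuntimeError("not enough disks for this stripe width")
    for chunk in group:
        pool.remove(chunk)
    return group

# Mixed-size disks coexist in one pool:
disks = {"hdd0": 4 * CHUNK_SIZE, "hdd1": 4 * CHUNK_SIZE,
         "hdd2": 2 * CHUNK_SIZE, "ssd0": 2 * CHUNK_SIZE}
pool = slice_into_chunks(disks)
stripe = make_chunk_group(pool, width=4)   # e.g. a RAID 5 (3+1) stripe
print(len(pool))   # 12 chunks total, 4 consumed by the stripe -> 8 left
```

The point of the sketch is that the stripe spans four *chunks* on four different disks, while the rest of the same disks stays free for other arrays or tiers.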

The reliability of the resulting arrays is a point of pride for Huawei. Imagine: not only does the rebuild of a 12-disk array take 30-40 minutes, and not only can you do without a hot-spare disk (the hot spare is not a physical disk but the free space of the pool), but a traditional RAID 5 will survive the loss of two, three, four or even nine hard disks, as long as free space remains. Each time an HDD fails, the OceanStor 2200 V3 controller automatically redistributes data to unoccupied areas of the remaining drives, and 30-40 minutes later the array is alive and well again; having lost one HDD, it is ready to survive the failure of the next. Let's put this survivability to the test!

Our lab received a Huawei OceanStor 2200 V3 configured with 12 NL-SAS hard drives of 2 TB each. For testing, all disks were combined into a common pool, within which a RAID 5 was built with one virtual hot-spare disk. The usable capacity of the array was 20 TB, within which 10 LUNs with a total volume of 9.6 TB were created. We then disabled the hard drives one by one, measuring the recovery time of the array.


The system sustained the failure of 6 disks in a 12-disk RAID 5: after the fifth HDD broke, the array was still restored to the online state, but with the sixth disconnected drive the free space ran out, the rebuild completed only partially, and the array was left in a degraded but working condition.

What to do in this case? The answer is obvious: by deleting one of the LUNs we freed up additional room, and the reconstruction process resumed. At this point we decided to stop the test, because losing even 50% of the hard disks in a storage system practically never happens in real life, yet the Huawei OceanStor 2200 V3 can survive such a scenario, as long as the disks do not fail simultaneously.

Automatic shrinking of thin LUNs

Another interesting technology is SmartThin Data Shrinking, which optimizes the space occupied by LUNs. Essentially it is a development of the thin-provisioning principle for logical volumes, with the difference that the storage controller automatically detects all-zero blocks of data and excludes them from the logical disk. To see how it works, we created a 10 GB test LUN, wrote 9.5 GB of data to it and then immediately deleted the data.

Even after its contents were emptied, the LUN occupied only 10% of its maximum volume on disk. Recall that traditional thin-provisioned LUNs tend to stay at their maximum allocated size even after their contents are removed; against that background the result of the Huawei OceanStor 2200 V3 is impressive, because once you have "inflated" a logical drive and then removed the excess data from it, you do not need to recreate the LUN to free up space on the storage. Data Shrinking does not work on the fly and requires some time: in our case, space was released at a rate of about 1 GB/min.
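The idea behind the feature can be sketched as follows: a thin LUN keeps a physical block only if it holds non-zero data, and a background scan reclaims blocks that have been zeroed out. This models only the concept, with an invented 4 KB block size, not Huawei's implementation.

```python
# Conceptual sketch of SmartThin Data Shrinking: allocated blocks
# that contain only zeros are detected and returned to the pool.

BLOCK = 4096  # illustrative block size in bytes

class ThinLUN:
    def __init__(self, size_blocks):
        self.size_blocks = size_blocks
        self.blocks = {}  # block_index -> bytes; absent = unallocated (zeros)

    def write(self, index, data):
        assert len(data) == BLOCK
        self.blocks[index] = data

    def read(self, index):
        # Unallocated blocks read back as zeros, as in any thin volume.
        return self.blocks.get(index, b"\x00" * BLOCK)

    def shrink(self):
        """Reclaim allocated blocks that contain only zeros."""
        zeroed = [i for i, d in self.blocks.items() if d == b"\x00" * BLOCK]
        for i in zeroed:
            del self.blocks[i]
        return len(zeroed)

lun = ThinLUN(size_blocks=1024)
for i in range(100):
    lun.write(i, b"\xab" * BLOCK)     # real data inflates the thin LUN
for i in range(90):
    lun.write(i, b"\x00" * BLOCK)     # "deletion": blocks overwritten with zeros
print(lun.shrink(), len(lun.blocks))  # 90 blocks reclaimed, 10 remain
```

Reads of the reclaimed blocks still return zeros, so from the host's point of view nothing has changed, while the pool gets its space back.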

Mounted snapshots

Today every storage vendor offers snapshot features, and we would not dwell on the HyperSnap technology if not for one difference: in the OceanStor 2200 V3 you can mount each snapshot as a full-fledged LUN without affecting the original data. This is very convenient for developing forks of a current application. Say you want to rework some software: create one, or better two, snapshots, connect one of them as a LUN and work inside it as if you were working with the original data on a logical disk. None of your changes inside that snapshot will affect other snapshots or the original LUN.

When the task is done, simply delete the unneeded snapshot, or roll it back to the original LUN to make your fork the main data source for the updated application.
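A minimal copy-on-write sketch shows why a mounted, writable snapshot cannot disturb its origin: the snapshot is just an overlay of changed blocks on top of the source LUN. The class names and rollback helper are our own illustration; HyperSnap's on-disk format is not public.

```python
# Copy-on-write snapshot as an overlay over the source volume.

class Volume:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # index -> data

class Snapshot:
    def __init__(self, origin):
        self.origin = origin
        self.overlay = {}                  # blocks changed after the snap

    def read(self, index):
        # Reads fall through to the origin unless overridden locally.
        return self.overlay.get(index, self.origin.blocks.get(index))

    def write(self, index, data):
        self.overlay[index] = data         # copy-on-write: origin untouched

    def rollback_into_origin(self):
        """Make this fork the new main data source."""
        self.origin.blocks.update(self.overlay)

lun = Volume({0: "v1-boot", 1: "v1-data"})
snap_a = Snapshot(lun)      # mounted as its own LUN for development
snap_b = Snapshot(lun)      # second snapshot, e.g. kept as a safety copy
snap_a.write(1, "v2-data")  # edits are visible only inside snap_a
print(lun.blocks[1], snap_b.read(1), snap_a.read(1))
# -> v1-data v1-data v2-data
```

Calling `snap_a.rollback_into_origin()` then promotes the fork's changes into the original LUN, which is exactly the "make your fork the main data source" scenario above.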

The three functions listed above are an example of how software alone changes our understanding of what a data storage system can do. I confess they impressed me so much that I broke my usual review routine and put some test results ahead of the hardware description; now it is time to make amends and see what the OceanStor 2200 V3 is like "in the hardware".

The design of the OceanStor 2200 V3

Structurally, the Huawei OceanStor 2200 V3 follows the dual-controller active-active scheme traditional for SAN devices, with full duplication of components. The chassis comes with either 12 bays for 3.5-inch media or 25 bays for 2.5-inch drives and SSDs. The storage system works only with Huawei-branded drives, and the vendor currently offers only SAS 12 Gbps models, including high-capacity 7200 RPM NL-SAS drives and SSDs.

At the configuration stage it is worth bearing in mind that there are no 3.5" SAS hard drives with 10,000 or 15,000 RPM spindles for the OceanStor 2200 V3; such drives come only in the 2.5-inch format. So if the head unit has 3.5" bays, you will have to buy an SFF expansion shelf for high-speed SAS drives. SSDs, by contrast, are available in both formats with the same capacities.

By default, the OceanStor 2200 V3 comes with two controllers, each with one active slot for a PCI Express 3.0 x4 host adapter module. The controller itself carries four 1 Gbps Ethernet ports with link-aggregation support, which in itself provides decent outbound bandwidth and lets you do without additional interfaces if performance is not critical.

The controllers use 16-core processors, but the manufacturer discloses neither the CPU model nor its characteristics. To make the RAM non-volatile, CBU modules containing powerful supercapacitors are installed. In case of a power failure, the contents of the cache are written to an SSD inside the controller and can be stored there for as long as necessary, until the next power-on. The CBU modules are designed for the entire life of the device and require no maintenance. The amount of memory also matters: the OceanStor 2200 V3 can work as a NAS, using the NFS, CIFS, HTTP and FTP file-access protocols, but this functionality requires a total of 32 GB of storage memory. So if you need file access, order the Huawei OceanStor 2200 V3 with the maximum amount of memory.

For management, each controller carries no fewer than three RJ45 ports. The first two provide access to the web interface and to the command-line console; they differ in that the IP address of the first can be changed, while the second is hard-coded so that a service engineer never has to hunt for it. The third is a COM port for direct terminal connection (an RJ45-DB9 cable is supplied).

If the project requires a high-speed connection, Huawei offers 8 types of host adapter modules for both fiber and copper environments:

  • 4-Port 1Gb ETH I/O module (BASE-T)
  • 4 Port Smart I/O module (SFP+, 8Gb FC)
  • 4 Port Smart I/O module (SFP+, 10Gb ETH/FCoE (VN2VF) FC/Scale-out)
  • 4 Port Smart I/O module (SFP+, 16 Gb FC)
  • 2 Port Smart I/O module (SFP+, 10Gb ETH/FCoE (VN2VF) FC/Scale-out)
  • 2 Port Smart I/O module (SFP+, 16Gb FC)
  • 4 Port 10Gb ETH I/O module (RJ45)
  • 8 Port 8Gb FC I/O module (with built-in transceivers)

Our test unit had two host adapters with part number V3L-SMARTIO8FC, the 4-port Smart I/O modules with 8 Gb FC transceivers. All Smart I/O adapters support 8/16 Gb FC and 10 Gb Ethernet; the speed is limited only by the supplied transceivers. That is, having bought an 8-gigabit Smart I/O host adapter, you can later swap the SFP modules for 16-gigabit ones and get higher interface performance, but keep in mind that over Fibre Channel all ports of the host card must run at the same speed. When connected via iSCSI, iWARP/RDMA direct memory access is supported, as the VMware ESXi hypervisor happily reports when the storage is attached.

All components are cooled by two twin fans installed in the power supply units. The absence of additional fans reduced the depth of the case to 488 mm. That is good for the environment and the energy bill, but more importantly, the Huawei OceanStor 2200 V3 is so quiet that it can be installed in a closed telecommunications cabinet in the same room as the staff.

There is nothing interesting on the front panel, and the best thing you can do is cover it with the decorative bezel, which leaves visible a status display, a power button and a glowing device ID; the latter is very useful when you are standing in front of several identical units and cannot tell them apart. And indeed, Huawei head units and disk shelves look like twin brothers; since we have mentioned the shelves, let's talk about scaling.

Vertical scaling

Up to 13 disk shelves can be connected to the OceanStor 2200 V3 head unit via the SAS 12 Gbps interface, for which two MiniSAS HD ports are installed on each controller. Both 24-disk 4U LFF expansion modules and 25-disk 2U SFF shelves are supported. As already mentioned, the total number of disks per head unit can reach 300, with a total capacity of 2.4 PB.

Scale-out is not available in the OceanStor 2200 V3 series; the most you can count on is installing a second, identical storage system and configuring mirrored logical volumes or data replication between the two, including in real time.

Web interface

The web interface is built on Adobe Flash, so everything looks modern and polished, and many menu items are conveniently accessible via the right mouse button. We will connect to the storage via iSCSI, but the FC configuration procedure is the same.

We start by creating a disk pool, within which RAID arrays can be defined and then carved into LUNs. We put all 12 disks into a common pool and build RAID 5 on it, allocating one virtual disk for hot spare. Next you have to decide how LUNs are bound to clients, and here you need to understand a rather intricate hierarchy of which client is given which LUN.

Logical disks and their snapshots are grouped together, and the groups are then presented to hosts. In the storage system's terminology, a host is a logical grouping of iSCSI initiators that may belong to different physical or virtual servers. It is the initiator's membership in a host that determines whether the client sees a LUN or not. The hierarchy could have ended at the host level, but Huawei decided that hosts should also be combined into groups, and that these host groups and LUN groups should be linked to each other.

Each physical port of the Smart I/O controller is a separate iSCSI target with its own IP address. To bind a LUN to an iSCSI target, you create a host, add the clients' active iSCSI initiators to it, combine hosts into a host group, create a LUN group, and then configure the mapping on the Mapping tab. Incidentally, a mapping can be rigidly tied to a physical port, or rather, again, to a group of ports. If you are used to simpler storage systems you will have to exercise your brain a little, but once you accept this hierarchy as a given, you can create a hundred LUNs in batch mode in a few seconds and bind them to your client machines with almost no restrictions.
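The hierarchy is easier to digest as a few lines of code. This toy model uses invented names (the IQNs, host and LUN labels are placeholders) and collapses port groups for brevity: initiators belong to a host, hosts to a host group, LUNs to a LUN group, and a mapping view ties the two groups together.

```python
# Toy model of the host / host-group / LUN-group mapping hierarchy.

class MappingView:
    def __init__(self, host_group, lun_group):
        self.host_group = host_group   # {host_name: [initiator IQNs]}
        self.lun_group = lun_group     # [LUN names]

    def luns_visible_to(self, iqn):
        """An initiator sees the LUN group iff its host is in the view."""
        for initiators in self.host_group.values():
            if iqn in initiators:
                return list(self.lun_group)
        return []

view = MappingView(
    host_group={"esxi-1": ["iqn.1998-01.com.vmware:esxi-1"],
                "esxi-2": ["iqn.1998-01.com.vmware:esxi-2"]},
    lun_group=["LUN_vm_01", "LUN_vm_02"],
)
print(view.luns_visible_to("iqn.1998-01.com.vmware:esxi-1"))
print(view.luns_visible_to("iqn.2004-04.com.qnap:nas"))  # unknown initiator: []
```

An initiator that is not a member of any host in the view gets nothing, which is precisely how LUN masking works in this hierarchy.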

The built-in performance monitoring is a real pleasure: it displays the intensity of requests to each physical port, disk pool, RAID array and individual LUN. Moreover, a separate capacity-forecasting tool shows trends in disk-space consumption and load changes, allowing you to plan upgrades in advance and avoid sudden bottlenecks in your IT infrastructure.

When you activate licenses for the advanced software features, a new tab opens with tools for configuring mirroring and migration at the LUN level, setting up failover configurations and administering multi-tiered storage (tiering). Let's look at some of these paid features.

LUNs can be mirrored within the same storage system, but across different disk pools. This feature is meant to improve the reliability of critical applications, but given how well the OceanStor 2200 V3 resists RAID failures, it is hard for me to say why it might be needed in real life. LUN migration, on the other hand, lets a newly installed storage system in an existing heterogeneous infrastructure take over logical drives from Dell, IBM or HP storage without interrupting client operations and with the logical drive's ID preserved. We tested this feature by trying to migrate a LUN from a Synology DS1511 over iSCSI, but without success: the storage systems did not see each other, and no Dell or HP products were at hand.

Things like LUN cloning and remote replication surprise no one today, and apart from minor details Huawei offers nothing special there. What does deserve attention is SmartMotion and SmartQoS. The SmartMotion algorithm allows space to be dynamically added to LUNs from the pools, and reclaimed from them as well.

SmartQoS is a traffic prioritization system defined per LUN. The principle is the same as in network switches: under a high number of competing requests, operations on a higher-priority LUN are processed first and are allocated more CPU resources, cache, network interfaces and, of course, disk performance. In other words, you can use SmartQoS either to throttle a LUN so it does not interfere with others, or, vice versa, to guarantee the highest performance for an important LUN.
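The scheduling principle can be sketched as weighted sharing of service slots. The weights, LUN names and slot count below are invented for illustration; SmartQoS's actual policy engine is more elaborate.

```python
# Priority-weighted sharing in the spirit of SmartQoS: when requests
# compete, higher-priority LUNs receive a larger share of service slots.

from collections import Counter

def schedule(requests, weights, slots):
    """requests: {lun: pending ops}; weights: {lun: priority weight}.
    Returns how many of `slots` service slots each LUN receives."""
    served = Counter()
    total_weight = sum(weights[lun] for lun in requests)
    for lun in requests:
        share = round(slots * weights[lun] / total_weight)
        served[lun] = min(share, requests[lun])  # never serve more than pending
    return served

pending = {"lun_db": 1000, "lun_backup": 1000}
weights = {"lun_db": 3, "lun_backup": 1}   # the database LUN has priority
out = schedule(pending, weights, slots=400)
print(out["lun_db"], out["lun_backup"])    # 300 100
```

With a 3:1 weight ratio the database LUN gets three quarters of the available throughput whenever both LUNs are saturated, and the backup LUN cannot starve it.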

One of the most popular features in disk systems is tiering, multi-level data storage. A special algorithm collects access statistics per chunk and moves the most frequently used data to SSDs or high-speed HDDs. Many systems have it today, but not every storage system lets you choose the period during which statistics are collected. Huawei understands that at night or on weekends your infrastructure may carry heavy backup traffic destined for NL-SAS drives, so a calendar lets you specify the days of the week and the hours when tiering is trained and when it is not.
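The value of the calendar is easy to show in a sketch: accesses are counted toward "heat" only inside the configured monitoring window, so a weekend backup sweep never promotes cold data to SSD. The window, threshold and chunk IDs here are all illustrative.

```python
# Schedule-aware tiering statistics: only accesses inside the
# monitoring window contribute to a chunk's heat.

from datetime import datetime

MONITOR_DAYS = {0, 1, 2, 3, 4}   # Mon-Fri (datetime.weekday(): Monday == 0)
MONITOR_HOURS = range(9, 18)     # business hours only

heat = {}                        # chunk_id -> counted accesses

def record_access(chunk_id, when):
    """Count an access only if it falls inside the monitoring window."""
    if when.weekday() in MONITOR_DAYS and when.hour in MONITOR_HOURS:
        heat[chunk_id] = heat.get(chunk_id, 0) + 1

def promote_hot(threshold):
    """Chunks hot enough to move to the fast tier."""
    return sorted(c for c, n in heat.items() if n >= threshold)

# Daytime OLTP hits chunk 7; a 2 a.m. Sunday backup sweeps chunk 9.
record_access(7, datetime(2018, 6, 4, 10, 0))   # Monday 10:00 - counted
record_access(7, datetime(2018, 6, 5, 11, 0))   # Tuesday 11:00 - counted
record_access(9, datetime(2018, 6, 3, 2, 0))    # Sunday 02:00 - ignored
print(promote_hot(threshold=2))                  # [7]
```

Chunk 9 stays cold despite the backup traffic, which is exactly the behaviour the calendar setting is for.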

What the web interface lacks is information about processor and RAM load, as well as component temperatures. When hunting for performance bottlenecks, CPU and RAM utilization is the first place to look, especially with so many software functions working on data blocks. Well, since we are talking about speed, it is time to move on to testing.

Testing

In our configuration with NL-SAS drives, the Huawei OceanStor 2200 V3 does not claim any performance crowns, so we make allowance in advance for the absence of an SSD cache and of faster magnetic media.


Test stand

For testing, we used a test bench of the following configuration:

  • Server 1
    • IBM System x3550
    • 2 x Xeon X5355
    • 20 GB RAM
    • VMWare ESXi 6.0
    • RAID 1 2x SAS 146 Gb 15K RPM HDD
    • Intel X520-DA2
  • Server 2
    • IBM System x3550
    • 2 x Xeon X5450
    • 20 GB RAM
    • VMWare ESXi 6.0
    • RAID 1 2x SAS 146 Gb 15K RPM HDD
    • Mellanox ConnectX-2
  • Storage system Huawei OceanStor 2200 V3:
    • 16 GB RAM
    • 2 x Smart I/O 4-port SFP+ modules (FC 16 Gbps / ETH 10 Gbps)
    • 12 x NL-SAS HDD, 7200 RPM, 2 TB
    • RAID 5
  • Software
    • Debian 9 “Stretch” without Intel Meltdown/Spectre patches
    • VDBench 5.04.6

The servers were connected directly to two of the storage system's host adapters with Direct Attach cables (Intel XDACBL3M). On the test servers running VMware ESXi, 8 virtual machines with Debian 9 x64 guest systems were deployed, and 8 identical LUNs were allocated for them on the storage system. Each virtual machine connected to its own LUN via iSCSI, using a 10-gigabit network port as the uplink of the VMkernel virtual switch. Each virtual machine ran the VDBench benchmark, managed from a dedicated control machine.

Before the main test, the LUNs were pre-filled for 120 minutes to eliminate the effect of brand-new, empty HDDs and of data fragmentation on speed. We begin with a 4K random-access test at various thread counts to determine the number of threads at which the main testing will be conducted.

Charts: 4K random read and write — IOPS and latency, plus the 70/30 mix

The results show that with 8 clients at 64 threads each (512 threads in total), speed begins to drop slightly while the queue depth grows sharply, so we will run the main tests with 8 virtual machines at 64 threads each.

Random-access speed is quite predictable, and all write operations clearly land in the controller cache, remaining practically independent of the load. Now let's look at random access with a 64 KB transaction size, the test recommended for Microsoft SQL Server.

Charts: 64K random read, write and the 70/30 mix

The read test was a pleasant surprise, showing only a 30% drop in speed compared to the 4 KB random-read test.

Let’s see what happened to sequential reading.

Charts: sequential read, write and the 70/30 mix

Sequential access is a bit disappointing: 12 NL-SAS drives can and should deliver more than 1 gigabyte per second, yet neither reads nor writes come anywhere near that figure. I assume this is a consequence of block virtualization, or rather of how chunks are distributed across the physical disks.

Patterns from real-world tasks

Let's move from synthetics to the emulation of real workloads. VDBench can replay patterns captured from real applications by I/O-tracing tools. Roughly speaking, special software records how an application, whether a database or something else, works with the file system: the percentages of reads and writes, the mix of random and sequential operations, and the block sizes used. We used patterns captured by Pure Storage specialists for four cases: VSI (Virtual Server Infrastructure), VDI (Virtual Desktop Infrastructure), SQL and Oracle Database. The test was run at 16 threads per virtual machine, which created a queue depth of approximately 64.
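In code, such a pattern boils down to a weighted mix of operations, each with its own direction, block size and random/sequential behaviour. The SQL-flavoured weights below are invented placeholders to show the structure, not the actual Pure Storage trace values.

```python
# What an I/O "pattern" amounts to: a weighted mix of operation types.

import random

sql_like_pattern = [
    # (weight, op, block_size_kB, random?)   -- illustrative numbers
    (0.55, "read", 8, True),     # index lookups
    (0.15, "read", 64, False),   # table scans
    (0.25, "write", 8, True),    # row updates
    (0.05, "write", 64, False),  # log/batch writes
]

def draw_ops(pattern, n, seed=42):
    """Sample n operations according to the pattern's weights."""
    rng = random.Random(seed)
    weights = [w for w, *_ in pattern]
    return rng.choices([p[1:] for p in pattern], weights=weights, k=n)

ops = draw_ops(sql_like_pattern, 10_000)
read_share = sum(1 for op, *_ in ops if op == "read") / len(ops)
print(round(read_share, 2))   # close to the pattern's 70% read share
```

A benchmark driver then issues each sampled operation against the LUN; replaying the same weighted mix is what makes the synthetic load resemble the traced application.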

Here is how our pattern for SQL tasks looks like:

SQL pattern

Here are the results for the real-world patterns.

Charts: VDI, VSI, Oracle and SQL patterns

NL-SAS hard drives are certainly not designed for databases and virtualized applications; in our configuration the Huawei OceanStor 2200 V3 can serve databases as long as the disk load stays around 800 IOPS, beyond which latency grows dramatically.

Extended warranty packages

The standard warranty for the device is 3 years; it can be extended by purchasing the appropriate service packages up to June 30, 2025 (the end-of-support date for this model). The following extended warranty packages are available:

  • 9x5 Next Business Day with on-site service
  • 24x7 with a 4-hour on-site response
  • 24x7 with a 2-hour on-site response

Service is carried out by authorized service centers and branches of the company.

Economic efficiency

And thanks to its low price, the Huawei OceanStor 2200 V3 offers a very attractive cost per gigabyte for the head unit.

Conclusions

The Huawei OceanStor 2200 V3 is an example of how, for relatively little money, you can buy a highly functional storage system for SAN applications. Perhaps the most important advantage of this model is RAID 2.0+ technology, thanks to which you can sleep peacefully when one hard disk drops out of a RAID 5: within 40 minutes the array returns to a "Healthy" state, and if free space allows, you can calmly wait for a replacement disk from warranty repair without fearing for your data. Such a fault-tolerant solution, duplicating every component at the hardware level and solving the long-standing problems of RAID arrays at the software level, fits perfectly into the role of a file archive or storage for software developers. And since this is still an entry-level device where price matters greatly, the OceanStor 2200 V3 can even be used for backup storage without host adapters: the eight gigabit Ethernet ports are enough to serve a small office over software iSCSI.

Of course, the entry-level price segment cannot do without drawbacks, and among them I would count the lack of horizontal scaling, the absence of file access in the 16 GB memory configuration, and the unreasonably expensive licenses for replication and LUN cloning: these functions are usually free even in cheaper devices.

Fortunately, you can do without replication and cloning of LUNs, or, if necessary, assign those tasks to some virtual server with rsync installed, and spend the money saved on an extended warranty package: not every data storage system costing $8K comes with official service and round-the-clock dispatch of a specialist to the data center. For many state-owned enterprises the extended warranty is not only personal peace of mind for the head of the IT department, but also a great way to narrow the field of competing bids when purchasing equipment at tender, and apparently Huawei knows this well.