Synology FlashStation FS3017 review: testing a flash array in virtual environments

In our review of the Synology VMM hypervisor, we concluded that the company has found its way into the world of hyper-converged storage, where a customer can use one device not only to access data but also to run resource-intensive applications such as virtual desktops, databases, and modeling systems. Support for replication, snapshots, and clustering lets you build not just systems with "five nines" reliability, but very inexpensive systems with the highest reliability, where redundancy is implemented at the level of the device itself: roughly speaking, the storage box itself.

Three Synology ideas that distinguish its storage from competitors

  • System-level redundancy via High Availability. Synology's ideology has always differed somewhat from that of other storage manufacturers and cloud-solution vendors. The company says: "You do not have to pay for duplication of everything installed inside a single storage system: there is no need to duplicate cables, controllers, host adapters, and so on. Our storage is affordable for any customer, and if you need maximum reliability, you simply use an N+1 scheme at the level of the device itself. Either way, you save space in the data center and reduce spending on electricity and network connections, while achieving reliability through our software development rather than by buying idle components whose only purpose is to keep the system running in the rare case that something fails; even load balancing can be done in software." To experienced IT professionals this approach seems strange, but only until you compare the cost of two or three top Synology units with a single "fully redundant hardware" module from Dell EMC or HPE.

  • iSCSI as the most promising interface. In 2015, the head of Mellanox, a leader in the production of host controllers and interconnects, said in an interview that the FC interface has no future. In its corporate blog, Mellanox gives 7 reasons why Fibre Channel should be forgotten like a bad dream in favor of iSCSI for block access.
    The advantages of iSCSI are: the ability to use the storage at any distance from the client over TCP; interconnect speeds that today reach 200 Gbit/s per port; the very low price of host controllers and switches; and the ability to reuse the existing network infrastructure for iSCSI. In practice, a customer who chooses the iSCSI protocol does not need to build a SAN: no additional cables and no budget inflated to the heavens. A single network interface, a single switch, and a single copper cable carry IPC, LAN, and SAN traffic, with the ability to set traffic priorities at the host level.
    Yes, three years after Mellanox consigned Fibre Channel to the dustbin of history, the interface remains in demand in the industry, mostly where a SAN infrastructure already exists, although some customers are already switching to iSCSI.

  • Software-defined storage technologies. Today, storage is defined less by hardware than by software, and Synology says: "We do not use hardware RAID controllers, because modern Intel Xeon processors are so powerful that the CPU does not merely cope with XOR calculations, it barely notices them. It is much more important to support direct transmission of iSCSI packets to save CPU resources, both on the storage itself and on the clients connected to it, which is why we support RDMA over Ethernet." In other words, it is more important to provide fast data transfer to the outside world than to worry about how things are arranged inside the storage, and it is hard to argue with this, although not every server today supports RDMA over Ethernet.

Needless to say, Synology's top-end storage integrates all the technologies the company has acquired over the years: the Btrfs file system with snapshots; built-in tools for remote replication, including Active Backup for Server, which lets the storage log in to a server and copy data from it via the rsync protocol; and replication between storage units over the Internet, so you can put a desktop NAS in a remote office and make encrypted off-site copies over an encrypted channel on weekends, or copy them to the cloud. And all this without additional licenses, in a single web interface that you can use even from a smartphone.

Key features of Synology FlashStation FS3017

  • Format - 2U
  • CPU: 2x Intel Xeon E5-2620 v3 (6 cores, 12 threads, 2.4 GHz up to 3.2 GHz, 15 MB cache)
  • AES-NI support
  • 64 GB DDR4 ECC RDIMM (up to 512 GB)

Storage subsystem:

  • 24 bays for 2.5-inch SSD/HDD with hot swap

  • SAS 12 Gb/s / SATA 6 Gb/s drive interface


  • Connection of two disk shelves: 24-bay (RX2417SAS) or 12-bay (RX1217SAS)
  • To connect disk shelves, you must buy a Synology FXC17 card
  • Disk-shelf connection interface: SAS 12 Gb/s
  • The maximum Raw capacity of internal storage - 96 TB
  • Maximum Raw capacity with disk shelves - 288 TB
  • Internal array file systems: Btrfs, EXT4, scheduled btrfs defragmentation
  • The SSD as a cache
  • SSD Trim
  • RAID: F1, Basic, JBOD, 0, 1, 5, 6, 10
  • RAID Migration: Basic to RAID 1, Basic to RAID 5, RAID 1 to RAID 5, RAID 5 to RAID 6


  • 128 iSCSI targets
  • 512 iSCSI LUNs
  • iSCSI at the file level with Thin Provisioning support
  • iSCSI at the block level

Network connection:

  • 2 RJ45 10Gbps ports
  • Link Aggregation / LACP support
  • RDMA / iWARP (iSER) support via expansion cards


  • 2x 800 W redundant PSUs
  • 321 W in access mode
  • 156 W in HDD/SSD hibernation mode

Synology VMM:

  • Maximum number of virtual machines on a Synology VMM native hypervisor: 24

Among these characteristics, the network connection draws attention first: it uses 1/10-Gigabit RJ45 ports, which means the FS3017 is ready for installation in an existing network infrastructure without additional cables. We will not debate which is more promising, copper or optics, since Synology offers two 10 Gb/s optical network cards; I recommend buying the 2-port E10G17-F2 in any case, since it supports RDMA over Ethernet. Synology does not offer 10 Gb/s cards with RJ45 ports, but you can always install a card from another manufacturer: the compatibility list on the Synology website includes all modern HCAs from Intel, Emulex, and Mellanox, including 40-Gigabit models.

As for the warranty, one of the most important criteria when choosing a device, here it is 5 years by default, and of course Synology offers the option to purchase an extension of this period.

Design of Synology FlashStation FS3017

The Synology FS3017 has a 2U chassis, the entire front of which is occupied by drive bays.


Disks or SSDs are installed in plastic sleds, each equipped with a lock to prevent accidental removal of the drive.


From the rear, the Synology FS3017 looks empty: only two 10G ports, two USB 3.0 ports, and an RS232 port for service needs. In the photo, an E10G17-F2 board with two SFP+ slots is installed in the storage.


Incidentally, the E10G17-F2 expansion board is a Mellanox ConnectX-3 Pro (see the description of the ConnectX-3 series in English), complete with native markings. The ConnectX-3 series has a very good hardware iSCSI offload engine, so, looking ahead, I will say that in our testing we ran into the limits of the test bench while the load on the FS3017's processors did not rise above 20%.


Another interesting point: the chip on the ConnectX-3 Pro supports data transfer at up to 40 Gb/s per port, and since support for this series of network cards is already present in Synology DiskStation Manager, the manufacturer may yet release a 40-Gigabit expansion card for the FS3017.


Two Delta DPS-800AB-30A units power the storage in fault-tolerant mode. They are rated at 800 W and certified 80Plus Platinum, but our measurements showed a relatively low power factor (PFC): only 0.83 instead of 0.9.


Cooling is handled by four Sanyo Denki 80x80x32 mm fans (9700 RPM, 86.5 CFM) with an easy-replacement design, so if one of them fails, the repair takes only a few minutes, although the storage still has to be powered off.


Structurally, the Synology FS3017 is not much different from typical dual-processor servers. The two Xeon E5-2620 v3 processors are hidden under large heatsinks and cooled by a common airflow directed by a massive air duct. The motherboard has 16 DDR4 memory slots, 4 of which are occupied by 16 GB Samsung modules.


To connect SSD/HDD drives, three LSI SAS 9300-8i host adapters are used, supporting data transfer rates of up to 12 Gb/s for SAS devices and up to 6 Gb/s for SATA.

By default, Synology FS3017 has 2 free slots:

  • PCI Express x16 for the FXC17 SAS controller required to connect disk shelves
  • PCI Express x8 used for network cards

If you need to expand the storage, the Synology FS3017 offers a choice of two shelves: the RX2417SAS with 24 bays for 2.5-inch drives and the RX1217SAS with 12 bays for 3.5-inch drives. In total you can use two expansion shelves, of the same type or different, and install SSDs in either of them. Global Hot Spare is supported, so the hot-spare disk can reside either in the head unit or in a disk shelf.


The connection uses a SAS 12 Gb/s interface that chains the two shelves in series to the controller installed in the head unit. There is no redundancy at the cable level, but as we noted at the beginning of the article, fault tolerance is provided by connecting two or more storage systems via the High Availability software package built into the DiskStation Manager operating system.

A little bit about DiskStation Manager

We have talked many times about the features of Synology's DiskStation Manager: this manufacturer uses one and the same operating system for both its junior and senior storage systems, so anyone who has ever made friends with even the simplest 2-bay NAS will easily set up a top-end FlashStation. So as not to overload the reader, we will mention only a few DSM features that are useful in the world of fast flash arrays.

First, I want to draw attention to RAID group creation. Within a single storage you can create multiple RAID arrays, for example one of HDDs and one of SSDs, and create your own partitions on each of them. You may choose not to create a partition on a RAID array if you plan to use block-level iSCSI LUNs, but in that case you will not be able to use Thin Provisioning, the dynamic LUN-growth feature. Using file-level iSCSI LUNs within a shared partition looks like a simpler solution, but according to the manufacturer it may work a little slower than the block level. We will check this in the testing phase.

Built-in resource monitoring shows the performance not only of each network interface but also of each iSCSI target, both in megabytes per second and in IOPS, which can be useful when assessing the need to scale the storage.

Moreover, you can set performance alerts that fire when latency at the network access layer or at the drive level exceeds a certain threshold. Once triggered, such an alert can be sent to the administrator by e-mail or as a push notification. You must agree, this is invaluable information that no application writes to its log files, and there is nowhere to get it other than from the storage itself.
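As an illustration, the threshold logic behind such alerts can be sketched in a few lines of Python; all names, targets, and thresholds below are hypothetical and are not DSM's actual API:

```python
# Minimal sketch of latency-threshold alerting, similar in spirit to DSM's
# performance alarms. Target names and the threshold are illustrative only.

LATENCY_THRESHOLD_MS = 20.0  # alert when access-layer latency exceeds this


def check_latency(samples_ms, threshold_ms=LATENCY_THRESHOLD_MS):
    """Return the (target, latency) pairs that breach the threshold."""
    return [(t, v) for t, v in samples_ms.items() if v > threshold_ms]


if __name__ == "__main__":
    samples = {"iscsi-target-1": 3.8, "iscsi-target-2": 27.5}
    for target, latency in check_latency(samples):
        # In DSM the notification goes out by e-mail or push; here we just print.
        print(f"ALERT: {target} latency {latency} ms exceeds threshold")
```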

And, of course, Virtual Machine Manager, with the ability to allocate up to twenty processor cores and 60 GB of memory to virtual machines, now looks quite mature. We will begin our testing with it.


For testing, we used a test bench of the following configuration:

#1 - IBM System x3550

  • 2 x Xeon X5355
  • 16 GB RAM
  • VMWare ESXi 6.0
  • 2x15K SAS 146 Gb HDD
  • Intel X520-DA2

#2 - IBM System x3550

  • 2 x Xeon X5450
  • 16GB RAM
  • VMWare ESXi 6.0
  • 2x15K SAS 146 Gb HDD
  • Mellanox ConnectX-2

Synology FS3017:

  • 64 GB RAM
  • E10G17-F2
  • 14x SSD Samsung MZ-7KM480E, 480 Gb, SATA-600
  • RAID F1
  • Btrfs

Test servers were connected directly to the storage with Intel XDACBL3M DirectAttach cables. On the test bench running VMware ESXi, we deployed from 4 to 16 virtual machines for different tests, with Debian 9 x64 guest systems. Virtual machines were managed from the command line over a 1-Gigabit network interface.

The Synology FlashStation FS3017 held 14 Samsung SM863 SSDs of 480 GB each, combined into RAID F1. Each SSD promises up to 98,000 IOPS for reading and up to 19,000 IOPS for writing, 510 MB/s for reading and 485 MB/s for writing. Power consumption ranges from 1.3 W at idle to 2.8 W when writing.
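A quick back-of-envelope check, using only the datasheet figures quoted above, shows that the drive set itself is nowhere near the bottleneck (this ignores controller, parity, and network overhead):

```python
# Aggregate ceiling of the drive set: 14 Samsung SM863 SSDs, using the
# per-drive datasheet IOPS figures quoted in the article.

DRIVES = 14
READ_IOPS = 98_000    # per-drive 4K random read
WRITE_IOPS = 19_000   # per-drive 4K random write

print(f"raw read ceiling:  {DRIVES * READ_IOPS / 1e6:.2f}M IOPS")
print(f"raw write ceiling: {DRIVES * WRITE_IOPS / 1e3:.0f}K IOPS")
```

Even with parity overhead, 1.37M raw read IOPS from the drives sits comfortably above the 500K IOPS that the network interfaces can carry.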

The test was conducted with two types of iSCSI LUNs across 4 iSCSI targets: first, a partition with the Btrfs file system was created, inside which 16 iSCSI LUNs of 100 GB each were created with Thin Provisioning support; then the partition was deleted and the same 16 iSCSI LUNs of 100 GB each were created in the unpartitioned area of the RAID group. Each guest was connected to its own iSCSI target, and although the network topology did not allow traffic to cross between the 10-Gigabit ports, iSCSI Multipath was enabled.

For testing we used the vdbench package developed by Sun (now Oracle). This is a scalable Java-based benchmark that can run tests in batch mode on multiple virtual machines using block-level access, without being bound to a file system, which lets us measure storage speed at the block level. Running the benchmark's worker on 16 VMs, we obtain aggregated storage performance figures for 16 clients, just as it would look in the real world. From test to test, the number of virtual machines varies in order to extract the maximum from the storage.
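For reference, a minimal vdbench parameter file for a run of this kind might look as follows; the device path, thread count, and run time are illustrative, not the exact files used in this review:

```
# Storage definition: one raw block device per VM (path is illustrative)
sd=sd1,lun=/dev/sdb,openflags=o_direct

# Workload definition: 4 KiB 100% random reads
wd=wd1,sd=sd1,xfersize=4k,rdpct=100,seekpct=100

# Run definition: maximum rate, 32 outstanding I/Os, 10-minute run
rd=run1,wd=wd1,iorate=max,threads=32,elapsed=600,interval=5
```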

Before starting the main tests, we pre-fill the storage to eliminate the influence of "fresh, clean SSDs". There are no unambiguous recommendations for the pre-filling procedure; the Storage Performance Council, for example, spends up to 1000 hours on it in some SPC-1 tests. We do not have 1000 hours; moreover, at a write speed of 900 MB/s our entire array would theoretically be overwritten in about 100 minutes, and given SSD firmware optimizations I believe the drives will direct each write to a fresh sector, so pre-filling was done for 120 minutes.
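The arithmetic behind this estimate is easy to reproduce; assuming RAID F1 reserves the equivalent of one drive for parity (like RAID 5), a single full overwrite at 900 MB/s takes just under two hours:

```python
# Back-of-envelope check of the pre-fill time quoted in the article.
# Assumption: RAID F1 uses the equivalent of one drive for parity.

DRIVES = 14
DRIVE_GB = 480        # per-drive capacity, decimal GB
WRITE_MBPS = 900      # sustained write speed of the array

usable_gb = (DRIVES - 1) * DRIVE_GB              # 6240 GB usable
fill_minutes = usable_gb * 1000 / WRITE_MBPS / 60

print(f"usable capacity: {usable_gb / 1000:.2f} TB")
print(f"time for one full overwrite: {fill_minutes:.0f} minutes")
```

The result, roughly 116 minutes, matches the 120-minute pre-fill window chosen for the test.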

The first test is the traditional 4K random read 100%, with 32 threads per device. Synology declares a performance level of 500K IOPS for the FS3017. We understand in advance that this is the theoretical maximum over the sum of the four 10-Gigabit interfaces possible in the FS3017: two of them copper RJ45, two optical SFP+. Our test bench has only optics, the processors are, to put it mildly, not the freshest, and the Intel X520-DA2 lacks the hardware iSCSI offload and iSER support with which Synology reached that speed, so we do not expect 500 thousand IOPS.
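To see why 500K IOPS is a plausible network-bound ceiling, it is enough to divide the aggregate line rate by the block size; the protocol-overhead factor below is an assumption, not a measured value:

```python
# Sanity check: can four 10 Gb/s ports carry 500K IOPS of 4 KiB reads?

PORTS = 4
PORT_GBPS = 10        # line rate per port, Gbit/s
BLOCK = 4096          # bytes per I/O
OVERHEAD = 0.85       # assumed payload fraction after TCP/iSCSI framing

line_rate_bytes = PORTS * PORT_GBPS * 1e9 / 8    # 5 GB/s aggregate
max_iops = line_rate_bytes * OVERHEAD / BLOCK

print(f"network-bound ceiling: {max_iops / 1e3:.0f}K IOPS")
```

Even with generous framing overhead the four ports can carry roughly a million 4 KiB reads per second, so the declared 500K figure fits well within the interfaces' capacity.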


In this test, the Synology FS3017's processors showed a load of less than 40%, which is typical of iSCSI offload at the network expansion card level, and in principle there is no reason to doubt that 500K IOPS is a real figure for this storage.


The next tests are 4K random with 70% reads and 30% writes (the traditional 70/30) and 100% writes. We see that the latency has not changed, which means we are still very far from the limits of the FlashStation FS3017.


The write test shows latency growth up to 4 ms and runs into the limits of the test equipment.

Real World Patterns

Let's move from synthetic tests to emulation of real workloads. The vdbench package can replay patterns captured by I/O-tracing programs from real applications. Roughly speaking, special software records how an application, whether a database or something else, works with the file system: the percentage of writes and reads with different mixes of random and sequential operations and different block sizes. We used patterns captured by Pure Storage specialists for three cases: VSA (Virtual Storage Infrastructure), VDI (Virtual Desktop Infrastructure), and SQL Database. The test was performed with 16 threads per virtual machine, which created a queue depth of approximately 64.


In terms of the maximum latency our test bench was able to generate, performance is very high. Roughly speaking, the storage does not even notice 64 single-threaded VDI connections.


The situation is similar when the storage is used for virtual storage infrastructure.



When working with databases, the latency curve of file-level iSCSI volumes soars vertically, so for these tasks you should definitely use only block-level iSCSI.

The response time in all tests was less than 4 ms, about 5 times below the recommended threshold at which one should start thinking about a storage upgrade.

Sequential access

We tested sequential access with standard read/write ratios of 100/0, 70/30, and 0/100, with 16 threads creating a queue depth of about 64, and with different sector sizes.
Writing at the block level is much faster. This does not seem to be related to caching, but rather to the Btrfs file system.

Based on the speed measurements, we can conclude that the FS3017's performance is sufficient to host virtual machines and to store files and database logs. During testing the FS3017's processors barely noticed the load, so I have no reason to doubt that the storage can deliver performance in the region of 500K IOPS at 4K and 6.4 GB/s.

Energy efficiency and environmental friendliness

The Synology FS3017 is quite noisy in operation, so it should not be installed in the same room as working staff. The head unit clearly lacks any super-quiet mode, even though in pure storage mode, if you do not run virtual machines on Synology VMM, only the expansion cards get hot.

Typical electricity consumption of the FS3017 in the tested configurations is shown in the chart below.

The energy efficiency of the Synology FS3017 is amazing: under typical file operations the storage consumes less than 180 W. Of course, power consumption will be higher when virtual machines run on the storage, but in its primary role the FS3017 will not require a particularly powerful UPS.

Cost of ownership and economic performance

The retail price of the Synology FS3017 is 880 thousand rubles; another 6 thousand rubles will have to be spent on rails for rack installation, which in the flash-storage world is considered practically free. The cost of purchase and ownership, including electricity, for the 5-year warranty period is as follows:
Let us consider the relative economic performance of the device, taking into account the real-world applications in the tested configuration.
The relative efficiency of a 4 KB read transaction amounted to 13.2 IOPS/$.
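The shape of this calculation is easy to reproduce; the exchange rate, electricity tariff, and measured IOPS below are assumptions for illustration, not the exact inputs behind the 13.2 IOPS/$ figure:

```python
# Rough sketch of the cost-of-ownership arithmetic behind an IOPS/$ figure.
# The exchange rate, tariff, and measured IOPS are assumed, not the
# article's exact inputs.

PRICE_RUB = 880_000 + 6_000   # unit plus rack rails
RUB_PER_USD = 60              # assumed exchange rate
WATTS = 180                   # typical consumption from the article
KWH_PRICE_USD = 0.10          # assumed electricity tariff
YEARS = 5                     # warranty period

energy_kwh = WATTS / 1000 * 24 * 365 * YEARS
tco_usd = PRICE_RUB / RUB_PER_USD + energy_kwh * KWH_PRICE_USD


def iops_per_dollar(measured_iops: float) -> float:
    """Relative efficiency: sustained IOPS divided by 5-year TCO."""
    return measured_iops / tco_usd


print(f"5-year TCO: ${tco_usd:,.0f}")
print(f"at 200K IOPS: {iops_per_dollar(200_000):.1f} IOPS/$")
```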

Purchase class

Although the Synology FS3017 is a top product, it is at the same time run-rate equipment, which means the company's distributors will keep it in stock and the price will be fixed and public. If you need flash storage, you simply contact a company that sells Synology storage and buy it with delivery in 1-2 days. You do not need to deal with a sales manager who first quotes you triple the price out of thin air and then starts making discounts. Transparent pricing and availability let you buy storage of this class "right now" and put it into production within 2 hours. For this class of storage that is a rarity and a huge plus for the customer.


The FlashStation FS3017 is Synology's first all-flash array for enterprises, and it fully unlocks the potential of SSDs as well as the Virtual Machine Manager virtualization technology. The storage shows excellent results in terms of both speed and cost of purchase and operation. The declared performance of 500 thousand IOPS suggests that the FS3017 can serve as the head unit of your own virtual infrastructure for dozens of virtualized applications.

I consider the two network interfaces in the basic configuration a disadvantage, but it is easily remedied by buying expansion cards, which are not proprietary and therefore cost about the same as typical host interfaces. And, of course, I still do not understand the strange habit of not including rack rails with a rack-mount device.

At the same time, in economic terms the FlashStation FS3017 is among the leaders on the market, even taking into account duplication of the head unit under High Availability. Economic efficiency will be even higher the more modern technologies the customer uses: iSER for iSCSI, virtualization inside the storage, off-site copying of important data to the cloud, and so on. We have seen all this in Synology's small-business products, and now it is available to larger customers.

Hi Torwald
I see you used direct connect, do you have any recommendations on a suitable 10Gb switch to interconnect hosts to the NAS?

We are looking at a build very similar to that which you have outlined except it will be ESXi 6.7, using ConnectX-4 Lx EN MCX4121A-XCAT both in the FS3017 and the servers x2
And we will have two FS3017 for replicas using Veeam etc.

So there would be a need for two switches with at least 4 suitable ports on each
I was thinking of the Ubiquiti EdgeSwitch and use Ubiquiti UF-MM-10G modules


Hi, Marcus!

For Veeam you will need almost nothing, you can connect both servers directly to FS3017 without any switches using DAC or fibre cables.

From my experience I do not recommend using Ubiquiti 10G hardware at all: it has stability problems. Even a D-Link DXS-1210-12SC or Netgear XSM7224S should be suitable for Veeam.

BTW, maybe you will not need even Veeam, because Synology Active Backup for Business is very well