Read this article at https://www.pugetsystems.com/guides/1369

NVMe RAID 0 Performance in Windows 10 Pro

Written on July 1, 2019 by William George


The acronym RAID stands for "redundant array of independent disks", or sometimes "inexpensive disks", and really it should probably be "drives" these days since diskless solid-state drives (SSDs) are so widespread now. I guess that is what happens with an acronym that is over 30 years old, in an industry that changes rapidly and is on the cutting edge of technology.

There are many variations of RAID which are denoted by different numbers. We have an article covering that, so I'm not going to go into it in detail here, but it is worth noting that not all RAID versions actually provide redundancy... despite it being the first word in the name.

In fact, one of the most popular types of arrays in workstations is RAID 0, which is also called "striping". It takes all of the data and splits it into equal parts to be spread across each drive in the array. Not only does that do nothing to provide redundancy, it actually puts your data at greater risk because if just one drive in the array fails you lose a portion of every file, in effect destroying everything stored there.
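That added risk can be quantified: a RAID 0 array survives only if every member drive survives, so the chance of data loss grows with each drive you add. A quick back-of-the-envelope sketch (the 2% annual failure rate per drive is an illustrative assumption, not a measured figure):

```python
# RAID 0 loses data if ANY member drive fails, so the array's failure
# probability compounds with each drive added to the stripe.
def raid0_failure_probability(p_drive: float, n_drives: int) -> float:
    """p_drive: chance one drive fails in a given period; n_drives: drives in the stripe."""
    return 1 - (1 - p_drive) ** n_drives

# Illustrative 2% per-drive annual failure rate (assumed for the example)
for n in (1, 2, 4):
    print(f"{n} drive(s): {raid0_failure_probability(0.02, n):.1%} chance of data loss")
```

With those assumed numbers, a four-drive stripe is nearly four times as likely to lose data as a single drive over the same period.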

So why do folks like it so much? Speed. By splitting things across two or more drives, you can read and write smaller amounts of data to each drive and do it in parallel (communicating with all of the drives at the same time) - which results in much faster transfer speeds.
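Conceptually, striping just deals data out in fixed-size chunks, round-robin, across the members of the array. A toy sketch of the idea (the 64KB stripe size is a common default chosen for illustration; real implementations work at the block layer, not on Python byte strings):

```python
def stripe(data: bytes, n_drives: int, stripe_size: int = 64 * 1024) -> list[list[bytes]]:
    """Deal fixed-size chunks of data out to n_drives, round-robin (RAID 0)."""
    drives = [[] for _ in range(n_drives)]
    for i in range(0, len(data), stripe_size):
        drives[(i // stripe_size) % n_drives].append(data[i:i + stripe_size])
    return drives

def unstripe(drives: list[list[bytes]]) -> bytes:
    """Reassemble the original data by reading chunks back in round-robin order."""
    out = []
    for row in range(max(len(d) for d in drives)):
        for d in drives:
            if row < len(d):
                out.append(d[row])
    return b"".join(out)

data = bytes(range(256)) * 2048           # 512 KB of sample data
drives = stripe(data, n_drives=4)         # each "drive" holds 1/4 of the chunks
assert unstripe(drives) == data           # every drive is needed to reconstruct
```

Because each drive holds only an interleaved fraction of the chunks, reads and writes can hit all members at once, and losing any one member makes the remaining chunks useless.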

As SSDs have gotten faster, especially with the advent of NVMe technology, the vast majority of users don't need to worry about RAID 0. However, there are still some niche applications where combining the speed of multiple, very fast SSDs is helpful - so in this article we are going to look at the current state of NVMe RAID solutions on a variety of modern platforms from Intel and AMD.

Highpoint 7101A NVMe RAID Controller with Samsung 983 DCT SSDs

Test Methodology & Hardware

There are different ways to approach the configuration of a drive array, some purely using software while others employ hardware-level tricks either in the BIOS on a motherboard or via an add-in controller card. Those different approaches can impact both how easy it is to set up and maintain an array, as well as how fast it really ends up being. We included the following methods in our recent testing:

  • Individual M.2 NVMe drives on ASUS HYPER M.2 x4 sleds (PCI-E x4 adapter cards) - using the BIOS or Windows to set up RAID
  • Four M.2 NVMe drives on a single ASUS HYPER M.2 x16 card - again, depending on the BIOS or Windows' software-based RAID
  • HighPoint SSD7101A-1 NVMe RAID card - using HighPoint's controller-specific software as well as Windows' software RAID

* Note: This is just the hardware we tried for this comparison, not necessarily products we carry or plan to offer here at Puget Systems. *

We tested these across a variety of motherboards and chipsets, in order to see how they behaved. Many chipsets have their own implementation of RAID, which is accessed differently depending on how the BIOS is set up, and there are also variations in support for features like PCI-E bifurcation - which is required for the HYPER M.2 x16 four-drive NVMe adapter to work properly.

We used Samsung 983 DCT 1TB M.2 drives for this testing, which are enterprise-grade SSDs that are certified for use with some of these RAID configurations. Our product manager, Josh, did all of the actual hardware swapping and running of benchmarks. He used ATTO's Disk Benchmark to test read and write speeds as well as IOPS (input output operations per second) on the different arrays.

The motherboards we tested on, along with which RAID implementations we tested on each one, are shown in the chart below. Unless otherwise noted, four drives in RAID 0 made up each array. Things like the specific CPU and RAM capacity in the systems don't really matter for this type of testing, though, so we are leaving those details out for the sake of keeping both this author and our readers sane.

Motherboard | Hardware Adapter | Windows Striping | HighPoint NVMe Manager | Intel VROC | AMD RAIDXpert
ASUS WS C422 SAGE | HP SSD7101A-1 | Tested | Tested | Tested | Incompatible
ASUS WS C422 SAGE | 4x HYPER M.2 x4 | Tested | Incompatible | Tested | Incompatible
ASUS WS C621E SAGE | HP SSD7101A-1 | Tested | Tested | Tested | Incompatible
ASUS WS C621E SAGE | HYPER M.2 x16 | Tested | Incompatible | Tested | Incompatible
ASUS WS C621E SAGE | 4x HYPER M.2 x4 | Tested | Incompatible | Tested | Incompatible
GB X299 Designare EX | HP SSD7101A-1 | Tested | Tested | Incompatible | Incompatible
GB X299 Designare EX | 4x HYPER M.2 x4 | Tested | Incompatible | Incompatible | Incompatible
GB X399 Aorus Xtreme (NVMe Mode) | HP SSD7101A-1 | Tested | Tested | Incompatible | Incompatible
GB X399 Aorus Xtreme (NVMe Mode) | HYPER M.2 x16 | Tested | Incompatible | Incompatible | Incompatible
GB X399 Aorus Xtreme (RAID Mode) | HP SSD7101A-1 | Tested | Incompatible | Incompatible | Tested
GB X399 Aorus Xtreme (RAID Mode) | HYPER M.2 x16 | Tested | Incompatible | Incompatible | Tested
GB Z390 Designare | 2x HYPER M.2 x4 | Tested | Incompatible | Tested (RST) | Incompatible

That is a lot to take in, and we haven't gotten to any performance results yet. Please remember that we tested RAID 0 (striping) since that is the primary array type our customers request, but some of these configurations may support other RAID modes as well. We also tested all of these as secondary drives, with a separate SSD for Windows and applications. Not all of these options are bootable!

The most important takeaways from the chart above are as follows:

  • Windows 10's software-based striping works on pretty much everything. As long as the array is not your primary drive (where the OS itself is installed) this is the easiest way to go.
  • Intel's VROC functionality didn't work on the X299 board with non-Intel drives, though it did work on the C-series chipsets.
  • The ASUS HYPER M.2 x16 card requires PCI-Express bifurcation in order to work properly, which was only supported on the ASUS WS C621E SAGE and Gigabyte X399 AORUS Xtreme motherboards.
  • We could only test with two drives on the Z390 motherboard due to PCI-Express slot / lane limitations. Also, that chipset does not support VROC - so we used Intel's RST remapping technology instead.

Results - Read and Write Speeds

To start our results off, let's look at ATTO Disk Benchmark's read and write speeds. Here are the results for each platform we tested (motherboard + drive controller) across all of the RAID methods that were compatible with that platform. We opted to limit the scope of the data to transfer sizes from 4KB to 4MB, since the smaller results were extremely low and above 4MB the results were generally flat. The results are shown in gigabytes per second, and all of the charts use the same scale (0 to 10 GB / sec).

We kept that scale and the line colors consistent across all the graphs, to make comparisons easier. You can scroll through the various graphs via the left and right buttons, or go directly to one via the thumbnails at the bottom. Their order matches the chart above.

Read speeds are up first:

And now write speeds, with the same data sizes and chart scale:

Results - Read and Write IOPS

The second half of our data is the input output operations per second recorded by the ATTO Disk Benchmark. Results are broken down in the same way as the pure transfer speed tests, and line colors were kept the same as well. The units here are in K IOPS (thousands of IOPS) and in most cases we have recorded two or three significant digits of results. Due to the way Excel displays things, sometimes there is a trailing zero which can be disregarded but which I couldn't figure out a way to remove without losing precision in other places.
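IOPS and transfer rate are two views of the same measurement: throughput is just operations per second multiplied by the transfer size of each operation. A quick sanity check of that relationship (the numbers below are round illustrative values, not taken from our charts):

```python
def throughput_gbps(iops: float, transfer_size_bytes: int) -> float:
    """Convert IOPS at a given transfer size into GB/s (decimal gigabytes)."""
    return iops * transfer_size_bytes / 1e9

# At small 4KB transfers, even 200k IOPS stays under 1 GB/s of throughput...
small = throughput_gbps(200_000, 4 * 1024)
# ...while at 4MB transfers, only a few thousand IOPS saturate ~10 GB/s.
large = throughput_gbps(2_400, 4 * 1024 * 1024)
print(f"200k IOPS @ 4KB = {small:.2f} GB/s")
print(f"2.4k IOPS @ 4MB = {large:.2f} GB/s")
```

This is why the IOPS charts peak at small transfer sizes while the GB/s charts peak at large ones: the same hardware limit simply shows up in different units.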

This time there are two different scales used: either 0 to 100k IOPS, with lines every 20k, or 0 to 200k IOPS with lines every 40k. That was done because some array configurations were able to provide substantially more operations per second, so please pay close attention to the numbers rather than just the slopes of the lines.

As before, read results are up first:

And write IOPS as well:


That is a ton of data, between both the compatibility table and performance charts, but a few trends do emerge:

  • The most widely compatible RAID method was Windows 10's built-in functionality. It is the only solution that worked across every motherboard and controller option we tested.
  • Windows' RAID was also the fastest option (either clearly ahead or tied) in almost every situation. The only major exception was when facing off against RAIDXpert on the X399 platform, where AMD's solution pulled ahead if the system was in RAID mode. Using that mode slowed down Windows RAID, but in NVMe mode it was able to match RAIDXpert's best performance.
  • Write performance was largely similar across the board, with little regard to what platform things were running on, though we did observe again that Windows' RAID on X399 when the board is set to RAID mode performed poorly (but not in NVMe mode).
  • Regarding different methods of connecting multiple NVMe drives to the system, there did not appear to be any benefit from having a card with its own hardware controller (like the Highpoint) versus just having a host card with four M.2 drive slots or even four separate M.2 adapters. However, not all motherboards / chipsets support a quad M.2 card! If your system can, though, it will take up a lot less expansion space than a bunch of individual M.2 adapters.

It is worth remembering that all of this testing was done on Windows 10 Pro, so your mileage may vary with older versions of Windows or with other operating systems. If you have additional questions about RAID, scroll down to our FAQ section or ask in the comments.


Based on what we found in our testing, Windows 10's built-in RAID functionality is the best way to set up an array on NVMe drives today. While we focused on striping (RAID 0), with the goal of improving storage performance, Windows 10 also supports mirroring (RAID 1). That will not affect performance as much, but is an option if your concern is data redundancy instead of drive speed.

Looking for a New Workstation?

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!

Frequently Asked Questions

What is RAID?

RAID stands for Redundant Array of Independent (or Inexpensive) Disks. It is a method for improving computer storage performance, and / or adding redundancy, by spreading data out over multiple drives. If you want more background on what it is or the various modes it can operate in, check out our in-depth article on the subject: RAID Explained. It's an oldie, but a goodie.

How can you decide which RAID method to use?

There are two sides to this:

  • Hardware - what components are needed to connect the drives to the system in such a way that RAID can be used
  • Software - the interface used to configure and maintain the array, usually either via a pre-boot environment or within the OS

To pick the right hardware, you need to know what is compatible with your motherboard and the drives you want to use. Ideally, keeping the number of different components to a minimum is best (fewer points of failure and less space taken up) but it is also important to go with reliable brands and stay within your budget. We found that a quad M.2 host card was ideal, in systems that support it properly. If your motherboard does not, then the Highpoint we tested was a little bit more widely compatible - but costs more. If you don't need a full four drives, individual adapter cards are fine too... they just take up more expansion slots in total. If your board already has some M.2 slots, however, you might need fewer M.2 adapters.

On the software side, the goal should be maximizing performance and reliability, along with ease of setup. We found that Windows 10's built-in RAID functionality was great in this regard, since it requires no additional software to be installed and it can be accessed from within the operating system. However, that means that it cannot be used for the primary (boot) drive. For most users, though, putting your OS on an array is not a good idea anyhow; simple backups are a better, more comprehensive solution.

What is the best way to configure NVMe drives in RAID on Windows 10?

Currently, Windows 10's built-in RAID functionality is the most widely-compatible and overall best-performing way to set up an array. If you have an AMD system with the X399 chipset, then their proprietary RAIDXpert is also good... but no better than Windows' RAID as long as you set the BIOS properly for each one (RAID mode for RAIDXpert, NVMe for Windows' built-in RAID).

Does using Windows RAID affect CPU utilization?

There is some CPU overhead involved in using Windows RAID, but in our testing we found it to put a smaller load on the processor than Intel's VROC. With most modern CPUs it should have very little if any discernible impact on overall performance.

How do you set up RAID on Windows 10?

To set up two or more drives in RAID within Windows 10 Pro:

  1. Go to Disk Management (it is in the shortcut list if you right-click on the Start button)
  2. Right-click on one of the drives you want to use as part of the array. Make sure you click on the drive itself, not a partition, and that the drive is empty. Any data on the drives you use will be wiped out! You may need to delete existing partitions first.
  3. In the pop-up menu, select either "New Striped Volume" (RAID 0, speed) or "New Mirrored Volume" (RAID 1, redundancy)
  4. Continue through the various options menus that come up to select the drives to use, specify capacity, etc.

If you need a more in-depth walkthrough, this page has a good guide. It focuses on mirroring, but the setup for striping is similar.
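One practical note on choosing between the two volume types in step 3: a striped volume's usable capacity is roughly the smallest member's size times the number of drives, while a mirrored volume is limited to the smallest member alone. A small sketch of that arithmetic (the drive sizes are illustrative):

```python
def striped_capacity_gb(drive_sizes_gb: list[float]) -> float:
    """RAID 0: every drive contributes, but only up to the smallest member's size."""
    return min(drive_sizes_gb) * len(drive_sizes_gb)

def mirrored_capacity_gb(drive_sizes_gb: list[float]) -> float:
    """RAID 1: one full copy of the data, limited by the smallest member."""
    return min(drive_sizes_gb)

drives = [1000, 1000, 1000, 1000]  # four 1 TB drives, like our test arrays
print(f"Striped:  {striped_capacity_gb(drives):.0f} GB usable")
print(f"Mirrored: {mirrored_capacity_gb(drives):.0f} GB usable")
```

Mixing drive sizes wastes the difference, which is why arrays are almost always built from identical drives.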

Does Windows 10 Home support RAID?

The built-in RAID functionality we looked at in this article is only available on Windows 10 Pro, not Home. Other RAID methods (motherboard or add-on card solutions) may work on Windows 10 Home, but we recommend Pro as a better overall platform and do not offer Home on our workstations here at Puget. For simple redundancy on Windows 10 Home, check out Storage Spaces.

Is Windows RAID the same as Storage Spaces?

Storage Spaces is different and separate from Windows RAID. It is a newer multi-drive array option, focusing more on redundancy and not on performance. It supports mirroring and parity, but lacks any striping option. It is accessible from within the Windows Control Panel. If you want to know more, just search online for "Windows 10 Storage Spaces" and you should find plenty of info.

Does Puget Systems offer RAID on workstations or servers?

RAID is something we offer at customer request, within a limited range of options based on the hardware in a given system. Our policies change over time as we test functionality on new motherboards, controller cards, drives, and operating system versions.

Currently, as of this writing, we support:

  • Windows 10 Pro's built-in striping and mirroring for secondary drives (not where the OS itself is located)
  • Linux software RAID for secondary drives (not where the OS itself is located)
  • A selection of dedicated RAID controller cards, which are best if you need advanced RAID modes (5, 6, etc)
  • Intel VROC (on compatible motherboards) for NVMe drives
  • Intel RST (on compatible motherboards) for SATA SSDs, hard drives, and NVMe drives if VROC is unavailable

If you are interested in getting a Puget workstation or server with RAID, please contact us to determine the best option for you.

Tags: RAID, NVMe, M.2, SSD, Hardware, Host, Bus, Adapter, HBA, Controller, Software, Windows, Intel

Excellent article! Your results mirror my own with NVMe RAID and I've been considering a HighPoint SSD7101A-1, but I don't currently have enough PCIe lanes to justify it without an X series CPU.

You should revisit Storage Spaces though. It does support striping, but you have to enable the correct number of columns via PowerShell; the GUI will only use 1 by default, which is more like spanning. With true striping in Storage Spaces it is a bit faster with less CPU overhead than Disk Management's striping, and it also supports TRIM natively where Disk Management does not.

This command will tell you how many columns you have in your virtual disk:

Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, NumberOfColumns, NumberOfDataCopies, @{Expression={$_.Size / 1GB}; Label="Size(GB)"}, @{Expression={$_.FootprintOnPool / 1GB}; Label="PoolFootprint(GB)"} -AutoSize

When I created a new striped array for my two SATA SSDs on my laptop, I used this command to get 2 columns instead of the default of 1 that the GUI would use:

New-VirtualDisk -FriendlyName Data -StoragePoolFriendlyName "Data" -NumberOfColumns 2 -ResiliencySettingName simple -UseMaximumSize

Here are three benchmarks comparing Disk Management, Storage Spaces with 1 column, and Storage Spaces with 2 columns, on two SATA SSDs...

Striping in Disk Management:
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 538.208 MB/s
Sequential Write (Q= 32,T= 1) : 498.910 MB/s
Random Read 4KiB (Q= 8,T= 8) : 406.875 MB/s [ 99334.7 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 367.006 MB/s [ 89601.1 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 222.675 MB/s [ 54364.0 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 205.961 MB/s [ 50283.4 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 31.233 MB/s [ 7625.2 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 74.422 MB/s [ 18169.4 IOPS]

Test : 1024 MiB [D: 0.0% (0.2/953.8 GiB)] (x5) [Interval=5 sec]
Date : 2019/07/02 8:50:01
OS : Windows 10 Professional [10.0 Build 18362] (x64)

Storage Spaces with 1 column:
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 499.835 MB/s
Sequential Write (Q= 32,T= 1) : 459.504 MB/s
Random Read 4KiB (Q= 8,T= 8) : 415.258 MB/s [ 101381.3 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 353.744 MB/s [ 86363.3 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 216.459 MB/s [ 52846.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 194.832 MB/s [ 47566.4 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 28.688 MB/s [ 7003.9 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 64.558 MB/s [ 15761.2 IOPS]

Test : 1024 MiB [D: 0.0% (0.1/950.9 GiB)] (x5) [Interval=5 sec]
Date : 2019/07/02 8:40:35
OS : Windows 10 Professional [10.0 Build 18362] (x64)

Storage Spaces with 2 columns:
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 559.864 MB/s
Sequential Write (Q= 32,T= 1) : 518.295 MB/s
Random Read 4KiB (Q= 8,T= 8) : 408.738 MB/s [ 99789.6 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 371.697 MB/s [ 90746.3 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 213.924 MB/s [ 52227.5 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 195.622 MB/s [ 47759.3 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 28.498 MB/s [ 6957.5 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 62.277 MB/s [ 15204.3 IOPS]

Test : 1024 MiB [D: 0.0% (0.2/949.9 GiB)] (x5) [Interval=5 sec]
Date : 2019/07/02 9:02:51
OS : Windows 10 Professional [10.0 Build 18362] (x64)

1 column degrades in performance quite significantly after some use, all three of these were right after fresh formats. The Disk Management method is the easiest, but doesn't support native TRIM commands by the operating system, which also degrades in performance over time.

Posted on 2019-07-02 13:15:03

Thank you for that info, Aaron! I was unaware that Storage Spaces was "hiding" the ability to do striping... I wonder why Microsoft didn't expose that through the GUI? A pity :(

Unfortunately we don't have all the hardware around to re-test with that configuration, but I appreciate you providing the results from your system :)

Posted on 2019-07-02 18:30:59

Yeah, Microsoft is moving to PowerShell for everything and hiding a lot from the GUI. Exchange management has become a major headache in 2016 and 2019 as a result. If I wanted to do everything from a console, I'd run Linux for a server. :-P

Posted on 2019-07-02 19:14:47

Hey, I have heard the exact opposite - that Storage Spaces has more CPU overhead than the Disk Management route. Do you have a source for the CPU?

Posted on 2020-02-06 06:14:51

Thank you for publishing this data and sharing it with the public.

This is very useful information.

It's somewhat surprising to see that the write speeds, as benchmarked using ATTO, really can't do much more than 4.88 GB/s across four NVMe SSDs.

I cannot express to you how grateful I am to you, for having published this data and putting it out there in the public domain.

Thank you!

Posted on 2019-10-07 20:11:51

What processors were you using for the testing? How much RAM? etc. Can you provide a little bit more details in regards to each of the platforms and the hardware associated with those platforms that you used in these tests?

Thank you.

Posted on 2019-10-08 02:31:06

I don't recall the specific CPUs and RAM that we used, but I will talk to others involved in the testing and see if they can remember. That shouldn't really matter, though - just copying around data isn't terribly CPU intensive, nor does it use a lot of RAM (though we often max out systems when testing them, so I bet these were all equipped with 64GB+).

Posted on 2019-10-08 18:24:37

Well... I thought that there might be a difference because, for example, the 9th generation Core i9 and the latest generation of Ryzen (Zen 2) processors only have 16 lanes and 16+4+4 lanes respectively, so if there's a PCIe 3.0 x16 graphics card installed, then that may have an impact on the performance of the drives.

I'm also asking because right now, I have a system that uses the older X79 chipset with an Intel Core i7-4930K (6-core, 3.4 GHz stock, HTT disabled) with four Samsung 860 EVO 1 TB SATA 6 Gbps SSDs also in RAID0 through a Broadcom/Avago/LSI MegaRAID 9341-8i 12 Gbps SAS HW RAID HBA and with a 64 kiB stripe size and 64 kiB block size for the benchmarks, I am also peaking out at around 2 GB/s on sequential writes.

So it's interesting that for the writes, with four SSDs, and a 64 kiB block size, there is hardly any difference between SATA drives and NVMe drives.


Again, thank you for conducting these tests and posting the results into the public domain. This is very useful/helpful.

Thank you.

Posted on 2019-10-08 19:20:54

Oh, if PCI-E speed / lanes is your concern then I am 99% sure that all of these were running at the full PCI-E speed they were capable of. That is somewhat dependent on the CPU, but moreso on the motherboard - and which motherboard we used in testing each one is listed in the Test Methodology and Hardware section of the article.

Posted on 2019-10-08 19:38:34

Yeah, I read the section that detailed the different motherboards -- hence why I asked the question in regards to the details of the hardware that wasn't published here -- the CPU and RAM.

But your comment helps to answer/address that question.

I know that, in another instance, if you are copying and moving data around, the thread/process that is in charge of it matters: because of the sensitivity of the timer, thread migration between processing cores can also have an impact on the results.

For example, on the Core i7-4930K, when I was running the benchmark, CPU utilisation was maybe up to around 6%. It's not a lot, but it isn't nothing either.

Posted on 2019-10-08 19:48:23
Daniel Brown

I just put together an Intel Z390 system with 2 Intel SSD's and set them up as a remapping RST RAID in the BIOS and installed Windows 10 on it... How can I use the 2 SSD's with the Windows RAID driver instead, since it's so much higher performance? Would I literally need to have a separate OS drive connected and format/create the RAID for the 2 SSD's from Disk Management and then disconnect the OS drive and install Windows fresh on the new SSD RAID? I mean, Windows installed in about 10 mins with a USB 3.1 thumb drive and it boots from power off in about 6-7 seconds, should I even bother?

Posted on 2019-12-11 00:43:30

I wouldn't bother, personally. It sounds like your drive setup is already blazing fast :)

Posted on 2019-12-13 18:13:01
Michael Barlas

Thanks for the article, it was very useful.

I have a Gigabyte Designare Z390 mobo myself and want to RAID my 2 V-NAND SSD 970 Evo Plus NVMe M.2's with an expansion card.

Do I have to buy the Asus Hyper M.2 x4 mini? (Which I notice doesn't support my mobo)

or the

ASUS Hyper M.2 X16 PCIe 3.0 X4 Expansion Card V2 Supports 4 NVMe M.2(2242/2260/2280/22110)Up to 128 Gbps for Intel VROC & AMD


Posted on 2019-12-28 18:22:50

I am using an X299 ASUS Prime Deluxe II with the Hyper M.2 x16 card w/ VROC. It seemed to work properly and give good speeds with 4x Intel 760p 512GB drives. I recently bought a premium VROC key so I could experiment with non-Intel drives but haven't gotten any yet.

Posted on 2020-06-07 21:47:09

It would be nice to see an update with the new drives, now that PCIe 4.0 support is becoming more and more popular nowadays.
Specifically the speed/io of,
a Gen4 NVMe on PCIe 3.0 slot
vs that same device on Gen4 NVME compatible on a PCIe 4.0 M.2 slot.
vs 2x, 3x, 4x of the same Gen4 NVMe SSD (e.g. Samsung 980 Pro) on PCIe 3.0 raid0
vs 2x, 3x, 4x of the same Gen4 NVMe SSD (e.g. Samsung 980 Pro) on PCIe 4.0 Gen4 NVME raid0

Another comparison would be running Raid0 with a Gen4 NVMe on a "ASUS Hyper M.2 x16 Gen4 Adapter" VS. running Raid0 on the motherboard's x570 M.2 Gen4 built in raid controller. (Windows Raid in both scenarios)

Posted on 2021-04-11 10:35:26