Setting Up RAID 0 on NVMe SSDs: Fast as Hell, But Don’t Expect It to Save Your Data

So, you’ve got a couple of NVMe SSDs lying around and think, "Why not strap these rockets together with RAID 0 and see how fast we can go?" Well, good news: it’s actually pretty simple. Just remember, RAID 0 is all about speed. You lose one drive, and poof—all your data’s gone. But hey, we’re here for speed, right? Let’s do this.

Step 1: Creating the RAID 0 Array

Let’s assume your blazing-fast NVMe drives are /dev/nvme0n1 and /dev/nvme1n1. The command to make these babies scream in RAID 0 is:

sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

Congratulations, you just turned two SSDs into one beastly speed demon. RAID 0’s got no redundancy, though, so don't say I didn’t warn you when it all goes up in smoke if one drive dies.
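Before you move on, make sure the array will come back on its own after a reboot. A quick sketch, assuming the Debian/Ubuntu location for mdadm.conf (adjust the path on other distros):

cat /proc/mdstat
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

The first line confirms the array is up; the other two record it in the config and rebuild the initramfs so it assembles cleanly at boot.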

Step 2: Picking a Stripe Size – Bigger is Better, Right?

Since you’re dealing with NVMe SSDs that laugh at SATA speeds, your stripe size should match their attitude. mdadm calls this the chunk size, and the default 512 KB is okay, but let’s go bigger, because why not? The --chunk flag takes a value in KB, so setting 1 MB for large, sequential workloads looks like this:

sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 --chunk=1024 /dev/nvme0n1 /dev/nvme1n1

Boom. Now you’ve got 1 MB chunks flying around between your drives. This is good if you’re dealing with big-ass files like 4K video or gigantic datasets. Smaller files? Meh, it’ll still be fast—just maybe not as fast.
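Want proof the chunk size actually took? Ask mdadm and look for the Chunk Size line:

sudo mdadm --detail /dev/md0 | grep 'Chunk Size'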

Step 3: Formatting – Align That Filesystem or Regret It

Okay, we’ve built the array, but you need to tell the filesystem to play nice with your chunk size. You’ve got your RAID with a 1 MB chunk, so let’s tell ext4 to align with that. Otherwise, it’s like putting premium fuel in a Prius—wasted potential.

sudo mkfs.ext4 -E stride=256,stripe-width=512 /dev/md0

  • stride: Divide that 1 MB chunk size by the 4 KB filesystem block size (1024 / 4 = 256).
  • stripe-width: Multiply the stride by the number of drives (256 * 2 = 512). Math, right? If you ever change the chunk size or drive count, the quick sketch below redoes the arithmetic for you.
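A throwaway shell sketch of that arithmetic, handy if you change the chunk size or add drives later (the variable names are just for illustration):

CHUNK_KB=1024   # mdadm chunk size in KB
BLOCK_KB=4      # ext4 block size in KB
DRIVES=2        # drives in the RAID 0 set
STRIDE=$((CHUNK_KB / BLOCK_KB))
STRIPE_WIDTH=$((STRIDE * DRIVES))
echo "mkfs.ext4 -E stride=$STRIDE,stripe-width=$STRIPE_WIDTH /dev/md0"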

Step 4: Mount it Properly (Because Mount Options Matter)

Once formatted, create the mount point with sudo mkdir -p /mnt/raid, then don’t be lazy and just mount it with default options. We can do better:

sudo mount -o noatime,discard /dev/md0 /mnt/raid

  • noatime: Let’s stop the filesystem from updating the access time on every file read. Seriously, no one cares when the file was last accessed.
  • discard: Enables continuous TRIM so your fancy NVMe SSDs don’t choke on themselves after you delete stuff. (If continuous discard ever gives you grief, periodic trims via fstrim.timer are the usual fallback.)

Oh, and throw that into /etc/fstab so you don’t have to remount it after every reboot, because you know you’ll forget:

UUID=your-uuid-here /mnt/raid ext4 defaults,noatime,discard 0 0
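To fill in that UUID, ask blkid. Use the UUID it prints rather than the raw /dev/md0 path, since md device numbers can shift between boots:

sudo blkid /dev/md0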

Step 5: Tuning – Because You Can Always Go Faster

Increase Read-Ahead Cache

We’re all about speed here. Let’s crank up the read-ahead cache so your system grabs more data when it reads. For NVMe, let’s go big:

sudo blockdev --setra 131072 /dev/md0

That’s 64 MB of read-ahead. Might seem excessive, but trust me, you’re not gonna complain about too much performance.
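To confirm it stuck, read the value back (blockdev counts in 512-byte sectors, so 131072 really is 64 MB). It resets on reboot, so reapply it from a startup script if you want it permanent:

sudo blockdev --getra /dev/md0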

I/O Scheduler – No One Needs a Queue

NVMe drives don’t need a scheduler to slow them down. Set it to none, and let the drives do what they’re made to do: go fast.

echo none | sudo tee /sys/block/nvme0n1/queue/scheduler

If none feels too hardcore, try mq-deadline, but I’m sticking with none—because less is more when it comes to bottlenecks.
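One catch: that only touched nvme0n1, and the setting doesn’t survive a reboot. The scheduler knob lives on each member drive, so hit both (adjust the device names if yours differ); when you read it back, the active scheduler is the one in brackets:

for dev in nvme0n1 nvme1n1; do
  echo none | sudo tee /sys/block/$dev/queue/scheduler
done
cat /sys/block/nvme0n1/queue/scheduler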

Step 6: NVMe-Specific Tweaks – Extra Juice

Enable NUMA (for the Real Hardcore)

If you’re running some multi-socket, multi-CPU beast of a system, enable NUMA awareness for those NVMes. This keeps your drives talking to the right CPUs. If this doesn’t apply to you, skip it. If it does, you probably already know what I’m talking about.
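A quick way to see where a drive actually lives, assuming the controller shows up as nvme0 (a value of -1 means the platform isn’t reporting NUMA locality; numactl comes from the numactl package):

cat /sys/class/nvme/nvme0/device/numa_node
numactl --hardware

The first command tells you which node the controller hangs off of; the second shows which CPUs belong to each node.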

CPU Affinity for mdadm

If you’re really pushing the limits here, setting CPU affinity to distribute RAID processing across cores is the next level. But let’s be honest, unless you’re benchmarking the hell out of this thing, you probably won’t care.
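If you do feel like poking at it, a reasonable starting point is to check how the NVMe interrupt queues are spread across cores, then pin your workload to cores on the matching node. The core range and command name below are placeholders; use the cores from the node you found above and whatever you’re actually running:

grep nvme /proc/interrupts
taskset -c 0-7 your-benchmark-command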

Step 7: Monitor Your Setup – Because Shit Happens

Just because you went full speed doesn’t mean you can ignore monitoring. You should check your array and the health of your NVMe drives regularly because the only thing worse than losing data is not seeing it coming.

sudo mdadm --detail /dev/md0

For NVMe-specific stuff, use nvme-cli:

sudo apt install nvme-cli

sudo nvme smart-log /dev/nvme0n1

sudo nvme smart-log /dev/nvme1n1

Now you’ve got the data on how much life those drives have left before they kick the bucket.
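If you just want the headline health numbers instead of the whole dump, grep for the wear and error fields (these are the labels recent nvme-cli versions print):

sudo nvme smart-log /dev/nvme0n1 | grep -Ei 'critical_warning|temperature|percentage_used|media_errors'

percentage_used creeping toward 100 and a non-zero media_errors count are your cues to start shopping for replacements.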


TL;DR

  1. Create your RAID 0 array with your NVMes (mdadm does the job).
  2. Pick a proper stripe size. 1 MB is solid for fast stuff.
  3. Align your filesystem, or you’re just wasting performance.
  4. Mount with the right options—noatime and discard are your friends.
  5. Tweak read-ahead and schedulers. Faster is better, and schedulers? We don’t need them.
  6. Keep an eye on your setup because RAID 0 means no safety net.

Now go enjoy that blazing-fast RAID 0 setup. Just don’t blame me when you lose everything because you didn’t back it up.

Checking PCIe Link Width and Speed: Are Your NVMe Drives Getting the Lanes They Paid For?

Alright, so you shelled out for NVMe SSDs, and now you want to make sure they’re running at the PCIe speeds they’re supposed to, right? Let’s see if they’re getting the bandwidth they deserve. A couple of quick commands will show the negotiated link width and speed.

Step 1: Use nvme-cli (You Did Install It, Right?)

If you haven’t installed nvme-cli yet, go ahead and do it now:

sudo apt install nvme-cli

Once that’s done, you can use the nvme command to check out your SSD’s link details:

sudo nvme list

You’ll get a nice list of your NVMe devices. Now, let’s dig into the juicy details of the one you care about. For this example, let’s say it’s /dev/nvme0:

sudo nvme id-ctrl /dev/nvme0

That dumps the controller identification data (model, firmware, feature support), which is nice to have, but it doesn’t report the negotiated PCIe link itself. For that, ask the kernel through sysfs (see the commands below) or use lspci in Step 2. What you want to see is x4 for width and 8 GT/s (Gen3) or 16 GT/s (Gen4) for speed. Anything less means something’s bottlenecking your drive, and it’s time to check that it’s plugged into the right slot with the right number of lanes.
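A quick way to read the negotiated link straight from sysfs (assuming the controller shows up as nvme0; swap in whatever nvme list gave you):

cat /sys/class/nvme/nvme0/device/current_link_speed
cat /sys/class/nvme/nvme0/device/current_link_width
cat /sys/class/nvme/nvme0/device/max_link_speed
cat /sys/class/nvme/nvme0/device/max_link_width

The current_* values are what the drive negotiated right now; the max_* values are what the card itself can do. If they don’t match under load, you’re leaving bandwidth on the table.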

Step 2: Check PCIe Status Using lspci (for the Overachievers)

If you want to get real fancy with it, you can also use the lspci command:

sudo lspci -vvv | grep -A 15 'Non-Volatile memory controller'

This will dump all the glorious details about your PCIe devices, including your NVMe SSDs. Look for LnkCap and LnkSta—this is where you'll see the maximum width and the current operating width/speed.

If it says LnkSta: x4, and the speed is 8 GT/s (Gen3) or 16 GT/s (Gen4), you’re in the clear. If it’s less, something’s screwed up—either in your BIOS, or your motherboard’s playing games with your PCIe lanes.
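If that grep window cuts off before the Link lines show up (with -vvv they can sit well past 15 lines down), grab the drive’s bus address from plain lspci and filter for just the link fields. The 01:00.0 address below is only an example; use whatever your system prints:

lspci | grep -i 'non-volatile'
sudo lspci -vvv -s 01:00.0 | grep -i 'lnkcap\|lnksta'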


Getting KDiskMark to Use NVMe Defaults and Max Threads: Let’s See What These SSDs Can Really Do

KDiskMark is great for benchmarking because it’s basically CrystalDiskMark for Linux, but you’ll need to tweak a couple of things to get it to handle NVMe SSDs properly.

Step 1: Install KDiskMark

First things first, get it installed. You probably already have it, but if you don’t:

sudo apt install kdiskmark

Step 2: Set NVMe Defaults

By default, KDiskMark is set up for your grandma's hard drives, so let’s make sure it’s actually tuned for NVMe. Fire up KDiskMark, and in the UI, head over to the settings.

  • Block size: Set this to 1 MB. NVMe drives love big block sizes, and it’ll help you see those blazing sequential speeds.
  • Queue depth: Crank this up to 32. NVMe SSDs can handle deep queues like a pro, so let them stretch their legs.
  • Test size: You can leave this at 1 GB unless you want to test larger files. More doesn’t always mean better here—if you’re just benchmarking for raw numbers, 1 GB’s enough.

Step 3: Enable Max Threads

This is the fun part. By default, KDiskMark might not be using all the threads your system can handle. We want to push it to the max, so:

  • Threads: Set this to all the threads you’ve got. If you have an 8-core CPU with hyper-threading, crank that number up to 16 threads. Why? Because we want every single CPU core hammering your NVMe to see what kind of insane numbers we can get.

You’ll likely find these options in the settings or preferences menu of KDiskMark. Max them out—after all, what’s the point of benchmarking if you’re not pushing your system to its absolute limits?
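If you’d rather sanity-check those numbers from the command line, KDiskMark drives fio under the hood, so a hand-rolled run with roughly the same knobs as Steps 2 and 3 looks something like this (the file path is just an example; point it at your RAID mount, and delete the test file afterwards):

fio --name=seqread --filename=/mnt/raid/fio-test --rw=read --bs=1M --iodepth=32 --numjobs=16 --size=1G --direct=1 --ioengine=libaio --group_reporting

Don’t read too much into small differences between this and the GUI; the point is just to confirm the numbers are in the same ballpark.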

Step 4: Run the Benchmark

Now that you’ve got NVMe-friendly settings and all your threads fired up, go ahead and hit that start button. Watch the pretty numbers fly by, and if all goes well, you’ll see some stupid fast read/write speeds.

And just like that, you’ve got KDiskMark tuned for NVMe SSDs and a fully maxed-out thread count. If the numbers don't look crazy fast, check your link width and speed again—something’s likely bottlenecking your drives.