Xpander-Z card in the PCI_E1 slot?

ladamyr153802d1

Since I'm using a Zotac 4090 GPU, and since at Gen 5.0 x16 is not that much better than x8 for a Gen 4 card, should I try putting it in the PCI_E2 slot and the Xpander-Z card in the PCI_E1 slot so I can use the card to run two M.2 2280 NVMe PCIe 5.0 x4 drives? They should both run at 5.0 x4 there, correct? Eight lanes to the #1 slot when the #2 slot is using eight on the MEG X670E ACE, right?
I could run it as a striped drive too. Will it run twice as fast?
How about a RAID 0 array across all three Gen 5 drives, the two on the card and the one in the M.2-1 5.0 x4 slot on the motherboard? Will they run three times as fast?
Anybody want to take bets? I'll be doing this soon: first a Gen 5 drive for the board, then a second Gen 5 to try on the card with the first. If I get two Gen 5.0 x4 drives on the Xpander-Z card working at 5.0 x4, I'll first make them a RAID 0 array and test it to see how fast it is. Then I'll go for a third and see how fast a boot drive I can get.
If this works I'll have a very fast 12 TB boot drive. And with the remaining slots on the M/B I can get a pretty fast Gen 4.0 RAID 0 array as well, for stuff that doesn't need to be as fast as Gen 5 but is still much faster than SATA 3.
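For anyone who wants to sanity-check the lane math, here's a rough Python sketch of the theoretical ceilings involved (textbook per-lane figures, not measurements; the x8/x8 split between the #1 and #2 slots is my reading of how the board bifurcates, so treat it as an assumption):

```python
# Rough PCIe bandwidth sanity check (theoretical ceilings, not benchmark results).
# Per-lane throughput after 128b/130b encoding, in GB/s.
GEN4_PER_LANE = 16 * 128 / 130 / 8   # ~1.97 GB/s
GEN5_PER_LANE = 32 * 128 / 130 / 8   # ~3.94 GB/s

gpu_gen4_x8  = 8  * GEN4_PER_LANE    # the 4090 (a Gen 4 card) running at x8
gpu_gen4_x16 = 16 * GEN4_PER_LANE    # the same card with the full x16
ssd_gen5_x4  = 4  * GEN5_PER_LANE    # one Gen 5 drive on the Xpander-Z
xpander_x8   = 8  * GEN5_PER_LANE    # what two Gen 5 x4 drives need in total

print(f"GPU at Gen 4 x8 : {gpu_gen4_x8:5.1f} GB/s")
print(f"GPU at Gen 4 x16: {gpu_gen4_x16:5.1f} GB/s")
print(f"SSD at Gen 5 x4 : {ssd_gen5_x4:5.1f} GB/s each")
print(f"Xpander-Z at x8 : {xpander_x8:5.1f} GB/s total")
```

On paper, eight Gen 5 lanes cover two x4 drives exactly, while the GPU drops to roughly 15.8 GB/s of link bandwidth, which a Gen 4 card rarely saturates anyway.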
 
What's the benefit of putting the Xpander on PCIe1 instead of PCIe2?
A striped drive will definitely run faster as long as it's not capped by a hardware limitation (say you have an SSD that runs 12000MB/s; you shouldn't expect that striping two of them in RAID 0 gives you 24000MB/s).
And there's no reason to run RAID 0 on so many drives. You need to understand that the more drives you use, the more vulnerable the volume is. You might as well build RAID 10 rather than RAID 0.
 
What's the benefit of putting the Xpander on PCIe1 instead of PCIe2?
A striped drive will definitely run faster as long as it's not capped by a hardware limitation (say you have an SSD that runs 12000MB/s; you shouldn't expect that striping two of them in RAID 0 gives you 24000MB/s).
And there's no reason to run RAID 0 on so many drives. You need to understand that the more drives you use, the more vulnerable the volume is. You might as well build RAID 10 rather than RAID 0.
The Zotac card is a 2-1/2 slot card that covers the #2 slot when it's in the #1 slot. The only way to bifurcate the #1 & #2 lanes between it and the Xpander-Z card is to put the GPU in the #2 slot and the Xpander-Z card in the #1 slot.

I completely understand I would be tripling the possibility of a complete RAID array failure by using 3 drives in a striped volume. However, the need for RAID 5 & 10 arrays is kind of moot when you have working solid state drives with a projected 1-1/2 MILLION hours of use. Once they're up and running, it is SUPER rare that one of them is gonna fail in the next ten years. If my machine were going to be the last-resort storage for a business, where losing the array would be catastrophic, that would be another matter, of course. But it's a gaming machine and I want the fastest boot drive I can get.
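For what it's worth, here's the back-of-the-envelope version of that risk trade-off as a Python sketch. The per-drive failure probability is a made-up placeholder, not a Team Group spec; the only point is that a stripe is lost if any member fails:

```python
# Back-of-the-envelope RAID 0 risk: the volume is lost if ANY member drive fails.
# p_drive is an assumed placeholder for "one drive dies within the period", not a spec.
def raid0_failure_probability(p_drive: float, n_drives: int) -> float:
    return 1.0 - (1.0 - p_drive) ** n_drives

p_drive = 0.02  # assumed 2% chance of a single drive failing over the period
for n in (1, 2, 3):
    p = raid0_failure_probability(p_drive, n)
    print(f"{n} drive(s) striped: {p:.1%} chance of losing the volume")
```

With a small per-drive probability the result is close to n times the single-drive risk, which is exactly the "tripling" being accepted here.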

I think this is the way to go. And I'm not so naive as to think that speeds will double or triple. It might not even double if all three are striped together; CrystalDiskMark will decide that in the end. But I have it and HWiNFO64 to tell me what is going on, so I'll know. If I can get double speed using all three, that will make me very happy.

After all, the fastest with the bestest...

I'll post screenshots of the results. I'm just posting this now, while I wait for the parts to arrive, to see if anybody knows or has thought of something I missed. You thought of something I didn't need pointed out; I already knew there was no other way to use the #1 & #2 slots. Aside from that, you make no objection I hadn't already considered.
 
I take that, plutomate, to mean you agree with my assessment.

Turns out my case, a Fractal Define 7 XL, is gonna work with this setup very well. The Zotac will wind up about 1/4" from the mesh cage that covers the P/S and will draw cool air from the filtered area around the P/S at the bottom. I'll never see the fans. The placement of the Xpander-Z card will put it right at the exhaust, and since all my fans blow into the case (there is no "exhaust" fan) and the Noctua NH-D15 chromax.black fans draw from the front fans, that air will blow right at the M.2-1 Gen 5.0 x4 NVMe and its cooler and out the exhaust, so any change to the old airflow should actually be an improvement. Before, with the GPU in the #1 slot, it used to draw some of the air from the front fans, competing with cool air meant for the Noctua NH-D15. On top of all that airflow improvement, the card will hide the wires that plug into the bottom edge of the M/B, AND eliminate the need for that stupid prop the Zotac needs so gravity doesn't destroy it. Now it will lie there, out of the way and out of sight, with just a little pad, perhaps some thermal padding, to hold it up straight and keep it from sagging.

I'm using the Team Group 4TB NVMe drives that come with the little coolers. I may be able to do without a fan on the M.2-1 drive, and the Xpander-Z card's own fans are possibly adequate. One guy is using it for two Gen 5 drives, but he doesn't specify how he has it configured or whether it is running both of them at Gen 5 speed. If he's using it in the PCI_E3 slot he can't be getting both of them to run at 5.0 x4; there's only one x4 link there and it can't be split into two x2 links. He says they're running cool enough, and if he is using them in the #3 slot that would explain them running cool, since they can only run at Gen 4 speeds with both of them there.

I'll find that out when I get the second drive. I'll plug the two into the card (with their little coolers and fans, if they fit) and see how hot they run during CrystalDiskMark test runs; it's good at heating them up. If the little coolers and fans that come with them fit, I'll see whose fans are louder and which ones cool best, and make a decision then.
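If it helps, a throwaway Python loop like this can log the drive temperature while CrystalDiskMark hammers it. It assumes smartmontools is installed and that the device path (here /dev/nvme0, purely an example) is adjusted for your OS; the parsing is a rough sketch, not a polished tool:

```python
# Rough temperature logger for an NVMe drive during a benchmark run.
# Assumes smartmontools is installed; adjust the device path for your system.
import subprocess
import time

DEVICE = "/dev/nvme0"   # example path only; it will look different on Windows

def read_temperature(device: str) -> str:
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature:" in line:
            return line.strip()
    return "temperature line not found"

for _ in range(30):                 # ~5 minutes at one sample every 10 s
    print(time.strftime("%H:%M:%S"), read_temperature(DEVICE))
    time.sleep(10)
```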
 
Here's a problem I have that I wonder if anybody knows an easy solution to.

My original setup with this board was six 2TB NVMe Gen 4 drives in all the available slots in a RAID 10 array. I did this first to see if it was possible (it is) and to see how fast it would be: it ran 9323 read and 6632 write. I did this when I was worried about drive failure, and I have since been brought to reality regarding that. I'll be taking full advantage of a RAID 0 array's ability to get me the fastest possible boot drive for my machine now.

Here's my problem. I had to send in my ACE for warranty repair and so had a conundrum: how do I keep my boot drive files and settings operational while the M/B is off for repair? I found that answer in a little ITX board I plan to use later for a totally solid state HTPC I will build after I get my ACE back from MSI: a Gigabyte B650I AORUS ULTRA, which has 3 slots for those NVMe drives I have. It's the one I'm using now; it looks like a VW engine inside a Mack truck engine compartment in this Define 7 XL case, lol. Putting a Ryzen 5 8600G APU in it allowed me to use the Xpander-Z card in its one PCI_E slot, and that let me put 5 of the 6 M.2 drives in it. Everything runs as if I'd only had one drive failure (because one drive is missing).

So here's the problem. I can easily put the 5 drives back in the ACE when it arrives, transfer all of my settings to the Team Group Gen 5 drive, and make it my boot drive, but then I won't be able to turn that drive into a RAID array. I can likely pick out one of the 5 NVMe drives that hold my boot volume and boot up on the other four to transfer onto a RAID 0 drive, but the goal is to make a RAID 0 drive on three Gen 5 M.2 drives. That's gonna mean I have to find the three drives that have the "blue stripe", if you will, and install them into the M.2-2, 3 & 4 slots, if I am to go on and make a RAID 0 array from the M.2-1 slot and the two on the Xpander-Z card: a Gen 5 x4 RAID 0 array of three drives.

So, does anybody know how to identify the drives in the RAID array I have? Can I find the three that have the "blue stripe" any way other than randomly picking one out, crossing my fingers, and seeing? If that doesn't work, pick out a different one and try again? Keep going until I find the right combination?
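One low-tech way to at least tell the physical drives apart is to dump model and serial numbers and match them against what RAIDXpert2 or the BIOS shows. Whether the RAID members show up individually depends on the RAID driver; they may be hidden behind a single virtual disk. A quick Python sketch, assuming the wmi package (and pywin32) is installed:

```python
# List physical disks with model/serial so they can be matched against what the
# RAID utility or BIOS reports. Requires the "wmi" package (pip install wmi).
# Note: drives inside a BIOS/driver RAID set may appear only as one virtual disk.
import wmi

c = wmi.WMI()
for disk in c.Win32_DiskDrive():
    size_gb = int(disk.Size) / 1e9 if disk.Size else 0
    print(f"Index {disk.Index}: {disk.Model} | SN {disk.SerialNumber} | {size_gb:.0f} GB")
```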
 
You don't want to use M.2 NVME drives to make a striped RAID setup. You will see no performance increase and you will cause the drives to wear faster due to the number of writes to each drive.
 
You don't want to use M.2 NVME drives to make a striped RAID setup. You will see no performance increase and you will cause the drives to wear faster due to the number of writes to each drive.
I'll see, won't I?

My experience with a RAID 10 setup on my first iteration of this M/B seems to indicate I will see a significant increase striping these Gen 5 drives together. The first array, in 2023, was six Kingston Fury Renegade 2TB PCIe Gen 4 2280s, rated at "up to" 7300 read and 7000 write, and they ran 9323 read and 6632 write in that RAID 10 array. I can at least expect a bigger increase since this array won't be mirroring on top of striping; it should be close to doubling with two drives and certainly more with three. My bet is 28,000 read and 23,000 write. My hope is 36,000 read and 30,000 write.
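Here's the simple-minded math behind those bets, as a Python sketch. The single-drive number is an assumed spec-sheet rating, not a measurement, and real arrays land below the ideal ceiling:

```python
# Idealized striped-read ceiling: N drives, each capped by its own Gen 5 x4 link.
# The drive rating below is an assumed spec sheet number, not a measurement.
SINGLE_READ_MBPS = 12_000          # assumed Gen 5 drive sequential read rating
GEN5_X4_MBPS = 4 * 3_938           # ~15.75 GB/s per x4 Gen 5 link

def ideal_stripe_read(n_drives: int) -> int:
    per_drive = min(SINGLE_READ_MBPS, GEN5_X4_MBPS)   # each drive has its own x4 link here
    return n_drives * per_drive

for n in (1, 2, 3):
    print(f"{n} drive(s): ideal ceiling ~{ideal_stripe_read(n):,} MB/s")
```

If the drives really hit their rated numbers, the theoretical ceiling for three of them is around 36,000 MB/s; RAID driver and benchmark overhead is why the bet sits below the hope.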

As far as "wearing" them out: these Team Group NVMe drives have a calculated life of 1.7 million hours, which is almost 194 years. I don't think "wear" is gonna be a problem in the next ten years.

Also, you do know how striping works, right? If you stripe two drives, half the data gets written to each drive, so each drive gets half the activity it would if it were the only drive. I think you got this one backward, brother. In a manner of speaking, this three-drive array will have a calculated life of 5.1 million hours, since each drive is only going to get 1/3rd of the load.
 
I was building RAID 5 machines back in the early 2000's, pretty much know how they work.
 
Looking over the specs on this M/B I notice it supports RAID 5 for the M.2 NVMe slots, so I will definitely find out what the speed of a RAID 5 array of the three Gen 5 drives is compared to the speed of a straight RAID 0. I'm sure it won't be nearly as fast, maybe not even as fast as a two-drive RAID 0 array, but it will be worth it to see what they all are.

So, screenshots of CrystalDiskMark...
1) Team Group in the M.2-1 slot
2) 2 TGs on the card as a RAID 0 array (also w/ a shot of HWiNFO64 showing them both running PCIe Gen 5.0 x4)
3) 3 TGs as a RAID 5 array
4) 3 TGs as a RAID 0 array
In the end, if all these projections come true, I will still likely go with a straight striped array for my C: drive unless the benefits are drastically lower than I anticipate. If you're right, dvair, and there is no increase in speed, I'll abandon the idea of any kind of RAID drive at all, but I don't see that happening. I've already seen a speed increase with Gen 4 drives in a RAID 10 array on this M/B. But if the speed increase is not much, say 10% with RAID 5 and 20% with RAID 0, I'll go with a RAID 5 array. I've got a feeling it's gonna be more like R5 = +50% read, R0 = +200% read.
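For reference, the textbook first-order expectations for that comparison, sketched in Python (assumed single-drive ratings again; a real driver-RAID 5 will do worse than this on writes because of the read-modify-write parity penalty):

```python
# First-order textbook throughput model for N identical drives (ideal, sequential).
# RAID 0: reads and writes stripe across all N drives.
# RAID 5: conservatively modeled as N-1 drives' worth of data bandwidth, since one
#         drive's worth of every stripe is parity; reads can land between N-1x and Nx,
#         and small writes pay an extra read-modify-write penalty on top.
SINGLE_READ, SINGLE_WRITE = 12_000, 10_000   # assumed drive ratings, MB/s

def raid0(n): return n * SINGLE_READ, n * SINGLE_WRITE
def raid5(n): return (n - 1) * SINGLE_READ, (n - 1) * SINGLE_WRITE

n = 3
r0, r5 = raid0(n), raid5(n)
print(f"RAID 0, {n} drives: ~{r0[0]:,} read / ~{r0[1]:,} write (ideal)")
print(f"RAID 5, {n} drives: ~{r5[0]:,} read / ~{r5[1]:,} write (ideal, before parity overhead)")
```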
 
I will see a significant increase striping these Gen 5 drives together.

The benchmark numbers don't mean that much for daily use. People think it's so much faster because of the impressive linear read/write numbers, but that's mostly visible in benchmarks, not in real-life workloads, where other things matter more and the bottleneck is not the linear throughput. I posted about it here and here, for example.

Combine that with the big hassle once you have to switch boards for whatever reason, or sometimes even after a simple BIOS update, and it's easy to see why RAID has largely fallen out of favour with M.2 PCIe SSDs in particular. I would never recommend it. If you want the best performance without any experiments, then I recommend one WD_Black SN8100 as your boot drive.
 
I was building RAID 5 machines back in the early 2000's, pretty much know how they work.
Then you must be thinking about RAID 1 when you say more activity. Mirroring takes up more processor resources and slows things down. RAID 0 (striping) divides the reading and writing between the drives, solves the old "disk bottleneck", and results in faster reading as well as writing: approximately half the time for each operation if there are two disks, 1/3rd if there are three.
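To put the same thing in code, here's a tiny sketch of how a stripe maps logical blocks onto member disks (stripe size and disk count are just example values):

```python
# How RAID 0 lays logical blocks out across member disks: consecutive stripes
# rotate round-robin, so sequential I/O is split roughly evenly between drives.
def stripe_map(lba: int, blocks_per_stripe: int = 256, n_disks: int = 2):
    stripe = lba // blocks_per_stripe
    disk = stripe % n_disks
    disk_lba = (stripe // n_disks) * blocks_per_stripe + lba % blocks_per_stripe
    return disk, disk_lba

# The first few stripes land alternately on disk 0 and disk 1:
for lba in (0, 256, 512, 768):
    print(f"LBA {lba:4d} -> disk {stripe_map(lba)[0]}")
```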

Oh, and I was building RAID arrays too, in the early '90s, on a Cyrix 133 MHz 5x86 machine...
...using Stacker to compress the drives as well.

Installed DOS 6 and the 6.2 upgrade that stole Stacker's technology.

BTW I didn't use the 6.22 disk Microsoft sent me to satisfy the court order to take away my pirated copy of Stacker that Microsoft sold me.

Does that mean I've been a bad boy?
 
The benchmark numbers don't mean that much for daily use. People think it's so much faster because of the impressive linear read/write numbers, but that's mostly visible in benchmarks, not in real-life workloads, where other things matter more and the bottleneck is not the linear throughput. I posted about it here and here, for example.

Combine that with the big hassle once you have to switch boards for whatever reason, or sometimes even after a simple BIOS update, and it's easy to see why RAID has largely fallen out of favour with M.2 PCIe SSDs in particular. I would never recommend it. If you want the best performance without any experiments, then I recommend one WD_Black SN8100 as your boot drive.
I see your points, but frankly I'm going to find out regardless. This motherboard DID show a 27.5% increase in speed with a RAID 10 array of six 2TB Gen 4 drives (the reason it didn't show an increase in write speed is that it was RAID 10), and now I have the opportunity to see what the increase in PCIe Gen 5 speed will be in a three-disk array that is purely a striped array. No parity slowdowns, no mirroring slowdowns. "Numbers" may not "mean that much" for daily use, but as I've said this is a gaming machine and any decrease in load times WILL be noticed, trust me.
 
It's all good. I was not thinking of a mirror. It's just that today's drives are so fast on their own that a RAID setup tends to be more of a hassle. There are too many variables when the RAID setup is tied to the BIOS; I have seen a lot of cases where people lose their RAID after a BIOS upgrade. I will look forward to seeing what you can get.
 
It's all good. I was not thinking of a mirror. It's just that today's drives are so fast on their own that a RAID setup tends to be more of a hassle. There are too many variables when the RAID setup is tied to the BIOS; I have seen a lot of cases where people lose their RAID after a BIOS upgrade. I will look forward to seeing what you can get.
That's happened to me already and I knew how to get it back. It's mostly knowing what to look for in the BIOS.

I don't know if you read it, but I took 5 of those drives and just plugged them into this ITX board I'm using now (with the Ryzen 5 8600G on it) and it booted up fine. The board took 3, and the Xpander-Z card, in the ITX board's one PCIe slot, took the other 2. It's going to be a totally solid state, yes no fans, HTPC. I'm strongly leaning towards that gorgeous Streacom case that looks like a work of modern "cubist" art.

My main problem will be how to find the three among those 5 that have the "blue stripe", if you will, and put those into the M.2-2, 3 & 4 slots on the ACE to get my C: drive back, so I can copy it onto the PCIe Gen 5, 5.0 x4, RAID 0 array I'm gonna build.

If only I had another Xpander-Z card: I could get a USB-C to HDMI cable, use the 8600G processor I'm currently running on this ITX board, and then use both Xpander-Z cards to do this. I could put them in the PCI_E1 & E2 slots, one with the two 4TB Gen 5 drives and the other with two of the drives from the current 5-drive RAID 10 array. The three others could then go into the M.2-2, 3 & 4 slots on the ACE and the third Gen 5 drive into the M.2-1 slot. Then I'd just make the Gen 5 RAID 0 array, install Windows on it, boot into it, and copy the files from the Gen 4 RAID 10 array onto it.

Then I can put the 7950X3D back in, erase all of the Gen 4 NVMe drives, use three of them to make a RAID 5 D: drive with 4TB of backup for the root (just in case), and have one Xpander-Z card for the HTPC with two matching 2TB NVMe drives inside. With three M.2 slots left on the HTPC board and no need to array anything on it (the 8600G will run any video and many high-rez graphics games), the HTPC is gonna have plenty of room for movies and TV. I won't settle for less than 10TB.

One last question maybe you guys know the answer to. Does it matter if the games I want to run on this Gen 5 RAID 0 array are on the root drive; that is, will they run faster on the C: drive than on a D: drive? Maybe a 4TB, RAID 5, PCIe Gen 4 C: drive will be better. Certainly big enough, and with virtually no chance of losing any important data. Going from my experience with the current RAID 10 array, a RAID 5 array may be just as fast. If so, my only concern is whether my games will run just as fast from a D: drive as they will from the root drive.
 
In the past I used to run games from a separate drive so that they wouldn't be slowed by activity on the system drive. But I haven't done this in quite a while, pretty much install everything to my C drive and haven't noticed any slowing (not that I play that much anymore).
 
"Numbers" may not "mean that much" for daily use, but as I've said this is a gaming machine and any decrease in load times WILL be noticed, trust me.

Well, I trust numbers the most. So take some baseline numbers for loading times without any RAID, add up the numbers for a couple of your most-played games, and then later you can compare against that baseline.
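Something as crude as this Python sketch is enough for a baseline: point it at a game's install folder (the path below is just a placeholder) and it times how long a full read of the files takes. Run it right after a reboot so the files aren't already cached in RAM:

```python
# Crude "loading time" baseline: time a full read of a game's install folder.
# GAME_DIR is a placeholder path; run freshly booted so the OS file cache is cold.
import pathlib
import time

GAME_DIR = pathlib.Path(r"D:\Games\SomeGame")   # placeholder, adjust to your setup

start = time.perf_counter()
total_bytes = 0
for file in GAME_DIR.rglob("*"):
    if file.is_file():
        with open(file, "rb") as f:
            while chunk := f.read(16 * 1024 * 1024):   # read in 16 MiB chunks
                total_bytes += len(chunk)
elapsed = time.perf_counter() - start
print(f"Read {total_bytes / 1e9:.1f} GB in {elapsed:.1f} s "
      f"({total_bytes / 1e6 / elapsed:.0f} MB/s)")
```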

If so, my only concern is whether my games will run just as fast from a D: drive as they will from the root drive.

I think you are confusing the root folder and the boot drive here. 😅 Drives don't have a root drive - or master/slave - anymore; that was in the IDE days, and there is no hierarchy of that sort anymore. Already with SATA, they went to a point-to-point connection: each drive is addressed separately, and each drive gets the bandwidth/speed that the slot offers and the drive is capable of. The boot drive doesn't have any special performance benefit. You can freely define the drive letters and/or the boot drive; for example, you could use the slowest M.2 slot for your boot drive and C:\, and you could put some backup drive in M2_1 on the CPU-provided PCIe lanes and give it the drive letter Z:\. Now drive Z:\ will probably have a higher speed than drive C:\.

Of course this is not how it should be done. For the boot drive, the one and only thing I can recommend is to put it into M2_1 (CPU-provided PCIe lanes) and have it be the fastest or one of the fastest SSDs. Then if you additionally want to have drives in a RAID - I can see that for the games, perhaps - you do that for those other SSDs only. This way you can never run into trouble with the OS itself when it comes to the RAID. This is the only kind of RAID I could understand nowadays, but I'd be curious to see what kind of improvement it brings for loading times in games. From the evidence I've seen before, the improvements will be magnitudes lower than in the theoretical benchmarks.
 
In the past I used to run games from a separate drive so that they wouldn't be slowed by activity on the system drive. But I haven't done this in quite a while, pretty much install everything to my C drive and haven't noticed any slowing (not that I play that much anymore).
I see your time zone, dvair. England?
The more I think about it, the more I'm leaning towards making this RAID 0 drive my D: drive. A nice 4TB RAID 5 array for the root should fare pretty well, especially if it will run at 8000+ read. That way I won't have to think about ever losing the C: drive.
 
For the boot drive, the one and only thing I can recommend is to put it into M2_1 (CPU-provided PCIe lanes) and have it be the fastest or one of the fastest SSDs. Then if you additionally want to have drives in a RAID - I can see that for the games, perhaps - you do that for those other SSDs only. This way you can never run into trouble with the OS itself when it comes to the RAID. This is the only kind of RAID I could understand nowadays, but I'd be curious to see what kind of improvement it brings for loading times in games. From the evidence I've seen before, the improvements will be magnitudes lower than in the theoretical benchmarks.
Thank you for helping. Forgive my old-school IDE talk, I forget sometimes. I'm talking about the resource cost of the controller having to read from and write to the same drive at the same time.

Yes, and those lanes on the PCIE slots are from the CPU as well.

So I'm going for 3 PCIe Gen 5 NVMe drives in a RAID 0 array that gets all of its lanes from the CPU (12 lanes) and making that a 12TB C: drive (one that could run over 30,000 read and over 24,000 write, we'll see), or making that a D: drive...

...and making the 3 Gen 4 NVMe drives into a RAID 5 array, faster on the read, a little slower on the write, and making that my C: drive.

My question to you is which would be faster for my games? The OS on the smaller RAID 5 array and the games on the 12TB very fast RAID 0 array...

...or all of it on the one very fast array? As I think about it, everything on the very fast array sounds like the best option. Use the 4TB Gen 4 RAID 5 array as a backup for the OS. Games can be recovered; a thousand settings, program updates, and passwords can't.
 