Xpander-Z card in the PCI_E1 slot?

ladamyr153802d1 · New member · Joined May 16, 2025 · Messages: 22
Since I'm using a Zotac 4090 GPU, and since Gen 5.0 x16 is not that much better than x8 for a Gen 4 card, should I try it in the PCI_E2 slot and put the Xpander-Z card in the PCI_E1 slot so I can use it to run 2 M.2 2280 NVMe PCIe 5.0 x4 drives? They should both run at 5.0 x4 there, correct? Eight lanes to the card when the #2 slot is using eight, on the MEG X670E ACE, right?
I could run it as a striped drive too. Will it run twice as fast?
How about a RAID 0 array across all three Gen 5 drives, the two on the card and the one in the M.2_1 5.0 x4 slot on the motherboard? Will they run three times as fast?
Anybody want to take bets? I'll be doing this soon: First a Gen 5 for the board, then a 2nd Gen 5 and try it on the card with the first. If I get two Gen 5.0 x4 drives on the Xpander-Z card working at 5.0 x4, I'll first make it a RAID 0 array and test it to see how fast it is. Then go for a 3rd and see how fast of a boot drive I can get.
If this works I'll have a very fast 12 TB boot drive. And with the remaining slots on the M/B I can get a pretty fast Gen 4.0 RAID 0 array as well, stuff that doesn't need to be as fast as Gen 5, and certainly much faster than SATA 3.
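For a rough sense of the best case: striping scales sequential throughput roughly linearly until something else (CPU, lanes, the RAID layer) becomes the bottleneck. A quick back-of-envelope sketch in Python, using a hypothetical ~10,000 MB/s per-drive Gen 5 figure:

```python
# Best-case RAID 0 sequential-read estimate. Assumes perfect striping with
# no controller/CPU overhead -- real arrays usually land below this ceiling.

def raid0_read_estimate(per_drive_mbps: float, drives: int) -> float:
    """Theoretical ceiling for an n-drive stripe: n times one drive."""
    return per_drive_mbps * drives

# Hypothetical per-drive figure for a Gen 5 x4 NVMe (~10,000 MB/s).
print(raid0_read_estimate(10_000, 2))  # two drives on the Xpander-Z -> 20000
print(raid0_read_estimate(10_000, 3))  # plus the M.2_1 drive -> 30000
```

In practice the stripe size, the RAID driver, and queue depth all eat into this, so treat the multiple as an upper bound rather than a prediction.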
 
Last edited:
Are all your games Online? If so, the slowest thing will be the Internet connection.
I do my backups to an external hard drive so that it can be removed and kept safe from anything that might happen to the machine.
 
I'm not entirely sure I can make more than one RAID array with the NVMe drives. As I recall, the BIOS just gives me the option to put all of them in AHCI mode or RAID mode: no distinction of which of them are in or out, nor any option of a RAID #1 and a RAID #2. If I can only make one RAID array, it will have to be the single Gen 5 array. I could live with that.

I could try to just install the 3 Gen 5 drives, set RAID mode in the BIOS, make the RAID 0 array, and install the OS. Then install the 3 Gen 4 drives and see if the OS sees them. If it does, I can remove the 3 Gen 5 drives, make a RAID 5 array with the Gen 4s, and install the OS again. Then, in the BIOS, see if both arrays are shown and, if they are, choose the Gen 5s as the boot drive.

That may work. This board surprised me already when it made a RAID 10 array out of those 6 Kingstons.
 
None of my games are on-line and thanks for reminding me to get that hot swap bay, I've been meaning to do that, now's the time.
 
I see from the block diagram on page 573 of the manual for this board that my plan to use the 8600G APU with the M.2_2, 3 & 4 slots on the M/B, along with the two drives currently on the Xpander-Z card, would work. Those five work as the RAID 10 drive using 8 lanes, a second Xpander-Z card would use 8 more, the M.2_1 (Gen 5) would use 4, and the USB-C for the on-board video would use 4, for a total of the 24 lanes available.

If only I had a second card.

It also encourages me to think that the Gen 5 RAID 0 array will be very fast indeed. The original RAID 10 array I made two years ago was 27.5% faster than the top rated speed of those Kingstons, even though only three of them were routed through the CPU, with 1 going through chipset A and 2 through chipset B. With the three Gen 5 drives in RAID 0, all going through the CPU, I expect a read figure over 30,000 to be very possible now.
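The lane budget described above can be sanity-checked with a few lines. The allocation below just mirrors the post's reading of the block diagram (8 + 8 + 4 + 4), not anything independently verified against the manual:

```python
# CPU lane budget from the post: 24 usable lanes on this platform.
# The per-slot numbers are the poster's interpretation of the block diagram.

lanes = {
    "M.2_2/3/4 + Xpander-Z (RAID 10 set)": 8,
    "second Xpander-Z card": 8,
    "M.2_1 (Gen 5 x4)": 4,
    "USB-C on-board video": 4,
}

total = sum(lanes.values())
print(total)  # 24
assert total <= 24, "allocation exceeds the available CPU lanes"
```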
 
Who'd a thunk it?
Just out of curiosity, fueled by boredom, I ran CrystalDiskMark on this broken RAID 10 drive that I have in this ITX board I'm using until the parts come in. This is WAAY better than it was doing in the ACE, and I'm dumbfounded at that write figure.
Is that some sort of mistake because CDM is running on a broken array, one with a disk missing? I mean, a 50% increase over stock is understandable as a read increase, and on this board the resource allocation is entirely different, but a 154% increase in write speed?
 

Attachments

  • CDM scan.png (388.4 KB)
I just had a brainstorm. No need to fret about whether or not to make the super fast array the C: drive if I can make two arrays that the BIOS will see: one, the very fast 12TB Gen 5.0 x4 striped array, and the other, the 4TB Gen 4.0 x4 RAID 5 array. Both of them will be made using the Windows installation thumb drive, so both of them will be OS drives. That way I can test to see which is better: use the boot menu to boot into the RAID 5 array and see how fast the game that's on the Gen 5.0 x4 array runs, then restart, boot into the striped array, and see if the game runs better.
My hope is that it won't make a difference, or won't be noticeable. If so, then I'll just make the RAID 5 array the OS and clean up the Gen 5.0 x4 for all my high-end games. With the C: drive on a RAID 5 array there's virtually no chance of ever losing data, so no backup is needed, and at 4TB it will be big enough for Windows 23 in 2050.
 
Just be aware that RAID 5 and other variants that calculate parity to recover data in case a drive fails will be hard on TBW endurance and degrade the drives at a faster rate.
 
With an expected lifetime of 1.8 million hours apiece on those Kingstons, I'm not too worried about a few extra megabytes per year written as parity degrading the SSDs much.
And you remind me of the beauty of a RAID 5 array: I can watch the condition of the array over time and, when it gets "iffy," simply take one of the drives out, install a new one in its place, let the array rebuild itself, then repeat the process with the other two drives and wind up with a totally new array. All data rewritten on totally new drives.
 
You are mistaking two different parameters. 1.8M hours is Mean Time Between Failures (MTBF), which only means that 50% of drives should work that long (not counting endurance wear). What you should be looking at is endurance, or how many TB can be written to the drive. Standard endurance is 600 TBW per TB, making it 2400 TBW for a 4TB drive.
Some NVMe drives have less, like the Kingston NV3 at only 1280 TBW; some, like the Seagate FireCuda 530R, have 5050 TBW for a 4TB drive.

This is also why you don't want to keep a drive full: it limits the drive's ability to distribute wear over empty cells instead of repeatedly overwriting the same cells and making them fail early.
When your endurance gets depleted, there are two situations: cells have a higher chance of failing, or the drive switches to read-only mode, giving you time to copy data to a new drive.
If you keep writing continuously to a 4TB drive with 2400 TBW endurance at just 50% speed, the drive won't last longer than 28 days; at full speed, just 14 days.
Only when you use a drive with just small writes does MTBF apply.

Just for illustration: we had a PB-sized enterprise disk array for SAP and databases, and it was common to keep replacing some of the drives every week after a few months of usage, and those have much higher, and I mean WAY WAY higher, endurance, 10-30 times higher actually.
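The 14-day and 28-day figures above follow from simple division, assuming a sustained full-speed write rate of about 2 GB/s (an assumption, since the post doesn't state the speed it used). A sketch:

```python
# Days until a drive's rated endurance is exhausted under a constant write
# load. The 2 GB/s "full speed" figure is an assumption chosen because it
# reproduces the 14- and 28-day numbers quoted above.

def days_until_tbw_exhausted(tbw_tb: float, write_gb_per_s: float) -> float:
    seconds = (tbw_tb * 1000) / write_gb_per_s  # TB of endurance -> GB -> seconds
    return seconds / 86_400                     # seconds -> days

print(round(days_until_tbw_exhausted(2400, 2.0), 1))  # full speed: 13.9
print(round(days_until_tbw_exhausted(2400, 1.0), 1))  # 50% speed: 27.8
```

Of course no desktop workload writes flat-out around the clock; the point is only that under heavy writes, endurance rather than MTBF is the binding limit.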

1747689825507.png
 
I figured you knew your stuff with that list of parts in your rig. I'll update mine soon.

There are 3 Gen 4 NVMe drives going into the RAID 5 array, 2TB each, making for a total of 4TB usable in the array. For RAID 5, you are correct, they will "wear out" faster than if not in an array. Oh, and the Kingstons in this array are rated 2.0 PBW, that is, 1000 TBW per terabyte.

I understand how drives in the environment you describe are going to be constantly written to with large files, but this is a gaming machine. I fully expect to get 5 years out of this thing, and maybe, just maybe, the SSD monitor will say the array is getting tired and I'll decide to renew it; or maybe I'll be building a PCIe Gen 7 machine by then and all this will seem sophomoric to us. I have a 120 GB Samsung SSD that's going on 20 years old and it still runs fine. It's been in 2 machines over the years and I use it now as a hot-swap drive to transfer large files from one computer to another, and they are large files, but it gets 20 GB or more transferred every month or two, not every 20 minutes.

I'm not worried about this RAID 5 array I'm gonna build. If it is that fragile, I'll soon see and adjust accordingly, but I seriously doubt that.
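For reference, the capacity and write-overhead arithmetic behind a 3-drive RAID 5 is straightforward; a minimal sketch:

```python
# RAID 5 with n drives: one drive's worth of capacity goes to parity, and
# a full-stripe write touches n drives for n-1 drives' worth of data.

def raid5_usable_tb(drive_tb: float, drives: int) -> float:
    """Usable capacity: (n - 1) drives of data, 1 drive-equivalent of parity."""
    return drive_tb * (drives - 1)

def raid5_full_stripe_write_factor(drives: int) -> float:
    """Raw bytes written per logical byte, for full-stripe writes."""
    return drives / (drives - 1)

print(raid5_usable_tb(2, 3))               # 4 TB usable from 3x 2TB
print(raid5_full_stripe_write_factor(3))   # 1.5x raw writes per logical write
```

Small random writes are worse (a read-modify-write of both data and parity), which is where the extra wear comes from, but for a mostly-read game library the 1.5x full-stripe factor is closer to reality.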
 
Parts came in today: 1 MSI MEG X670E ACE and 1 Team Group T-Force 4TB Gen 5 NVMe M.2 drive. I didn't realize the 8600G I was using won't run PCIe Gen 5, so I had to put my 7950X3D back in along with my Noctua NH-D15S chromax.black cooler. I tried to run on the 8600G, but CrystalDiskMark showed the Team Group running 6800 and I realized I hadn't even looked to see if the APU ran PCIe Gen 5. It don't.

So after an hour or two I had the front-line hardware back in and I'm showing the results now. The speed is a bit disappointing, but the temperature is really good: I took the little radiator off and am cooling the drive with the ACE, as the board is designed to do, and it's doing a good job.

When the array is finished, each drive should be handling a third of the load since it's a striped array, so cooling is not going to be an issue.

I'm gonna have two, maybe three of these little coolers left over after all is said and done. I'm sure the Xpander-Z card will keep the drives cool, and since I already have one TG coming, I'll look for one without the cooler and get it if it's cheaper. If not, I still have a use for them: I'll put at least two Peltiers on the inside of that Streacom "cube" and transport some heat out of it. Little A/C units is what they'll be, and I'll control them using the fan headers on my new ITX. I'll switch the headers to voltage mode and vary the voltage so they don't ice up. It will take a bit of trial and error to find the right voltage, testing to see if there's a drip inside the case, seeing how much A/C I can get away with.
CDI.png
CDM.png
 
Almost forgot. I got a bit discouraged when I discovered my plan to make two arrays using the 8600G processor was a bust, so I sat and cogitated a bit and decided to just see if I'd hit the lottery by disconnecting the Xpander-Z card from the M/B. Well, I did hit the lottery, because I am typing this on three drives of my 6-drive RAID 10 array right this minute.
I have the Zotac in the PCI_E2 slot, 1 Gen 5 4TB drive in the M.2_1 slot, and 3 Gen 4 2TB drives (the 3 with the "blue stripe") in the M.2_2, 3 & 4 slots. When the next TG comes in, I'll put it and the one I have in the Xpander-Z card and see if I can make an array with them using the Windows installation thumb drive I have. True, it will be an OS drive, and I'll erase it after I get the third and make another. I'll need it for a little while to erase and then make a RAID 5 array from the 3 Gen 4s, but after that is done, the last thing is to test which is faster for my games: booting into the RAID 5 array or the faster Gen 5 array.
 