I've worked in the IT industry for a long time and I've always been a hardware lover. I adopted SSDs with the first generation of Intel SSDs (it was a 40GB SATA II drive LOL) and I quickly saw the advantages they give you, as long as you use them for the right reasons in the right configurations. I've always been interested in disk throughput, and one of my jobs was as a storage architect for a company that created surveillance software for the government. I was tasked with building servers that could hold large amounts of geospatial data while still delivering very high performance so the data could be queried as fast as possible. It was a great job, but it had the side effect of making me always want to build the fastest storage arrays for my own use. I always had a server, so I wasn't worried about losing data on my PC, which allowed me to use RAID 0 for those arrays. I quickly adopted the combination of LSI MegaRAID adapters and multiple SSDs in RAID 0 striping configurations.
That being said, it is WELL past time to move on. NVMe drives have made such configurations obsolete. I always do a ton of research on the problem and on the hardware I select. I realize that AMD has the edge with PCIe 4.0 and a wider "pipeline" in the way their platforms are designed. However, I want to preface my remarks by saying that I mostly build Intel-based systems. I see that AMD processors currently have better performance, and when you add PCIe 4.0 on top of that, AMD seems to be the way to go until Intel catches up. For the record, I am not an "Intel-only fanboy" or an "Nvidia-only fanboy"; I just tend to use those two brands more because I know them better. I have used both AMD processors and Radeon video cards, and I own laptops with AMD hardware. For those of you who are older, there were the good old Tseng and Matrox video cards LOL, so not everything was always just Nvidia or AMD.
I just wanted to establish my background and explain my thinking before getting to the matter at hand. For Intel-based motherboards, I understand the underlying concepts well - PCIe architecture, PCIe lanes per processor, PCIe switches, bifurcation, chipset lanes, etc. What I want to discuss is building NVMe-based RAID 0 striped arrays on Intel motherboards. The reason I was going to go with Intel for now is that AMD's Threadripper processors are either unavailable or very expensive compared to Intel Core i9 processors. That matters because I need the higher lane count either platform can offer, since I usually have at least three add-in cards requiring additional PCIe lanes. I did my reading, and Intel's VROC doesn't appear to be an option for me, because a) it forces you to pay extra for a hardware key, and b) from what I'm reading you can only use Intel NVMe drives, which are nowhere near as good in price or performance. I feel sad saying that, because back in the day Intel SSDs were great in pricing, performance, and durability. It also seems like you have to use Intel NVMe drives if you want the array to be bootable.
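Just to put rough numbers on the lane problem, here's a quick back-of-the-napkin tally in Python. The add-in card list and the lane counts are hypothetical examples (exact lane counts vary by platform and generation), not a specific parts list:

Code:
# Rough PCIe lane budget - the add-in card list is a hypothetical example
# of a typical loadout for me, not a specific build.
cpu_lanes_mainstream = 16   # mainstream desktop CPUs expose 16 lanes from the CPU
cpu_lanes_hedt = 48         # HEDT parts offer far more (roughly 44-64 depending on platform)

demands = {
    "GPU": 16,                      # graphics card wants a full x16
    "add-in card #1": 8,            # hypothetical 10GbE / capture / HBA card
    "add-in card #2": 4,
    "add-in card #3": 4,
    "NVMe RAID 0 (4 drives)": 16,   # four drives at x4 each if CPU-attached
}

total = sum(demands.values())
print(f"Lanes wanted: {total} | mainstream CPU: {cpu_lanes_mainstream} | HEDT: {cpu_lanes_hedt}")
# -> Lanes wanted: 48, which is why 16 CPU lanes doesn't cut it for this kind of build.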
This leads me to my first question, which I can't pin down because different people are saying different things. Is it true that even if you purchase the key to fully enable Intel VROC, you can't create a bootable NVMe RAID 0 array with non-Intel drives on any Intel-based motherboard? Do you even need the hardware key for RAID 0, or is it only required for RAID 5, RAID 0+1, and similar configurations? I'm also hearing that Intel VROC actually slows down RAID 0 arrays and that Windows software RAID performs better. Is this true? I would expect the performance penalty to apply more to RAID 5, since there should be almost no processor overhead in plain RAID 0 striping (no parity).
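For what it's worth, here's why I expect RAID 0 to be nearly free for the host CPU: the striping itself is just address arithmetic, with no parity math anywhere. A minimal sketch in Python (the stripe size and drive count are hypothetical examples; real implementations differ in the details):

Code:
# Minimal sketch of RAID 0 address mapping (no parity), just to illustrate
# why striping itself costs the host CPU almost nothing.
STRIPE_SIZE = 128 * 1024   # hypothetical 128 KiB stripe
NUM_DRIVES = 4

def raid0_map(logical_offset: int):
    """Map a logical byte offset to (member drive index, offset on that drive)."""
    stripe_index = logical_offset // STRIPE_SIZE
    within_stripe = logical_offset % STRIPE_SIZE
    drive = stripe_index % NUM_DRIVES
    drive_offset = (stripe_index // NUM_DRIVES) * STRIPE_SIZE + within_stripe
    return drive, drive_offset

# A 1 MiB logical read fans out across all four members - pure integer math,
# no parity calculation anywhere (unlike RAID 5).
for off in range(0, 1024 * 1024, STRIPE_SIZE):
    print(off, "->", raid0_map(off))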
So I was looking at building an Intel-based setup with NVMe SSDs. However, it looks like there is a serious bottleneck, much like the old ~667 MB/s bottlenecks of years past (just using that as an example of a bottleneck). That kind of bottleneck was actually what led me to adopt RAID adapters early on in my hardware days. My thinking is that if I can't get around the limitations of processor-allocated PCIe lanes and Intel VROC, then chipset-allocated PCIe lanes are the way to go for building a bootable NVMe RAID 0 configuration. I started reading about that, but it looks like there is another roadblock. Is it true that, due to the way current Intel chipsets connect to the CPU, no matter how many NVMe SSDs you put in a RAID 0 configuration you will always hit a PCIe 3.0 x4 bottleneck? It appears to me that the chipset's uplink to the CPU is limited to the equivalent of PCIe 3.0 x4 (the good old 2015 DMI 3.0 spec), which severely limits the maximum throughput I could get from a chipset-attached RAID 0 array.
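To put a number on that ceiling, here's the quick math, assuming the DMI 3.0 link really is electrically equivalent to PCIe 3.0 x4 (which is my understanding):

Code:
# Back-of-the-envelope ceiling for a chipset-attached array behind DMI 3.0
# (treated as equivalent to PCIe 3.0 x4). Real-world numbers land a bit
# lower once protocol overhead is factored in.
GT_PER_S = 8.0            # PCIe 3.0 per-lane signaling rate (GT/s)
ENCODING = 128 / 130      # 128b/130b encoding efficiency
LANES = 4

ceiling_gbps = GT_PER_S * ENCODING * LANES / 8   # GB/s
print(f"DMI 3.0 uplink ceiling: ~{ceiling_gbps:.2f} GB/s")   # ~3.94 GB/s
# No matter how many NVMe drives hang off the chipset, the whole RAID 0
# array (plus SATA, USB, LAN, etc.) shares that one ~3.9 GB/s uplink.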
I have looked into HighPoint NVMe RAID controllers, which bypass these issues, but my problem with them is that they aren't true RAID controllers. By that I mean they don't have a dedicated RAID processor on the board; their primary function is to split an x16 slot into four x4 links. Yes, I know for a fact they can push more throughput, but the pricing is horrible when you consider that it isn't a true hardware RAID controller - it's software RAID in the vendor's driver. And they seem to have a lot of compatibility and stability issues. That is why I always loved LSI MegaRAID adapters - they are rock solid! I still have servers and computers running 9260, 9265, and 9266 adapters, and I have never had one of their adapters fail yet.
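A rough contrast of the two paths shows why those cards can push more throughput despite being software RAID (Gen3 numbers shown; Gen4 roughly doubles them):

Code:
# Why a CPU-attached x16 card sidesteps the DMI cap: four x4 links straight
# to the CPU instead of one shared x4-equivalent uplink.
PER_LANE_GEN3 = 8.0 * (128 / 130) / 8    # ~0.985 GB/s per PCIe 3.0 lane

dmi_ceiling = PER_LANE_GEN3 * 4          # ~3.9 GB/s, shared by everything on the chipset
x16_ceiling = PER_LANE_GEN3 * 16         # ~15.8 GB/s across four x4 drives

print(f"Chipset path: ~{dmi_ceiling:.1f} GB/s | bifurcated x16 card: ~{x16_ceiling:.1f} GB/s")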
I'll wait to hear the feedback on these questions, but currently I'm leaning towards waiting until Intel removes the bottleneck and implements PCIe 4.0 successfully, or waiting for the next round of AMD Threadripper processors and building on an AMD platform, since it doesn't appear to have the constraints I listed above. And yes, I do realize Intel is supposedly adding PCIe 4.0 with Rocket Lake, but that doesn't help me much since those processors don't have enough PCIe lanes. I realize they will (hopefully) be increasing the maximum number of CPU PCIe lanes, but the 16 lanes they currently offer are too few. And the old days of motherboard manufacturers using PLX switches have come to an end due to Broadcom's monopoly and price gouging of the manufacturers. I know a couple of other companies are working on their own PCIe switches and I hope they are successful. It really is shameful what Broadcom did, and the effect on motherboard manufacturers in turn limits how many lanes we have available to work with. And yes - I do realize that Broadcom bought LSI, whose boards I loved.
I also want to say that I am thankful for websites like TheSSDReview, etc. that offer articles I can read to keep up with the latest advances in hardware. I bring up that website because they have written articles about NVMe RAID adapters such as the HighPoint SSD7101 and the WD AN1500, so I am aware of that type of solution as well. I just like being able to select my own NVMe SSDs and use a device that is upgradeable. Please keep in mind that I build RAID 0 arrays for speed, not just for small-block workloads like gaming, but for video editing with very large files. I do understand when people ask why someone wants higher speeds for gaming or boot times, and yes, they are correct that there is very little perceived difference for those uses whether you have 1,000 MB/s or 10,000 MB/s. However, if you are moving large files around and working with large file sizes like I do, then the higher throughput absolutely is desirable.
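As a quick illustration of why that matters for my kind of workload (the 500 GB project size and the throughput figures are hypothetical examples):

Code:
# Quick illustration of why sequential throughput matters for large-file work.
project_gb = 500   # hypothetical pile of video project files

for label, gbps in [("single chipset-attached NVMe (~3.5 GB/s)", 3.5),
                    ("CPU-attached Gen3 x16 RAID 0 (~12 GB/s)", 12.0)]:
    print(f"{label}: ~{project_gb / gbps / 60:.1f} minutes to move {project_gb} GB")
# For game loads or boot times the difference is barely perceptible;
# for shuffling hundreds of gigabytes of footage, it isn't.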
The last question I have is for those who have built AMD-based systems with NVMe RAID 0 configurations. What are the typical speeds you are seeing, and which NVMe drives did you use? Is it just as easy to work with RAID arrays in the current AMD world?
I'd love to hear people's experiences with building NVMe SSD arrays in general, whether on Intel, AMD, etc., or using adapters that carry NVMe drives, like the Gigabyte GP-ASACNE2200TTTDA card, HighPoint cards, etc.