PCIe bug with M.2_1 slot on X870E Tomahawk

jlsmith.01712b902af

So, the M.2_1 slot keeps reverting to PCIe 1.0 x4 on a cold boot, and sometimes even on a reboot from within Windows. Manually setting it to Gen 4 or Gen 5 in the BIOS gets it running at full speed until the next shutdown/cold boot, then it's back to 1.0 x4 again. Basically, I have to keep switching between the Gen 4 and Gen 5 settings on every boot or it's stuck at 1.0 x4. Auto is also broken. Samsung 990 EVO Plus 1TB.

I've seen another user on Reddit with the exact same issue. Same board, different drive. Anyone using this board, check your drive speed!
 
So I have been trying to find some way of tracking hardware revisions, to see whether this is a physical issue and, if MSI actually fixes it, how we would know which boards have the updated hardware. They sure as hell don't seem transparent enough to actually tell us. I think I found something through a PowerShell command: the documentation describes it as reporting a motherboard's "version of the physical element", where a physical element is "any component of a system that has a distinct physical identity", so this might tell us the hardware revision. I stand ready to be corrected if anyone knows more than me here.

Anyway, can you tell us what this reports under "Version" if you enter this into PowerShell?
gwmi win32_baseboard | FL Product,Manufacturer,SerialNumber,Version

Mine reads "Version 1.0" and I have the problem. I wanted to use this on more recently manufactured boards to see if that number ever changes to a 2.0, But since you think your board works, you have me curious what yours reports for that too.

Also, could you please tell us what BIOS version you are using and if you changed any settings?
Sure thing, it's 1.0 too:
1747190505482.png

I started with BIOS A22 or something like that and updated as soon as new versions came out, in hopes that one of them would somehow fix the DD3 issue, so now I am on A34. As for BIOS settings, I'll try to save my profile and see if it's human-readable, but overall nothing fancy.


UPD: OK, the file is not human-readable, so here's what I did: I reset to defaults and asked it to show the changes:

1747191845013.png


These are the changes from my current settings (on the left) to default settings (on the right).
 
Can confirm I'm having issues with M.2_1

Manufacture date B2412
Crucial T705 4TB Gen5 SSD
AMD 9950X CPU

All drivers, firmware, BIOS, etc. are updated and current (I just updated my CPU drivers today as well).

Here are 4 boots I just tested, 2 cold and 2 warm. What's odd is that the 4th boot (warm) says its current mode is Gen4, but its R/W speeds in CrystalDiskMark are Gen5. The 1st cold boot said current mode Gen5, but R/W speeds were Gen4. In 5 boots, only once did it give Gen1 speeds even though it said the current mode was Gen5 (not pictured).

Clearly something is wrong, and MSI appears to be aware of it since they released a BIOS fix for the Tomahawk. I've opened a support ticket and am waiting on a response:
 

Attachments
  • InCollage_20250515_072659214(1).jpg (836.7 KB)

Welcome to the party, and MSI's crap show.
While some of us are still monitoring this thread for possible answers, I think some have just given up, some are just using their other slots and accepting the limit of a single Gen5 drive, and some have changed to brands that actually work.
I am closing in on 90 days since my original ticket to MSI regarding this issue. So, here's what you can probably expect after submitting your ticket:
1. A follow-up email asking the basic questions of did you do this, did you do that.
2. After you respond, another email saying their engineering team is looking into it.
3. You wait, you follow up, and they respond "we cannot duplicate the issue", all while you have probably, at this point, linked this thread in your responses to show them others having the same issue.
4. They respond, "You will need to RMA your board", without giving a reason why or saying whether it will fix the problem they can't seem to duplicate.

I've got another thread running here asking whether anyone has done the RMA yet and what the results were. Sadly, I don't think anyone has done that yet. So far, though, the commonality I have seen is the B2412 manufacture date. Maybe others can also check this.
 
So I found this article from October last year, Crucial is saying it's actually an issue with AMD, not MSI

 
I've replaced my Crucial drive with a Samsung 9100 Pro 4TB. The problem still exists.
 
The article didn't say it was an issue with Crucial; rather, it's AMD's fault:

While not a function of the chipset itself, it turns out that there is a flaw in the way AMD designed the PCI-Express I/O of the X670E platform, specifically the PCIe Gen 5-capable M.2 NVMe interfaces that are attached to the CPU, causing them to drop in speeds to Gen 1. This problem isn't surfacing on the AMD B650 or the B650E, or even the X670—it is oddly specific to the X670E, despite the Gen 5 M.2 NVMe slots not being wired to the chipset.

While AMD made no public statement on the technical aspect of the flaw, if we were to guess, this could be a faulty implementation of PCIe ASPM (active state power management) at the firmware level, which reduces the speed of the PCIe link layer to reduce power.
 
I don't buy that; I think this problem is exclusive to MSI. I am not aware of any other motherboard vendor having this issue on the X870E platform. And when that article was written about the X670E platform, two vendors were suspected of being affected, ASUS and MSI, which I assume is how they came to the conclusion that it's likely an AMD problem. I spent some time months ago trying to track down all of their sources, and every case of this on an ASUS motherboard that I could find fell through and turned out to be something else, or nothing at all. All credible reports of this bug that I could find were from MSI only. I really think MSI is just releasing shitty BIOS files.
 
Ahh gotcha. Yeah I heard that there was a BETA BIOS released for the Tomahawk that resolved this? TBH I'm just happy to see the enthusiast community attacking this head on and helping each other when the big corporations are not. Gives me hope for the future.

Also heard of this temporary fix from Reddit but I haven't tested it yet:

 

The 2A41 beta BIOS did resolve the issue for the Tomahawk. The fix has been included in the official BIOS since 2A52. Since I built my PC and installed the beta BIOS two months ago, I have never had the issue. Not even once.
 
I think I managed to fix (or work around) the M.2_1 low link speed issue. The issue we're dealing with here is the PCIe link between the CPU and SSD dropping to a lower speed (from PCIe 5.0 to 3.0 or even 1.0), not a narrower width (it stays x4). My gut feeling is that at some point either the CPU / NVMe controller on the HW side, or the driver / OS on the SW side, initiates a request to enter a lower power state for the link, but then isn't able to recover to full speed under load. So I focused on preventing the link from dropping to a lower speed to save power altogether.

The PCIe spec allows for exactly such a state, called L0s, where the link is allowed to drop to a lower speed whenever it is idle. Most GPUs work with this very well, but I'm not sure whether there is a bug in the X870E BIOS implementation or in the CPU's PCIe implementation when working with NVMe drives (remember, NVMe drives are PCIe devices). There are other low-power PCIe states, called L1, where the device goes into a deeper sleep and the width also drops; these are more useful on laptops to save every ounce of battery possible, but I've not really seen them implemented on desktops. There are also timeouts at the OS level that trigger these states once the respective idle timeouts are met. Some relevant info here: https://nvmexpress.org/resource/technology-power-features/
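(To see what your active power plan is currently set to before changing anything, the stock powercfg query dumps the PCI Express subgroup; the Link State Power Management values are shown in hex at the end, where 0 means Off:)

# Show the active plan's PCI Express > Link State Power Management setting.
powercfg /q SCHEME_CURRENT SUB_PCIEXPRESS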

Here's how I disabled these power management features:

1) Download the Windows power plan settings explorer utility from the 1st post here:
https://forums.guru3d.com/threads/windows-power-plan-settings-explorer-utility.416058/

2) You'll see the app (thanks a ton to whoever wrote this) is divided into sections. The settings are in the top section, and the values you set for each of the different power plans are in the bottom section. Highlight the section you want to change on top, then change the values below, and don't forget to apply after every set. Let's first attack the Hard disk section. Here, set:
a) AHCI Link Power Management - HIPM/DIPM (i.e. host-initiated or device-initiated) - set this to Active for the power profiles you use
b) Maximum power level - set this to 100 for all the power profiles you use
c) NVMe NOPPME - set this to ON for all the power profiles you use
d) Other than the above 3, there are 7 more settings (2x NVMe timeouts, 2x NVMe tolerances, 1x AHCI Link power, 1x Hard disk burst, 1x Turn off hard disk) - set the values for all 7 of these to 0 in all the power profiles you use

3) Next, scroll down to the PCI Express section. There is only one setting here, called Link State Power Management - set this to OFF for all the power profiles you use.
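If you'd rather script things from a PowerShell prompt than click through the utility, the built-in powercfg aliases should cover this PCIe setting and the "Turn off hard disk after" timer; the hidden NVMe/AHCI settings have no aliases, so the explorer utility above is still the easiest way to reach those. This is only a rough sketch along those lines:

# Equivalent of step 3: PCI Express > Link State Power Management = Off (value 0), on AC and battery.
powercfg /setacvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 0
powercfg /setdcvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 0
# Hard disk > "Turn off hard disk after" = 0 (never).
powercfg /setacvalueindex SCHEME_CURRENT SUB_DISK DISKIDLE 0
powercfg /setdcvalueindex SCHEME_CURRENT SUB_DISK DISKIDLE 0
# Re-apply the active scheme so the new values take effect.
powercfg /setactive SCHEME_CURRENT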

Do a cold boot - Shut down the system, pull power, wait 30s - this is a must to reset any stale states the link may be stuck in.
Power on and use your NVMe provider tool or CrystalDiskInfo to check the link speed.
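If you don't want to install anything to check it, PowerShell can also read the negotiated link straight from the controller's device properties. This is just a sketch and assumes the DEVPKEY_PciDevice_* property keys resolve by name on your Windows build; the speed comes back as the PCIe encoding (3 = Gen3, 4 = Gen4, 5 = Gen5), not a throughput figure:

# List present storage controllers with their current PCIe link speed and width.
Get-PnpDevice -Class SCSIAdapter -PresentOnly | ForEach-Object {
    [pscustomobject]@{
        Controller = $_.FriendlyName
        LinkSpeed  = ($_ | Get-PnpDeviceProperty -KeyName DEVPKEY_PciDevice_CurrentLinkSpeed).Data
        LinkWidth  = ($_ | Get-PnpDeviceProperty -KeyName DEVPKEY_PciDevice_CurrentLinkWidth).Data
    }
}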

I did 20 warm reboots and 20 shutdowns/power-ons, and my Crucial T705 stayed at PCIe 5.0 x4 the whole time; sequential reads/writes were consistently >10 GB/s.

I've configured my system the way I want it in the High performance and Ultimate performance power profiles, which are the only two profiles I use, but I applied the changes to Balanced as well (I didn't test Balanced, though), so I recommend you stay on either the High performance or Ultimate profile until you have this working as well.

Hope this helps you as well and feel free to share, cheers
Followed this to a T; after a cold boot it's saying I'm at PCIe 4.0 but only getting 3700 MB/s speeds. Crucial T705 4TB Gen5 SSD installed in M.2_1 on a Godlike X870E.
 

Attachments
  • 20250516_065201.jpg (533.9 KB)

After numerous reboots (warm, cold, pulling power, waiting), still the same outcome. I haven't hit Gen5 speeds once today, but I was at least getting them yesterday after warm reboots. The results today have been all over the place: it says it's in PCIe 1.0 but gives me PCIe 3.0 speeds, and it's never faster than PCIe 4.0. Some warm reboots have also gotten stuck on error code 0d, which is the first time that has happened as well.

I'm getting ready to chuck this MOBO out of a window, or just move the SSD to another slot. Do M.2_4 and M.2_5 share lanes with anything? The manual does not have an adequate description.
 

Attachments
  • InCollage_20250516_080525067.jpg (1.1 MB)

Reverted back to all of the default settings in the Windows power plan. Powered down, unplugged power, waited 1 min, plugged back in; the cold boot got Gen1 speeds, then a restart/warm boot got my first Gen5 speeds today.

I could not get Gen5 speeds even once while following your guidelines.
 

Attachments
  • 20250516_083723.jpg (527.7 KB)

Built a system a couple of weeks ago, instantly flashed the A52 BIOS, and never had an issue with the M.2_1 slot.
However, a couple of times I've had the SSD disappear from M.2_4 after a normal reboot.
It resolved after a full shutdown and power-on, and the drive never disappears once it's been detected.
I've since disabled ASPM and am going to keep watching.
Anyone aware of what the most recent BIOS does?
 
For which MOBO?
 
Ahhh yes the Tomahawk issue has been resolved it seems. But the Godlike still has not been addressed and MSI keeps telling me I need to RMA my board.
I have now tried 5 different Godlike boards and all of them have the issue, plus all the people on here and Reddit with the same problem. Unless they actually changed something on the RMA boards, an RMA will not do anything. I really wish someone would just sue them already. I am ready to help if people want to get organized for that.
 
Yeah, it's just unacceptable for a $1400 MOBO. I was thinking class-action lawsuit as well. I'm also going to see how to contact Steve from Gamers Nexus. I'm honestly shocked none of the big YouTube channels have picked up on this yet.
 