PCIe bug with M2_1 slot on X870E Tomahawk

Status
Not open for further replies.

jlsmith.01712b902af · New member · Joined: Feb 16, 2025 · Messages: 14
So, the M.2_1 slot keeps reverting to PCIe 1.0 x4 on a cold boot, and sometimes even on a reboot from within Windows. Manually setting it to Gen 4 or Gen 5 in the BIOS gets it running at full speed until the next shutdown/cold boot, then it's back to 1.0 x4 again. Basically, I have to keep switching between the Gen 4 and Gen 5 settings on every boot or it's stuck at 1.0 x4. Auto is also broken. Samsung 990 Evo Plus 1TB.

I've seen another user on Reddit with the exact same issue. Same board, different drive. Anyone using this board, check your drive speed!
 
Yeah, it irritates me when you know full well it's 100% a software bug or hardware flaw, especially when it's backed up by other users, yet first-line support still has to ask questions like "have you turned it off and on?" and "is the storage device connected correctly?"
 
X870E Tomahawk reporting in. Same issue. Latest firmware installed. Only getting 1.0 x4 speeds on a 990 Pro in M.2_1.

Support responds with "gee golly, never seen that one before" and proceeds to offer to RMA... <sigh>
 
New X870E Tomahawk BIOS update released 3/5/2025 - 7E59v2A33 - no change to the situation. M.2_1 still registers as 1x4 no matter what I try. Multiple reboots and it never changes.

For what it's worth, I also updated to the latest chipset driver 7.02.13.148 (dated 2/25/2025) available from AMD's website. (note: this driver is not currently showing in MSI Center, so you have to go to AMD directly)

The wait and frustration continues...
 
Thanks for confirming; I tried it too and no change either. I took further screenshots when it drops to 1.0 x4, along with benchmarked read/write speeds, and the same again when it ran at 5.0 x4, and sent them to MSI ticket support to show the big performance drop.

They just replied that they are currently testing it.
 
I think I managed to fix (or work around) the M.2_1 low link speed issue. The issue we're dealing with here is the PCIe link between the CPU and SSD dropping to a lower speed (from PCIe 5.0 to 3.0, or even 1.0), not a narrower width (it stays x4). My gut feeling is that at some point either the CPU/NVMe controller on the hardware side, or the driver/OS on the software side, requests a lower power state for the link but then can't recover to full speed under load. So I focused on preventing the link from dropping to a lower speed to save power in the first place.

The PCIe spec allows for exactly such a state, L0s, where the link is allowed to drop to a lower speed whenever it is idle. Most GPUs handle this fine, but I'm not sure whether there is a bug in the X870E BIOS implementation or in the CPU's PCIe implementation when working with NVMe drives (remember, NVMe drives are PCIe devices). There are also deeper low-power PCIe states, called L1, where the device goes into a deeper sleep state and the width drops as well; these are more useful on laptops to save every ounce of battery possible, and I've not really seen them used on desktops. There are also timeouts at the OS level that trigger these states once the respective idle timeouts are met. Some relevant info here: https://nvmexpress.org/resource/technology-power-features/
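To put the drop in perspective, here is a rough back-of-the-envelope sketch (my own numbers from the PCIe spec's line rates and encodings, not from any vendor tool) of the theoretical x4 bandwidth per generation. It shows why a Gen5 drive stuck at 1.0 x4 benchmarks at roughly a sixteenth of its rated speed:

```python
# Rough theoretical payload bandwidth of a PCIe x4 link per generation,
# before protocol overhead. Gen1/2 use 8b/10b encoding; Gen3+ use 128b/130b.
GENS = {
    1: (2.5, 8 / 10),     # line rate in GT/s per lane, encoding efficiency
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def x4_bandwidth_gbs(gen: int, lanes: int = 4) -> float:
    """Theoretical bandwidth in GB/s for a link of the given generation/width."""
    gts, encoding = GENS[gen]
    return gts * encoding * lanes / 8  # divide by 8 bits per byte

for gen in GENS:
    print(f"PCIe {gen}.0 x4: ~{x4_bandwidth_gbs(gen):.1f} GB/s")
# A Gen1 x4 link tops out around 1 GB/s, versus ~15.8 GB/s at Gen5 x4,
# which matches the huge benchmark deltas people are posting here.
```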

Here's how I disabled these power management features:

1) Download the Windows power plan settings explorer utility from the 1st post here:
https://forums.guru3d.com/threads/windows-power-plan-settings-explorer-utility.416058/

2) You'll see the app (thanks a ton to whoever wrote this) is divided into sections. The settings are in the top pane, and the values you set for each power plan are in the bottom pane. Highlight the setting you want to change on top, then change the values below, and don't forget to Apply after every change. Let's first tackle the Hard disk section. Here, set:
a) AHCI Link Power Management - HIPM/DIPM (i.e. host-initiated / device-initiated) - set this to Active for all the power profiles you use
b) Maximum power level - set this to 100 for all the power profiles you use
c) NVMe NOPPME - set this to ON for all the power profiles you use
d) Other than the above 3, there are 7 more settings (2x NVMe timeouts, 2x NVMe tolerances, 1x AHCI Link power, 1x Hard disk burst, 1x Turn off hard disk) - for all 7 of these, set the values to 0 in all the power profiles you use

3) Next scroll down to the PCI Express section. There is only one setting here, Link State Power Management - set this to Off for all the power profiles you use
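If you'd rather not use the GUI tool for step 3, the PCI Express setting can (I believe) also be flipped from an elevated prompt with powercfg, using Windows' built-in SUB_PCIEXPRESS and ASPM aliases. This is just a sketch for the currently active plan; double-check the value in the power plan UI afterwards:

```shell
# Turn "Link state power management" Off (index 0) for the active power plan,
# on both AC and DC, then re-apply the plan. Run from an elevated prompt.
powercfg /setacvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 0
powercfg /setdcvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 0
powercfg /setactive SCHEME_CURRENT

# Verify: the AC/DC setting index should now read 0x00000000 (Off)
powercfg /query SCHEME_CURRENT SUB_PCIEXPRESS ASPM
```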

Do a cold boot - shut down the system, pull the power, wait 30 seconds - this is a must, to reset any stale state the link may be stuck in.
Power on and use your NVMe provider tool or CrystalDiskInfo to check the link speed.

I did 20 warm reboots and 20 shutdown/power-ons - my Crucial T705 stayed at PCIe 5.0 x4 the whole time, and sequential reads/writes were consistently >10 GB/s.

I've configured my system the way I want in the High performance and Ultimate performance power profiles - the only two profiles I use - but I applied the changes to Balanced as well (didn't test Balanced, though), so I recommend staying on either High performance or Ultimate until you have this working too.

Hope this helps you as well and feel free to share, cheers
 

Attachments

  • Screenshot 2025-03-06 231331.png (22.9 KB)
  • Screenshot 2025-03-06 231444.png (23.9 KB)
This is an interesting observation - it does seem that the drive/slot goes into low power mode and then can't recover, which is why mine mainly drops to Gen 1 after waking from sleep.
@Krisk7157302df - saying that they're testing it could be considered progress!
 
It can't be related to a power state, though, can it? It happens immediately at boot, long before the system is ever idle.
 
It mostly happens to me when I wake the PC from sleep, but it has also done it a couple of times from a restart.
In my case, it's 100% stuck at 1x4 regardless of whether it's a cold boot, a Windows restart, or waking from sleep. I have toggled the BIOS setting between Auto, 4.0, and 5.0 with no change at all in performance according to Samsung Magician and CrystalDiskMark.
 
Nope. I tried that, and after a single restart it went to PCIe 3.0 x4 mode. Then I tried putting my PC to sleep, and it actually woke up in 5.0 mode. But then I put it to sleep a second time, and it woke up in 1.0 mode that time. Maybe there is something else in there we can fiddle with to make it work, though?
 
Is it possible your success is a combination of multiple changes? I tried what you suggested here and it did not help. Maybe a combination of that plus some BIOS settings? or is there anything else you could have tweaked but missed it in your write up?
 

Did you do a cold boot after the OS changes, to reset any stale state the link might be stuck in? (Shut down, pull the power, wait 30s, switch on.)
A warm restart after making the changes didn't work for me either, until I did a cold boot.

Also hope you've made the changes to either the High performance or Ultimate performance profiles, and you're on either of the 2. I did not test Balanced.

In the BIOS, other than a bunch of settings related to PBO and memory overclocking, I only have:
HD audio: Disabled
WiFi/BT: WiFi only
XHCI hand-off: Disabled
Legacy USB: Auto
Integ. Graphics: Disabled
SVM: Disabled
TSME: Disabled
MSI driver: Disabled
Security device: Disabled
Default home: Advanced
Resume by USB: Enabled
 
In my case, it's 100% stuck at 1x4 regardless if it's a cold boot, Windows restart, or waking from sleep. […]
That's unlucky, at least mine is at the correct link speed SOME of the time. Normally it's ok if I sleep/wake the PC.
I'm going to switch my system drive to M2_2 slot and be done with it.

This is a serious and widespread problem that MSI needs to address very soon.
 
Same issue here with my 990 Pro and X870E Tomahawk.
No luck with the new BIOS either.
Opened a ticket to add to the volume, but for now they've just made some inquiries about my system and components.
I tried swapping it with a Kingston Gen4 drive in M2_4, and they both worked at full speed.
Swapped them back, and the 990 Pro was back to Gen1 speed...
So much for going for the top-tier mobo 😤
 
It seems to be a problem with all the MSI X870E boards; I have the Godlike and have the same problem 😲
 
I think we can rule out the SSDs - most on this forum with the problem have a 990 Pro, but not all.
Also the CPU - most have a 9800X3D, but there are exceptions, so there's no pattern tied to specific hardware/devices.
 
I have been disabling all power saving by default for years; it's one of the first things I do when I install Windows.


Attachment: 1741358383419.png
 