I think I managed to fix (or work around) the M2_1 low link speed issue. To be clear, the problem is the PCIe link between the CPU and the SSD dropping to a lower speed (from PCIe 5.0 down to 3.0, or even 1.0), not a narrower width (it stays at x4). My gut feeling was that at some point either the hardware (the CPU or the NVMe controller) or the software (the driver or the OS) requests a lower-power state for the link, and the link then fails to train back up to full speed under load. So I focused on preventing the link from dropping to a lower speed to save power in the first place.
The PCIe spec allows for exactly such low-power link states under ASPM (Active State Power Management): L0s, where an idle link drops into a brief standby, and L1, a deeper sleep state the link has to fully retrain out of. L1 is mainly useful on laptops, to save every ounce of battery possible; I've not really seen it do much good on desktops. Most GPUs handle these states very well, but I suspect there is a bug either in the X870E BIOS implementation or in the CPU's PCIe implementation when working with NVMe drives (remember, NVMe drives are PCIe devices). There are also timeouts at the OS level that trigger these states once the respective idle periods are met. Some relevant info here:
https://nvmexpress.org/resource/technology-power-features/
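Before changing anything, you can dump what the active plan currently has configured for these idle timeouts. A minimal sketch using powercfg, which ships with Windows (sub_disk and sub_pciexpress are built-in aliases; /qh is an undocumented but widely used variant of /query that also lists hidden settings):

    # Run from an elevated PowerShell prompt.
    # List the alias names powercfg understands (SUB_DISK, SUB_PCIEXPRESS, ASPM, ...)
    powercfg /aliases

    # Dump the Hard disk / NVMe settings of the active plan, hidden ones included
    powercfg /qh scheme_current sub_disk

    # Dump the PCI Express "Link State Power Management" setting of the active plan
    powercfg /query scheme_current sub_pciexpress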
Here's how I disabled these power management features:
1) Download the Windows power plan settings explorer utility from the 1st post here:
https://forums.guru3d.com/threads/windows-power-plan-settings-explorer-utility.416058/
2) You'll see the app (thanks a ton to whoever wrote this) is divided into two sections: the settings are listed in the top section, and the values you set for each power plan are in the bottom section. Highlight the setting you want to change on top, change the values below, and don't forget to hit Apply after every change. Let's first attack the Hard disk section. Here, set:
a) AHCI Link Power Management - HIPM/DIPM (i.e., host-initiated or device-initiated link power management) - set this to Active for all the power profiles you use
b) Maximum power level - set this to 100 for all the power profiles you use
c) NVMe NOPPME - set this to ON for all the power profiles you use
d) Other than the above 3, there are 7 more settings (2x NVMe idle timeouts, 2x NVMe latency tolerances, 1x AHCI Link Power Management - Adaptive, 1x Hard disk burst ignore time, 1x Turn off hard disk after) - set all 7 of these to 0 in all the power profiles you use
3) Next, scroll down to the PCI Express section. There is only one setting here, called Link State Power Management - set this to Off for all the power profiles you use. (If you prefer the command line, the same changes can be scripted with powercfg; see the sketch below.)
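For completeness, here's how the same changes could be scripted instead of clicked through. A minimal sketch using only powercfg: scheme_current, sub_pciexpress, aspm, sub_disk and diskidle are built-in aliases (run powercfg /aliases to confirm), while the more obscure settings (HIPM/DIPM, the NVMe timeouts and tolerances, NOPPME, burst ignore time) have no aliases, so you'd copy their GUIDs out of the /qh output and reuse the same command pattern. Run it once per power plan you use (scheme_current only targets the active plan):

    # Elevated PowerShell prompt required.
    # PCI Express > Link State Power Management: index 0 = Off
    powercfg /setacvalueindex scheme_current sub_pciexpress aspm 0
    powercfg /setdcvalueindex scheme_current sub_pciexpress aspm 0

    # Hard disk > Turn off hard disk after: 0 = never
    powercfg /setacvalueindex scheme_current sub_disk diskidle 0
    powercfg /setdcvalueindex scheme_current sub_disk diskidle 0

    # Settings without an alias: take the setting GUID from
    # "powercfg /qh scheme_current sub_disk" and plug it in. The GUID below is
    # the one commonly reported for "AHCI Link Power Management - HIPM/DIPM"
    # (index 0 = Active) - verify it against your own /qh output first.
    powercfg /setacvalueindex scheme_current sub_disk 0b2d69d7-a2a1-449c-9680-f91c70521c60 0

    # Re-apply the active plan so the new values take effect
    powercfg /setactive scheme_current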
Do a cold boot: shut down the system, pull the power cord, wait 30 seconds. This is a must, to reset any stale state the link may be stuck in.
Power on and use your NVMe vendor's tool or CrystalDiskInfo to check the link speed.
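If you'd rather check from PowerShell than from a GUI tool, the negotiated link speed and width are exposed as PnP device properties on the NVMe controller's PCI node. A minimal sketch, assuming the controller's friendly name contains "NVM" (true for the in-box "Standard NVM Express Controller"; adjust the match if you run a vendor driver) and assuming the PCI device property keys for current link speed and width are property IDs 9 and 10 under the GUID shown:

    # Find NVMe controllers sitting directly on PCI
    $nvme = Get-PnpDevice -PresentOnly | Where-Object {
        $_.InstanceId -like 'PCI\*' -and $_.FriendlyName -match 'NVM'
    }

    foreach ($dev in $nvme) {
        # DEVPKEY_PciDevice_CurrentLinkSpeed (pid 9) / _CurrentLinkWidth (pid 10)
        $speed = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId `
            -KeyName '{3ab22e31-8264-4b4e-9af5-a8d2d8e33e62} 9').Data
        $width = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId `
            -KeyName '{3ab22e31-8264-4b4e-9af5-a8d2d8e33e62} 10').Data
        # Speed comes back as a PCIe generation code: 1 = 2.5 GT/s (Gen1) ... 5 = 32 GT/s (Gen5)
        '{0}: Gen{1} x{2}' -f $dev.FriendlyName, $speed, $width
    }

A Gen5 x4 drive at full speed should report Gen5 x4 here; if you see Gen1 or Gen3, the link has downshifted.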
I did 20 warm reboots and 20 full shutdown/power-ons, and my Crucial T705 stayed at PCIe 5.0 x4 every single time, with sequential reads/writes consistently above 10 GB/s.
I've configured my system the way I want it in the High performance and Ultimate performance power profiles, which are the only two profiles I use, but I applied the changes to Balanced as well (I didn't test Balanced though). So I recommend you stay on either High performance or Ultimate performance until you have this working too.
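If you're not sure which plan is currently active, powercfg can list and switch them; scheme_min is the built-in alias for High performance (Ultimate performance has no alias, so pass the GUID that /list shows for it):

    # Show all installed plans; the active one is marked with an asterisk
    powercfg /list

    # Switch to High performance
    powercfg /setactive scheme_min

    # For Ultimate performance, use its GUID from the /list output, e.g.:
    # powercfg /setactive <GUID-from-list>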
Hope this helps you as well, and feel free to share. Cheers!