Features Expected For Linux 5.13 From Apple M1 To FreeSync HDMI To AMD Aldebaran
Originally posted by syrjala View Post
ASPM only matters while the device is in D0. During s2idle it should be in D3, and ASPM is not relevant.
powersave sets ASPM to save power wherever possible, regardless of the cost to performance. Linux also has another option, powersupersave, which can start powering down sections of the PCIe switch logic in the chipset and CPU when the lanes are not needed.
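These policies can be inspected and switched at runtime through sysfs. A minimal sketch, assuming a kernel built with CONFIG_PCIEASPM (the file is simply absent otherwise):

```shell
# Show the available ASPM policies; the active one is in [brackets],
# e.g. "default performance [powersave] powersupersave".
POLICY=/sys/module/pcie_aspm/parameters/policy
if [ -r "$POLICY" ]; then
    cat "$POLICY"
else
    echo "pcie_aspm policy interface not available"
fi
# Switching needs root (and may be refused if the firmware forbids OS
# control of ASPM):
#   echo powersupersave | sudo tee "$POLICY"
```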
Low power mode is often achieved by reducing or even stopping the serial bus clock as well as possibly powering down the PHY device itself.
Please note that the PCIe switches hidden inside the system are not classed as PCIe devices, so they don't have a Dx number and are not power-switched by s2idle. If your ASPM policy is sitting at performance or plain powersave instead of powersupersave when doing s2idle, on a few laptops you will be up a few watts, because the power is being eaten by the PCIe switch logic in the chipset/CPU, not the device. Yes, you can have a hidden ASPM rule saying keep the GPU nicely ready in performance mode.
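For the PCIe devices that are visible, you can at least check what ASPM each link supports versus what is actually enabled using lspci from pciutils. A sketch; run it as root to get the full capability dump:

```shell
# LnkCap lists the ASPM states each link supports; LnkCtl shows what is
# currently enabled ("ASPM Disabled" means the link never leaves L0).
if command -v lspci >/dev/null 2>&1; then
    lspci -vv 2>/dev/null | grep -E 'ASPM' || echo "no ASPM lines visible (try as root)"
else
    echo "lspci (pciutils) not installed"
fi
```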
Setting the BIOS default for ASPM in a laptop to performance instead of powersave or powersupersave can make a reviewer say the laptop has better performance. A lot of reviewers don't check suspended battery life.
Do note that not all PCIe switch designs used in CPUs and chipsets are equally smart or foolish when it comes to ASPM. Some designs of the ASPM logic in the PCIe switches will respond more sanely to s2idle putting devices into D3 while ASPM is in performance mode, but this is not required by the PCIe specification.
Yes, this horribly undefined area of the PCIe specification leads to the situation where the ASPM setting matters more for battery life on some laptops than on others.
Originally posted by asriel View Post
The issue is not with desktops but with laptops. First, the majority of AMD laptop vendors removed S3 suspend in favor of s2idle "modern standby". S3 works (though not many AMD laptops support it), but with modern standby the laptop pretends it is suspended while a big part of the GPU keeps eating battery like mad, 4-5 watts. For a desktop that is nothing; you won't notice it, and a 4-watt desktop will stay cool and quiet. But on a laptop your battery will be gone in 5-10 hours. So AMD issued a patch that stops the GPU completely so it does not eat battery when suspended, but in that case it does not wake up. So some kernels do not sleep and some do not wake up, depending on the patchsets in the AMD GPU driver. There has been work and activity around this bug for more than half a year, but as I see it, it is not yet fixed. I got rid of my AMD laptop because of that and use Tiger Lake now.
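A rough way to put a number on that suspend drain yourself is to read the battery's energy counter before and after a timed suspend. A sketch, with two loud assumptions: the battery shows up as BAT0, and it exposes energy_now in µWh (some firmware reports charge_now in µAh instead):

```shell
BAT=/sys/class/power_supply/BAT0
before=$(cat "$BAT/energy_now")      # µWh
sudo rtcwake -m mem -s 3600          # suspend (s2idle or S3), RTC wake in 1 h
after=$(cat "$BAT/energy_now")
# Average drain in watts over the hour (µWh consumed / 10^6 / 1 h):
awk -v b="$before" -v a="$after" 'BEGIN { printf "%.2f W\n", (b - a) / 1000000 }'
```

A laptop losing the 4-5 W described above would print roughly that figure; a healthy s2idle setup should be well under 1 W.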
Originally posted by muncrief
That was the excuse when I had my old R9 390 installed. But now I have two RX 580 GPUs, and no one says my GPU is too old any longer.
Originally posted by theriddick
I've had an ongoing issue with my 6800 XT causing the lower quarter of the screen to tear after resume, even resume from xscreensaver oddly enough. It never happened on previous cards or under Windows. No idea if that will be fixed. It is related to having FreeSync enabled, however (which has never been great under Linux, to be honest).
Originally posted by cynical View Post
Have you tried Wayland recently? It's a very nice experience, even when gaming (though I don't do much of that anymore).
Originally posted by theriddick View Post
I last tried it about a month to six weeks back, and it still had some issues. Also I don't think it even supports FreeSync/VRR yet. It's still, IMO, at least a year or two away from being usable for a gamer/power desktop user.
Should we make the output commit fail if VRR can't be enabled? We don't know when VRR is indeed enabled, but in some cases we know when it fails. We still don't have support for test-only commits, ...
If you were running a current version of sway four to six weeks back you would have had FreeSync/VRR support; it has been present for over a year now.
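For what it's worth, turning VRR on in sway has been a one-line output setting since around sway 1.4. `DP-1` here is a placeholder; the real connector names come from `swaymsg -t get_outputs`:

```
# ~/.config/sway/config -- enable adaptive sync (FreeSync/VRR) per output
output DP-1 adaptive_sync on
```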
With #5063, users have a way to unconditionally enable VRR. However this can cause some flickering on some monitors. Flickering seems to happen on higher-end monitors which have a larger VRR range....
But having that support is only half of it. "Automatic VRR management" is proving tricky for everyone to implement. It does not help that EDID information is lacking something important: the rate at which you can change the refresh rate.
On some monitors, if you change the FreeSync/VRR rate too quickly, they display tearing; heck, worse, a section of the screen is technically stalled until the chip in the monitor reboots itself. So I am not quite sure whether you were seeing screen tearing from the GPU or a monitor chip crash and reboot. Yes, this is sometimes fixed/overridden by altering the VRR range. Yes, some monitors ship with a Windows driver disc, and if you don't install it you get the same kind of half-screen tear you described under Windows, for the same problem. Yes, Windows changes the VRR rate differently than Linux does, so you get some monitors that don't have a chip crash under Windows but crash under Linux, and the reverse.
FreeSync did not have the same mandated quality control as Nvidia's G-Sync, and sometimes this really shows.
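Whether the kernel thinks a given monitor can do VRR at all is visible per connector in sysfs. A sketch, assuming a DRM driver that exposes the attribute (amdgpu and i915 do on recent kernels):

```shell
# Print vrr_capable (0 or 1) for every DRM connector that exposes it.
found=0
for f in /sys/class/drm/card*-*/vrr_capable; do
    [ -e "$f" ] || continue
    found=1
    printf '%s: %s\n' "$(basename "$(dirname "$f")")" "$(cat "$f")"
done
[ "$found" -eq 1 ] || echo "no vrr_capable attributes found"
```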