AMD Ryzen 5000 Temperature Monitoring Support Sent In For Linux 5.12

  • Keith Myers
    replied
It's a shame that Linux does not have out-of-the-box power and temperature monitoring capabilities equivalent to Windows'.

    Thank goodness out-of-tree drivers like zenpower and apps like zenmonitor have been developed to bridge the information gap on AMD CPUs.

    Both are working quite well for me on my Zen 2 and Rome processors, which run 24/7 at full load. It's nice to know that I don't have to constantly monitor them for out-of-bounds temperature and power excursions that would either produce invalid results or crash the systems.



  • DRanged
    replied
    Originally posted by SilverFox
A little off topic, but I just bought a 5600 XT and will be getting a new mobo soon. Regarding the temp issue, can anyone recommend a mobo where all/most sensors are picked up?
Go for an ASUS or ASRock board. MSI is crap with sensors; see OpenRGB, where they have switched off all MSI support. I myself have a Ryzen 5 3600 and use zenpower to see my sensors, voltage, wattage and so on. I have an MSI B550-A Pro and am unable to see fan speeds etc. In the coming week I'll swap the board for my old ASRock X470 Master SLI, as that one reported the fan speeds and the rest.

    sensors
    zenpower-pci-00c3
    Adapter: PCI adapter
    SVI2_Core: 938.00 mV
    SVI2_SoC: 1.09 V
    Tdie: +28.2°C (high = +95.0°C)
    Tctl: +28.2°C
    Tccd1: +27.5°C
    SVI2_P_Core: 2.47 W
    SVI2_P_SoC: 12.88 W
    SVI2_C_Core: 2.63 A
    SVI2_C_SoC: 11.77 A
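For anyone who wants to script against output like the above, here is a minimal sketch (Python; the `parse_sensors` helper and the regex are my own illustration, not part of zenpower) that turns lm-sensors-style text into numeric readings:

```python
import re

# Match lines like "Tdie: +28.2°C (high = +95.0°C)" or "SVI2_P_SoC: 12.88 W".
# Header lines ("Adapter: PCI adapter") have no numeric value and are skipped.
SENSOR_LINE = re.compile(r"^(\S+):\s*\+?([\d.]+)\s*°?([A-Za-z]+)")

def parse_sensors(text):
    """Parse plain-text `sensors` output into {label: (value, unit)}."""
    readings = {}
    for line in text.splitlines():
        m = SENSOR_LINE.match(line.strip())
        if m:
            label, value, unit = m.groups()
            readings[label] = (float(value), unit)
    return readings

output = """\
zenpower-pci-00c3
Adapter: PCI adapter
SVI2_Core: 938.00 mV
Tdie: +28.2°C (high = +95.0°C)
SVI2_P_SoC: 12.88 W
"""
print(parse_sensors(output)["Tdie"])  # → (28.2, 'C')
```

Tools like zenmonitor read the values directly from the driver instead, but parsing `sensors` output is often enough for a cron job or logging script.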
    Last edited by DRanged; 15 March 2021, 05:52 PM.



  • creative
    replied
    fafreeman

That 6900 XT is a ridiculously capable card, a 1440p/144 Hz+ card that will last you longer than you actually want. It will most likely eventually match or exceed the RTX 3090 if AMD continues the trend of improving drivers like it did for the older RX series, such as the 580.

I have yet to upgrade my GTX card @1080p; honestly, I would have to upgrade my display to 1440p@144+ even if I had a 5700 XT or RX 6800. I'm holding out for a while until one of those cards gets really accessible at non-insane prices.
    Last edited by creative; 17 February 2021, 08:39 AM.



  • creative
    replied
    Originally posted by fafreeman View Post
It really doesn't matter if it's a high-end or low-end CPU. The problem is that on Linux, if you buy any new piece of hardware (and at this point it's extending to even older stuff as time goes on), you don't have access to the tools to do basic troubleshooting and monitoring to make sure it's stable. You can't monitor most things to make sure they are running as they should, like running burning-hot stress tests while watching temps, voltages, fan speeds, power consumption, VRMs, etc. You can't monitor C-state behavior, you can't monitor power saving if you have that enabled; you can't monitor ANY of that basic stuff on most hardware these days on Linux.

You don't even have basic OS-level BIOS-controlling software either, which makes Linux pretty lackluster for extreme overclockers, IF Linux even had the extreme overclocking benchmark suites. Which really sucks, because Linux technically would be the best possible OS for extreme overclocking, compared to bloat-hog Windows 10.

It's really not okay at all, and it's really frustrating. I myself reinstalled Windows 10 on a spare SSD to test my 6900 XT and make sure it was truly stable at stock, because on Linux we simply don't have everything (like 3DMark; on Windows I found out the VRS tier 1 test is completely broken on RDNA 2, though tier 2 works fine, and reported it to AMD). That's how I found out my 6900 XT has VRM temperature sensors and a few other things that aren't exposed on Linux. My 6900 XT is an AMD-manufactured reference card. -_-
Do what you need to do. I've been using Linux since 2001; I admit a lot of stuff is not up to snuff, but that does not keep me from using and enjoying it.

It is not for everyone.

Maybe try back in about six months to a year. Or you could monitor your stuff in Windows for a while, make sure everything is running OK, then just go back to Linux and live your life.

There is nothing wrong with running Windows; it's just not my thing.



  • fafreeman
    replied
    Originally posted by creative View Post
The only reason to gripe about any CPU is if someone has something like an RTX 3090; at that stage you might as well go the whole hog, including the postage, and opt for the fastest CPU you can buy.

I have yet to buy a CPU that I did not end up wanting to tune. Yes, that includes my 65-watt i7-7700, which at full load with the included heatsink fan would hit 100°C out of the box on a render!
It really doesn't matter if it's a high-end or low-end CPU. The problem is that on Linux, if you buy any new piece of hardware (and at this point it's extending to even older stuff as time goes on), you don't have access to the tools to do basic troubleshooting and monitoring to make sure it's stable. You can't monitor most things to make sure they are running as they should, like running burning-hot stress tests while watching temps, voltages, fan speeds, power consumption, VRMs, etc. You can't monitor C-state behavior, you can't monitor power saving if you have that enabled; you can't monitor ANY of that basic stuff on most hardware these days on Linux.

You don't even have basic OS-level BIOS-controlling software either, which makes Linux pretty lackluster for extreme overclockers, IF Linux even had the extreme overclocking benchmark suites. Which really sucks, because Linux technically would be the best possible OS for extreme overclocking, compared to bloat-hog Windows 10.

It's really not okay at all, and it's really frustrating. I myself reinstalled Windows 10 on a spare SSD to test my 6900 XT and make sure it was truly stable at stock, because on Linux we simply don't have everything (like 3DMark; on Windows I found out the VRS tier 1 test is completely broken on RDNA 2, though tier 2 works fine, and reported it to AMD). That's how I found out my 6900 XT has VRM temperature sensors and a few other things that aren't exposed on Linux. My 6900 XT is an AMD-manufactured reference card. -_-
    Last edited by fafreeman; 16 February 2021, 02:44 PM.



  • creative
    replied
The only reason to gripe about any CPU is if someone has something like an RTX 3090; at that stage you might as well go the whole hog, including the postage, and opt for the fastest CPU you can buy.

I have yet to buy a CPU that I did not end up wanting to tune. Yes, that includes my 65-watt i7-7700, which at full load with the included heatsink fan would hit 100°C out of the box on a render!
    Last edited by creative; 16 February 2021, 02:26 PM.



  • creative
    replied
Guest, I see your argument.

For me it's absolutely fine. Heck, just for giggles I set a thermal limit of 70°C. You know how much performance I lost in games and exporting video at full load? Very little; not even enough of a loss to complain about or tell a difference. I'm going to be lazy and just leave the limit at 70°C.

I even tried Eco Mode at 45 watts. Now that was really interesting. Talk about running cool; I couldn't tell a huge difference in performance except in video exports, and even that was not a big deal.
    Last edited by creative; 16 February 2021, 02:06 PM.



  • Guest
    Guest replied
    Originally posted by creative View Post
If anyone is really that worried about how accurately the super I/O chip on their motherboard reports temperatures, I think it would be safe to say that one could set a thermal limit in Celsius via the BIOS. A setting of 80°C should be a good target with minimal performance loss. A manual setting of PBO values, for example, will give very similar results, though one might want to do their own research into their own motherboard's equivalent of an 80°C thermal limit.

For mine it was PPT = 120 W, TDC = 80 A, EDC = 125 A.

    Otherwise I personally am not worried about things being on auto.

It's not unheard of for modern motherboards to report floating values that lag behind how quickly temperatures actually change. There will always be some degree of latency before values are reported in a readable form.

Some are conventionally received and others are not.

    I am not an electrical engineer or coder.
Well, see, that's not good enough. My CPU has a 105 W TDP and runs at ~140 W at full all-core load. That's just 4 GHz all-core, at 80 degrees Celsius. It's not reaching its maximum capabilities, because there are current/power limits holding it back. Know how I know that? Because I'm able to see the current and power usage in Windows. If I can see it in Windows, I should be able to see it in Linux too; it's not an OS-specific measurement.

To actually use my CPU at its full capabilities, I need to see current, voltage, power usage and temperature. If AMD isn't willing to expose basic things like that in my OS of choice, then AMD is not an option. Intel exposes all of that, and starts the work well ahead of time (like a year or a year and a half). AMD has absolutely no excuse here.
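To be fair, whatever values a driver like k10temp or zenpower does expose end up in sysfs, where any script can read them. A minimal sketch (Python; the paths follow the kernel hwmon convention, but which channels exist depends entirely on the driver and board) of walking /sys/class/hwmon for temperature inputs:

```python
import pathlib

def read_hwmon_temps(base="/sys/class/hwmon"):
    """Collect tempN_input readings from every hwmon device under `base`,
    keyed by driver name. The kernel reports temperatures in millidegrees C."""
    readings = {}
    for dev in sorted(pathlib.Path(base).glob("hwmon*")):
        name_file = dev / "name"
        if not name_file.exists():
            continue
        name = name_file.read_text().strip()  # e.g. "k10temp" or "zenpower"
        for f in sorted(dev.glob("temp*_input")):
            channel = f.name[: -len("_input")]  # "temp1_input" -> "temp1"
            readings[f"{name}/{channel}"] = int(f.read_text()) / 1000.0
    return readings

if __name__ == "__main__":
    for label, celsius in read_hwmon_temps().items():
        print(f"{label}: {celsius:+.1f}°C")
```

The same pattern works for `in*_input` (mV), `curr*_input` (mA) and `power*_input` (µW) files, when the driver provides them; the complaint in this thread is precisely that on many AMD setups those files never appear.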




  • creative
    replied
If anyone is really that worried about how accurately the super I/O chip on their motherboard reports temperatures, I think it would be safe to say that one could set a thermal limit in Celsius via the BIOS. A setting of 80°C should be a good target with minimal performance loss. A manual setting of PBO values, for example, will give very similar results, though one might want to do their own research into their own motherboard's equivalent of an 80°C thermal limit.

For mine it was PPT = 120 W, TDC = 80 A, EDC = 125 A.

    Otherwise I personally am not worried about things being on auto.

It's not unheard of for modern motherboards to report floating values that lag behind how quickly temperatures actually change. There will always be some degree of latency before values are reported in a readable form.

Some are conventionally received and others are not.

    I am not an electrical engineer or coder.
    Last edited by creative; 16 February 2021, 11:07 AM.



  • creative
    replied
    Originally posted by creative View Post

Intel right now is a go-to if you need to do a full system upgrade. An i7-10700K along with a motherboard and RAM is a pretty good deal at the moment for what you pay.

A reason I went with a 5800X is because I was already on the X570 platform, and it was the next best option to the 5900X, which is outrageously out of my price range at the moment due to supply constraints and extremely high demand.
While at least two people liked my post about going Intel, I would like it noted that if I were to do a full system upgrade right now from old hardware, I still would not go Intel. I have various reasons why.

As of today, I stand by AMD. In fact, most interestingly, I would go so far as to say that between my experiences with Z270 and X570, AMD has been the better experience.

    Nuff said.
    Last edited by creative; 16 February 2021, 09:51 AM.

