AMD Ryzen 5000 Temperature Monitoring Support Sent In For Linux 5.12

  • #51
    Originally posted by creative View Post
    Even if stuff is a bit off, I would settle for an approximation, rather than nothing.
    Well, personally, I would too. But many people just take it for granted and complain if it doesn't work perfectly for them, which I can understand very well. So I can understand and support the decision by the k10temp maintainers.
    Originally posted by sandy8925 View Post
    If you can't effectively see temperatures, current, voltage, and power consumption, you can't find out if your CPU is defective. Are any of the cores achieving turbo? Are all of them at least achieving base frequency? What's holding it back? Only the above stats can tell you if there's something holding back the CPU. And since you can't see them...
    Frequency monitoring has nothing to do with this, it's completely separate.

    Apart from that: yes, hardware monitoring is absolutely essential.
    However, if your monitoring is unreliable, it's useless. And the k10temp current/voltage/power consumption monitoring is unreliable. Same for zenpower.
    It's AMD's fault for not providing proper instructions to the mainboard manufacturers and not releasing proper docs on the CPUs.

    So don't get me wrong: I absolutely want to have monitoring of my CPU. But it has to be properly implemented and verified.
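
    The temperature side of k10temp, at least, can be read straight from sysfs. A minimal sketch, assuming the driver is loaded and the usual /sys/class/hwmon layout:

    import glob, os

    # Sketch: find the hwmon device registered by k10temp and print its
    # temperature channels (hwmon reports values in millidegrees Celsius).
    for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
        if open(os.path.join(hwmon, "name")).read().strip() != "k10temp":
            continue
        for temp in sorted(glob.glob(os.path.join(hwmon, "temp*_input"))):
            label_path = temp.replace("_input", "_label")
            label = open(label_path).read().strip() if os.path.exists(label_path) else os.path.basename(temp)
            print(f"{label}: {int(open(temp).read().strip()) / 1000:.1f} °C")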

    Comment


    • #52
      Originally posted by Keith Myers View Post

      Which is what particularly irked all of us Crosshair VI and VII Hero owners, since those were premium boards. Those SIO chips never worked correctly.

      It forced the BIOS devs to come up with a WMI BIOS interface to get around the major issues with the ITE chips. Sadly, the WMI BIOS interface went away with the X570 boards.

      The asus-wmi-sensors driver that uses the WMI interface works wonderfully on my C7H motherboards. As much info as you get with HWiNFO64 in Windows.
      Ugh... WMI. That thing we have to suffer with. Trying to figure out how to call something with it is a pain in the ass.

      Comment


      • #53
        Originally posted by creative View Post

        Just did an lsmod: nct6775, which is the driver for my particular Nuvoton Super I/O chip. Definitely one of the cheapest of the X570 boards, but also one of the most capable, yet lacking in bells and whistles, which for me were not needed.
        That is the standard kernel driver for the Nuvoton SIO chips. I have it on both my ASRock boards, but it is very limited in functionality. Most of the values are totally bogus without some major scaling configuration files.

        But there are alternative, reverse-engineered third-party drivers that work very well. Zenpower and ryzen_smu come to mind.
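
        If you want to see which hwmon driver is actually behind a given reading, a quick sketch that just lists what each registered chip calls itself (assuming the usual /sys/class/hwmon layout):

        import glob, os

        # Sketch: list every registered hwmon chip and the driver name behind it,
        # so you can tell whether nct6775, k10temp, zenpower, etc. is the source.
        for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
            name = open(os.path.join(hwmon, "name")).read().strip()
            channels = glob.glob(os.path.join(hwmon, "*_input"))
            print(f"{os.path.basename(hwmon)}: {name} ({len(channels)} input channels)")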

        Comment


        • #54
          Originally posted by sandy8925 View Post

          Clock speed should already be reported....... don't tell me even THAT'S missing.
          I can get that from /proc/cpuinfo. Having an application record the peak clocks, at least for the current uptime or desktop session, is another story.
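
          Something like this rough sketch would do it: poll the "cpu MHz" lines in /proc/cpuinfo and keep the highest value seen per core for the session (an approximation, not a polished tool):

          import time

          # Sketch: poll /proc/cpuinfo and remember the peak "cpu MHz" seen per core
          # for the current session. Ctrl+C prints the recorded peaks.
          peaks = {}
          try:
              while True:
                  with open("/proc/cpuinfo") as f:
                      core = None
                      for line in f:
                          if line.startswith("processor"):
                              core = int(line.split(":")[1])
                          elif line.startswith("cpu MHz") and core is not None:
                              peaks[core] = max(peaks.get(core, 0.0), float(line.split(":")[1]))
                  time.sleep(1)
          except KeyboardInterrupt:
              for core in sorted(peaks):
                  print(f"cpu{core}: peak {peaks[core]:.0f} MHz")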
          Last edited by creative; 16 February 2021, 09:50 AM.

          Comment


          • #55
            Originally posted by creative View Post

            Intel right now is a go-to if you need to do a full system upgrade. An i7-10700K along with a motherboard and RAM is a pretty good deal at the moment for what you pay.

            A reason I went to a 5800X is because I was already on the X570 platform, and it was the next best option to the 5900X, which is outrageously out of my price range at the moment due to supply issues and extremely high demand.
            While at least two people liked my post about going Intel, I would like it noted that if I were to do a full system upgrade right now and was on old hardware, I still would not go Intel. I have various reasons why I would not do so.

            As of today, I stand by AMD. In fact, and most interestingly, comparing my experiences with Z270 and X570, I would go so far as to say AMD has been the better experience.

            Nuff said.
            Last edited by creative; 16 February 2021, 09:51 AM.

            Comment


            • #56
              If anyone is really that worried about how accurately the Super I/O chip on their motherboard reports temperatures, I think it would be safe to say that one could set a thermal limit in Celsius via the BIOS. A setting of 80 °C should be a good target with minimal performance loss. A manual setting of PBO values, for example, will give very similar results, though one might want to do their own research into their own motherboard's equivalent of an 80 °C limit.

              For mine it was PPT=120, TDC=80, EDC=125.

              Otherwise I personally am not worried about things being on auto.

              It's not unheard of for modern motherboards to give fluctuating values that lag behind the speed at which temperature changes actually occur. There will always be some degree of latency before values get reported in a readable form.

              Some are received conventionally and others are not.

              I am not an electrical engineer or coder.
              Last edited by creative; 16 February 2021, 11:07 AM.

              Comment


              • #57
                Originally posted by creative View Post
                If anyone is really that worried about how accurately the Super I/O chip on their motherboard reports temperatures, I think it would be safe to say that one could set a thermal limit in Celsius via the BIOS. A setting of 80 °C should be a good target with minimal performance loss. A manual setting of PBO values, for example, will give very similar results, though one might want to do their own research into their own motherboard's equivalent of an 80 °C limit.

                For mine it was PPT=120, TDC=80, EDC=125.

                Otherwise I personally am not worried about things being on auto.

                It's not unheard of for modern motherboards to give fluctuating values that lag behind the speed at which temperature changes actually occur. There will always be some degree of latency before values get reported in a readable form.

                Some are received conventionally and others are not.

                I am not an electrical engineer or coder.
                Well, see, that's not good enough. My CPU has a 105 W TDP and runs at ~140 W at full all-core load. That's just 4 GHz all-core, at 80 degrees Celsius. It's not reaching its max capabilities, because there are current/power limits holding it back. Know how I know that? Because I'm able to see the current and power usage in Windows. If I can see it in Windows, I should be able to see it in Linux too; it's not an OS-specific measurement.

                To actually use my CPU at its full capabilities, I need to see current, voltage, power usage and temperature. If AMD isn't willing to show basic things like that in my OS of choice, then AMD is not an option. Intel shows all of that, and starts this work well ahead of time (like a year or a year and a half). AMD has absolutely no excuse here.
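
                For reference, this is roughly all an lm-sensors-style tool has to go on under Linux: scanning the hwmon tree for voltage, current and power channels. A sketch, assuming the standard sysfs layout; on a stock kernel with only k10temp this may turn up nothing beyond temperatures for the CPU, which is exactly the complaint (zenpower, if you trust it, is supposed to register its readings here too):

                import glob, os

                # Sketch: scan every hwmon chip for voltage (in*), current (curr*) and
                # power (power*) channels. Standard hwmon units: mV, mA, uW.
                UNITS = {"in": ("V", 1000), "curr": ("A", 1000), "power": ("W", 1_000_000)}

                for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
                    chip = open(os.path.join(hwmon, "name")).read().strip()
                    for prefix, (unit, divisor) in UNITS.items():
                        for path in sorted(glob.glob(os.path.join(hwmon, f"{prefix}[0-9]*_input"))):
                            raw = int(open(path).read().strip())
                            print(f"{chip} {os.path.basename(path)}: {raw / divisor:.3f} {unit}")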


                Comment


                • #58
                  Guest, I see your argument.

                  For me it's absolutely fine. Heck, just for giggles I just set a thermal limit of 70 °C. You know how much performance I lost in games and exporting video at full load? Very little, not even enough of a loss to complain about or tell a difference. Going to be lazy and just leave the limit at 70 °C.

                  I even tried Eco Mode at 45 W. Now that was really interesting. Talk about running cool; I couldn't tell a huge difference in performance except in video exports, and even that was not a big deal.
                  Last edited by creative; 16 February 2021, 02:06 PM.

                  Comment


                  • #59
                    The only reason to gripe about any CPU is if someone has something like an RTX 3090; at that stage you might as well go the whole hog, including the postage, and opt for the fastest CPU you can buy.

                    I have yet to buy a CPU that I did not end up wanting to tune, and yes, that includes my 65 W i7-7700, which at full load with the included heatsink fan would hit 100 °C out of the box on a render!
                    Last edited by creative; 16 February 2021, 02:26 PM.

                    Comment


                    • #60
                      Originally posted by creative View Post
                      The only reason to gripe about any CPU is if someone has something like an RTX 3090; at that stage you might as well go the whole hog, including the postage, and opt for the fastest CPU you can buy.

                      I have yet to buy a CPU that I did not end up wanting to tune, and yes, that includes my 65 W i7-7700, which at full load with the included heatsink fan would hit 100 °C out of the box on a render!
                      It really doesn't matter if it's a high-end or low-end CPU. The problem is that on Linux, if you buy any new piece of hardware (and at this point it's getting to even older stuff as time goes on), you don't have access to the tools to do basic troubleshooting and monitoring to make sure it's stable. You can't monitor most things to make sure they are running as they should, like running burning-hot stress tests while monitoring temps, voltages, fan speeds, power consumption, VRMs, etc. You can't monitor C-state behavior, you can't monitor power saving if you have that enabled, you can't monitor ANY of that basic stuff on most hardware these days on Linux.

                      You don't even have basic OS-level BIOS-controlling software either, which makes Linux pretty lackluster for extreme overclockers, IF Linux even had some of the extreme overclocking benchmark suites. Which really sucks, because Linux technically would be the best possible OS to use for extreme overclocking compared to bloat-hog Windows 10.

                      It's really not OK at all, and it's really frustrating. I myself reinstalled Windows 10 on a spare SSD to test out my 6900 XT and make sure it was truly stable at stock, because on Linux we simply don't have everything (like 3DMark; on Windows I also found out that the VRS tier 1 test is completely broken on RDNA 2, though tier 2 works fine, and reported it to AMD). It's how I found out my 6900 XT has VRM temperature sensors and a few other things that aren't exposed on Linux. My 6900 XT is an AMD-manufactured reference card. -_-
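
                      For what it's worth, you can at least dump what amdgpu does expose and see what's missing. A sketch, assuming the card's hwmon node sits under the usual /sys/class/drm/card0/device/hwmon path:

                      import glob, os

                      # Sketch: list every sensor channel amdgpu registers for card0, with its
                      # label where one exists, to see what the driver exposes (and what it doesn't).
                      for hwmon in glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*"):
                          print(open(os.path.join(hwmon, "name")).read().strip())
                          for path in sorted(glob.glob(os.path.join(hwmon, "*_input"))):
                              label_path = path.replace("_input", "_label")
                              label = open(label_path).read().strip() if os.path.exists(label_path) else ""
                              try:
                                  value = open(path).read().strip()
                              except OSError:
                                  value = "unreadable"
                              print(f"  {os.path.basename(path)} {label}: {value}")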
                      Last edited by fafreeman; 16 February 2021, 02:44 PM.

                      Comment
