AMD Energy Monitoring Driver Slated To Be Removed From The Linux Kernel


  • #21
    Originally posted by Jumbotron View Post
    AMD does not have any hardware or software roadmaps at all for this paradigm
    They have ROCm support for Xilinx FPGAs. Say what you will about ROCm, but it's not nothing!

    Originally posted by Jumbotron View Post
    cribbing off Intel AVX extensions
    AVX extensions are not an AI strategy! If they were, Xeon Phi wouldn't have failed!

    Originally posted by Jumbotron View Post
    at least Nvidia can execute unlike Intel.
    Okay, no argument there... but wait...

    Originally posted by Jumbotron View Post
    Intel is going to recover as it did when it dropped the disastrous Pentium 4 and adopted the Core paradigm. Their composable tiles architecture with a big.LITTLE scheme à la ARM is going to be a hit. Along with oneAPI and their GPUs, Intel will catch up to AMD by the time Zen 4 is rolling out at scale for both HPC and consumers.
    Wow, such a sudden turn of confidence in Intel! Their P4 debacle was due to a bad micro-architecture, which is a lot easier to recover from than the recent and near-total meltdown of their manufacturing tech!

    And I'll believe Intel GPUs are competitive when I see it. I kind of like them, from a GPGPU perspective, because they're the most CPU-like. But, I also think that works against them, in graphics efficiency. So, I'm expecting Xe HPG to have the worst perf/W of the lot, when it launches, and only middling performance. Lucky for Intel, the current market is so GPU-starved that they'll still sell every one they can make!

    Originally posted by Jumbotron View Post
    And lastly, I have NEVER owned a single stock and never will. I don't even have an IRA or 401K. Never have, never will.
    Okay, I'm not about to critique your financial acumen. I do have a 401(k), IRA, and other investments. Nothing disproportional for my age, FWIW. However, I don't trade individual stocks, after getting burned pretty badly.



    • #22
      Fuck you, AMD!

      Yep, you suck on the software side!



      • #23
        Originally posted by coder View Post
        They have ROCm support for Xilinx FPGAs. Say what you will about ROCm, but it's not nothing! [...] However, I don't trade individual stocks, after getting burned pretty badly.
        #1, I said all those previous things, as I stated on another, unrelated thread, as well as what I am about to say, as a 30+ year AMD fanboi.

        #2, ROCm on Xilinx?? Although AMD is working hard on ROCm (God bless Bridgman), it's still a mess on their GPUs. How the hell can it get any easier on FPGAs, which are already a bit hairy to develop on?

        #3, AVX is EVERY BIT an A.I. strategy!! Intel even SAYS SO! It's one of the reasons that Linus Torvalds 2.0 almost lost his chakra-aligned religion and had to go back to anger-management and sensitivity classes. Intel, even though they bought out and bought off half of AMD's top GPU engineers and executives to FINALLY make a decent GPU, is still and ALWAYS will be a CPU-centric company. They AND "Kicking" Pat Gelsinger are not about to throw away 30 years of MMX development. Just as they begrudgingly capitulated to AMD and added 64-bit extensions to their Pentium CPUs when they saw the market wasn't very enthusiastic about their hideously overpriced and architecturally weird Itanium chips, Intel will do whatever they can to keep the ghost of Larrabee alive.

        #4, I said this months ago: AMD has an ever-closing window of opportunity before Intel gets their shit together, which they will. And this was BEFORE "Kicking" Pat Gelsinger came back to Intel. Gelsinger knows Intel. He's ruthless. He's manifestly dishonest. But he'll get shit done. Intel has figured out 7nm. It'll take time to ramp. They'll be late to the 5nm party for sure. But the market doesn't care. Just as no one ever got fired for buying IBM back in the day, no one gets fired for continuing to buy huge, slow, hot 14nm++++++++++++++++ Intel server chips. Just ask Dell. So when Intel releases 7nm chips, the market will rejoice, and rejoice more when "Kicking" Pat Gelsinger kicks in "Turbo Bullshit Mode" and announces they have already arrived at 5nm as well. Of course it will be their 7nm++++++++ node, but hey, has that ever stopped ANYBODY in the chip business from lying about node sizes, least of all Intel? Suddenly, AMD can't point at Intel and tell the market, HEY, WE'RE AT 7NM WHILE INTEL IS AT 14NM+++++++++++++. Now it LOOKS like they're at parity. Of course they're not, but the market doesn't care.

        #5, I agree with you, Intel's initial discrete GPUs will be better at GPGPU than games, but that is by design. As I alluded to in another post, no one gives a shit about PC gaming. It's all phones, handhelds, consoles, and streaming. A SoC can handle that now. Discrete GPUs are now for mining, and protein folding and Higgs boson chasing and black hole hunting and Amazon and Wall Street. "If the way we engineer our GPUs makes for great FPS in Crysis, then cool. But we were NOT thinking about that at the time." That's the attitude of GPU manufacturers.

        #6, It may not be financially smart, but my Old Man the accountant told me never to bet money I couldn't afford to lose. I've never had enough extra to bet, so I've stayed away from the markets. Even if I did have extra loot to "invest", I would still go to Vegas instead of Wall Street, because at least in Vegas I know the house is trying to screw me. And I'll still get a cheap steak dinner and a show. Wall Street will just rough-ride me raw with no lube, all the while claiming they're just here to serve the client, that being my ass.
        Last edited by Jumbotron; 22 April 2021, 12:35 AM.



        • #24
          Hm, apparently Guenter Roeck likes to remove things (his it87 git repo, now this) to exert pressure on companies (I guess)... unfortunately, that just doesn't work for functionality that's not strictly necessary. The users like us are just left in the dust (I have no problem becoming root on my machine).

          Or he is just angry he can't have it his way.

          Very unfortunate.



          • #25
            Originally posted by mazumoto View Post
            The users like us are just left in the dust (I have no problem becoming root on my machine).
            RHEL (and other enterprise distros) may package the driver separately as kmod-amd-energy, if users request it.
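
            And if you do get the module loaded, reading it is easy. Here's a minimal sketch (mine, not from any package) of dumping its counters through the standard hwmon sysfs ABI; energy*_input is in microjoules per the hwmon documentation, and you'll need to run it as root, which is the whole point of this thread:

            ```python
            #!/usr/bin/env python3
            # Sketch: dump the amd_energy driver's counters via the hwmon sysfs ABI.
            # Assumes the driver is loaded; energy*_input values are microjoules.
            from pathlib import Path

            for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
                if (hwmon / "name").read_text().strip() != "amd_energy":
                    continue
                for inp in sorted(hwmon.glob("energy*_input")):
                    label_path = inp.with_name(inp.name.replace("_input", "_label"))
                    label = label_path.read_text().strip() if label_path.exists() else inp.name
                    print(f"{label}: {int(inp.read_text()) / 1e6:.3f} J")
            ```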



            • #26
              Originally posted by Jumbotron View Post

              AMD CAN innovate and execute. MIGHTILY! But only once per decade, and not even for the entire decade. After Zen 4 and 3nm, AMD will be tapped out. The compute paradigm going forward is heterogeneous and composable, with A.I. baked into the die or as a discrete compute part as part of the composability of the entire wafer and/or SiP (System in Package). AMD does not have any hardware or software roadmaps at all for this paradigm other than cribbing off Intel AVX extensions and CDNA cards.

              And lastly, I have NEVER owned a single stock and never will. I don't even have an IRA or 401K. Never have, never will.
              You may blame AMD for failing to execute roadmaps on time, but blame AMD for not having roadmaps for heterogeneous computing?
              LAUGHING MY ASS OFF

              Anyone with a decent memory can remember who was the first, and the sole, initiator of Heterogeneous System Architecture.
              Anyone with any functioning memory can remember that AMD purchased Xilinx just a few months ago.

              It's YOUR fault that you get called a "total Nvidia/ARM shill", because it's clear that you can't/won't remember anything that's not NVIDIA.



              • #27
                Originally posted by Teggs View Post
                Further, Guenter Roeck is responsible to the Linux community, not to AMD.
                And this is where I think he wrongs the community.

                For one, there is no compelling reason why this randomizing (obfuscating the energy counters so they can't be abused as a power side channel) has to happen in kernel space. If access is restricted to root, then some userspace daemon can perform the randomizing for untrusted applications, as in the sketch below.
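
                A minimal sketch of what such a daemon could look like; the hwmon path, output location, and fuzzing parameters here are illustrative assumptions, not any existing tool:

                ```python
                #!/usr/bin/env python3
                # Sketch of the userspace alternative: a root daemon reads the raw
                # (root-only) energy counters and republishes deliberately coarsened
                # values for unprivileged readers. Paths and parameters are made up.
                import json
                import random
                import time
                from pathlib import Path

                RAW = Path("/sys/class/hwmon/hwmon3")  # assumed amd_energy instance
                OUT = Path("/run/energy-fuzzed.json")  # world-readable output (hypothetical)
                QUANTUM_J = 1.0                        # round to whole joules...
                JITTER_J = 0.5                         # ...then add random noise

                while True:
                    readings = {}
                    for inp in RAW.glob("energy*_input"):
                        joules = int(inp.read_text()) / 1e6
                        # Quantize and jitter so the fine-grained power trace that
                        # makes the counters a side channel is destroyed.
                        readings[inp.name] = (round(joules / QUANTUM_J) * QUANTUM_J
                                              + random.uniform(0.0, JITTER_J))
                    OUT.write_text(json.dumps(readings))
                    time.sleep(1.0)                    # coarse sampling interval, too
                ```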

                Then there are lots of things in the kernel which are insecure or dangerous to expose to userland. Usually they are behind config options that explicitly mention the dangers. A prominent example is the modify_ldt() syscall, which is used to run 16-bit Windows programs on 64-bit Wine. So any individual who enables AMD energy monitoring, or any distro which inflicts it on its users, will have to assume responsibility for that action.

                Originally posted by Jumbotron View Post
                Mantle - Market Rejected
                That is a very lopsided view of things. Specifically, Mantle has essentially become DX12 and Vulkan now. You can find side-by-side comparisons of the Mantle and DX12 documentation on the Internet, and it is very clear where DX12 came from (Microsoft has since amended their documentation to remove some of the highlighted parts):

                Source: https://twitter.com/renderpipeline/s...86347450007553

                And then you are conveniently ignoring all the technologies that AMD/ATI introduced, or was on board with from the beginning, and that have since become industry standards:

                x86_64 (was mentioned already)
                Tessellation (originally called TruForm)
                GDDR5
                HBM
                Async Compute
                Unified Shaders (first was Xbox 360 Xenos)
                etc.



                • #28
                  Originally posted by zxy_thf View Post
                  You may blame AMD for failing to execute roadmaps on time, but blame AMD for not having roadmaps for heterogeneous computing? [...] It's clear that you can't/won't remember anything that's not NVIDIA.
                  Horseshit. It was AMD, along with Lisa Su, that killed their own heterogeneous Fusion architecture and the HSA Foundation, which AMD founded and of which it was the ONLY x86 member, all the others being ARM or ARM-based, such as MediaTek and Qualcomm and Samsung and Xilinx, etc. Of course, since you're only 4 years old, you're not aware of that. AMD has been first on a LOT of x86 tech innovations, but they never follow through. HSA is the latest. They killed off zero-copy, cache-coherent unified memory addressing, which is the hallmark of HSA. First doesn't matter without follow-through and long-term support. We may see AMD return to true HSA with Zen 4 and IF 3.0 and a 48-bit or better unified memory address scheme, but only with HMM and ROCm and other bits and bobs fully baked into the kernel and outside the kernel, along with proper compiler support in GCC and LLVM/Clang.

                  I have never owned one single device that had anything Nvidia in it. My current x86 laptop and desktop both have AMD Bristol Ridge HSA APUs in them. The desktop runs at 3.8 GHz. Both have been my daily Linux drivers since late 2016. I will run them until they die, or until some OEM has a properly supported Zen 4 APU with a real RDNA 3 integrated GPU, IF 3.0, and true HSA unified memory addressing at the same bit depth as the Bristol Ridge, AND is properly supported by ROCm.





                  • #29
                    Originally posted by Jumbotron View Post
                    #2, ROCm on Xilinx ?? Although AMD is working hard (God bless Bridgman) on ROCm it's still a mess on their GPUs. How the hell can it get any easier on an FPGA which are already a bit hairy to develop on?
                    Some of the more recent issues involving ROCm are related to compatibility with the open-source graphics stack. Fortunately for Xilinx, they only have compute to worry about, so that should be a non-issue.

                    And I think the word is that those ROCm/graphics problems are fixed upstream, and also a non-issue for people taking the prepackaged drivers from AMD.

                    Originally posted by Jumbotron View Post
                    #3, AVX is EVERY BIT an A.I. strategy !! Intel even SAYS SO!
                    Of course they do, because they've got nothing else on the market, and they want to sell CPUs. That doesn't make it a good long-term strategy, though. If an Intel Xeon with AVX-512 and VNNI is a riding lawn mower, then an Nvidia A100 GPU is a jet plane. That's the magnitude of the difference, and yet you can get an A100 card for less than the price of Intel's top-end CPUs for a dual-processor server.

                    So, that's another example of you swallowing their marketing BS as if it's something real.

                    Originally posted by Jumbotron View Post
                    no one gets fired for continuing to buy huge ,slow, hot 14nm++++++++++++++++ Intel Server chips.
                    Not sure about that. Two years ago, that was probably right. Nowadays, AMD has proven they're a real player in the server market, with compelling features and advantages.

                    Originally posted by Jumbotron View Post
                    Just ask Dell.
                    First off, Dell totally dragged their feet on embracing EPYC. Dell likes to do things like forcing customers to buy a second processor if they want to put a GPU in their 2U server. EPYC has so many PCIe lanes that the same trick doesn't fly, so Dell has to think of other ways to milk customers.

                    Also, a lot of shops are set up to deal with Intel and their technologies (stuff like vPro), and change requires work. So laziness takes hold and they tend to keep buying Intel, unless they really need what AMD is offering. You can never just flip the whole market overnight.

                    Finally, AMD has a limited and relatively inflexible production allocation from TSMC. They couldn't ramp that up to replace all of Intel's sales volume, in the current market. So, if you're buying a server and the lead time on AMD is 18 weeks but the Intel server would ship out in 2 days, then you buy the Intel server unless you really need/want the AMD one.

                    Originally posted by Jumbotron View Post
                    HEY WE'RE AT 7NM WHILE INTEL IS AT 14NM+++++++++++++. Now it LOOKS like they're at parity. Of course they're not but the market doesn't care.
                    Ice Lake can't even clock as high as Zen 2. That doesn't look like parity to me. Maybe Tiger Lake does, but that's still not where the bulk of Intel's production is at.

                    Originally posted by Jumbotron View Post
                    Discreet GPUs are now for mining, and protein folding and Higgs Boson chasing and Black Hole hunting and Amazon and Wall Street. If the way we engineer our GPUs make for great FPS in Crysis then cool. But we were NOT thinking about that at the time. That's the attitude of GPU manufacturers.
                    Funny how they apparently haven't heard that. If no one cares about gaming, then why did AMD make a dedicated RDNA architecture that's optimized for graphics? Why did Nvidia split off the 100-series from the rest of their lineup? And why is Intel using separate architectures and even different fabs, specifically for the gaming market?

                    But don't let facts get in the way of your narrative.

                    Originally posted by Jumbotron View Post
                    I would still go to Vegas instead of Wall Street because at least in Vegas I know the house is trying to screw me.
                    I've done well by just following the standard advice. Getting screwed by Wall St. means different things, depending on context. If we're talking about mutual funds, yeah they tend to charge excessive fees, but you still make bigger returns than stuffing your wad under your mattress or even in a savings account. But there are lower-fee options and assets that are subject to more or less volatility. Most people invest, because it tends to work out well for them -- unlike Vegas, where the longer you stay in a casino, the more you tend to lose.

                    Talking about 401(k) and IRAs, they have tax advantages that are like a free bonus from the federal government. I believe in paying the taxes I owe, but that was done as an incentive for people to save for retirement, so I have no problem taking advantage of it.



                    • #30
                      Ah yes, more of that amazing Linux support by AMD. No current, voltage, or power readings. No temperature support either.

                      Fuck AMD. I regret buying Zen 2 and will sell it off and get Intel instead. I've wasted enough of my life on a company that doesn't give a shit about Linux.

                      It's extremely unprofessional, embarrassingly bad, crap software.

