AMD Releases ZenDNN 5.0 For Deep Neural Network Library Optimized For Zen 5 EPYC


  • #1

    Phoronix: AMD Releases ZenDNN 5.0 For Deep Neural Network Library Optimized For Zen 5 EPYC

    AMD ZenDNN 5.0 was rolled out this morning as the newest version of this deep neural network library that is compatible with Intel's oneDNN APIs and infrastructure. ZenDNN 5.0 is now optimized for AMD Zen 5 processors such as the EPYC 9005 series. ZenDNN 5.0 also ships performance enhancements for generative large language models (LLMs) with its PyTorch plug-in...
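
    For those curious what the PyTorch plug-in looks like in use, here is a minimal sketch. It assumes the plug-in ships as a "zentorch" package that registers a torch.compile backend of the same name (names taken from AMD's ZenDNN documentation; verify against your install):

        import torch
        import zentorch  # ZenDNN PyTorch plug-in; importing registers the "zentorch" backend (assumed name)

        # A stand-in model; any compile-friendly nn.Module works.
        model = torch.nn.Sequential(
            torch.nn.Linear(1024, 1024),
            torch.nn.ReLU(),
            torch.nn.Linear(1024, 10),
        ).eval()

        # Route inference through ZenDNN-optimized kernels via torch.compile.
        compiled = torch.compile(model, backend="zentorch")

        with torch.inference_mode():
            out = compiled(torch.randn(8, 1024))
        print(out.shape)  # torch.Size([8, 10])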


  • #2
    • ZenDNN = oneDNN
    • AOMP = OpenMP
    • AOCC = LLVM/Clang/Flang
    • Next-Gen Fortran = LLVM Fortran
    Now we just need an AMD-optimized distro



    • #3
      Originally posted by Kjell View Post
      Now we just need an AMD-optimized distro
      Gentoo with science overlay, maybe?



      • #4
        There is zero artificial intelligence today. There could have been, but 50 years ago the decision was made by most scientists and companies to go with machine learning, which was quick and easy, instead of the difficult task of actually reverse engineering and then replicating the human brain.

        So instead what we have today is machine learning combined with mass plagiarism, which we call ‘generative AI’, essentially performing what is akin to a magic trick so that it appears, at times, to be intelligent.

        While the topic of machine learning is complex in detail, it is simple in concept, which is all we have room for here. Essentially, machine learning is simply presenting many thousands or millions of samples to a computer until the associative components ‘learn’ what something is, for example, pictures of a daisy from all angles and in all variations.
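
        To make that concrete, here is a toy version of that process in Python with scikit-learn, using synthetic data in place of the daisy photos:

            # Toy illustration of learning from many labeled samples.
            # Synthetic data stands in for, e.g., thousands of daisy / not-daisy photos.
            from sklearn.datasets import make_classification
            from sklearn.linear_model import LogisticRegression
            from sklearn.model_selection import train_test_split

            X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
            X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

            # "Training" is just fitting parameters so the model can label unseen samples.
            clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
            print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")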

        Then companies scoured the internet in the greatest crime of mass plagiarism in history, and used the basic ability of machine learning to recognize nouns, verbs, etc. to chop up and recombine actual human writings and thoughts into ‘generative AI’.

        So by recognizing basic grammar and hopefully deducing the basic ideas of a query, and then recombining human writings which appear to match that query, we get a very faulty appearance of intelligence - generative AI.

        But the problem is, as I said in the beginning, there is no actual intelligence involved at all. These programs have no idea what a daisy, or love, or hate, or compassion, or a truck, or horse, or wagon, or anything else, actually is. They just have the ability to do a very faulty combinatorial trick to appear as if they do.

        And while the human brain consumes around 20 watts, these massive pattern-matching computers consume uncounted millions of watts, and counting.

        However, there is hope that actual general intelligence can be created because, thankfully, a handful of scientists rejected machine learning and have instead spent 50 years working on recreating the connectome of the human brain. They are within a few decades of achieving that goal and truly replicating the human brain, creating true general intelligence.

        In the meantime, it's important for our species to recognize the danger of relying on generative AI for anything, as it's akin to relying on a magician to conjure up a real, physical, living bunny rabbit.

        So relying on it to drive cars, or control any critical systems, will always result in massive errors, often leading to real destruction and death.



        • #5
          Originally posted by muncrief View Post
          There is zero artificial intelligence today. There could have been, but 50 years ago the decision was made by most scientists and companies to go with machine learning, which was quick and easy, instead of the difficult task of actually reverse engineering and then replicating the human brain.
           That's bullshit. You talk as if ML was the only game in town, but neural networks only captured mainstream academia and industry about a dozen years ago. Prior to that, they were regarded by most as a nice idea but relegated to fringes and niches, for lack of adequate scale to do anything very interesting with them.

           Sure, other ML techniques existed, but they were seen less as alternatives to AI than as numerical optimization techniques.

          Originally posted by muncrief View Post
          So by recognizing basic grammar and hopefully deducing the basic ideas of a query, and then recombining human writings which appear to match that query, we get a very faulty appearance of intelligence - generative AI.
           This shows you really don't understand LLMs very well. They do not simply "recombine human writings", like someone cutting and pasting. There are key examples that prove this point:
           • Writing about a given subject in song or in verse shows they understand the underlying information apart from how it's expressed in human language.
           • Arithmetic and logic problems aren't things you can solve by simply regurgitating something previously seen.


          Originally posted by muncrief View Post
          But the problem is, as I said in the beginning, there is no actual intelligence involved at all.
          What is intelligence? At a very practical level, you need to ask yourself this: if it responds intelligently, how can you really say it's not? If it can do that and solve novel problems, it would be silly not to regard it as intelligent.

           Note that intelligence is very different from being infallible. People seem to want AI to be like in the old movies, where everything follows strict logic and, given the correct data, is never wrong about anything. However, such strict approaches struggle to handle the myriad contradictions and paradoxes we deal with in everyday life. In contrast, neural networks are capable of abstract knowledge representation and reasoning, while still being able to compartmentalize and special-case very well.

          Originally posted by muncrief View Post
           These programs have no idea what a daisy, or love, or hate, or compassion, or a truck, or horse, or wagon, or anything else, actually is.
           You're probably using an experiential definition of knowledge, as if this somehow exists on a higher plane. If you need to experience everything to actually know it, then most of us wouldn't know very much at all. Ultimately, knowledge is an abstraction, no matter how you come by it.

          Originally posted by muncrief View Post
           And while the human brain consumes around 20 watts, these massive pattern-matching computers consume uncounted millions of watts, and counting.
           So, measure the energy used by a human brain to write an article and compare it with the energy used by an LLM to write essentially the same piece. Better yet, because the human brain can't exist without a human body, you should probably measure the entire resource footprint of that human for that amount of time. In such a matchup, the human isn't looking very good.

           AI catches a lot of grief for energy usage (and I'm glad it does), because training compresses lifetimes of human learning into weeks. However, if you look at the resource footprint of a human for the amount of time it would take them to learn the same amount, it's not small!
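
           To put rough numbers on that comparison, here is a back-of-the-envelope sketch; every figure in it is an illustrative assumption, not a measurement:

               # Back-of-the-envelope only; all figures below are assumed placeholders.
               HUMAN_BODY_W = 100         # whole-body resting power draw in watts (assumption)
               HUMAN_WRITE_HOURS = 2.0    # assumed time for a person to write one article
               LLM_WH_PER_RESPONSE = 3.0  # assumed energy per LLM response, watt-hours

               human_wh = HUMAN_BODY_W * HUMAN_WRITE_HOURS  # 200 Wh for the human
               print(f"human: {human_wh:.0f} Wh vs LLM: {LLM_WH_PER_RESPONSE:.0f} Wh per article")
               # Under these assumptions, per-article inference energy is far below the
               # human's whole-body footprint; training energy is a separate, larger bucket.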

          Originally posted by muncrief View Post
           However, there is hope that actual general intelligence can be created because, thankfully, a handful of scientists rejected machine learning and have instead spent 50 years working on recreating the connectome of the human brain. They are within a few decades of achieving that goal and truly replicating the human brain, creating true general intelligence.
           That's dumb. It's the equivalent of simulating a mechanical computer by building a CAD model and running a kinematics simulation. Horribly inefficient. A lot about how a human brain works has to do with the physical constraints of chemistry and biology. It's much more efficient to understand and replicate what it's doing at an abstract level, then optimize it for the silicon-based platform we're actually using.

           What's more disturbing about your statement is that it doesn't address any of the key issues you seemed bothered by. It still needs to be trained by feeding it lots of data, and it's still far from infallible. It's still fundamentally machine learning, no matter how you dress it up.

          Originally posted by muncrief View Post
           In the meantime, it's important for our species to recognize the danger of relying on generative AI for anything,
          Agreed. I wouldn't even qualify it as "in the meantime". AI is dangerous for all kinds of reasons. Since we can't put the genie back in the bottle, solutions remain elusive, but we need to keep trying to figure out how to mitigate the biggest downsides.

          Originally posted by muncrief View Post
           So relying on it to drive cars, or control any critical systems, will always result in massive errors, often leading to real destruction and death.
           I strongly disagree with how cavalier Tesla has been towards self-driving, but I do think it's a tractable problem for it to work at least as safely as humans. It needn't be perfect, just better than we are. Humans aren't actually very good at driving, if you include the full distribution of abilities across all drivers on the road: the intoxicated, the elderly, the sleep-deprived, aggressive youths, the over-stressed, people on medications, and everyone subject to the various distractions that take our attention away from the task.

          I don't love self-driving. I was not asking for it. However, I do think it'll happen and I think the good will outweigh the bad. I worry about the environmental impact of decoupling travel by car from having an available driver. I also worry about the potential for hackers to cause systemic outages. However, if we're just talking about robots' ability to drive, that's actually the part I'm least worried about.

