AMD Publishes XDNA Linux Driver: Support For Ryzen AI On Linux


    Phoronix: AMD Publishes XDNA Linux Driver: Support For Ryzen AI On Linux

    With the AMD Ryzen 7040 series, "Ryzen AI" was introduced, leveraging Xilinx IP onboard the new Zen 4 mobile processors. Ryzen AI is beginning to work its way into more processors, but it has not been supported on Linux. Then in October, AMD asked to hear from customers about demand for Ryzen AI Linux support. Well, today they did their first public code drop of the XDNA Linux driver, providing open-source support for Ryzen AI...


  • #2
    Preliminary support, baby steps. But I hope they take note of the usability complaints against ROCm and aim for turnkey solutions in the future, rather than the mess AMD has right now for GPU compute.

    • #3
      Can someone give me examples of where that AI could be used in the Linux OS?

      • #4
        There is zero artificial intelligence today. There could have been, but 50 years ago the decision was made by most scientists and companies to go with machine learning, which was quick and easy, instead of the difficult task of actually reverse engineering and then replicating the human brain.

        So instead what we have today is machine learning combined with mass plagiarism which we call ‘generative AI’, essentially performing what is akin to a magic trick so that it appears, at times, to be intelligent.

        While the topic of machine learning is complex in detail, it is simple in concept, which is all we have room for here. Essentially, machine learning is simply presenting many thousands or millions of samples to a computer until the associative components 'learn' what something is: for example, pictures of a daisy from all angles and incarnations.
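        As a hypothetical aside (not from the thread): the sample-driven 'learning' described above can be sketched with a toy perceptron, where repeatedly shown examples nudge a few weights until the labels come out right, with no understanding of what the labels mean. The feature vectors and labels below are invented for illustration, not real image data.

```python
# Toy sketch of sample-driven "learning": a perceptron adjusts weights
# from repeated labeled examples. It never knows what a "daisy" is; it
# only finds numbers that reproduce the labels it was shown.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w1 += lr * err * x1         # nudge weights toward the label
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(model, x1, x2):
    w1, w2, b = model
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Made-up "daisy" (1) vs "not daisy" (0) feature vectors.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
model = train_perceptron(data)
```

        After training, the model classifies nearby points the same way as the examples it saw, which is the "appearance of learning" the post describes: pure pattern fitting, no comprehension.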

        Then companies scoured the internet in the greatest crime of mass plagiarism in history, and used the basic ability of machine learning to recognize nouns, verbs, etc. to chop up and recombine actual human writings and thoughts into ‘generative AI’.

        So by recognizing basic grammar and hopefully deducing the basic ideas of a query, and then recombining human writings which appear to match that query, we get a very faulty appearance of intelligence - generative AI.

        But the problem is, as I said in the beginning, there is no actual intelligence involved at all. These programs have no idea what a daisy, or love, or hate, or compassion, or a truck, or horse, or wagon, or anything else, actually is. They just have the ability to do a very faulty combinatorial trick to appear as if they do.

        However there is hope that actual general intelligence can be created because, thankfully, a handful of scientists rejected machine learning and instead have been working on recreating the connectome of the human brain for 50 years, and they are within a few decades of achieving that goal and truly replicating the human brain, creating true general intelligence.

        In the meantime it's important for our species to recognize the danger of relying on generative AI for anything, as it's akin to relying on a magician to conjure up a real, physical, living, bunny rabbit.

        So relying on it to drive cars, or control any critical systems, will always result in massive errors, often leading to real destruction and death.

        • #5
          Originally posted by muncrief View Post
          ... wall of text...
          Can't have AGI if you don't even know what either intelligence or sentience is, or how to properly define and quantify it. "I know it when I see it" isn't good enough (as the Turing test proves, humans are very easily fooled). Cognitive science hasn't even come that far. That's why the term "artificial intelligence" was used in the first literature about machine learning and self-adaptive programming: it was an acknowledgement that what they were using and talking about had nothing to do with intuitive thinking, and that they could not precisely replicate what humans do.

          We don't even know what "it" is, nor how our brains do what they do. Until very recently we were arrogantly sure that "lower" life forms couldn't feel emotions, and that we were the only species that used tools and could learn new techniques with tools. Cognitive science and mental health science are still in a comparative dark age when held against the physical sciences, thanks to religious taboos, arrogance, and stigma.

          Anyone that takes a breath and steps back a moment can see we're really no closer to a synthetic sentient being than we were in the 80s. Those saying otherwise are mostly trying to persuade people to fund their research just like they were in the 70s & 80s (and every other bubble in history). Cramming all the information in every library and database in the world into a single model doesn't make a sentient being and it never will, but it will make for better, more adaptive templating (even generative AI 'art' is really just advanced statistical templating) and predictive autocorrection at least.

          The AI funding winter will return again, but at some point hackers will have a use for NPUs beyond the commercially blessed data models companies are using to pursue revenue growth at all costs; that's when the really interesting stuff will happen for the rest of us. Bubbles usually leave behind some fragments of useful stuff.

          • #6
            Originally posted by muncrief View Post
            There is zero artificial intelligence today. There could have been, but 50 years ago the decision was made by most scientists and companies to go with machine learning, which was quick and easy, instead of the difficult task of actually reverse engineering and then replicating the human brain.

            So instead what we have today is machine learning combined with mass plagiarism which we call ‘generative AI’, essentially performing what is akin to a magic trick so that it appears, at times, to be intelligent.
            Call it what you want, but it's something useful. I've actually used Bard numerous times to get various types of information, and it saved me the time of digging through search results. I even felt that some of the responses were more clear and to-the-point of my question than the information I would have read on the source websites.

            The thing that I don't like about most of the "AI" implementations available now is the cloud-based nature of them. I know we're a long way from having anything like Bard or ChatGPT that can run from our own hardware, but any step in that direction is a positive development.

            • #7
              Originally posted by Chugworth View Post
              Call it what you want, but it's something useful. I've actually used Bard numerous times to get various types of information, and it saved me the time of digging through search results. I even felt that some of the responses were more clear and to-the-point of my question than the information I would have read on the source websites.

              The thing that I don't like about most of the "AI" implementations available now is the cloud-based nature of them. I know we're a long way from having anything like Bard or ChatGPT that can run from our own hardware, but any step in that direction is a positive development.
              That's the problem with machine learning and accumulating massive volumes of human authored information to create the illusion of intelligence - it takes enormous amounts of power and storage space.

              The human brain, meanwhile, runs on roughly 20 watts and fits in the tiny space of our heads.

              The last 50 years of R&D on machine learning have been an enormous waste of resources, time, and money. All of which will be discarded two or three decades from now when the human brain is actually reverse engineered.

              In the meantime, as I said in my OP, we can expect a lot of destruction and death because of our lack of foresight and vision, and the unfortunate relative ease involved in fooling human beings.
              Last edited by muncrief; 25 January 2024, 10:33 PM.

              • #8
                Originally posted by naysayers
                ignorant assertions about what intelligence means, copyright pearl-clutching, and other disingenuous braindead bullshit
                *sips tea*

                • #9
                  Originally posted by muncrief View Post
                  ...
                  In the meantime, as I said in my OP, we can expect a lot of destruction and death because of our lack of foresight and vision, and the unfortunate relative ease involved in fooling human beings.
                  ... which is pretty much every scientific advance in human history ... If we followed your advice no one would ever do anything at all.

                  I don't like the current LLM & cryptocurrency gold rush horrific waste any more than anyone else. There are remedies for this, both socially and regulatory. If we followed your advice we'd have never flown in the first balloons, let alone put humans in orbit or on the moon. After all, flying risks more people than just those in the machine, but everyone in the path on the ground as well. Fire? Probably our greatest discovery... also the single biggest killer in all of history. How many people have cut their fingers slicing bread over the past five thousand years? Yes, I'm deliberately pointing out the absurdity in your rant. Humans are really bad at being proactive to curb risk in a rational way, but that's no reason to not even try.

                  • #10
                    Originally posted by stormcrow View Post

                    ... which is pretty much every scientific advance in human history ... If we followed your advice no one would ever do anything at all.

                    I don't like the current LLM & cryptocurrency gold rush horrific waste any more than anyone else. There are remedies for this, both socially and regulatory. If we followed your advice we'd have never flown in the first balloons, let alone put humans in orbit or on the moon. After all, flying risks more people than just those in the machine, but everyone in the path on the ground as well. Fire? Probably our greatest discovery... also the single biggest killer in all of history. How many people have cut their fingers slicing bread over the past five thousand years? Yes, I'm deliberately pointing out the absurdity in your rant. Humans are really bad at being proactive to curb risk in a rational way, but that's no reason to not even try.
                    I believe you missed the primary point of my post.

                    If we'd been more intelligent as a species, and less greedy and impatient, and begun work on reverse engineering the brain 50 years ago, instead of pursuing machine learning, we would already have, or be much closer to, actual general artificial intelligence.

                    There is indeed danger in all advancement of human knowledge, and producing true general artificial intelligence will pose many science-fiction type threats to humanity.

                    However as I said pursuing machine learning was an incredible waste of resources and time, and poses threats because it is illusory and will never work, but people have already been deceived and are using it in critical systems that it is simply not appropriate for. Writing a novel or painting a picture is one thing, but controlling vehicles and other critical systems is simply foolish and deadly without cause or true benefit.

                    But make no mistake, real general AI will pose threats because it will be like humans, with all our inherent gifts and flaws. It will have the potential to be incredibly good and helpful, or evil and dangerous.

                    Just as each of us are.

                    But the sooner we confront our true future, the sooner we can begin to learn to cope with, and hopefully control, it.
