AMD Publishes XDNA Linux Driver: Support For Ryzen AI On Linux


  • #11
    Machine learning is



    • #12
      The documentation doesn't yet outline any plans for upstreaming this driver into the mainline kernel, whether they will simply maintain it out-of-tree, or what their overall Linux support plans entail.
      Michael
      That's true. But one of the open issues gives insight:

      The Linux kernel required to run this xdna-driver is based on an old release candidate of Linux 6.7, but Linux 6.7.2 has since been released with a lot of fixes (https://cdn.kernel.org/pub/linux/ker...


      The dependency on patches in that repo is temporary; the patches in it are on their way upstream. Some are making it into 6.8, but a few will land in 6.9 (or later, depending on code review).



      • #13
        Originally posted by muncrief View Post

        I believe you missed the primary point of my post.

        If we'd been more intelligent as a species, and less greedy and impatient, and begun work on reverse engineering the brain 50 years ago, instead of pursuing machine learning, we would already have, or be much closer to, actual general artificial intelligence.

        There is indeed danger in all advancement of human knowledge, and producing true general artificial intelligence will pose many science-fiction type threats to humanity.

        However, as I said, pursuing machine learning was an incredible waste of resources and time. It poses threats because it is illusory and will never work, yet people have already been deceived and are using it in critical systems that it is simply not appropriate for. Writing a novel or painting a picture is one thing, but controlling vehicles and other critical systems is simply foolish and deadly, without cause or true benefit.

        But make no mistake, real general AI will pose threats because it will be like humans, with all our inherent gifts and flaws. It will have the potential to be incredibly good and helpful, or evil and dangerous.

        Just as each of us is.

        But the sooner we confront our true future, the sooner we can begin to learn to cope with, and hopefully control, it.
        Ok, I'll bite. What specific scientific work on "reverse engineering the human brain" have we "completely ignored" for the last 50 years?



        • #14
          Originally posted by timrichardson View Post
          Machine learning is
          ... just a paraphrase for quantum processes following a statistical program.

          There is no entity derived from "ones" and "zeros" that could learn something in the sense of learning like a living being could. Even after centuries we still don't have answers to basic questions like "the hard problem of consciousness". We therefore don't have the slightest idea how to create a sentient entity that experiences "qualia".

          A "modern" computer for M.L. or A.I. or pseudo A.G.I. or whatever marketing spin is currently being the trend for statistical programming, basically still works according to the data input and data output principles of early computers from the last century. We assign abstract numbers and letters etc. to special patterns of code derived from switches. But instead of using mechanics, punched cards or balls like in a abacus we are using transistors.



          • #15
            Originally posted by muncrief View Post
            However, as I said, pursuing machine learning was an incredible waste of resources and time. It poses threats because it is illusory and will never work, yet people have already been deceived and are using it in critical systems that it is simply not appropriate for. Writing a novel or painting a picture is one thing, but controlling vehicles and other critical systems is simply foolish and deadly, without cause or true benefit.
            You're bitter about something that is not even real. You say machine learning will never work, yet it is already working TODAY.
            I can have longer and more intelligent conversations with GPT4 than I ever had with any professor during my university days.
            It can help me brainstorm ideas for things that no one has attempted before, and it has great intuition about what will and won't work.
            It can help me understand obscure papers for which barely any discussion exists on the internet. It can explain the concepts both in extremely simple terms and in depth, and even produce code where appropriate.
            It can tell me about an undocumented configuration option in open source software and explain exactly how it alters the behavior of the software... and that's for a configuration switch that yields exactly 2 results on Google: a source file in the github project itself and a short Chinese post on some forum.

            You talk about how these systems cannot achieve intelligence, yet they do - intelligence is the ability to create new knowledge from existing knowledge, which requires abstraction. And that is what these systems do extremely well. And they have to since it's the only way they could "fit" so much knowledge into so few parameters.

            Let's take your example with the daisy:
            While the topic of machine learning is complex in detail, it is simple in concept, which is all we have room for here. Essentially machine learning is simply presenting many thousands or millions of samples to a computer until the associative components ‘learn’ what it is, for example pictures of a daisy from all angles and incarnations.
            What you conveniently left out is that you can have only two pictures of daisies in your training set, plus pictures of other flowers from ten different angles, and afterwards the model can still generate a daisy from ten different angles. And it does not take large neural networks like the modern diffusion-based image generators with billions of parameters to do this. Even a simple GAN with fewer than 100 million parameters, the type I have trained many instances of, is capable of such abstraction.
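
            To make "presenting samples until it learns" concrete, here is a minimal sketch of one GAN training step in PyTorch. Everything in it (layer sizes, learning rates, the 28x28 image shape) is illustrative and assumed, not taken from any model discussed above:

            ```python
            # Minimal GAN training step (illustrative sketch; all sizes and
            # hyperparameters are made up, not from any real model).
            import torch
            import torch.nn as nn

            latent_dim = 64
            generator = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(),
                nn.Linear(256, 28 * 28), nn.Tanh(),   # produces a fake 28x28 image
            )
            discriminator = nn.Sequential(
                nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
                nn.Linear(256, 1),                    # real-vs-fake logit
            )
            opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
            opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
            bce = nn.BCEWithLogitsLoss()

            def train_step(real_images: torch.Tensor) -> None:
                batch = real_images.size(0)
                ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

                # 1) Teach the discriminator to separate real samples from fakes.
                fakes = generator(torch.randn(batch, latent_dim)).detach()
                loss_d = bce(discriminator(real_images), ones) + bce(discriminator(fakes), zeros)
                opt_d.zero_grad(); loss_d.backward(); opt_d.step()

                # 2) Teach the generator to fool the discriminator.
                loss_g = bce(discriminator(generator(torch.randn(batch, latent_dim))), ones)
                opt_g.zero_grad(); loss_g.backward(); opt_g.step()

            # "Learning" is nothing more than repeating this over the dataset:
            # for real in dataloader: train_step(real.view(-1, 28 * 28))
            ```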

            And the larger the models get, the better they get at it. For multi-language LLMs, that also means they can explain information to you in English even if that information only existed in a completely different language in the training set.
            It has been shown that these models internally build a model of the world - and multi-modal models which are on the rise now are even better at this since they can incorporate images and sounds in their model of the world as well.

            Even relatively small LLMs can do addition and multiplication for arbitrary examples that they have never seen before. They learn the rules of how it works just by being given examples of it. And GPT4 can do far more advanced things, like calculating with complex numbers and working through mathematical proofs of the kind I often spent hours on when I was in university.

            And these models achieve all of that with a fraction of the capacity that a human brain has. It is rumored that GPT4 is a mixture of experts with 8x220b parameters, which is a mere 1.6 trillion compared to the 100 trillion synapses a human brain has. And each of these weights is worth less than a synapse, because artificial neural networks tend to be dense while synapses are not. If two neurons in an ANN should not be connected, they still require a weight of 0, while in a brain the synapse would simply be removed or never form in the first place.
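
            A tiny sketch of that dense-vs-sparse distinction (the numbers are made up purely for illustration):

            ```python
            # Dense vs. sparse weights (illustrative numbers only).
            import torch

            # A dense layer stores a weight for *every* input/output pair, so an
            # absent connection still costs a parameter; it is merely set to 0.
            dense = torch.tensor([[0.8, 0.0, 0.0],
                                  [0.0, 0.5, 0.0]])   # 6 stored weights, 4 of them dead

            # A sparse (COO) representation stores only the connections that exist,
            # closer in spirit to synapses being pruned or never forming at all.
            indices = torch.tensor([[0, 1],            # output-neuron index
                                    [0, 1]])           # input-neuron index
            values = torch.tensor([0.8, 0.5])
            sparse = torch.sparse_coo_tensor(indices, values, size=(2, 3))

            print(dense.numel())    # 6 parameters stored densely
            print(values.numel())   # 2 parameters actually present
            ```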

            In addition to that, they achieve these results without thinking; that is to say, they cannot decide to spend more compute on hard problems the way humans can. But that's only the status quo, and many people are working hard on changing it, so we'll see the results in the coming few months.

            So instead of being bitter that your idea of "real" intelligence wasn't pursued enough in the past, if I were you I would actually start to study the field of machine learning... and neuroscience as well, since like many other people you seem to put the human brain on a pedestal, thinking there is something magical or special about it, when there really is not. It is a fascinating field regardless, and you will find that there are many parallels between the process of learning in ANNs and how learning happens in the real brains of animals and humans, even if the fundamental underlying method is different.
            Last edited by david-nk; 26 January 2024, 05:31 AM.



            • #16
              Originally posted by muncrief View Post
              There is zero artificial intelligence today. There could have been, but 50 years ago the decision was made by most scientists and companies to go with machine learning, which was quick and easy, instead of the difficult task of actually reverse engineering and then replicating the human brain.

              So instead what we have today is machine learning combined with mass plagiarism which we call ‘generative AI’, essentially performing what is akin to a magic trick so that it appears, at times, to be intelligent.

              While the topic of machine learning is complex in detail, it is simple in concept, which is all we have room for here. Essentially machine learning is simply presenting many thousands or millions of samples to a computer until the associative components ‘learn’ what it is, for example pictures of a daisy from all angles and incarnations.

              Then companies scoured the internet in the greatest crime of mass plagiarism in history, and used the basic ability of machine learning to recognize nouns, verbs, etc. to chop up and recombine actual human writings and thoughts into ‘generative AI’.

              So by recognizing basic grammar and hopefully deducing the basic ideas of a query, and then recombining human writings which appear to match that query, we get a very faulty appearance of intelligence - generative AI.

              But the problem is, as I said in the beginning, there is no actual intelligence involved at all. These programs have no idea what a daisy, or love, or hate, or compassion, or a truck, or horse, or wagon, or anything else, actually is. They just have the ability to do a very faulty combinatorial trick to appear as if they do.

              However there is hope that actual general intelligence can be created because, thankfully, a handful of scientists rejected machine learning and instead have been working on recreating the connectome of the human brain for 50 years, and they are within a few decades of achieving that goal and truly replicating the human brain, creating true general intelligence.

              In the meantime it's important for our species to recognize the danger of relying on generative AI for anything, as it's akin to relying on a magician to conjure up a real, physical, living, bunny rabbit.

              So relying on it to drive cars, or control any critical systems, will always result in massive errors, often leading to real destruction and death.
              It's weird how some people can talk with such absolute confidence about this topic without even getting the terminology straight, or while using it inconsistently...

              Yes, AI exists today, and it has for decades. Bots in games? That's AI. Pathfinding? That's AI. Chatbots? That's AI. Handwriting recognition? That's AI.
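
              To underline how modest "AI" in this classic sense can be, here is a minimal game-style pathfinding sketch in Python (the grid, names, and function are invented for illustration):

              ```python
              # Minimal breadth-first pathfinding, the kind of "AI" a game bot
              # might use (illustrative sketch; 0 = walkable tile, 1 = wall).
              from collections import deque

              def find_path(grid, start, goal):
                  """Return a list of (row, col) steps from start to goal, or None."""
                  rows, cols = len(grid), len(grid[0])
                  parents = {start: None}               # also serves as the visited set
                  queue = deque([start])
                  while queue:
                      cell = queue.popleft()
                      if cell == goal:                  # walk parents back to the start
                          path = []
                          while cell is not None:
                              path.append(cell)
                              cell = parents[cell]
                          return path[::-1]
                      r, c = cell
                      for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                          nr, nc = step
                          if 0 <= nr < rows and 0 <= nc < cols \
                                  and grid[nr][nc] == 0 and step not in parents:
                              parents[step] = cell
                              queue.append(step)
                  return None                           # goal unreachable

              # Example: route a "bot" around the walls of a tiny map.
              print(find_path([[0, 0, 1],
                               [1, 0, 0],
                               [1, 1, 0]], (0, 0), (2, 2)))
              ```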

              What do you think the word 'artificial' means? In this context, it literally means artificial as in 'fake'. Fake Intelligence. Not real intelligence. That's why we have a term for the real stuff that can do actual logic and reasoning in the same way humans can - it's called 'general intelligence', a term you even used in that very same ranting wall of text, correctly even. You could easily fix your entire post by doing a find-and-replace of 'AI' and 'artificial intelligence' with 'GI' and 'general intelligence', and everything else you said would be correct.

              What is it about all this generative AI that has suddenly created this wave of people who keep mistaking the term 'artificial intelligence' for 'real intelligence'? If you just googled the terms you're using, you would easily find that you're using them completely wrong.



              • #17
                Originally posted by M.Bahr
                There is no entity derived from "ones" and "zeros" that could learn something in the sense of learning like a living being could.
                That is precisely what a "living being" is in fact. A biological animal like a human, as you are referencing here, is an emergent phenomenon, a complex pattern emerging from a set of simple interactions occurring at scale in both time and space.

                Whether you want to imagine a human, an organ, a cell, a molecule, an atom, or something even smaller and simpler still, each level is an emergent pattern of greater complexity arising from the interactions of the simpler patterns beneath.

                Does acknowledging this reality mean that a chat bot running on your desktop computer in 2024 is the same level of complexity as a human? Of course not. Does it mean that your desktop computer experiences itself the same way a human experiences itself? Of course not.

                Trying to argue whether ChatGPT is the same thing as a human is missing the point entirely, and intentionally so.

                The correct realization to have here is that complex systems emerge from the interaction of simple systems at scale. This quite literally explains the universe and everything in it at every scale, except of course why anything exists in the first place.

                A companion to this realization, also relevant to the AI discussion, is to understand that patterns are not one and the same as the medium they are carried on. The word "intelligence" can be carved into a rock, written in a book, stored on a hard drive, conveyed through the compression of air or the pulsing of light, or spelled out on your lawn by forming a batch of fallen tree leaves into the appropriate letters.

                The patterns that you assume are fundamentally inextricable from your current physical manifestation can absolutely be replicated, despite the fact that we aren't yet capable of doing so.
                Last edited by quaz0r; 26 January 2024, 06:32 AM.



                • #18
                  What could a 7840 accelerate with its ML capabilities? The spec sheet summary says: performance up to 10 TOPS, total processor performance up to 32 TOPS, NPU performance up to 10 TOPS. I'm ignorant of ML; is that enough for an ML Stockfish on the NPU to easily beat standard Stockfish running on the 7840 CPU? With all other variables fixed, how does power consumption compare between x86 Stockfish and ML Stockfish? And if the 7840 is weak sauce for AI, how about the 8840? On paper it has up to 16 TOPS of performance; does that open up any more possibilities?

                  I'm not so interested in heavy tasks, which this small AI engine would likely struggle with, but in the future there will probably be ML components in audio/video/image encoders; it would be nice if the 7840 were good enough to noticeably accelerate those.



                  • #19
                    Originally posted by geerge View Post
                    What could a 7840 accelerate with its ML capabilities?
                    AMD says that, as of now, you can use it to enhance your video conferences. Everything else is coming "soon" or "in the future".

                    Originally posted by AMD
                    Unlock the Future of AI
                    Get ready to usher in the future with new capabilities that AMD Ryzen™ AI will soon provide.

                    AI-Driven Data Analytics
                    PCs with AMD Ryzen™ AI will soon be able to run advanced AI-powered modeling and analysis directly on the PC, accelerating individual decision-making while helping to strengthen data privacy.

                    AI-Informed Personal Assistant
                    AMD Ryzen™ AI is designed to learn from your interactions and adapt over time. In the future, you’ll have all the advantages of having a customized digital assistant to help craft presentations, write email, manage calendars, and summarize conversations.

                    AI-Augmented Content Creation
                    AMD Ryzen™ AI will soon offer new advantages and opportunities for creative professionals and workstation users. Envision generating 3D content without prior CAD expertise or running complex simulations and workloads without added processing power.

                    AI-Enhanced Collaboration
                    PCs powered by AMD Ryzen™ AI offer engaging experiences today, like auto framing, eye gaze correction, and enhanced background blurs in video conferencing applications, for more natural interactions. Windows Studio Effects is currently available to applications that use the integrated Windows camera, including Microsoft Teams, Zoom, and Google Meet.



                    • #20
                      They move so fast.
                      Btw, does W11 use the AI engine from the Ryzen 7000 series?
                      Before Intel's Core Ultra I had never read about W11 AI support, but now with Intel it's like everybody is on fire, and there's no mention of the 7000-series CPUs having AI support.
