AMD Publishes XDNA Linux Driver: Support For Ryzen AI On Linux

  • #21
    Originally posted by chithanh View Post
    AMD says as of now, you can enhance your video conferences with it. Everything else comes "soon" or "in the future".
    Wake me up when there is any kind of open source application worth testing.
    ## VGA ##
    AMD: X1950XTX, HD3870, HD5870
    Intel: GMA45, HD3000 (Core i5 2500K)



    • #22
      Originally posted by david-nk View Post

      You're bitter about something that is not even real. You say machine learning will never work, yet it is already working TODAY.
      I can have longer and more intelligent conversations with GPT4 than I ever had with any professor during my university days.
      It can help me brainstorm ideas for things that no one has attempted before, and it has great intuition about what will work and what will not.
      It can help me with obscure papers for which barely any discussion exists on the internet. It can explain the concepts both in extremely simple terms and in depth, and even produce code where appropriate.
      It told me about an undocumented configuration option in open source software and explained exactly how it alters the behavior of the software... and it's a configuration switch that yields exactly 2 results on Google: a source file in the GitHub project itself and a short Chinese post on some forum.

      You talk about how these systems cannot achieve intelligence, yet they do - intelligence is the ability to create new knowledge from existing knowledge, which requires abstraction. And that is what these systems do extremely well. And they have to since it's the only way they could "fit" so much knowledge into so few parameters.

      Let's take your example with the daisy:


      What you conveniently left out is that you can have only 2 pictures of daisies in your training set, plus pictures of other flowers from 10 different angles, and afterwards it can still generate a daisy from 10 different angles. And it does not take large neural networks like the modern diffusion-based image generators with billions of parameters to do this. Even a simple GAN with fewer than 100 million parameters, the type I have trained many instances of, is capable of such abstraction.

      And the larger the models get, the better they get at it. For multi-language LLMs that also means that it can explain information to you in English even if the information only existed in a completely different language in the training set.
      It has been shown that these models internally build a model of the world - and multi-modal models which are on the rise now are even better at this since they can incorporate images and sounds in their model of the world as well.

      Even relatively small LLMs can do addition and multiplication on arbitrary examples that they have never seen before. They learn the rules of how it works just by being given examples of it. And GPT4 can do far more advanced things, like calculating with complex numbers and carrying out mathematical proofs of the kind that I often spent hours on when I was in university.

      And these models achieve all of that with a fraction of the capacity that a human brain has. It is rumored that GPT4 is a mixture of experts with 8x220b parameters (about 1.76 trillion), compared to the roughly 100 trillion synapses a human brain has. And each of these weights is worth less than a synapse, because artificial neural networks tend to be dense while biological connectivity is not. If two neurons in an ANN should not be connected, they still require a weight of 0, while in a brain the synapse would simply be removed or never form in the first place.
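The dense-versus-sparse point above can be sketched in a few lines. This is a toy illustration, not tied to any particular model: a dense layer stores a weight for every neuron pair, zeros included, while a sparse representation keeps only the connections that actually exist, analogous to synapses that form or are pruned.

```python
import numpy as np

# Dense layer: every neuron pair gets a stored weight, even when it is 0.
dense = np.array([
    [0.5, 0.0, -0.2],
    [0.0, 0.0,  0.9],
    [0.1, 0.0,  0.0],
])
print(dense.size)  # 9 stored weights, including the zeros

# Sparse view: keep only the connections that actually exist,
# the way a brain simply never forms (or prunes) absent synapses.
rows, cols = np.nonzero(dense)
sparse = {(r, c): dense[r, c] for r, c in zip(rows, cols)}
print(len(sparse))  # 4 stored weights
```

The same contrast is why libraries offer sparse matrix formats: when most weights are structurally zero, storing only the nonzero entries saves both memory and compute.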

      In addition to that, they can achieve these results without thinking; that is to say, they cannot decide to spend more compute on hard problems the way humans can. But that's only the status quo, and many people are working hard on changing that, so we'll see the results in the coming few months.

      So instead of being bitter about your idea of "real" intelligence not having been pursued enough in the past, if I were you, I would actually start to study the field of machine learning... and neuroscience as well, since like many other people you seem to put the human brain on a pedestal, thinking there is something magical or special about it when there really is not. It is a fascinating field regardless, and you will find that there are many parallels between the process of learning in ANNs and how learning happens in the real brains of animals and humans, even if the fundamental underlying method is different.
      What was the open source project and what was the undocumented option?



      • #23
        Originally posted by Zupi View Post
        Can someone give me examples of where that A.I could be used in the Linux OS?
        Fast and high-quality TTS for screen readers. At least that's what I've been working on using the RK3588. Also open source.



        • #24
          Originally posted by quaz0r View Post

          That is precisely what a "living being" is in fact. A biological animal like a human, as you are referencing here, is an emergent phenomenon, a complex pattern emerging from a set of simple interactions occurring at scale in both time and space.

          Whether you want to imagine a human, an organ, a cell, a molecule, an atom, or something even smaller and simpler still, each level is an emergent pattern of greater complexity arising from the interactions of the simpler patterns beneath.

          Does acknowledging this reality mean that a chat bot running on your desktop computer in 2024 is the same level of complexity as a human? Of course not. Does it mean that your desktop computer experiences itself the same way a human experiences itself? Of course not.

          Trying to argue whether ChatGPT is the same thing as a human is missing the point entirely, and intentionally so.

          The correct realization to have here is that complex systems emerge from the interaction of simple systems at scale. This quite literally explains the universe and everything in it at every scale, except of course why anything exists in the first place.

          A companion to this realization, also relevant to the AI discussion, is to understand that patterns are not one and the same as the medium they are carried on. The word "intelligence" can be carved into a rock, written in a book, stored on a hard drive, conveyed through the compression of air or the pulsing of light, or spelled out on your lawn by forming a batch of fallen tree leaves into the appropriate letters.

          The patterns that you assume are fundamentally inextricable from your current physical manifestation can absolutely be replicated, despite the fact that we aren't yet capable of doing so.
          I can see from your reply that you don't seem to have much knowledge about the matter. Most of your biased thoughts seem to be derived from popular sci-fi videos consumed by the mainstream. As a matter of fact, I read many claims, assumptions, speculations, and philosophies, but no evidence on the grounds of hard science. You even make up your own definitions for the classification of a living being. If you really want to learn more about this topic, I recommend learning what consciousness is according to science in the first place.



          • #25
            Is it just me, or do other people also have a hard time taking the whole AI thing seriously? I think if anything happens, some unknown species outside the range of our developed instruments, most likely from Europa, is going to assume a core code pattern and trick us all into thinking we made something great, all while taking over the world.

            Imagine that as a science fiction novel, mankind conquered by jellyfish from Europa.

            I don't think we are the only intelligent species that came and went during the billions of years this solar system has been around.
            Last edited by creative; 27 January 2024, 05:04 PM.



            • #26
              Originally posted by M.Bahr View Post

              If you really want to learn more about this topic, I recommend learning what consciousness is according to science in the first place.

              That's not a question of science but of philosophy. And I have a feeling that the latest advances in philosophy will show you consciousness to be the opposite of what you think it is.



              • #27
                Originally posted by david-nk View Post

                You're bitter about something that is not even real. You say machine learning will never work, yet it is already working TODAY.
                ....
                So instead of being bitter about your idea of "real" intelligence not having been pursued enough in the past, if I were you, I would actually start to study the field of machine learning... and neuroscience as well, since like many other people you seem to put the human brain on a pedestal, thinking there is something magical or special about it when there really is not. It is a fascinating field regardless, and you will find that there are many parallels between the process of learning in ANNs and how learning happens in the real brains of animals and humans, even if the fundamental underlying method is different.
                I respectfully disagree david-nk.

                First of all, if what you are saying were true, then companies would not be spending billions of dollars on AI hardware, consuming astounding amounts of electrical power, and engaging in mass plagiarism of the internet to train their models.

                Second, I created a biological neural simulator in my early 20s on an old Atari 1040 ST with two floppy drives, and around 20 floppy disks. I used the Principles of Neural Science by Kandel and Schwartz, Second Edition, as a reference to create the simulator.

                Four basic software modules, Reception, Conduction, Integration, and Transmission, were used to create the simulated neurons.

                The Reception module simulated the presynaptic portion of the synapse; the Conduction module simulated axon and dendrite signal transmission, including anterograde and retrograde molecular transport; the Integration module simulated the soma, which included electrically and chemically gated ion channels as well as passive channels; and the Transmission module simulated the postsynaptic portion of the synapse.
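The four-module pipeline described above could be sketched roughly as follows. This is a hypothetical toy version, not the original simulator: the class and parameter names are illustrative, and the biophysics is reduced to simple accumulate-and-fire arithmetic.

```python
class Neuron:
    """Toy neuron with the four stages named in the post."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold   # firing threshold of the soma
        self.potential = 0.0         # accumulated membrane potential

    def reception(self, signals):
        # Presynaptic side: collect incoming signal strengths.
        return sum(signals)

    def conduction(self, signal, attenuation=0.9):
        # Dendrite/axon transmission, modeled as simple attenuation.
        return signal * attenuation

    def integration(self, signal):
        # Soma: accumulate potential and fire once the threshold is reached.
        self.potential += signal
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after firing
            return True
        return False

    def transmission(self, fired, strength=1.0):
        # Postsynaptic output: emit a signal only when the neuron fires.
        return strength if fired else 0.0

    def step(self, signals):
        received = self.reception(signals)
        conducted = self.conduction(received)
        fired = self.integration(conducted)
        return self.transmission(fired)


n = Neuron(threshold=1.0)
print(n.step([0.6, 0.6]))  # 1.2 * 0.9 = 1.08 >= 1.0, so it fires: 1.0
print(n.step([0.1]))       # 0.09 < 1.0 after reset, so no output: 0.0
```

Wiring ten such neurons together, with each neuron's transmission feeding the reception of the next, gives the flavor of the small network the post describes, minus the ion-channel and molecular-transport detail of the real simulator.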

                The final simulation incorporated 10 neurons, which was all my old 1040 ST could handle. And I would spend many, many hours swapping different floppy disks to carry out 3 or 4 seconds of simulation, but it was exhilarating to watch the neurons firing and integrating to produce the correct results as specified in the Principles of Neural Science.

                Unfortunately, after about a year and a half I had to suspend my work on the simulator, because I have no formal education beyond a high school diploma and therefore could not get funding to continue. And once I reentered the workforce as a regular R&D hardware/firmware/software engineer, I was never in a position to return to it.

                So I do understand neuroscience quite well, and machine learning has absolutely nothing to do with it. It's like saying throwing gasoline into a fire pit is the same as an internal combustion engine. Yes, they both use gasoline to create energy, but that is their only common connection, as they are two completely different beasts.
                Last edited by muncrief; 27 January 2024, 01:54 PM.



                • #28
                  Originally posted by Bobby Bob View Post

                  It's weird how some people can talk with such absolute confidence about this topic without even getting the terminology straight, or while using it inconsistently...

                  Yes, AI exists today, and it has for decades. Bots in games? That's AI. Pathfinding? That's AI. Chatbots? That's AI. Hand writing recognition? That's AI.

                  What do you think the word 'artificial' means? In this context, it literally means artificial as in 'fake'. Fake Intelligence. Not real intelligence. That's why we have a term for the real stuff that can do actual logic and reasoning in the same way humans can - it's called 'general intelligence', a term you even used in that very same ranting wall of text, correctly even. You could easily fix your entire post by just doing a find and replace on 'AI' and 'artificial intelligence' and replacing them both with GI and general intelligence and everything else you said would be correct.

                  What is it about all this generative AI that has suddenly created this wave of people who keep mistaking the term 'artificial intelligence' for 'real intelligence'? If you only googled the terms you're using you would easily find that you're using the term completely wrong.
                  I assumed the readers of my post would understand my references to AI and general AI from the context of the sentence structures, Bobby Bob.



                  • #29
                    Originally posted by partcyborg View Post

                    Ok, I'll bite. What specific scientific work on "reverse engineering the human brain" have we "completely ignored" for the last 50 years?
                    partcyborg, I was referencing connectomics as described in articles like https://www.news-medical.net/news/20...use-brain.aspx and https://www.anl.gov/article/small-brain-big-data.
                    Last edited by muncrief; 27 January 2024, 03:54 PM.



                    • #30
                      Instead of arguing about what AI is, call XDNA a Potentially Useful INT8/INT4/BF16 Accelerator.

