AMD Publishes XDNA Linux Driver: Support For Ryzen AI On Linux


  • #31
    Originally posted by Chugworth View Post
    The thing that I don't like about most of the "AI" implementations available now is the cloud-based nature of them. I know we're a long way from having anything like Bard or ChatGPT that can run from our own hardware, but any step in that direction is a positive development.
    llama.cpp can run several smaller models on consumer hardware. It depends on what you mean by "anything like"; you can't expect the same output quality, but it's definitely a similar user experience.
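
    For what it's worth, a minimal local-inference sketch using the llama-cpp-python bindings might look like the following (the model file name and generation parameters are illustrative assumptions, not anything specific to this thread):

```python
# Minimal local-inference sketch using the llama-cpp-python bindings.
# The model file and generation parameters are illustrative assumptions.
from llama_cpp import Llama

# Load a quantized GGUF model small enough for consumer RAM/VRAM.
llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

# Ask one question and print the completion text.
out = llm("Q: What does an NPU accelerate? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```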



    • #32
      I certainly concur that the extent of ignorance around "AI" is deplorable, and that what we actually have is what you (et al.) have called "machine learning".
      Actually, in a non-technical sense I wouldn't dignify the end result as either "AI" or "ML", because although the "ML" models do get trained, they don't
      (in general use) thereafter LEARN anything. I could start a thousand different conversations with ChatGPT et al. and first explain how I'd like it to respond to a request, or proceed with a series of requests to attempt to satisfy my directive. It would "learn" nothing; in every single conversation it would know nothing more than the
      time before, and whatever effort I'd have to make to give it context / state about an ongoing project / workflow would necessarily be repeated ad infinitum.

      I don't expect it to be very "intelligent" yet, but at the very least it should be possible to train it to "learn" the context of an interaction.

      Anyway, I think "mass plagiarism" is rather a harsh critique. I see where you're coming from, but as you said yourself these models are fed a vast amount of training
      data, e.g. trillions of pieces of input data, to generate models with "merely" billions of learned parameters.

      Information theory tells us any data carries at most a certain amount of information: one bit of data holds at most one bit of information, though the encoding could be less
      efficient and there could be vastly less information present; we'd then say that the compressibility of the data depends on its entropy and information content.

      So although one can infer there is some redundancy in the training data, some of these ML models are THOUSANDS of times smaller in model data size than the information they've been trained on. The training information generally could not be compressed that small by any known compression scheme.
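
      As a back-of-the-envelope illustration of that size argument (every number below is an assumption for the sake of the example, not a measurement), the ratio of raw training text to model weights comes out in the hundreds to thousands depending on the sizes you pick:

```python
# Back-of-the-envelope version of the size argument above.
# Every number here is an illustrative assumption, not a measurement.
training_tokens = 2e12                                 # ~2 trillion tokens of training text (assumed)
bytes_per_token = 4                                    # rough average for English text (assumed)
training_bytes  = training_tokens * bytes_per_token    # ~8 TB of raw text

parameters      = 7e9                                  # a 7-billion-parameter model (assumed)
bytes_per_param = 2                                    # fp16 weights
model_bytes     = parameters * bytes_per_param         # ~14 GB of weights

print(f"raw training text ~ {training_bytes / 1e12:.0f} TB")
print(f"model weights     ~ {model_bytes / 1e9:.0f} GB")
print(f"ratio             ~ {training_bytes / model_bytes:.0f}x")   # ~570x with these numbers
```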

      So although the ML model may remember fragmentary and qualitative aspects of text, images, et al., it's CERTAINLY not "just copying" its input. It is learning generalities, qualitative correlations, statistical patterns, yes, from the sum of its experiences, but it's not just some database recreating rote memorizations of its training
      data, at least judging by the broad context of how much data goes in versus how much model comes out.

      By contrast, as you've already said, the human brain is vastly more complex, capable of true learning, and able to remember more actual information.
      We "train" all our people's "models" by sending them to school and making them study painting, sculpture, photography, vocal music, instrumental music, poetry, prose, philosophy, et al.

      But we don't do that to commit "mass plagiarism" against popular culture and the work that has gone before; we do that to provide a context of ideas, information, themes,
      and history to educate, inform, inspire, and enable a common vernacular of meaning for words, images, sounds, language, et al.

      So if you teach your child some Chinese language, is that "mass plagiarism" of some 2000 BC writer who "created" that language? When a high school art student
      draws a bowl of apples or a picture of a starry sky, is that "mass plagiarism" of some Renaissance artist? When they write a poem, is it "mass plagiarism" of Homer, Shakespeare, Tennyson, et al.? Surely they've been "trained" on things, but they don't remember all of it literally, and could not if they had tried; they've qualitatively been introduced to generalities, qualities, styles, et al. For the most part that's not so different from the ML models.

      Maybe the ML models have fragments of things they remember literally to some close / partial approximation: 2+2=4, "once upon a time", the way an apple may look, etc.,
      but they're not going to recite verbatim Hamlet, or Beethoven's Ninth Symphony, or the entirety of the Iliad (the general concepts, yes; the particulars could vary, I suppose).
      Just as most teenage art students couldn't create a convincing duplicate of the Mona Lisa if they tried, despite having stared at it for some hours in the past.

      You bemoan AI scientists not creating "true AI", but if we HAD created (or someday shall create) true AI, should we not educate it the same way we educate our human children, by
      training it on history, popular culture, literature, art, language, math, science, ...?


      Originally posted by muncrief View Post
      There is zero artificial intelligence today. ...
      Then companies scoured the internet in the greatest crime of mass plagiarism in history, and used the basic ability of machine learning to recognize nouns, verbs, etc. to chop up and recombine actual human writings and thoughts into ‘generative AI’.



      • #33
        True, and yet it ironically points to one of the greatest failings of current ML and, frankly, of a lot of current human "digital society".

        Although we've done amazing things like creating the library at Alexandria and great libraries from time to time ever since, we're really in a dark age of folly now.

        We have desktop data stores at the 10-100+ terabyte scale being moderately common, with commercial ones hundreds of thousands of times larger.
        An average PC may well have space to hold more text than a human could ever read in their lifetime, even if they did nothing else with their abilities.

        And yet we really phenomenally suck at creating ORGANIZED libraries of human knowledge. We created the internet, yet we search all of it with, what,
        Google, or worse, web sites without the most basic rudiments of actual content organization or advanced query capability?

        We're creating personal "AI"s, and cloud-based ones too, with what, 4k-200k "tokens" of context / prompt memory and nothing more?
        Sure, there's RAG and the like, but it seems pathetic.
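
        For what it's worth, the usual workaround for those small context windows is exactly that RAG pattern: retrieve the most relevant snippets from a local store and prepend them to the prompt. A toy sketch (the documents, scoring, and names are all made-up assumptions; a real system would use a proper vector index):

```python
# Toy retrieval-augmented-generation (RAG) sketch: rank local "library" snippets
# against a query and build an augmented prompt for a language model.
# Everything here is an illustrative assumption, not a real pipeline.
from collections import Counter
import math

documents = [
    "XDNA is AMD's NPU architecture used in Ryzen AI processors.",
    "llama.cpp runs quantized language models on consumer CPUs and GPUs.",
    "The Library of Alexandria was a famous collection of ancient knowledge.",
]

def embed(text: str) -> Counter:
    # Crude bag-of-words "embedding"; a real system would use a vector model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list:
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

query = "What is AMD's XDNA used for?"
context = "\n".join(retrieve(query))
prompt = f"Use this context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # this augmented prompt would then be handed to a local model
```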

        We shouldn't have to be "digging through search results"; we should have efficient ways of organizing the "sum of human knowledge" and advanced ways of searching it.
        Nevertheless there'd still be a lot to search, petabytes, whatever. But the one thing our "digital society" should be good at -- organizing and sharing information --
        we're not (outside the domain of actual libraries and databases), and the one thing ML "assistants" should be good at -- NAVIGATING this mass of
        human knowledge and doing specific research by synthesizing, searching, and correlating it -- they SUCK at.

        I'd take SQL or some NoSQL query language as a vast improvement over Google search, but we don't even have that.
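
        As a toy illustration of what a structured query over an organized corpus could look like (SQLite with its FTS5 full-text extension, assuming your SQLite build includes it; the table and rows are made up):

```python
# Toy example of querying an organized text corpus with SQL full-text search
# (SQLite FTS5). The table name and rows are made-up illustrations.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
con.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [
        ("XDNA driver", "AMD published the XDNA Linux driver for Ryzen AI NPUs."),
        ("Bard notes", "Cloud chat assistants summarize web search results."),
    ],
)

# Boolean full-text query with relevance ranking; more precise than a web search box.
for (title,) in con.execute(
    "SELECT title FROM docs WHERE docs MATCH 'xdna AND linux' ORDER BY rank"
):
    print(title)
```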

        And our ML models (across the breadth of them) can't even handle XML / JSON / PDF et al. well without a lot of poorly working layers bolted on to force them to kinda sorta work.



        Originally posted by Chugworth View Post
        Call it what you want, but it's something useful. I've actually used Bard numerous times to get various types of information, and it saved me the time of digging through search results. I even felt that some of the responses were more clear and to-the-point of my question than the information I would have read on the source websites.

        The thing that I don't like about most of the "AI" implementations available now is the cloud-based nature of them. I know we're a long way from having anything like Bard or ChatGPT that can run from our own hardware, but any step in that direction is a positive development.



        • #34
          It's by no means clear (and it's rather anthropocentric and vain to assume) that we could or should reverse engineer the human brain; we're talking about intelligence, after all.
          Many roads can lead to Rome. You have to have input data / senses to process, you have to have memory or something that functions like it, and you have to build
          a model of past inputs to predict current / future ones with respect to some context, until your model achieves some greater level of understanding of the
          system that presents that input data / sense input.
          By whatever means, given sufficient compute speed, input data, memory, the ability to experiment non-self-destructively (i.e. the brain doesn't format itself into a lockup state),
          and sufficient time, it'll happen. And as the models improve, v2 can be better than v1.

          Originally posted by partcyborg View Post

          Ok, I'll bite. What specific scientific work on "reverse engineering the human brain" have we "completely ignored" for the last 50 years?



          • #35
            That's an extreme conclusion.
            I'll agree that there may be no such entity here and now, but surely it's possible that an intelligence based on digital computation could arise.
            Even now we've got 64-bit computers, with that bit depth being the most common data resolution in use. That's a LOT of levels, which I think is going to be
            better than or indistinguishable from "analog" in almost every case. Represented in base 10 that's roughly 19 significant digits; if the minimum level were a voltage of
            1 nV, the maximum would be on the order of 10 GV. At such resolution you're beyond any "scalable" analog phenomenon of voltage, current, or counts of physical objects moving; even for a count of photons that would be rather a lot.
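
            A quick arithmetic check of those numbers (nothing here beyond basic math):

```python
# Quick arithmetic check of the 64-bit "resolution" point above.
import math

levels = 2 ** 64
print(f"{levels:.3e} distinct levels")               # ~1.845e+19
print(f"~{math.log10(levels):.1f} decimal digits")   # ~19.3 significant digits

min_step_volts = 1e-9                                # assume the smallest step is 1 nV
full_scale = levels * min_step_volts
print(f"full scale ~ {full_scale / 1e9:.0f} GV")     # ~18 GV, i.e. on the order of 10 GV
```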

            You mentioned quantum systems, and while it's true there are possibly areas where they can outperform classical computers, it's not clear that organic brains on this planet are so deeply involved in such domains that "modest" classical Turing machines can't exceed them.

            Go look at the formula for iteratively calculating pi in base 10; now think about the first thousand digits. Write them down. Now try that on a computer.
            Or consider any number of other things our brains can "understand" the model of, yet mostly we're not that fast at information retrieval, calculation, reaction, et al.
            Don't give up on digital just yet.
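
            For instance, printing the first thousand or so digits of pi is a couple of lines with an arbitrary-precision library (mpmath here, purely as an illustration):

```python
# Print pi to ~1000 significant decimal digits using the mpmath library.
from mpmath import mp

mp.dps = 1000   # significant decimal digits of working precision
print(mp.pi)    # a computer does in milliseconds what we'd struggle to memorize
```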

            Originally posted by M.Bahr View Post
            There is no entity derived from "ones" and "zeros" that could learn something in the sense of learning like a living being could.



            • #36
              Originally posted by pong View Post
              I certainly concur that the extent of ignorance around "AI" is deplorable, and that what we actually have is what you (et al.) have called "machine learning".
              Actually, in a non-technical sense I wouldn't dignify the end result as either "AI" or "ML", because although the "ML" models do get trained, they don't
              (in general use) thereafter LEARN anything. I could start a thousand different conversations with ChatGPT et al. and first explain how I'd like it to respond to a request, or proceed with a series of requests to attempt to satisfy my directive. It would "learn" nothing; in every single conversation it would know nothing more than the
              time before, and whatever effort I'd have to make to give it context / state about an ongoing project / workflow would necessarily be repeated ad infinitum.

              I don't expect it to be very "intelligent" yet, but at the very least it should be possible to train it to "learn" the context of an interaction.

              Anyway, I think "mass plagiarism" is rather a harsh critique. I see where you're coming from, but as you said yourself these models are fed a vast amount of training
              data, e.g. trillions of pieces of input data, to generate models with "merely" billions of learned parameters.

              Information theory tells us any data carries at most a certain amount of information: one bit of data holds at most one bit of information, though the encoding could be less
              efficient and there could be vastly less information present; we'd then say that the compressibility of the data depends on its entropy and information content.

              So although one can infer there is some redundancy in the training data, some of these ML models are THOUSANDS of times smaller in model data size than the information they've been trained on. The training information generally could not be compressed that small by any known compression scheme.

              So although the ML model may remember fragmentary and qualitative aspects of text, images, et al., it's CERTAINLY not "just copying" its input. It is learning generalities, qualitative correlations, statistical patterns, yes, from the sum of its experiences, but it's not just some database recreating rote memorizations of its training
              data, at least judging by the broad context of how much data goes in versus how much model comes out.

              By contrast, as you've already said, the human brain is vastly more complex, capable of true learning, and able to remember more actual information.
              We "train" all our people's "models" by sending them to school and making them study painting, sculpture, photography, vocal music, instrumental music, poetry, prose, philosophy, et al.

              But we don't do that to commit "mass plagiarism" against popular culture and the work that has gone before; we do that to provide a context of ideas, information, themes,
              and history to educate, inform, inspire, and enable a common vernacular of meaning for words, images, sounds, language, et al.

              So if you teach your child some Chinese language, is that "mass plagiarism" of some 2000 BC writer who "created" that language? When a high school art student
              draws a bowl of apples or a picture of a starry sky, is that "mass plagiarism" of some Renaissance artist? When they write a poem, is it "mass plagiarism" of Homer, Shakespeare, Tennyson, et al.? Surely they've been "trained" on things, but they don't remember all of it literally, and could not if they had tried; they've qualitatively been introduced to generalities, qualities, styles, et al. For the most part that's not so different from the ML models.

              Maybe the ML models have fragments of things they remember literally to some close / partial approximation: 2+2=4, "once upon a time", the way an apple may look, etc.,
              but they're not going to recite verbatim Hamlet, or Beethoven's Ninth Symphony, or the entirety of the Iliad (the general concepts, yes; the particulars could vary, I suppose).
              Just as most teenage art students couldn't create a convincing duplicate of the Mona Lisa if they tried, despite having stared at it for some hours in the past.

              You bemoan AI scientists not creating "true AI", but if we HAD created (or someday shall create) true AI, should we not educate it the same way we educate our human children, by
              training it on history, popular culture, literature, art, language, math, science, ...?
              I appreciate your thoughtful response, pong; however, I see two primary overriding flaws in the reasoning, and a few other peripheral ones.

              First of all, a human being does not have to be taught anything at all, though we do teach previous knowledge so it can be built upon rather than being discovered over and over again. Which, of course, is the way humanity existed for many generations until various humans independently discovered ways to record it for following generations.

              For example, if a human being were born in isolation on an island they would wander around and discover things independently. Driven by both curiosity and the need to survive they would explore their environment and learn which foods were edible, which animals were dangerous, explore the depths of human emotions like fear and love and hate, wonder about how they and the world were created, etc. While if a robot perfectly mimicking the physical capabilities of a human, but equipped only with any of the various flavors of AI instead of a human brain, were left in isolation it would just sit there. It would have no desire to do anything, and could not learn anything.

              Second, the things we teach children in school are not plagiarism because the material is used with permission from the authors, or taught voluntarily by teachers. The companies creating AI systems, by contrast, have simply scoured the internet gobbling up everything they can find, most often with no permission from, or compensation for, the human creators.

              As for information theory, I do not believe it applies to the human brain, and that's why we do not yet understand how our brains function. We know much about the cellular and physical structure, but almost nothing about the way those cells and structures interact to create the vast and seemingly endless well of being we call consciousness.

              Finally, the assertion that various flavors of AI "learn" because their output is not a one to one copy of their input is simply the result of associative networks matching features of various input data, not actually learning anything about it.

              Think of it this way - Human beings can independently discover and create AI, but AI cannot independently discover and create human beings.

              "Knowledge is free.

              It's the certificate that costs."
              SearingTruth
              Last edited by muncrief; 28 January 2024, 02:42 PM.



              • #37
                Yes, imagine the science fiction -- millions of non-human intelligences in the Sol system besides genus Homo.
                Oh, wait, it's not fiction; they've always been all around us as we've been shedding our fur.

                Originally posted by creative View Post
                Is it just me or are other people having a hard time taking the whole AI thing seriously? I think if anything happens some unknown species out of our range of developed instruments most likely from Europa is going to assume a core code pattern and trick us all into thinking we made something great all while taking over the world.

                Imagine that as a science fiction novel, mankind conquered by jellyfish from Europa.

                I don't think we are the only intelligent species that came and went during the billions of years this solar system has been around.



                • #38
                  Originally posted by muncrief View Post
                  For example, if a human being were born in isolation on an island they would wander around and discover things independently. Driven by both curiosity and the need to survive they would explore their environment and learn which foods were edible, which animals were dangerous, explore the depths of human emotions like fear and love and hate, wonder about how they and the world were created, etc. While if a robot perfectly mimicking the physical capabilities of a human, but equipped only with any of the various flavors of AI instead of a human brain, were left in isolation it would just sit there. It would have no desire to do anything, and could not learn anything.
                  A human newborn would also just sit there (or rather cry its lungs out) and perish without knowing what to do to survive.

                  Humans are born in a much less complete state of mental development than most other animals and absolutely REQUIRE parental care to survive early out-of-womb existence... which makes us a terrible example if you're trying to prove that AI is bad just because it lacks a functional non-learned initial state... which is also a poor argument IMHO, and any pre-trained model running on one computer, based on training done on other copies of the model on another computer, can beat this idea.

                  The biological pre-training you seem to be alluding to (being born already knowing something) is referred to as "instinct" in popular terms, and the chain of events leading from a gene to an innate/instinctive behaviour is fascinating, albeit tortuous and often frail and inaccurate.

                  And you're also glossing over the fact that fetuses already learn while inside the womb: while the brain is being formed it's already switched on and functioning, perceiving the world around it as best it can at each development stage. There are a zillion things it's dealing with at that stage, including light, sound, tactile pressure, temperature, vibration, chemical variations in its own body, hormone signals passing from the mother's blood into its own, etc. That's already training the brain.

                  And I'd like to propose that a single human is absolutely incapable of wrapping their head around a fraction of the subjects ChatGPT can deep-dive into at the same time. How many humans can single-handedly know in-depth about all sorts of things from Rust vs. C to Quantum Physics to Sword Forging Techniques to Makeup Products Harmonization to Geopolitics to GPS Triangulation to ... you get the point



                  • #39
                    As for the original topic:

                    1) thankfully AMD listened to its users and made the effort to bring support to Linux

                    2) I still think it sent a horrible signal that they even needed to ask users whether this was wanted

                    3) yes, AI / machine learning / matrix computation can be useful on Linux and in the open-source world, even in common applications, and even once spyware and targeted-ad categories are excluded from the definition of "useful"



                    • #40
                      Originally posted by marlock View Post
                      A human newborn would also just sit there (or rather cry its lungs out) and perish without knowing what to do to survive.

                      Humans are born in a much less complete state of mental development than most other animals and absolutely REQUIRE parental care to survive early out-of-womb existence... which makes us a terrible example if you're trying to prove that AI is bad just because it lacks a functional non-learned initial state... which is also a poor argument IMHO, and any pre-trained model running on one computer, based on training done on other copies of the model on another computer, can beat this idea.

                      The biological pre-training you seem to be alluding to (being born already knowing something) is referred to as "instinct" in popular terms, and the chain of events leading from a gene to an innate/instinctive behaviour is fascinating, albeit tortuous and often frail and inaccurate.

                      And you're also glossing over the fact that fetuses already learn while inside the womb: while the brain is being formed it's already switched on and functioning, perceiving the world around it as best it can at each development stage. There are a zillion things it's dealing with at that stage, including light, sound, tactile pressure, temperature, vibration, chemical variations in its own body, hormone signals passing from the mother's blood into its own, etc. That's already training the brain.

                      And I'd like to propose that a single human is absolutely incapable of wrapping their head around a fraction of the subjects ChatGPT can deep-dive into at the same time. How many humans can single-handedly know in-depth about all sorts of things from Rust vs. C to Quantum Physics to Sword Forging Techniques to Makeup Products Harmonization to Geopolitics to GPS Triangulation to ... you get the point
                      And if an AI robot were simply left on the ground without a power source, or without initial training to be able to walk or reach out and grasp things, it would be in an even worse, fatal state, marlock. I was of course referring to both the human and the AI robot possessing equal physical autonomy, and to their ability to learn from that stage forward. And I'm not certain of your reasoning behind the statement that ChatGPT could deep-dive into subjects that a human could not, as that would require massive training of the ChatGPT system first. And even then the ChatGPT system would still know nothing, and would simply be grammatically associating a query with the data it had been fed. An AI system of any flavor is simply not capable of creative thought, or of thought of any kind. It can only blindly associate inputs and output those associations.

