AMD Publishes XDNA Linux Driver: Support For Ryzen AI On Linux


  • marlock
    replied
    and you seem to have failed to grasp that a human baby isn't just physically inept, it is mentally far more helpless than most other animals, which makes it a poor basis for a mind-to-mind comparison with an untrained AI (an unfair comparison to begin with, because a newborn has already been trained quite a lot in-womb)

    it's not just uncalcified bones that prevent it from crawling; it simply isn't born knowing how to coordinate that sort of movement yet, whereas a calf can sometimes manage it within minutes of birth



  • marlock
    replied
    and you are also being a bit obtuse IMHO about what neurons do vs. what electronics-based mathematical weights can achieve... just because the material bases of two physical phenomena are absurdly different doesn't mean their end results can't be functionally equivalent to some degree...

    look up "fairy ring pixie circle complex system emerging pattern" for a didactic example of functionally similar emergent patterns arising from radically different underlying physical phenomena

    in more specific terms, when a neuron creates a synapse to another neuron it is functionally attributing a mathematical weight to that impulse pathway, which generally helps link some of that biological body's inputs to some of that same body's outputs... you can argue several functional differences between current-gen LLMs like ChatGPT and a human brain (eg: neuron synapses have a timing dependence between an impulse and the previous impulse through the same neuron, whereas ChatGPT is sort of stateless), but those differences in implementation detail don't amount to one of them thinking and the other not
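
    to make the "synapse as a weight" point concrete, here's a toy sketch (my own illustration with made-up numbers, not a model of any real neuron) of a single artificial neuron doing exactly that job of linking inputs to an output through weights:

    import math

    def artificial_neuron(inputs, weights, bias):
        # the weights play the same functional role as synaptic strengths:
        # they decide how much each incoming signal contributes to the output
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))  # squash into a firing "rate" between 0 and 1

    # purely illustrative values: one strong excitatory and one weak inhibitory "synapse"
    print(artificial_neuron([0.9, 0.1], [2.5, -1.0], bias=-0.5))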

    that is the same class of mistake several scientists made (defining "Life" by its material basis, eg: "has DNA material", instead of by a functional definition), the one that caused most of them to reject viewing viruses as Life and later hampered even admitting that prions can exist



  • marlock
    replied
    IMHO you are plain wrong about ChatGPT not being creative... it can create stuff it never saw by piecing together stuff it did see, even across contexts, as I've seen first-hand several times... even the so-called hallucinations are proof of it

    as a colleague of mine (an environmental researcher) once said: "it quoted an article that just didn't exist anywhere in the world, but that article should definitely exist!"

    ChatGPT even lied that the article was by a real-world author, and that author did indeed work on related subjects: a coherent detail, but the core of the piece was still invented

    this is not an uncommon sight if you use it enough



  • muncrief
    replied
    Originally posted by marlock View Post
    A human newborn would also just sit there (or rather cry its lungs out) and perish without knowing what to do to survive.

    Humans are born in a much less complete state of mental development than most other animals and absolutely REQUIRE parental care to survive early out-of-womb existence... which makes us a terrible example if you're trying to prove that AI is bad just because it lacks a functional non-learned initial state... which is also a poor argument IMHO, and any pre-trained model running on one computer, based on training done on other copies of the model on another computer, can beat this idea anyway.

    The biological pre-training you seem to be alluding to (being born already knowing something) can be referred to as "instinct" in popular terms, and the chain of events leading from a gene to an innate/instinctive behaviour is fascinating, albeit tortuous and often frail and inaccurate.

    And you're also glossing over the fact that fetuses already learn while inside the womb; even while the brain is being formed it's already switched on and functioning, perceiving the world around it as best it can at each development stage. There are a zillion things it's dealing with at that stage, including lights, sounds, tactile pressure, temperature, vibrations, chemical variations in its own body, hormone signals from the mother's blood into its own, etc, etc. That's already training the brain.

    And I'd like to propose that a single human is absolutely incapable of wrapping their head around a fraction of the subjects ChatGPT can deep-dive into at the same time. How many humans can single-handedly know in-depth about all sorts of things from Rust vs. C to Quantum Physics to Sword Forging Techniques to Makeup Products Harmonization to Geopolitics to GPS Triangulation to ... you get the point

    And if an AI robot were simply left on the ground without a power source, or without initial training to be able to walk or reach out and grasp things, it would be in an even worse, fatal state, marlock. I was of course referring to both the human and the AI robot possessing equal physical autonomy, and to their ability to learn from that stage forward.

    And I'm not certain of your reasoning behind the statement that ChatGPT could deep-dive into subjects that a human could not, as that would require massive training of the ChatGPT system first. And even then the ChatGPT system would still know nothing, and would simply be grammatically associating a query with the data it had been fed. An AI system of any flavor is simply not capable of creative thought, or of thought of any kind. It can only blindly associate inputs and output those associations.



  • marlock
    replied
    As for the original topic:

    1) thankfully AMD listened to its users and made the effort to bring support to Linux

    2) I still think they sent a horrible signal by even needing to ask users whether this was wanted

    3) yes, AI/machine learning/matrix computation can be useful even on Linux and in the open-source world and in common applications, even once spyware and targeted-ad categories are excluded from the definition of "useful"



  • marlock
    replied
    Originally posted by muncrief View Post
    For example, if a human being were born in isolation on an island they would wander around and discover things independently. Driven by both curiosity and the need to survive they would explore their environment and learn which foods were edible, which animals were dangerous, explore the depths of human emotions like fear and love and hate, wonder about how they and the world were created, etc. While if a robot perfectly mimicking the physical capabilities of a human, but equipped only with any of the various flavors of AI instead of a human brain, were left in isolation it would just sit there. It would have no desire to do anything, and could not learn anything.
    A human newborn would also just sit there (or rather cry its lungs out) and perish without knowing what to do to survive.

    Humans are born in a much less complete state of mental development than most other animals and absolutely REQUIRE parental care to survive early out-of-womb existence... which makes us a terrible example if you're trying to prove that AI is bad just because it lacks a functional non-learned initial state... which is also a poor argument IMHO, and any pre-trained model running on one computer, based on training done on other copies of the model on another computer, can beat this idea anyway.

    The biological pre-training you seem to be alluding to (being born already knowing something) can be referred to as "instinct" in popular terms, and the chain of events leading from a gene to an innate/instinctive behaviour is fascinating, albeit tortuous and often frail and inaccurate.

    And you're also glossing over the fact that fetuses already learn while inside the womb; even while the brain is being formed it's already switched on and functioning, perceiving the world around it as best it can at each development stage. There are a zillion things it's dealing with at that stage, including lights, sounds, tactile pressure, temperature, vibrations, chemical variations in its own body, hormone signals from the mother's blood into its own, etc, etc. That's already training the brain.

    And I'd like to propose that a single human is absolutely incapable of wrapping their head around a fraction of the subjects ChatGPT can deep-dive into at the same time. How many humans can single-handedly know in-depth about all sorts of things from Rust vs. C to Quantum Physics to Sword Forging Techniques to Makeup Products Harmonization to Geopolitics to GPS Triangulation to ... you get the point



  • pong
    replied
    Yes, imagine the science fiction -- millions of non-human intelligences in the Sol system besides genus Homo.
    Oh, wait, it's not fiction, they've always been all around us as we've been shedding our fur.

    Originally posted by creative View Post
    Is it just me, or are other people having a hard time taking the whole AI thing seriously? I think if anything happens, some unknown species outside the range of our developed instruments, most likely from Europa, is going to assume a core code pattern and trick us all into thinking we made something great, all while taking over the world.

    Imagine that as a science fiction novel, mankind conquered by jellyfish from Europa.

    I don't think we are the only intelligent species that came and went during the billions of years this solar system has been around.



  • muncrief
    replied
    Originally posted by pong View Post
    While I certainly concur that the extent of ignorance around "AI" seems deplorable, and that what we actually have is what you (et al.) have called "machine learning". Actually, in a non-technical sense I wouldn't dignify the end result as either "AI" or "ML", because although the "ML" models do get trained, they don't (in general use) thereafter LEARN anything. I could start a thousand different conversations with ChatGPT et al. and first explain how I'd like it to respond to a request, or proceed with a series of requests to attempt to satisfy my directive. It would "learn" nothing, in every single conversation it'd know nothing different than the time before, and whatever effort I'd have to make to give it context / state about an ongoing project / workflow would necessarily be repeated ad infinitum.

    I don't expect it to be very "intelligent" yet, but at least it should be able to be simply trained to "learn" the context of an interaction.

    Anyway, I think "mass plagiarism" is rather a harsh critique; I see where you're coming from, but as you said yourself these models are fed a vast amount of training data, e.g. trillions of pieces of input data, to generate models with "merely" billions of learned parameters.

    Information theory tells us any data has at most a certain amount of information associated with it: at most, 1 bit of data = 1 bit of information, though it could be less efficient and there could be vastly less information present; we'd then say that the compressibility of the data depends on its entropy and information content.

    So although one can infer there is some redundancy in the training data, some of these ML models are THOUSANDS of times smaller in model data size than the information they've been trained on. The training information generally could not be compressed that small by any known compression scheme.

    So although the ML model may remember fragmentary and qualitative aspects of text, images, et al., it's CERTAINLY not "just copying" its input. It is learning generalities, qualitative correlations, statistical patterns, yes, from the sum of its experiences, but it's not just some database recreating rote memorizations of its training data, given the architecture and the broad context of how much data goes in to produce how much model comes out.
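
    as a rough back-of-envelope illustration of that ratio (hypothetical round numbers, not figures for any specific model):

    training_items = 2e12      # "trillions of pieces of input data"
    learned_params = 2e9       # "merely billions of learned parameters"
    print(f"~{training_items / learned_params:.0f} training items per learned parameter")
    # -> ~1000, i.e. the model is orders of magnitude too small to store its inputs verbatim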

    On the contrary you've already said the human brain is vastly more complex and capable of true learning and remembering more actual information.
    We "train" all our people's "models" by sending them to school, making them study painting, sculpture, photography, vocal music, instrumental music, poetry, prose, philosophy, et. al.

    But we don't do that to commit "mass plagiarism" on popular culture, the work that has gone before; we do that to provide a context of ideas, information, themes, and history to educate, inform, inspire, and enable a common vernacular of meaning for words, images, sounds, language, et al.

    So if you teach your child some Chinese language, is that "mass plagiarism" of some 2000 BC writer who "created" that language? When a high school art student draws a bowl of apples or a picture of a starry sky, is that "mass plagiarism" of some Renaissance artist? When they write a poem, is it "mass plagiarism" of Homer, Shakespeare, Tennyson, et al.? Surely they've been "trained" on things, but they don't remember all of it literally, could not if they had tried, but they've qualitatively been introduced to generalities, qualities, styles, et al. For the most part that's not so different from the ML models.

    Maybe the ML models have fragments of things they remember literally to some close / partial approximation; 2+2=4, "once upon a time", the way an apple may look, etc., but they're not going to recite verbatim Hamlet or Beethoven's Ninth Symphony, or the entirety of the Iliad (as general concepts; particulars could vary, I suppose). Just like most teenage art students couldn't create a convincing duplicate of the Mona Lisa if they tried, despite having stared at it for some hours in the past.

    You bemoan AI scientists not creating "true AI", but if we HAD created (or ever shall create) true AI, should we not educate it in the same way as we do our human children, by training them on history, popular culture, literature, art, language, math, science, ....?

    I appreciate your thoughtful response, pong. However, I see two primary, overriding flaws in the reasoning, and a few other peripheral ones.

    First of all, a human being does not have to be taught anything at all, though we do teach previous knowledge so it can be built upon rather than being discovered over and over again. Which, of course, is the way humanity existed for many generations until various humans independently discovered ways to record it for following generations.

    For example, if a human being were born in isolation on an island they would wander around and discover things independently. Driven by both curiosity and the need to survive they would explore their environment and learn which foods were edible, which animals were dangerous, explore the depths of human emotions like fear and love and hate, wonder about how they and the world were created, etc. While if a robot perfectly mimicking the physical capabilities of a human, but equipped only with any of the various flavors of AI instead of a human brain, were left in isolation it would just sit there. It would have no desire to do anything, and could not learn anything.

    Second, the things we teach children in school are not plagiarism because they have permission from the authors, or voluntary instruction by teachers. While the companies creating AI systems have simply scoured the internet gobbling up everything they can find, most often with no permission from, or compensation for, the human creators.

    As for information theory, I do not believe it applies to the human brain, and that's why we do not yet understand how our brains function. We know much about the cellular and physical structure, but almost nothing about the way those cells and structures interact to create the vast and seemingly endless well of being we call consciousness.

    Finally, the assertion that various flavors of AI "learn" just because their output is not a one-to-one copy of their input is mistaken; that behavior is simply the result of associative networks matching features of various input data, not of actually learning anything about it.

    Think of it this way - Human beings can independently discover and create AI, but AI cannot independently discover and create human beings.

    "Knowledge is free.

    It's the certificate that costs."
    SearingTruth
    Last edited by muncrief; 28 January 2024, 02:42 PM.



  • pong
    replied
    That's an extreme conclusion.
    I'll agree that here and now there may be no such entity, but surely it's possible that an intelligence based on digital computation could occur.
    Even now we've got 64-bit computers, with that bit depth being the most common data resolution in use. That's a LOT of levels, which I think is going to be better than, or indistinguishable from, "analog" in almost every case. Represented in base 10 that's about 19 significant digits; or, if the minimum level as a voltage were 1 nV, the maximum of the range would be on the order of 10 GV. At such resolution you're beyond "scalable" analog phenomena of voltage, current, or counts of physical objects moving; even for a count of photons that'd be rather a lot.
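
    just to make the arithmetic explicit (my own back-of-envelope check, with the 1 nV step assumed above):

    levels = 2 ** 64                  # distinct values a 64-bit quantity can take, ~1.8e19
    lsb_volts = 1e-9                  # assume the smallest step is 1 nV
    full_scale = levels * lsb_volts   # ~1.8e10 V, i.e. on the order of 10 GV
    print(f"{levels:.2e} levels -> full scale ~ {full_scale:.2e} V")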

    You mentioned quantum systems, and while it's true there are possibly areas where they can outperform classical computers, it's not clear that organic brains on this planet are so greatly involved in such domains that "modest" classical Turing machines can't exceed them.

    Go look at the formula for iteratively calculating Pi in base 10; now think about the first thousand digits. Write them down. Now try that on a computer. Or consider any number of other things our brains can "understand" the model of, yet mostly we're not that fast at information retrieval, calculation, reaction, et al.
    Don't give up on digital just yet.
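
    for instance, a short sketch of that challenge (using the Gauss-Legendre iteration, one of several formulas that would work) spits out a thousand digits in well under a second:

    from decimal import Decimal, getcontext

    def pi_gauss_legendre(digits):
        # iteratively approximate Pi; each pass roughly doubles the correct digits
        getcontext().prec = digits + 10   # a few guard digits
        a, b = Decimal(1), Decimal(1) / Decimal(2).sqrt()
        t, p = Decimal(1) / 4, Decimal(1)
        for _ in range(digits.bit_length()):
            a_next = (a + b) / 2
            b = (a * b).sqrt()
            t -= p * (a - a_next) ** 2
            a, p = a_next, 2 * p
        return (a + b) ** 2 / (4 * t)

    print(str(pi_gauss_legendre(1000))[:1002])   # "3." plus the first 1000 digits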

    Originally posted by M.Bahr View Post
    There is no entity derived from "ones" and "zeros" that could learn something in the sense of learning like a living being could.



  • pong
    replied
    It's by no means clear (and is rather anthropocentric and vain) that we could / should reverse engineer the human brain; we're talking about intelligence, after all. Many roads can lead to Rome. You have to have input data / senses to process, you have to have memory or something that functions like it, and you have to build a model of the past inputs to predict the current / future ones wrt. some context, until your model achieves some greater level of understanding of the system that presents that input data / sense input.
    By whatever means, given sufficient compute speed, input data, memory, the ability to experiment non-self-destructively (i.e. the brain doesn't format itself into a lockup state), and sufficient time, it'll happen. And as the models improve, v2 can be better than v1.
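
    to put that "model past inputs to predict future ones" loop in concrete terms, here's a minimal sketch (entirely made-up toy data and a plain least-mean-squares update, just to illustrate the loop, not any particular system):

    def train_predictor(stream, lr=0.01):
        w = [0.0, 0.0]                      # the "model": two learned weights
        for i in range(2, len(stream)):
            past = stream[i - 2:i]
            prediction = w[0] * past[0] + w[1] * past[1]
            error = stream[i] - prediction  # compare the prediction against what actually arrived
            w[0] += lr * error * past[0]    # nudge the model toward less error
            w[1] += lr * error * past[1]
        return w

    data = [x % 7 for x in range(300)]      # a toy repeating "sense input"
    print(train_predictor(data))            # weights that roughly capture the pattern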

    Originally posted by partcyborg View Post

    Ok, I'll bite. What specific scientific work on "reverse engineering the human brain" have we "completely ignored" for the last 50 years?

