AMD Publishes XDNA Linux Driver: Support For Ryzen AI On Linux


  • pong
    replied
    True, and yet it ironically points out one of the greatest failings of current ML and, frankly, of a lot of current human "digital society".

    Although we've done amazing things like creating the Library of Alexandria, and great libraries from time to time ever since, we're really in a dark age of folly now.

    Desktop data stores at the 10-100+ terabyte scale are moderately common now, and commercial ones are hundreds of thousands of times larger.
    An average PC may well have space to hold more text than a human could ever read in a lifetime, even if they devoted themselves to nothing else.

    And yet we're really phenomenally sucking at creating ORGANIZED libraries of human knowledge. We created the internet, yet we search all of it with, what,
    Google, or worse, web sites lacking the most basic rudiments of content organization or any advanced query capability?

    We're creating personal "AI"s, and cloud-based ones too, with what, 4k-200k "tokens" of context / prompt memory and nothing more?
    Sure, there's RAG and the like, but it seems pathetic.
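
    (For the unfamiliar: RAG, retrieval-augmented generation, just means fetching relevant text and pasting it into the prompt. A toy sketch of the retrieval half using bag-of-words cosine similarity; the corpus and names are invented for illustration, and real systems use learned embeddings and a vector store.)

        # Toy RAG retrieval: rank documents by bag-of-words cosine similarity,
        # then build an augmented prompt from the best hit. Illustrative only.
        from collections import Counter
        import math

        def bow(text):
            return Counter(text.lower().split())

        def cosine(a, b):
            dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
            norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
            return dot / (norm(a) * norm(b)) if a and b else 0.0

        corpus = [
            "The XDNA driver exposes AMD's Ryzen AI NPU to Linux.",
            "The Library of Alexandria organized scrolls by subject.",
            "llama.cpp runs quantized language models on consumer CPUs.",
        ]

        query = "how do I run a language model locally"
        best = max(corpus, key=lambda doc: cosine(bow(query), bow(doc)))
        prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer:"
        print(prompt)  # this augmented prompt is what actually goes to the model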

    We shouldn't have to be "digging through search results"; we should have efficient ways of organizing the "sum of human knowledge" and advanced ways to search it.
    Granted, there would still be a lot to search, petabytes, whatever. But the one thing our "digital society" should be good at -- organizing and sharing information --
    we're not good at (outside of actual libraries and databases), and the one thing ML "assistants" should be good at -- NAVIGATING this mass of
    human knowledge and doing specific research by synthesizing, searching, and correlating it -- they SUCK at.

    I'd take SQL or some NoSQL query interface as a vast improvement over Google search, but we don't even have that.
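
    (Even something as mundane as SQLite's FTS5 full-text index already gives ranked boolean queries over a document store. A minimal sketch, with an invented schema and rows, assuming your SQLite build includes FTS5, as most do:)

        # Structured, ranked full-text search over a tiny invented document
        # store, using SQLite's built-in FTS5 extension.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
        con.executemany("INSERT INTO docs VALUES (?, ?)", [
            ("XDNA", "AMD's XDNA driver brings the Ryzen AI NPU to Linux."),
            ("Alexandria", "The ancient library tried to organize human knowledge."),
        ])

        # A boolean query with relevance ranking -- already far more precise
        # than digging through pages of web search results.
        rows = con.execute("SELECT title FROM docs "
                           "WHERE docs MATCH 'ryzen AND linux' ORDER BY rank")
        for (title,) in rows:
            print(title)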

    And our ML models (across the breadth of them) can't even handle XML / JSON / PDF et al. well without a lot of poorly working layers bolted on to force them to kinda sorta work.



    Originally posted by Chugworth View Post
    Call it what you want, but it's something useful. I've actually used Bard numerous times to get various types of information, and it saved me the time of digging through search results. I even felt that some of the responses were more clear and to-the-point of my question than the information I would have read on the source websites.

    The thing that I don't like about most of the "AI" implementations available now is the cloud-based nature of them. I know we're a long way from having anything like Bard or ChatGPT that can run from our own hardware, but any step in that direction is a positive development.



  • pong
    replied
    I certainly concur that the extent of ignorance around "AI" seems deplorable, and that what we have instead is what you (et al.) have called "machine learning".
    Actually, in a non-technical sense I wouldn't dignify the end result as either "AI" or "ML", because although the "ML" models do get trained, they don't
    (in general use) thereafter LEARN anything. I could start a thousand different conversations with ChatGPT et al. and first explain how I'd like it to respond to a request, or proceed with a series of requests to attempt to satisfy my directive. It would "learn" nothing; in every single conversation it would know nothing more than the
    time before, and whatever effort I'd spent giving it context / state about some ongoing project / workflow would necessarily have to be repeated ad infinitum.

    I don't expect it to be very "intelligent" yet, but it should at least be able to be trained, simply, to "learn" the context of an interaction.

    Anyway, I think "mass plagiarism" is rather a harsh critique. I see where you're coming from, but as you said yourself these models are fed a vast amount of training
    data, e.g. trillions of pieces of input, to generate models with "merely" billions of learned parameters.

    Information theory tells us any data carries at most a certain amount of information: one bit of data holds at most one bit of information, though the encoding could be less
    efficient and there could be vastly less information present; we'd then say that the compressibility of the data depends on its entropy and information content.

    So although one can infer there is some redundancy in the training data, some of these ML models are THOUSANDS of times smaller in model size than the data they've been trained on. The training data generally could not be compressed that small by any known compression scheme.

    So although an ML model may remember fragmentary and qualitative aspects of text, images, et al., it's CERTAINLY not "just copying" its input. It is learning generalities, qualitative correlations, and statistical patterns, yes, from the sum of its experiences. But given how much data goes in relative to how much model comes out, it is not, by
    architecture, just some database recreating rote memorizations of its training data.
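
    (A quick back-of-envelope check on that ratio; the corpus and model sizes below are round illustrative figures, not any particular vendor's numbers:)

        # Rough size comparison of training text vs. resulting model weights.
        # All figures are illustrative round numbers, not any specific model's.
        tokens = 10e12             # ~10 trillion training tokens
        corpus_bytes = tokens * 4  # a token is very roughly ~4 bytes of text

        params = 8e9               # an 8-billion-parameter model
        model_bytes = params * 2   # 16-bit weights

        print(f"corpus ~{corpus_bytes / 1e12:.0f} TB, model ~{model_bytes / 1e9:.0f} GB")
        print(f"model is ~{corpus_bytes / model_bytes:.0f}x smaller than its training text")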

    On the contrary, as you've already said, the human brain is vastly more complex, capable of true learning, and able to remember more actual information.
    We "train" all our people's "models" by sending them to school and making them study painting, sculpture, photography, vocal music, instrumental music, poetry, prose, philosophy, et al.

    But we don't do that to commit "mass plagiarism" on popular culture and the work that has gone before; we do it to provide a context of ideas, information, themes,
    and history, to educate, inform, inspire, and enable a common vernacular of meaning in words, images, sounds, language, et al.

    So if you teach your child some Chinese, is that "mass plagiarism" of some 2000 BC writer who "created" that language? When a high school art student
    draws a bowl of apples or a picture of a starry sky, is that "mass plagiarism" of some Renaissance artist? When they write a poem, is it "mass plagiarism" of Homer, Shakespeare, Tennyson, et al.? Surely they've been "trained" on things, but they don't remember all of it literally, and could not have if they had tried; they've been qualitatively introduced to generalities, qualities, styles, et al. For the most part that's not so different from the ML models.

    Maybe the ML models do have fragments they remember literally, to some close / partial approximation: 2+2=4, "once upon a time", the way an apple may look, etc.
    But they're not going to recite Hamlet, or Beethoven's Ninth Symphony, or the entirety of the Iliad verbatim (the general outlines, perhaps; the particulars could vary, I suppose).
    Just as most teenage art students couldn't create a convincing duplicate of the Mona Lisa if they tried, despite having stared at it for some hours in the past.

    You bemoan AI scientists not creating "true AI", but if we HAD created (or someday shall create) true AI, should we not educate it the same way we educate our human children, by
    training it on history, popular culture, literature, art, language, math, science, ....?


    Originally posted by muncrief View Post
    There is zero artificial intelligence today. ...
    Then companies scoured the internet in the greatest crime of mass plagiarism in history, and used the basic ability of machine learning to recognize nouns, verbs, etc. to chop up and recombine actual human writings and thoughts into ‘generative AI’.



  • p91paul
    replied
    Originally posted by Chugworth View Post
    The thing that I don't like about most of the "AI" implementations available now is the cloud-based nature of them. I know we're a long way from having anything like Bard or ChatGPT that can run from our own hardware, but any step in that direction is a positive development.
    llama.cpp can run several smaller models on consumer hardware. It depends on what you mean by "anything like"; you can't expect the same output quality, but it's definitely a similar user experience.
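
    For the curious, a minimal local-inference sketch using the llama-cpp-python bindings around llama.cpp; the model path is a placeholder for whatever small GGUF file you've downloaded:

        # Minimal local completion via llama.cpp's Python bindings
        # (pip install llama-cpp-python). The model path is a placeholder;
        # point it at any small GGUF-quantized model on disk.
        from llama_cpp import Llama

        llm = Llama(model_path="./models/some-small-model.Q4_K_M.gguf",
                    n_ctx=2048)      # context window, in tokens

        out = llm("Q: What does an NPU accelerate? A:",
                  max_tokens=64,
                  stop=["Q:"])       # don't let it invent the next question

        print(out["choices"][0]["text"])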



  • jaxa
    replied
    Instead of arguing about what AI is, call XDNA a Potentially Useful INT8/INT4/BF16 Accelerator.
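
    (Those type names are just narrow number formats for weights and activations. A toy sketch of symmetric INT8 quantization, purely illustrative and not XDNA's actual scheme:)

        # Toy symmetric INT8 quantization: scale FP32 weights into [-127, 127],
        # round, then dequantize to inspect the error. Real accelerators
        # (XDNA included) use their own calibration; this is just the idea.
        import numpy as np

        w = np.random.randn(8).astype(np.float32)  # pretend weights
        scale = np.abs(w).max() / 127.0            # one scale for the tensor
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        w_hat = q.astype(np.float32) * scale       # what the accelerator effectively computes with

        print("max abs error:", np.abs(w - w_hat).max())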



  • muncrief
    replied
    Originally posted by partcyborg View Post

    Ok, I'll bite. What specific scientific work on "reverse engineering the human brain" have we "completely ignored" for the last 50 years?
    partcyborg, I was referencing connectomics as described in articles like https://www.news-medical.net/news/20...use-brain.aspx and https://www.anl.gov/article/small-brain-big-data.
    Last edited by muncrief; 27 January 2024, 03:54 PM.



  • muncrief
    replied
    Originally posted by Bobby Bob View Post

    It's weird how some people can talk with such absolute confidence about this topic without even being able to get terminology straight, or even using it inconsistently...

    Yes, AI exists today, and it has for decades. Bots in games? That's AI. Pathfinding? That's AI. Chatbots? That's AI. Handwriting recognition? That's AI.

    What do you think the word 'artificial' means? In this context, it literally means artificial as in 'fake'. Fake Intelligence. Not real intelligence. That's why we have a term for the real stuff that can do actual logic and reasoning in the same way humans can - it's called 'general intelligence', a term you even used in that very same ranting wall of text, correctly even. You could easily fix your entire post by just doing a find and replace on 'AI' and 'artificial intelligence' and replacing them both with GI and general intelligence and everything else you said would be correct.

    What is it about all this generative AI that has suddenly created this wave of people who keep mistaking the term 'artificial intelligence' for 'real intelligence'? If you only googled the terms you're using you would easily find that you're using the term completely wrong.
    I assumed the readers of my post would understand my references to AI and general AI from the context of the sentence structures, Bobby Bob.



  • muncrief
    replied
    Originally posted by david-nk View Post

    You're bitter about something that is not even real. You say machine learning will never work, yet it is already working TODAY.
    ....
    So instead of being bitter about your idea for "real" intelligence not having been pursued enough in the past, if I were you, I would actually start to study the field of machine learning... and actually neuroscience as well, since like many other people, you seem to put the human brain on a pedestal, thinking there is something magical or special about it, when there really is not. It is a fascinating field regardless and you will find that there are many parallels between the process of learning in ANN and how learning happens in real brains of animals and humans, even if the fundamental underlying method is different.
    I respectfully disagree david-nk.

    First of all, if what you are saying were true, then companies would not be using billions of dollars worth of AI hardware and astounding amounts of electrical power, and engaging in mass plagiarism of the internet, to train their models.

    Second, in my early 20s I created a biological neural simulator on an old Atari 1040 ST with two floppy drives and around 20 floppy disks, using Principles of Neural Science by Kandel and Schwartz, Second Edition, as my reference.

    Four basic software modules, Reception, Conduction, Integration, and Transmission, were used to create the simulated neurons.

    The Reception module simulated the presynaptic portion of the synapse. The Conduction module simulated axon and dendrite signal transmission, including anterograde and retrograde molecular transport. The Integration module simulated the soma, including electrically and chemically gated ion channels as well as passive channels. And the Transmission module simulated the postsynaptic portion of the synapse.

    The final simulation incorporated 10 neurons, which was all my old 1040 ST could handle. I would spend many, many hours swapping floppy disks to carry out 3 or 4 seconds of simulation, but it was exhilarating to watch the neurons firing and integrating to produce the correct results as specified in Principles of Neural Science.
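
    A minimal sketch of how such a four-stage update loop might look in modern Python; every name below is a hypothetical reconstruction for illustration, not the original simulator (which modeled far more of the biophysics):

        # Hypothetical reconstruction of a four-stage neuron update following
        # the Reception -> Conduction -> Integration -> Transmission split
        # described above. A leaky integrate-and-fire soma stands in for the
        # full ion-channel model; this is not the original simulator's code.

        def reception(incoming_spikes, weights):
            # presynaptic stage: weight each arriving spike into a current
            return sum(w for spike, w in zip(incoming_spikes, weights) if spike)

        def conduction(current, delay_line):
            # axon/dendrite stage: push the current through a fixed delay
            delay_line.append(current)
            return delay_line.pop(0)

        def integration(potential, current, leak=0.9, threshold=1.0):
            # soma stage: leaky integration against a firing threshold
            potential = potential * leak + current
            fired = potential >= threshold
            return (0.0 if fired else potential), fired

        def transmission(fired):
            # postsynaptic stage: emit a spike for downstream neurons
            return 1 if fired else 0

        potential, delays = 0.0, [0.0, 0.0]  # two-step conduction delay
        for step in range(10):
            current = reception([1, step % 3 == 0], [0.4, 0.5])
            current = conduction(current, delays)
            potential, fired = integration(potential, current)
            print(step, transmission(fired))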

    Unfortunately, after about a year and a half I had to suspend my work on the simulator, because I have no formal education beyond a high school diploma and therefore could not get funding to continue. And once I reentered the workforce as a normal R&D hardware/firmware/software engineer, I was never in a position to return to it.

    So I do understand neuroscience quite well, and machine learning has absolutely nothing to do with it. It's like saying that throwing gasoline into a fire pit is the same as an internal combustion engine: yes, both use gasoline to release energy, but that is their only common connection; they are two completely different beasts.
    Last edited by muncrief; 27 January 2024, 01:54 PM.



  • varikonniemi
    replied
    Originally posted by M.Bahr View Post

    If you really want to learn more about this topic, I recommend first learning what consciousness is according to science.

    That's not a question of science but of philosophy. And I have a feeling that the latest advances in philosophy will show you consciousness to be the opposite of what you think it is.



  • creative
    replied
    Is it just me, or are other people also having a hard time taking the whole AI thing seriously? I think if anything happens, some unknown species beyond the range of our developed instruments, most likely from Europa, is going to assume a core code pattern and trick us all into thinking we made something great, all while taking over the world.

    Imagine that as a science fiction novel: mankind conquered by jellyfish from Europa.

    I don't think we are the only intelligent species that has come and gone during the billions of years this solar system has been around.
    Last edited by creative; 27 January 2024, 05:04 PM.



  • M.Bahr
    replied
    Originally posted by quaz0r View Post

    That is precisely what a "living being" is in fact. A biological animal like a human, as you are referencing here, is an emergent phenomenon, a complex pattern emerging from a set of simple interactions occurring at scale in both time and space.

    Whether you want to imagine a human, an organ, a cell, a molecule, an atom, or something even smaller and simpler still, each level is an emergent pattern of greater complexity arising from the interactions of the simpler patterns beneath.

    Does acknowledging this reality mean that a chat bot running on your desktop computer in 2024 is the same level of complexity as a human? Of course not. Does it mean that your desktop computer experiences itself the same way a human experiences itself? Of course not.

    Trying to argue whether ChatGPT is the same thing as a human is missing the point entirely, and intentionally so.

    The correct realization to have here is that complex systems emerge from the interaction of simple systems at scale. This quite literally explains the universe and everything in it at every scale, except of course why anything exists in the first place.

    A companion to this realization, also relevant to the AI discussion, is to understand that patterns are not one and the same as the medium they are carried on. The word "intelligence" can be carved into a rock, written in a book, stored on a hard drive, conveyed through the compression of air or the pulsing of light, or spelled out on your lawn by forming a batch of fallen tree leaves into the appropriate letters.

    The patterns that you assume are fundamentally inextricable from your current physical manifestation can absolutely be replicated, despite the fact that we aren't yet capable of doing so.
    I can see from your reply that you don't seem to have much knowledge about the matter. Most of your biased thoughts seem to be derived from popular sci-fi videos consumed by the mainstream. As a matter of fact, I read many claims, assumptions, speculations, and philosophies, but no evidence grounded in hard science. You even make up your own definitions for the classification of a living being. If you really want to learn more about this topic, I recommend first learning what consciousness is according to science.

