AMD Publishes XDNA Linux Driver: Support For Ryzen AI On Linux


  • partcyborg
    replied
    Originally posted by muncrief View Post

    I believe you missed the primary point of my post.

    If we'd been more intelligent as a species, and less greedy and impatient, and begun work on reverse engineering the brain 50 years ago, instead of pursuing machine learning, we would already have, or be much closer to, actual general artificial intelligence. ...
    Ok, I'll bite. What specific scientific work on "reverse engineering the human brain" have we "completely ignored" for the last 50 years?


  • oleid
    replied
    The documentation doesn't yet outline any upstreaming plans they have for this driver to get it into the mainline kernel or if they will just be maintaining it out-of-tree or what all their Linux support plans entail.
    Michael
    That's true. But one of the open issues gives insight:

    The Linux kernel required to run this xdna-driver is based on an old release candidate of Linux 6.7, but Linux 6.7.2 has since been released with a lot of fixes (https://cdn.kernel.org/pub/linux/ker...


    The dependency on patches in that repo is temporary; the patches in it are on their way upstream. Some are making it into 6.8, but a few will go out in 6.9 (or later, depending on code review).
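
    For anyone who wants to check whether a distribution kernel has caught up before trying the driver, here is a minimal Python sketch. The /dev/accel/accel0 node name and the >= 6.9 threshold are assumptions (based on the DRM accel subsystem convention and the 6.8/6.9 timeline above), not anything AMD has confirmed:

        import os
        import re

        # Hypothetical readiness check for the amdxdna driver. Assumptions,
        # not confirmed by AMD: the driver exposes a DRM accel node at
        # /dev/accel/accel0, and a >= 6.9 kernel carries the needed patches.
        REQUIRED = (6, 9)
        DEVICE_NODE = "/dev/accel/accel0"

        def kernel_version():
            # os.uname().release looks like "6.7.2-arch1-1"; keep major.minor.
            m = re.match(r"(\d+)\.(\d+)", os.uname().release)
            return (int(m.group(1)), int(m.group(2)))

        version = kernel_version()
        if version >= REQUIRED:
            print(f"Kernel {version[0]}.{version[1]}: patches should already be upstream")
        else:
            print(f"Kernel {version[0]}.{version[1]}: likely still needs AMD's patched 6.7 tree")
        print(f"{DEVICE_NODE} present: {os.path.exists(DEVICE_NODE)}")

    On a stock 6.7.x kernel this reports that the patched tree is still needed, which matches the issue discussion above.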


  • timrichardson
    replied
    Machine learning is


  • muncrief
    replied
    Originally posted by stormcrow View Post

    ... which is pretty much every scientific advance in human history ... If we followed your advice, no one would ever do anything at all. ...
    I believe you missed the primary point of my post.

    If we'd been more intelligent as a species, and less greedy and impatient, and begun work on reverse engineering the brain 50 years ago, instead of pursuing machine learning, we would already have, or be much closer to, actual general artificial intelligence.

    There is indeed danger in all advancement of human knowledge, and producing true general artificial intelligence will pose many science-fiction type threats to humanity.

    However, as I said, pursuing machine learning was an incredible waste of resources and time. It poses threats because it is illusory and will never work, yet people have already been deceived and are using it in critical systems that it is simply not appropriate for. Writing a novel or painting a picture is one thing, but controlling vehicles and other critical systems is simply foolish and deadly, without cause or true benefit.

    But make no mistake, real general AI will pose threats because it will be like humans, with all our inherent gifts and flaws. It will have the potential to be incredibly good and helpful, or evil and dangerous.

    Just as each of us is.

    But the sooner we confront our true future, the sooner we can begin to learn to cope with, and hopefully control, it.


  • stormcrow
    replied
    Originally posted by muncrief View Post
    ...
    In the meantime, as I said in my OP, we can expect a lot of destruction and death because of our lack of foresight and vision, and the unfortunate relative ease involved in fooling human beings.
    ... which is pretty much every scientific advance in human history ... If we followed your advice, no one would ever do anything at all.

    I don't like the horrific waste of the current LLM & cryptocurrency gold rush any more than anyone else. There are remedies for this, both social and regulatory. But if we followed your advice we'd never have flown the first balloons, let alone put humans in orbit or on the moon. After all, flying risks not just the people in the machine but everyone in its path on the ground as well. Fire? Probably our greatest discovery... also the single biggest killer in all of history. How many people have cut their fingers slicing bread over the past five thousand years? Yes, I'm deliberately pointing out the absurdity of your rant. Humans are really bad at proactively curbing risk in a rational way, but that's no reason to not even try.


  • quaz0r
    replied
    Originally posted by naysayers
    ignorant assertions about what intelligence means, copyright pearl-clutching, and other disingenuous braindead bullshit
    *sips tea*


  • muncrief
    replied
    Originally posted by Chugworth View Post
    Call it what you want, but it's something useful. I've actually used Bard numerous times to get various types of information, and it saved me the time of digging through search results. ...
    That's the problem with machine learning and accumulating massive volumes of human-authored information to create the illusion of intelligence: it takes enormous amounts of power and storage space.

    The human brain, meanwhile, runs on about 100 watts and fits in the tiny space of our heads.

    The last 50 years of R&D on machine learning have been an enormous waste of resources, time, and money. All of which will be discarded two or three decades from now when the human brain is actually reverse engineered.

    In the meantime, as I said in my OP, we can expect a lot of destruction and death because of our lack of foresight and vision, and the unfortunate relative ease involved in fooling human beings.
    Last edited by muncrief; 25 January 2024, 10:33 PM.


  • Chugworth
    replied
    Originally posted by muncrief View Post
    There is zero artificial intelligence today. There could have been, but 50 years ago the decision was made by most scientists and companies to go with machine learning, which was quick and easy, instead of the difficult task of actually reverse engineering and then replicating the human brain. ...
    Call it what you want, but it's something useful. I've actually used Bard numerous times to get various types of information, and it saved me the time of digging through search results. Some of the responses even felt clearer and more to the point of my question than the information I would have read on the source websites.

    The thing I don't like about most of the "AI" implementations available now is their cloud-based nature. I know we're a long way from having anything like Bard or ChatGPT that can run on our own hardware, but any step in that direction is a positive development.


  • stormcrow
    replied
    Originally posted by muncrief View Post
    ... wall of text ...
    Can't have AGI if you don't even know what either intelligence or sentience is, or how to properly define and quantify it. "I know it when I see it" isn't good enough (as the Turing test proves: humans are very easily fooled). Cognitive science hasn't even come that far. That's why the term "artificial intelligence" was used in the first literature about machine learning and self-adaptive programming: it was an acknowledgement that what they were using and talking about had nothing to do with intuitive thinking, nor could they precisely replicate what humans do. We don't even know what "it" is, nor how our brains do what they do. Until very recently we were arrogantly sure that "lower" life forms couldn't feel emotions, and that we were the only species that used tools and could learn new techniques with them. Cognitive science and mental health science are still in a comparative dark age when held against the physical sciences, thanks to religious taboos, arrogance, and stigma.

    Anyone who takes a breath and steps back a moment can see we're really no closer to a synthetic sentient being than we were in the 80s. Those saying otherwise are mostly trying to persuade people to fund their research, just like they were in the 70s & 80s (and in every other bubble in history). Cramming all the information in every library and database in the world into a single model doesn't make a sentient being, and it never will, but it will at least make for better, more adaptive templating (even generative AI 'art' is really just advanced statistical templating) and predictive autocorrection.

    The AI funding winter will return again, but at some point hackers will find uses for NPUs beyond the commercially blessed data models companies are using to pursue revenue growth at all costs; that's when the really interesting stuff will happen for the rest of us. Bubbles usually leave behind some fragments of useful stuff.


  • muncrief
    replied
    There is zero artificial intelligence today. There could have been, but 50 years ago the decision was made by most scientists and companies to go with machine learning, which was quick and easy, instead of the difficult task of actually reverse engineering and then replicating the human brain.

    So instead what we have today is machine learning combined with mass plagiarism, which we call ‘generative AI’, essentially performing what is akin to a magic trick so that it appears, at times, to be intelligent.

    While the topic of machine learning is complex in detail, it is simple in concept, which is all we have room for here. Essentially, machine learning is simply presenting many thousands or millions of samples to a computer until the associative components ‘learn’ what something is: for example, pictures of a daisy from all angles and incarnations.
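
    To make that concrete, here is a deliberately minimal sketch of the idea in Python: a toy perceptron shown labeled samples over and over until it 'learns' to separate two classes. The synthetic 2D points stand in for the daisy pictures; this illustrates the concept only, nothing like production-scale training.

        import random

        # Toy perceptron: shown many labeled samples, it nudges its weights
        # whenever it misclassifies one, until it (mostly) separates the
        # two classes. Synthetic data stands in for the daisy pictures.
        random.seed(0)

        # Class +1 clusters near (2, 2); class -1 clusters near (-2, -2).
        samples = [((random.gauss(2, 1), random.gauss(2, 1)), 1) for _ in range(200)]
        samples += [((random.gauss(-2, 1), random.gauss(-2, 1)), -1) for _ in range(200)]

        w = [0.0, 0.0]
        bias = 0.0
        for epoch in range(20):
            random.shuffle(samples)
            errors = 0
            for (x1, x2), label in samples:
                predicted = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else -1
                if predicted != label:  # wrong: nudge the boundary toward the sample
                    w[0] += label * x1
                    w[1] += label * x2
                    bias += label
                    errors += 1
            if errors == 0:  # a full pass with no mistakes: it has 'learned'
                break

        print(f"learned weights {w}, bias {bias} after {epoch + 1} epochs")

    All the 'learning' amounts to is adjusting three numbers whenever a sample is misclassified; there is no understanding of what the clusters represent.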

    Then companies scoured the internet in the greatest crime of mass plagiarism in history, and used the basic ability of machine learning to recognize nouns, verbs, etc. to chop up and recombine actual human writings and thoughts into ‘generative AI’.

    So by recognizing basic grammar and hopefully deducing the basic ideas of a query, and then recombining human writings which appear to match that query, we get a very faulty appearance of intelligence: generative AI.
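
    As a crude illustration of that recombination, here is a toy Markov chain in Python that chops a source text into word pairs and stitches them back together. Real generative models are vastly more sophisticated than this; the sketch only shows how recombining existing words can mimic novel text.

        import random
        from collections import defaultdict

        # Toy Markov chain: record which word follows which in the source,
        # then generate 'new' text by recombining those observed pairs.
        source = (
            "the daisy is a flower and the horse pulls the wagon "
            "and the truck carries the wagon to the flower market"
        )
        words = source.split()

        chain = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)

        random.seed(1)
        word = "the"
        output = [word]
        for _ in range(12):
            word = random.choice(chain[word])
            output.append(word)
            if word not in chain:  # dead end: this word never had a successor
                break
        print(" ".join(output))

    Every word pair in the output was lifted verbatim from the source; only the ordering is new.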

    But the problem is, as I said in the beginning, there is no actual intelligence involved at all. These programs have no idea what a daisy, or love, or hate, or compassion, or a truck, or horse, or wagon, or anything else, actually is. They just have the ability to do a very faulty combinatorial trick to appear as if they do.

    However, there is hope that actual general intelligence can be created because, thankfully, a handful of scientists rejected machine learning and have instead been working on recreating the connectome of the human brain for 50 years. They are within a few decades of achieving that goal: truly replicating the human brain and creating true general intelligence.

    In the meantime it's important for our species to recognize the danger of relying on generative AI for anything, as it's akin to relying on a magician to conjure up a real, physical, living bunny rabbit.

    So relying on it to drive cars, or control any critical systems, will always result in massive errors, often leading to real destruction and death.
