Intel, AMD, Red Hat / IBM, Meta & Others Launch The AI Alliance

  • #21
    Originally posted by stormcrow View Post
    *sigh* Can we not go one day without more people blowing smoke and mirror press releases about "AI" that are less intelligent than an amoeba and cluttering up news feeds just to con investors into thinking there's a fire because of how much stinking smoke corporations are churning out to scream "Me too!"?

    This is about open source AI as much as a newly sealed time capsule "to be opened in 3023AD" is open. It's because all of those alliance companies got caught flat-footed and are going into panic mode over a technology that's still as much smoke and mirrors as it was in the '80s. They're jealous of OpenAI's publicity and all the credulous investors that each bubble cycle always seems to bring in. The problem is, no one can possibly scream loud enough to be heard over the mob to say "This is a supremely bad idea!" and not because of tinfoil-hat fears of Skynet. It's because the systems used to churn out models use up more electricity than entire countries. So much so that companies are kicking around ideas of buying or building their own power plants just to keep up with demand. Regardless of how they generate electricity, those plants generate heat, and that heat has to go somewhere, just so some half-literate high school grad can lie to their boss about how smart they are by asking an LLM to generate a report on data they never produced to begin with (or a whole host of other lies, deceptions, and biased articles, literature, art, and even laws).
    This is part of why I would like to formally reject the name "AI" and all of the fantasy names that come with it, including all this verbiage around "learning" and "neural networks", and instead replace it with the more fitting term Statistical Programming, which accurately and correctly describes the true character of these programs. That term could then be used for the sorts of things statistical programming is ACTUALLY useful for, as opposed to blowing smoke up everyone's asses to sell it on what it fundamentally cannot do. It would also leave nobody in the least surprised when it turns out that feeding it its own data is a massive problem that results in the self-destruction of the statistical model, as opposed to the "It'Ll tUrN iNtO sKyNeT iF iT StArTs lEaRnInG fRoM iTsElF" that the absolute brainlets pushing "AI" propose.
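    That failure mode is easy to demonstrate with a toy statistical model. A rough Python sketch (NumPy assumed; the Gaussian, the 50-sample size and the 200 generations are arbitrary picks for illustration, not anybody's real training setup): fit a distribution to some data, then keep refitting it to samples drawn from its own previous fit, and watch the parameters drift while the variance decays.

    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: "real" data from a standard normal distribution.
    data = rng.normal(loc=0.0, scale=1.0, size=50)

    for generation in range(201):
        # "Train" the model: estimate mean and standard deviation from the current data.
        mu, sigma = data.mean(), data.std()
        if generation % 25 == 0:
            print(f"gen {generation:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
        # The next generation is trained only on the previous model's own output.
        data = rng.normal(loc=mu, scale=sigma, size=50)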

    Comment


    • #22
      Originally posted by Bamber View Post
      Every time I hear "responsible" I read censorship.
      That's paranoia, and it can be treated.

      Well, what they mean by "responsible" is the ability to train the network so that it doesn't produce undesired output. That could just as well mean that the AI follows the three laws of robotics, or the laws of the country you live in (*).
      A totally uncontrolled AI is probably not good.

      (*) Sure, if you live in a non-democratic country, it could as well be "Thou shalt not speak ill of our leader".
      Last edited by oleid; 06 December 2023, 03:19 AM.

      Comment


      • #23
        Originally posted by GraysonPeddie View Post
        I would rather have my own private AI system in my own home network. Just connect to Home Assistant, deny Internet access, and that's all I want for my home automation, especially when it comes to voice assistance.
        Go for it!

        BionicGPT is an on-premise replacement for ChatGPT, offering the advantages of Generative AI while maintaining strict data confidentiality - bionic-gpt/bionic-gpt

        gpt4all: run open-source LLMs anywhere. Contribute to nomic-ai/gpt4all development by creating an account on GitHub.


        No voice input or output, yet.
        But I'm thinking those open-source solutions will have it within a year.
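        For the strictly offline part, the gpt4all Python bindings can run against a model file that is already on disk, with downloads disabled. A minimal sketch, assuming the gpt4all package is installed; the model file name and directory below are placeholders, not recommendations:

        from gpt4all import GPT4All  # pip install gpt4all

        # Point at a GGUF model that was copied onto the machine beforehand;
        # allow_download=False keeps the whole thing off the Internet.
        model = GPT4All(
            "mistral-7b-instruct-v0.1.Q4_0.gguf",  # placeholder model file
            model_path="/srv/models",              # placeholder local directory
            allow_download=False,
        )

        with model.chat_session():
            print(model.generate("Draft a short bedtime announcement.", max_tokens=120))

        Voice input and output would still have to be bolted on separately, as said.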

        Comment


        • #24
          Originally posted by oleid View Post

          Go for it!

          BionicGPT is an on-premise replacement for ChatGPT, offering the advantages of Generative AI while maintaining strict data confidentiality - bionic-gpt/bionic-gpt

          gpt4all: run open-source LLMs anywhere. Contribute to nomic-ai/gpt4all development by creating an account on GitHub.


          No voice input or output, yet.
          But I'm thinking those open-source solutions will have it within a year.
          gpt4all is quite nice for a first dabble and for experimenting with different models. It can't control your smart home, though.

          Comment


          • #25
            You want "safe" AI?

            You want "responsible" AI?

            Make the companies, corporations and C-suites (every member thereof) of the producing entity civilly and/or criminally responsible for the results when the AI "hallucinates" something that would get a human into legal trouble. And set the minimum sentence to either (a) total asset forfeiture to the wronged party or (b) life without parole.

            Then watch as suddenly they care about the accuracy of their "AI" results...

            Comment


            • #26
              Originally posted by reba View Post

              gpt4all is quite nice for a first dabble and for experimenting with different models. It can't control your smart home, though.
              I wanted to test https://mycroft.ai/ for some time. Maybe it can do what you ask for?
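              Whatever ends up doing the listening, the smart-home half is mostly just Home Assistant's REST API. A rough sketch of the glue code in Python with requests; the host, the long-lived access token and the entity id are placeholders from an assumed setup, not something any of these projects ship:

              import requests

              HASS_URL = "http://homeassistant.local:8123"  # placeholder host
              TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"        # created under your Home Assistant user profile

              def call_service(domain: str, service: str, entity_id: str) -> None:
                  """Ask Home Assistant to run a service (e.g. light.turn_on) on one entity."""
                  resp = requests.post(
                      f"{HASS_URL}/api/services/{domain}/{service}",
                      headers={"Authorization": f"Bearer {TOKEN}"},
                      json={"entity_id": entity_id},
                      timeout=5,
                  )
                  resp.raise_for_status()

              call_service("light", "turn_on", "light.living_room")  # placeholder entity id

              Mapping whatever the local model or voice assistant understood onto calls like this is the piece that still has to be wired up by hand.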

              Comment


              • #27
                Originally posted by Luke_Wolf View Post

                This is part of why I would like to formally reject the name "AI" and all of the fantasy names that come with it, including all this verbiage around "learning" and "neural networks", and instead replace it with the more fitting term Statistical Programming, which accurately and correctly describes the true character of these programs. That term could then be used for the sorts of things statistical programming is ACTUALLY useful for, as opposed to blowing smoke up everyone's asses to sell it on what it fundamentally cannot do. It would also leave nobody in the least surprised when it turns out that feeding it its own data is a massive problem that results in the self-destruction of the statistical model, as opposed to the "It'Ll tUrN iNtO sKyNeT iF iT StArTs lEaRnInG fRoM iTsElF" that the absolute brainlets pushing "AI" propose.
                No wonder many people have such fantasies; they have been fed by science-fiction stories in the media for decades. I understand why mainstream society falls for those fantasies and projects them onto the real world. It is thrilling and entertaining. But sadly, even IT people with insufficient knowledge fall for that kind of imagination, fueled by marketing departments promoting their so-called A.I. products. It is time to fact-check all those buzzwords.

                From a sober perspective, all of the current A.I. chips still consist of transistors. The fundamental principles haven't changed much. Even specialized A.I. hardware still has much in common with conventional GPUs; see the debate between Linux GPU driver maintainers and some A.I. companies on the proposed distinction in that regard (https://www.phoronix.com/news/Accel-...021-Discussion).
                “A.I.”, i.e. "machine learning", is a bunch of matrix multiplication and linear algebra. Remove those and you no longer have a GPU. So-called A.I. programs can also be executed on compute shaders, on conventional CPUs, or on CPUs with vector extensions for more speed.
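                To make that concrete: a single "neural network layer" is nothing more than a matrix product, a vector addition and a clamp at zero, which NumPy on any CPU handles without ceremony. A toy sketch (the sizes and random weights are arbitrary; real models just stack many of these with fitted weights):

                import numpy as np

                rng = np.random.default_rng(42)

                x = rng.normal(size=4)       # input vector, e.g. 4 "features"
                W = rng.normal(size=(3, 4))  # the layer's learned weight matrix
                b = rng.normal(size=3)       # the layer's learned biases

                # The entire "neural" step: a matrix product, a vector add, a clamp at zero (ReLU).
                y = np.maximum(W @ x + b, 0.0)
                print(y)

                There is no step in that pipeline that is anything other than arithmetic.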

                The essential progress has been made on the software side. The way data is processed has improved dramatically with the help of more sophisticated heuristics, which is why "Statistical Programming" is the correct term. We have narrowed down and fine-tuned our filters to get a more desirable result, but the basics are nothing new either; OCR and text recognition, for example, have existed since the 1970s.
                What we have is more specialized hardware and more sophisticated software, achieving results that can resemble human actions or even surpass them at certain tasks. But so did a calculator from the 1970s. Does that mean the calculator was comparable to a human being?

                If you think some kind of consciousness could be derived just from faster transistors and some form of algorithm, you are very wrong. Over the last decade I have read many opinions from people who think they have a clue about A.I. when they don't. Most of the time they are guessing based on their own biases, buzzwords from A.I. companies, and science fiction. In contrast, thousands of philosophers, nuclear physicists, doctors and all sorts of other experts have been debating "the hard problem of consciousness" for centuries and still haven't found a solution. Many people, including IT people, don't even know this problem exists: https://youtu.be/uhRhtFFhNzQ Understanding computers is not enough here; you also need deeper knowledge of biology to grasp even the basics.

                Sam Altman publicly suggesting to people that they had achieved A.G.I. is far from the truth. I find many of his statements and actions controversial, to say the least. Meanwhile, serious A.I. experts like Yann LeCun have expressed scepticism about ever achieving A.G.I., and the sober facts support that scepticism.

                Personally, I am very concerned about A.I. finding naive, hasty and unrestricted acceptance from politics, media and society in good faith. Not because an A.G.I. could lead us to some dystopian future like in fantasy films, but because a so-called A.I. driver assistant was not "intelligent" enough to "recognize" a living being crossing the street.
                Last edited by M.Bahr; 12 December 2023, 10:58 AM.

                Comment


                • #28
                  Originally posted by flower View Post

                  Obviously Netflix. They don't want to enable Netflix to make AI-generated movies.
                  If that ever happens, I'll gouge out my eyes and keep typing my bullshit on a braille terminal. Netflix destroys good series, proliferates bullshit and converts teenagers into braindead zombies.

                  Comment


                  • #29
                    Originally posted by peppercats View Post
                    I don't trust a single organization listed there to determine what qualifies as 'safe' and 'responsible'

                    Maybe the University of Tokyo.
                    Japanese universities are quite corrupt too.

                    Comment


                    • #30
                      Originally posted by oleid View Post

                      Well, what they mean by "responsible" is the ability to train the network so that it doesn't produce undesired output.
                      Who decides what an undesirable output is? Probably not you. So it still doesn't sound good.

                      Originally posted by oleid View Post
                      That could just as well mean that the AI follows the three laws of robotics, or the laws of the country you live in (*).
                      No sentience in those AI things, no morals. I doubt this approach would work as you might expect.

                      Originally posted by oleid View Post
                      A totally uncontrolled AI is probably not good.
                      Depends on what you call uncontrolled. As long as I can unplug it, as long as it fits my needs... why prefer a castrated version?

                      Originally posted by oleid View Post
                      (*) Sure, if you live in a non-democratic country, it could as well be "Thou shalt not speak ill of our leader".
                      For the record, there is *no* democracy on this planet, not at any significant level. The democratic idea is basically that, an idea(l). An idea used to kill a lot of people. Or, best case, to control the herd.
                      Last edited by lateo; 09 December 2023, 05:02 PM.

                      Comment
