Intel, AMD, Red Hat / IBM, Meta & Others Launch The AI Alliance
Originally posted by Bamber: Every time I hear "responsible" I read censorship.
Well, what they mean by "responsible" is the ability to train the network so that it doesn't produce undesired output. That could just as well mean the AI follows the three laws of robotics, or the laws of the country you live in (*).
A totally uncontrolled AI is probably not a good thing.
(*) Sure, if you live in a non-democratic country, it could just as well be "Thou shalt not speak ill of our leader".
Originally posted by GraysonPeddie: I would rather have my own private AI system on my own home network. Just connect it to Home Assistant, deny it Internet access, and that's all I want for my home automation, especially when it comes to voice assistance.

Go for it!
BionicGPT (github.com/bionic-gpt/bionic-gpt): an on-premise replacement for ChatGPT, offering the advantages of generative AI while maintaining strict data confidentiality.
gpt4all (github.com/nomic-ai/gpt4all): run open-source LLMs anywhere.
No voice input or output yet, but I'm thinking those open-source solutions will have it within a year.
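For anyone who wants a first taste, gpt4all also ships Python bindings, so an experiment is only a few lines. A minimal sketch, assuming the gpt4all package is installed; the model filename is just an example from their catalogue, and after the one-time download everything runs offline, which fits the deny-Internet-access setup above:

from gpt4all import GPT4All

# Example model file from the gpt4all catalogue -- an assumption; swap in
# whatever model you actually downloaded. The first run fetches the file,
# after that it works fully offline.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    print(model.generate("Why would someone run an LLM locally?", max_tokens=128))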
You want "safe" AI?
You want "responsible" AI?
Make the companies, corporations and C-suites (every member thereof) of the producing entity civilly and/or criminally liable for the results when their AI "hallucinates" something that would get a human in legal trouble. And set the minimum sentence to either (a) total asset forfeiture to the wronged party or (b) life without parole.
Then watch as suddenly they care about the accuracy of their "AI" results...
Originally posted by reba: gpt4all is quite nice for a first dabble and for experimenting with different models. It can't control your smart home, though.
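True, though the glue isn't much work to write yourself: Home Assistant exposes a documented REST API, so once a local model has parsed an intent out of your text, actually flipping the light is one authenticated POST. A rough sketch; the URL, token and entity ID are placeholders for your own instance:

# Hypothetical bridge between a local LLM and Home Assistant: the model
# only picks the action, Home Assistant's REST API does the actual work.
import requests

HA_URL = "http://homeassistant.local:8123"  # placeholder: your HA instance
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"   # created under your HA user profile

def call_service(domain: str, service: str, entity_id: str) -> None:
    """Call a Home Assistant service, e.g. light.turn_on."""
    resp = requests.post(
        f"{HA_URL}/api/services/{domain}/{service}",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"entity_id": entity_id},
        timeout=10,
    )
    resp.raise_for_status()

# After the LLM maps "lights on in the living room" to an intent:
call_service("light", "turn_on", "light.living_room")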
Originally posted by Luke_Wolf: This is part of why I would like to formally reject the name "AI" and all of the fantasy vocabulary that comes with it, including all this verbiage around "learning" and "neural networks", and replace it with the more fitting term Statistical Programming, which accurately describes the true character of these programs. They could then be used for the sorts of things Statistical Programming is ACTUALLY useful for, as opposed to blowing smoke up everyone's asses to sell them on what they fundamentally cannot do. It would also leave nobody the least bit surprised when it turns out that feeding a model its own data is a massive problem that results in the self-destruction of the statistical model, as opposed to the "It'Ll tUrN iNtO sKyNeT iF iT StArTs lEaRnInG fRoM iTsElF" line that the absolute brainlets pushing "AI" propose.
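That "feeding it its own data" failure mode is easy to see in miniature: fit a distribution to data, sample from the fit, refit on the samples, repeat. A toy sketch, with a single Gaussian standing in for a generative model (not the actual mechanism inside an LLM, just the same statistical effect): estimation error compounds and the learned distribution tends to degenerate rather than improve.

# Toy model collapse: each "generation" is trained only on samples
# produced by the previous generation's fitted model.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=10)   # small "real" dataset

for gen in range(51):
    mu, sigma = data.mean(), data.std()          # "train" on the current data
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")
    data = rng.normal(mu, sigma, size=10)        # next gen sees only model output

# sigma tends to drift toward zero across generations: the fitted
# model gradually forgets the spread of the original data.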
From a sober perspective, all of the current A.I. chips still consist of transistors; the fundamental principles haven't changed much. Even specialized A.I. hardware still has a lot in common with conventional GPUs. See the debate between Linux GPU driver maintainers and some A.I. companies on the proposed distinction in that regard (https://www.phoronix.com/news/Accel-...021-Discussion).
"A.I.", i.e. "machine learning", is a bunch of matrix multiplication and linear algebra; remove those and you no longer have a GPU. So-called A.I. programs can also be executed on compute shaders or on conventional CPUs, with vector extensions giving more speed.
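Concretely, one layer of a "neural network" forward pass is just this (a toy NumPy sketch; real frameworks do the same thing batched and hardware-accelerated):

# One "neural network" layer: a matrix-vector product, a bias add,
# and an elementwise nonlinearity. Nothing beyond linear algebra.
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((4, 8))   # weight matrix: 8 inputs -> 4 outputs
b = rng.standard_normal(4)        # bias vector
x = rng.standard_normal(8)        # input vector

y = np.maximum(W @ x + b, 0.0)    # matrix multiply, bias add, ReLU
print(y)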
The essential progress has been made on the software side. The way data is processed has improved dramatically with the help of more sophisticated heuristics, which is why "Statistical Programming" is the correct term: we have narrowed down and fine-tuned our filters to get a more desirable result. But the basics are nothing new either; OCR (text recognition), for example, has existed since the 1970s.
What we have now is more specialized hardware and more sophisticated software, achieving results that can resemble human actions or even surpass them at certain tasks. But so did a calculator from the 1970s. Does that mean the calculator was comparable to a human being?
If you think some kind of consciousness could be derived just from faster transistors and some form of algorithm, you are very wrong. Over the last decade I have read many opinions from people who think they have a clue about A.I. when they don't; most of the time they are guessing based on their own bias, buzzwords from A.I. companies and science fiction. In contrast, thousands of philosophers, physicists, doctors and all sorts of experts have been debating "the hard problem of consciousness" for centuries and still haven't found a solution. Many people, including IT people, don't even know this problem exists. https://youtu.be/uhRhtFFhNzQ Understanding computers is not enough here; you also need deeper knowledge of biology to grasp even the basics.
Sam Altman publicly telling people they have achieved A.G.I. is far from the truth; I find many of his statements and actions controversial, to say the least. Meanwhile, serious A.I. experts like Yann LeCun have expressed scepticism about A.G.I. ever being achieved, and the sober facts support that view.
Personally, I am very concerned about A.I. finding naive, hasty and unrestricted acceptance from politics, media and society in good faith. Not because an A.G.I. could lead us to some dystopian future like in the movies, but because a so-called A.I. driver assistant was not "intelligent" enough to "recognize" a living being crossing the street.
Originally posted by flower: Obviously Netflix. They don't want to enable Netflix to make AI-generated movies.