Qualcomm Talks Up Their Linux Support For The Snapdragon X Elite


  • oiaohm
    replied
    Originally posted by qarium View Post
    I read an article about that; people were surprised that this is not the case, and they called the effect "grokking". It was more or less discovered by accident.

    Grokking is a related but different effect from hallucination in AI. There are two major types of AI hallucination, and only one of them is related to grokking.

    The problem with AI hallucinations is that you ask a question one way and the AI gives the right answer; change the wording slightly and now it gives an incorrect answer.

    Yes, with current methods you can detect roughly 88% of the time when the AI has hallucinated; the problem is the remaining cases where the AI has hallucinated and you fail to detect it.

    The problem is that for strictly logical work, the kind a CPU has to execute, even a small amount of hallucination is a big problem.

    The problem with AI hallucinations is that the only way to be sure a self-learning system has none is to test it against every question it could ever be asked, and at that point you might as well code an expert system; it is going to be faster. Faster to make, and faster to run because it needs far less CPU/GPU power.

    AI hallucination is a solid barrier to using AI with consistent success in particular fields. Programming is one of them. In any field where near enough is not good enough, AI hallucination is a big problem.



  • qarium
    replied
    Originally posted by oiaohm View Post
    Later studies also found that AI hallucinations in LLMs can even be guided into producing insecure code.

    qarium, this is the problem: just measuring performance is only half of it. Remember, a program can be faster because it has skipped all the checks you should be performing to have a secure code/binary.
    Normally no, because the time humans have to spend validating semantic patches the AI helped with is more than coding the semantic patches themselves and having another human validate them. That is the level of distrust you have to treat AI output with to validate it correctly.
    Yes, people are going to get themselves into big trouble using AI assistance when coding, because the AI will create broken/insecure code above the skill level of the human using the assistance, so they cannot see that something is wrong.
    This is the problem: people sell AI as a fix-all. But when studies look into how safe/correct the AI-generated output is, they find AI hallucination is a big problem; even worse, the hallucinations can be believable to a human as true and correct when they are completely wrong.
    Training an AI to produce human-friendly output does not mean it cannot hallucinate and generate human-friendly output that seems fine to a human while being completely wrong.
    AI hallucination is a big problem when using AI for anything that could be life or death for humans.

    Remember, you have expert systems, and then people attempted to make them self-learning; that is rule-based machine learning (RBML). Yes, an expert system without the human building it is RBML. RBML was one of the early approaches to show AI hallucination.
    Every self-learning AI system has shown hallucination. Every method humans have come up with to make a self-learning AI has suffered from hallucinations.
    Well, yes. Thank you for the information.

    Originally posted by oiaohm View Post
    It is possible that hallucination is simply an unavoidable byproduct of self-learning.
    I read an article about that; people were surprised that this is not the case, and they called the effect "grokking". It was more or less discovered by accident.

    My opinion is that the model needs to be big enough for a specific problem, and I really mean big, bigger than people imagine.

    And even if you have such a model, the grokking effect means the model needs more time, and I really mean much more time, before it flips the switch and always delivers the right result without hallucinations.

    The grokking effect was discovered by people who accidentally let models run much, much longer in the training phase than they ever imagined was needed to perform the task.

    If the model is large enough and has enough time, the result is no more hallucinations for that specific task.

    An article about grokking: https://syncedreview.com/2023/09/15/...it-efficiency/

    More or less this means: make the model larger, give it much more time, and there is a possibility that the grokking effect kicks in and you have a solution without hallucinations.

    But for some problems there is a possibility that this will never happen, or would need 1000 years, which means you will never experience it in your lifetime.



  • oiaohm
    replied
    Originally posted by qarium View Post
    I have read in the past about researchers who used LLM/NPU systems to optimise classic problems and got some awesome results that could be used to improve compilers; the AI found results that outperformed any human-made design.
    Later studies also found that AI hallucinations in LLMs can even be guided into producing insecure code.

    qarium, this is the problem: just measuring performance is only half of it. Remember, a program can be faster because it has skipped all the checks you should be performing to have a secure code/binary.

    Originally posted by qarium View Post
    As I understand it, we can use big AI to help humans develop semantic patches/expert systems.
    Normally no, because the time humans have to spend validating semantic patches the AI helped with is more than coding the semantic patches themselves and having another human validate them. That is the level of distrust you have to treat AI output with to validate it correctly.

    Yes, people are going to get themselves into big trouble using AI assistance when coding, because the AI will create broken/insecure code above the skill level of the human using the assistance, so they cannot see that something is wrong.

    This is the problem: people sell AI as a fix-all. But when studies look into how safe/correct the AI-generated output is, they find AI hallucination is a big problem; even worse, the hallucinations can be believable to a human as true and correct when they are completely wrong.

    Training an AI to produce human-friendly output does not mean it cannot hallucinate and generate human-friendly output that seems fine to a human while being completely wrong.

    AI hallucination is a big problem when using AI for anything that could be life or death to humans.

    Remember, you have expert systems, and then people attempted to make them self-learning; that is rule-based machine learning (RBML). Yes, an expert system without the human building it is RBML. RBML was one of the early approaches to show AI hallucination.

    Every self-learning AI system has shown hallucination. Every method humans have come up with to make a self-learning AI has suffered from hallucinations. It is possible that hallucination is simply an unavoidable byproduct of self-learning.



  • qarium
    replied
    Originally posted by oiaohm View Post
    Non-learning semantic patches can do 90 percent or more of the transmutation work. Major code changes inside the Linux kernel are done by Coccinelle semantic patches. Yes, these are quite major transmutations.
    The problem here is that the hallucination problem of AI is a direct side effect of being a self-learning intelligence. Remember, we humans suffer from hallucinations too; it is not an AI-only thing but a self-learning-intelligence thing.
    How will you prove that an AI model does not have a hidden hallucination problem? This is the complexity problem: as AI models get more complex, it becomes more and more impossible to prove that the model does not have a hallucination problem until after it has happened.
    There is a classic form of AI that does not have the hallucination problem: it is called the expert system.

    These are not big complex AI models, and they cannot in fact learn; instead they have to be programmed.
    Semantic patches are basically old-school expert systems that have to be individually hand-crafted. Since they are simple, you can prove very simply that they don't have a hallucination problem. Yes, expert systems can still do the wrong thing, but they are a lot more predictable and in a lot of cases the process can be reversed.
    The worst thing about AI hallucination with these modern large language models is that you could attempt to use them to save 90% of the coding work and end up with a code base so badly damaged that it takes 10000 times more work to fix it.
    Compiler optimizations are basically expert systems. Software development is an area where expert systems should reign supreme due to their highly predictable nature.
    qarium, this is just one of those cases where newer does not equal better. It is also the problem with people asking ChatGPT and the like questions: they are not considering that the answer they have just been given could be infected with a hallucination, and they don't use extra sources to double-check whether the answer is right. The reason peer review exists for humans is to reduce the human hallucination problem, and even that has not prevented it 100 percent. The most documented AI on earth is humans.
    Self-learning-intelligence hallucination is a major barrier to how safely different AI types can be used, and there is no magic bullet for this problem. Making an AI that cannot suffer from hallucination normally means no self-learning, so lots and lots of human hours building it with peer review; and the more complex the AI, the bigger the risk it will have hallucinations introduced by the human work on it. This creates an AI complexity limit: the AI has to stay under a particular level of complexity to be sure it does not have a hallucination problem. All the non-complex AI designs that stay under that limit are from the 1970s and 1980s.
    There is a lot of AI development at the moment, but the modern AI being developed has a high risk of the hallucination problem. Part of the reason self-driving cars are so dangerous is the type of AI being used, and making a self-driving car with a pure expert system is also insanely hard.
    Yes, you are right, and thank you for your very well-written opinion.

    As I understand it, we can use big AI to help humans develop semantic patches/expert systems.
    Yes, this is not a quick fix, but I think this way we can use the best of both worlds.
    We just need to spend the human brain hours validating the end result to make sure there are no hallucinations.

    We just need to make sure these large language models are trained to produce human-friendly output instead of just correct output; human-friendly meaning that humans can check the result more easily.

    I have read in the past about researchers who used LLM/NPU systems to optimise classic problems and got some awesome results that could be used to improve compilers; the AI found results that outperformed any human-made design.

    "Non-learning semantic patches can do 90 percent or more of the transmutation work."

    Thank you for this update; I had just remembered the wrong number, but 90% is even better.



  • oiaohm
    replied
    Originally posted by qarium View Post
    the AI as i understand it is not used in the end-product. the point is you do not need to do the transmutations work all by hand .... 80% can be solved by AI and 20% you do it by hand.
    Non-learning semantic patches can do 90 percent or more of the transmutation work. Major code changes inside the Linux kernel are done by Coccinelle semantic patches. Yes, these are quite major transmutations.
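
    As a rough illustration of what such a rule-based rewrite looks like (a hypothetical userspace example, not a rule taken from the kernel tree), a semantic patch mechanically turns every occurrence of one code pattern into an equivalent one. The sketch below shows the kind of before/after transformation a Coccinelle rule performs, here collapsing allocate-then-zero into a single zeroing allocation:

        #include <stdlib.h>
        #include <string.h>

        /* Before: the pattern a semantic patch would match wherever it occurs. */
        static int *make_table_before(size_t n)
        {
            int *t = malloc(n * sizeof(*t));
            if (!t)
                return NULL;
            memset(t, 0, n * sizeof(*t));   /* allocate, then zero */
            return t;
        }

        /* After: the mechanical rewrite the rule produces, with identical behaviour. */
        static int *make_table_after(size_t n)
        {
            int *t = calloc(n, sizeof(*t)); /* zeroing allocation in one call */
            if (!t)
                return NULL;
            return t;
        }

        int main(void)
        {
            int *a = make_table_before(16);
            int *b = make_table_after(16);
            free(a);
            free(b);
            return 0;
        }

    The rule carries no learned weights, so the same input source always produces the same rewritten output, which is exactly the predictability being argued for here.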

    Originally posted by qarium View Post
    The AI models develop very fast, which means in the future we may have AI models without the hallucination problem.
    The problem here is that the hallucination problem of AI is a direct side effect of being a self-learning intelligence. Remember, we humans suffer from hallucinations too; it is not an AI-only thing but a self-learning-intelligence thing.

    How will you prove that an AI model does not have a hidden hallucination problem? This is the complexity problem: as AI models get more complex, it becomes more and more impossible to prove that the model does not have a hallucination problem until after it has happened.

    There is a classic form of AI that does not have the hallucination problem: it is called the expert system.

    These are not big complex AI models, and they cannot in fact learn; instead they have to be programmed.

    Semantic patches are basically old-school expert systems that have to be individually hand-crafted. Since they are simple, you can prove very simply that they don't have a hallucination problem. Yes, expert systems can still do the wrong thing, but they are a lot more predictable and in a lot of cases the process can be reversed.

    The worst thing about AI hallucination with these modern large language models is that you could attempt to use them to save 90% of the coding work and end up with a code base so badly damaged that it takes 10000 times more work to fix it.

    Compiler optimizations are basically expert systems. Software development is an area where expert systems should reign supreme due to their highly predictable nature.
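
    To make that concrete, a compiler optimization pass is essentially a fixed, human-written rule fired when a pattern matches. A minimal sketch (purely illustrative, using a toy instruction record rather than any real compiler's IR) of a strength-reduction rule shows the point: the rule either matches and rewrites deterministically, or it leaves the code alone, and the whole thing can be audited in a few lines:

        #include <stdio.h>
        #include <stdbool.h>
        #include <stdint.h>

        /* A toy IR instruction: dst = src OP constant. */
        typedef enum { OP_MUL, OP_SHL } op_t;
        typedef struct { op_t op; int dst, src; uint32_t k; } insn_t;

        /* Hand-written rule: multiply by a power of two becomes a left shift.
         * No learning, no statistics: the rule either matches and rewrites
         * deterministically, or it does nothing at all. */
        static bool strength_reduce(insn_t *i)
        {
            if (i->op == OP_MUL && i->k != 0 && (i->k & (i->k - 1)) == 0) {
                uint32_t shift = 0;
                while ((1u << shift) != i->k)
                    shift++;
                i->op = OP_SHL;
                i->k = shift;
                return true;    /* rewritten: r<dst> = r<src> << shift */
            }
            return false;       /* pattern not matched, code left untouched */
        }

        int main(void)
        {
            insn_t i = { OP_MUL, 1, 0, 8 };               /* r1 = r0 * 8 */
            strength_reduce(&i);
            printf("op=%s k=%u\n", i.op == OP_SHL ? "shl" : "mul", i.k);
            return 0;                                     /* prints: op=shl k=3 */
        }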

    qarium, this is just one of those cases where newer does not equal better. It is also the problem with people asking ChatGPT and the like questions: they are not considering that the answer they have just been given could be infected with a hallucination, and they don't use extra sources to double-check whether the answer is right. The reason peer review exists for humans is to reduce the human hallucination problem, and even that has not prevented it 100 percent. The most documented AI on earth is humans.

    Self-learning-intelligence hallucination is a major barrier to how safely different AI types can be used, and there is no magic bullet for this problem. Making an AI that cannot suffer from hallucination normally means no self-learning, so lots and lots of human hours building it with peer review; and the more complex the AI, the bigger the risk it will have hallucinations introduced by the human work on it. This creates an AI complexity limit: the AI has to stay under a particular level of complexity to be sure it does not have a hallucination problem. All the non-complex AI designs that stay under that limit are from the 1970s and 1980s.

    There is a lot of AI development at the moment, but the modern AI being developed has a high risk of the hallucination problem. Part of the reason self-driving cars are so dangerous is the type of AI being used, and making a self-driving car with a pure expert system is also insanely hard.



  • qarium
    replied
    Originally posted by oiaohm View Post
    In fact that is not how they did it with Wine, or with Hangover, or at Microsoft. Normally you don't use AI-based transmutation of libraries. The problem is the AI hallucination issue.
    https://coccinelle.gitlabpages.inria.fr/website/sp.html
    The AI, as I understand it, is not used in the end product. The point is that you do not need to do all the transmutation work by hand: 80% can be solved by AI and 20% you do by hand.

    Originally posted by oiaohm View Post
    Items like semantic patches are the closest this gets to AI in the process. These items don't really have decision-making ability.

    Humans at Microsoft created the above.
    Wine has been thunking from 16-bit to 32-bit for a long time, and it did not invent the technique.
    https://rgmroman.narod.ru/qtthunk.htm yes, this is a Windows 9x thing.
    The process of mapping x86_64 libs to ARM libs is the same as mapping a 32-bit lib to a 64-bit lib, and is just a more complex form of 16-bit to 32-bit mapping. This predates modern AI. The name of the process is thunking/thunks.
    This is based on Wine work at its core, and it is absolutely not AI; it is human-coded work. Semantic patches are enhanced versions of search and replace.
    Yes, all this stuff has nothing to do with fixing a closed-source program.
    Patching a binary program normally means using something like a reverse-engineering toolkit.

    This is not AI either; it is a lot of human-driven control to work out what you should patch. These reverse-engineering tools are basically a compiler running in reverse. Of course, what makes this extra fun is that a lot of details were thrown away when the compiler made the binary, and those details will be missing when you reverse the compile process.
    qarium, yes, Google and others have talked about AI generating patches; this has turned out to be mostly a bad idea.

    Yes, papers about this came out in a positive light in February, but upstream developers have been getting really annoyed by AI-generated patches.
    The problem is AI hallucination. The AI comes up with the idea that something is a bug when it is 100 percent correct code, and attempts to patch it.
    This does not bode well for items like Copilot, and it already has examples of AI hallucinations.
    AI-based transmutation turns out to be a double-edged sword. You can use AI transmutations to attempt to patch out security flaws, until you wake up to the fact that their double-edged nature means there are odds of any of the following happening:
    1) Correctly patched a security fault.
    2) Patched something that did nothing while claiming to fix a security fault.
    3) Introduced a security fault while claiming to remove that security fault.
    4) Introduced a security fault while removing a different security fault.
    5) Made completely non-functional broken garbage.
    OK, thank you for your very skilled expert response.
    I really did not mean that the AI runs at runtime or that the AI is in the end product.
    But of course the AI can assist in developing such a transmutation.

    So from the sources you provided and your very skilled expert writing, it looks like AI-based transmutation without hallucination is science fiction for now, not yet possible.

    AI models develop very fast, which means in the future we may have AI models without the hallucination problem.

    I am pretty sure the other forum users liked your post too, so thank you for your writing.



  • oiaohm
    replied
    Originally posted by qarium View Post
    They really solved it: it's AI-based transmutation of libs. This is not only the reason you can map 32-bit lib apps to 64-bit libs, it's the same reason you can map x86_64 libs to ARM libs. These AIs can automatically map 80% of it, and the remaining 20% has to be mapped by hand on a per-lib and per-app basis.

    In fact that is not how they did it with Wine, or with Hangover, or at Microsoft. Normally you don't use AI-based transmutation of libraries. The problem is the AI hallucination issue.


    Items like semantic patches are the closest this gets to AI in the process. These items don't really have decision-making ability.

    Humans at Microsoft created the above.

    Wine has been thunking from 16-bit to 32-bit for a long time, and it did not invent the technique.
    https://rgmroman.narod.ru/qtthunk.htm yes, this is a Windows 9x thing.

    The process of mapping x86_64 libs to ARM libs is the same as mapping a 32-bit lib to a 64-bit lib, and is just a more complex form of 16-bit to 32-bit mapping. This predates modern AI. The name of the process is thunking/thunks.
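
    To make "thunking" concrete: a thunk is a small, hand-specified adapter written per entry point that widens or repacks arguments from the guest ABI and forwards the call to the native implementation. The sketch below is a hypothetical userspace illustration of the idea (the types and function names are made up; this is not Wine's actual code):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* What a 32-bit guest hands over: a 32-bit "pointer" (simplified here to
         * an offset into a guest memory block) and a 32-bit length. */
        typedef struct {
            uint32_t buf_offset;
            uint32_t len;
        } guest_write_args;

        static char guest_memory[256] = "hello from the guest";

        /* The native 64-bit implementation the host actually provides. */
        static size_t native_write(const char *buf, size_t len)
        {
            return fwrite(buf, 1, len, stdout);
        }

        /* The thunk: hand-written for this one entry point, it widens the 32-bit
         * fields, translates the guest offset into a host pointer and forwards
         * the call.  Nothing here learns anything; it is a fixed, auditable map. */
        static size_t thunk_write(const guest_write_args *a)
        {
            const char *host_ptr = &guest_memory[a->buf_offset];
            size_t host_len = (size_t)a->len;      /* 32-bit -> 64-bit widening */
            return native_write(host_ptr, host_len);
        }

        int main(void)
        {
            guest_write_args a = { 0, (uint32_t)strlen(guest_memory) };
            thunk_write(&a);
            putchar('\n');
            return 0;
        }

    Doing this for a whole library means producing one such adapter per exported function, which is why it is mechanical, repetitive work rather than anything that needs a learning system.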

    Originally posted by qarium View Post
    They use this technology to make closed-source x86_64 games run in Valve's Steam Proton.
    This is based on Wine work at its core, and it is absolutely not AI; it is human-coded work. Semantic patches are enhanced versions of search and replace.

    Yes, all this stuff has nothing to do with fixing a closed-source program.

    Patching a binary program normally means using something like a reverse-engineering toolkit.

    This is not AI either; it is a lot of human-driven control to work out what you should patch. These reverse-engineering tools are basically a compiler running in reverse. Of course, what makes this extra fun is that a lot of details were thrown away when the compiler made the binary, and those details will be missing when you reverse the compile process.
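
    To see why the "compiler in reverse" loses information, compare an original function with the kind of generic reconstruction a decompiler typically hands back (the reconstructed form in the comment is illustrative, not the output of any particular tool): the names, the enum and the struct layout are simply not present in the stripped binary any more, so the human doing the patching has to rediscover all of that intent.

        #include <stdbool.h>

        /* Original source: the intent is carried by names and types. */
        enum account_state { ACCOUNT_LOCKED = 0, ACCOUNT_ACTIVE = 1 };

        struct account {
            enum account_state state;
            unsigned int failed_logins;
        };

        bool may_log_in(const struct account *acct)
        {
            return acct->state == ACCOUNT_ACTIVE && acct->failed_logins < 3;
        }

        /*
         * Roughly what a decompiler can recover from the stripped binary:
         * the enum, the field names and the struct itself are gone, and only
         * offsets and widths remain.  Something like:
         *
         *     char sub_401000(long a1)
         *     {
         *         return *(int *)a1 == 1 && *(unsigned int *)(a1 + 4) < 3;
         *     }
         */

        int main(void)
        {
            struct account a = { ACCOUNT_ACTIVE, 1 };
            return may_log_in(&a) ? 0 : 1;
        }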

    qarium, yes, Google and others have talked about AI generating patches; this has turned out to be mostly a bad idea.

    Yes, papers about this came out in a positive light in February, but upstream developers have been getting really annoyed by AI-generated patches.

    The problem is AI hallucination. The AI comes up with the idea that something is a bug when it is 100 percent correct code, and attempts to patch it.

    This does not bode well for items like Copilot, and it already has examples of AI hallucinations.

    AI-based transmutation turns out to be a double-edged sword. You can use AI transmutations to attempt to patch out security flaws, until you wake up to the fact that their double-edged nature means there are odds of any of the following happening:
    1) Correctly patched a security fault.
    2) Patched something that did nothing while claiming to fix a security fault.
    3) Introduced a security fault while claiming to remove that security fault.
    4) Introduced a security fault while removing a different security fault.
    5) Made completely non-functional broken garbage.



  • qarium
    replied
    Originally posted by Ladis View Post
    How can anybody fix it in closed-source software? And Chromium was cut off from Google's features.
    They really solved it: it's AI-based transmutation of libs. This is not only the reason you can map 32-bit lib apps to 64-bit libs, it's the same reason you can map x86_64 libs to ARM libs. These AIs can automatically map 80% of it, and the remaining 20% has to be mapped by hand on a per-lib and per-app basis.

    They use this technology to make closed-source x86_64 games run in Valve's Steam Proton.



  • Artim
    replied
    Originally posted by kylew77 View Post

    WHAT? I have a Master of Applied Science in IT and a BS in computer science, and I am a Linux system admin on RHEL for my day job. What could make me more qualified?
    Common sense.



  • kylew77
    replied
    Originally posted by Artim View Post

    Thank god you're not part of my family. I'd never let someone with that little knowledge and common sense recommend anything.
    WHAT? I have a Master of Applied Science in IT and a BS in computer science, and I am a Linux system admin on RHEL for my day job. What could make me more qualified?

