A 2024 Discussion Whether To Convert The Linux Kernel From C To Modern C++


  • Nth_man
    replied
    "Nearly all men can stand adversity, but if you want to test a man's character, give him power".
    -- Abraham Lincoln

    A classic saying in Spanish:
    "Si quieres saber quién es Juanillo, dale un carguillo." ("If you want to know who Juanillo really is, give him a little position of power.")
    "Si quieres conocer a Fulanito, dale un carguito." ("If you want to get to know Fulanito, give him a little post.")
    -- Classic Spanish proverb



  • Nth_man
    replied
    "Power doesn't corrupt, power unmasks."
    -- Rubén Blades



  • Anux
    replied
    Originally posted by ssokolow View Post
    ...
    (i.e. The skill isn't in making a polished-looking output, it's in making an output that matches your vision instead of just being a bland re-hashing of whatever traits occurred most often in the training set...
    Exactly: AI can be used as a more advanced brush, but it leaves all the creativity to the artist.
    My point was that AI itself cannot be creative (at least not the AI we have right now).



  • lowflyer
    replied
    Originally posted by ssokolow View Post

    No, I'm saying that "change the world one individual at a time" never works unless you give the people doing the changing sufficient tools to wield against the people who are too lazy to change. The world is full of lazy people who've found their way into positions of power.

    ...and that's not counting people who are spoiled, not lazy. Look at the tantrums and malicious compliance Apple has been engaging in over the EU DMA.
    Well, therein lies the problem. The same people to whom you "give ... sufficient tools to wield against ..." will become the people "who've found their way into positions of power." There are those who will tell you anything just to get into these positions. I guess you know the saying:

    Originally posted by John Dalberg-Acton (Lord Acton)
    Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority, still more when you superadd the tendency or the certainty of corruption by authority.
    He wrote that in 1887 about the papacy, but it holds true for all positions of power, even for the "absolutely selfless EU, driven only by pure motives". Rulers will gladly heed the populace's call for more regulation, because that empowers them even more. That's exactly how the communists and the Third Reich came to power.

    The problem with Apple: it's a cult. In any cult you'll find the spoiled, the lazy, and the ones who are paid. You need to get these people out of the cult, and that only works one individual at a time. But it is much harder to get the cult out of the people. As for the others: don't bother, let them die their own self-inflicted death.



  • ssokolow
    replied
    Originally posted by lowflyer View Post
    Blindly calling for "systemic protections" is the trap. It is pointing at "the others". It is the enshrinement of the saying "If only all would do like this / be like this ...". Too much regulation makes businesses go bankrupt. Too many ways of appealing make college degrees useless. Who protects the regulated from the regulators? It's a big mistake to think that the rule-making people are not "broken" themselves. Introducing rules is the communist/socialist/Marxist approach. There are enough examples on this planet that show this never works.
    No, I'm saying that "change the world one individual at a time" never works unless you give the people doing the changing sufficient tools to wield against the people who are too lazy to change. The world is full of lazy people who've found their way into positions of power.

    ...and that's not counting people who are spoiled, not lazy. Look at the tantrums and malicious compliance Apple has been engaging in over the EU DMA.



  • lowflyer
    replied
    Originally posted by ssokolow View Post
    Certainly, but we mustn't fall into the trap of thinking we can fix individual people. That never works. We need systemic protections. That's why businesses get regulated. That's why colleges have a system for appealing bad marks. etc.
    That's where I disagree. This is not the trap. You're correct in saying *WE* can't fix individual people. We cannot. But we don't have to. People can only be "fixed" by themselves. And they do so. People turning themselves around and bettering themselves is the only way society gets "fixed".

    Blindly calling for "systemic protections" is the trap. It is pointing at "the others". It is the enshrinement of the saying "If only all would do like this / be like this ...". Too much regulation makes businesses go bankrupt. Too many ways of appealing make college degrees useless. Who protects the regulated from the regulators? It's a big mistake to think that the rule-making people are not "broken" themselves. Introducing rules is the communist/socialist/Marxist approach. There are enough examples on this planet that show this never works.

    There needs to be a balance. While a base layer of moral rules is necessary, anything much beyond that is from the devil.



  • ssokolow
    replied
    Originally posted by lowflyer View Post
    In software there is a distinct benefit to laziness: it can be a driving force behind optimization. Well, humans will always remain humans and errors will be made. I will always accept that. However, we can (and should) call them out for ignorance, or "willful blindness".
    Certainly, but we mustn't fall into the trap of thinking we can fix individual people. That never works. We need systemic protections. That's why businesses get regulated. That's why colleges have a system for appealing bad marks. etc.



  • ssokolow
    replied
    Originally posted by Anux View Post
    That's not an argument and also totally different from generating random/AI images and then selecting 1 out of 1000 till you find something beautiful. This leads to not involving any art making process, because art starts in your head and everything else is just the tools and restrictions you use to bring the art to paper.

    AI has no means of critical thinking, self-reflection or even understanding anything (human language, society).
    As someone who dabbles with Stable Diffusion for my own private use (I use it as an "automated brainstorming helper" for getting me out of writing-hobby ruts, in the same way that a linter is an "automated code reviewer"; seeing where the A.I.'s preconceptions differ from mine for the same prompt is useful for breaking out of ruts), I can tell you that's not how anyone who actually has any skill with it uses it. It's more a cross between being a nature photographer (i.e. having the skill and patience to recognize good composition and potential for future tweaking) and the things people who unarguably are artists say, such as that their work was always in the wood/marble/whatever and they just exposed it, or the surprising amount of skill at painting that goes into taking a circuitous route through the vagaries of the paint, how it mixes, the canvas, and how the paint interacts with it in order to arrive at what you envisioned... modified by "happy little accidents" along the way.

    For example, some of the things about Stable Diffusion that aren't just "push button until you receive art" include:
    • Refining prompts (You'd be surprised how many preconceptions these models have and how many hidden connections there are between different keywords... and that's before you get into things like using reverse-prompting tools to actually find keywords for what you want, using model inspection tools to discover whether the model training internalized a keyword or phrase by tokenizing it in an unhelpful way, etc. At the high end, it starts to feel more like debugging crashes in a C program.)
    • Gradient prompts (Similar to how you can animate properties in Blender 3D, you can tell Stable Diffusion to "animate" the weights of keywords in the prompt over the number of refinement iterations you specified, because Stable Diffusion rendering is sort of like progressive JPEG rendering or how painters will block in a canvas, then paint large blobs, then iteratively introduce finer and finer detail, so you can do stuff like telling it to start out rendering person A, and then switch to aiming for person B before the identifying details go in.)
    • Regional prompting (If you want something to be true in one part of an image but not another, such as having two different characters in the same scene, you need to write multiple prompts and specify what portions of the image each one applies to. Not all Stable Diffusion frontends support this.)
    • ControlNet (eg. OpenPose, which is the Stable Diffusion equivalent to putting IK bones inside a model in Blender 3D... though it's not always reliable and the tooling for it is still kind of byzantine.)
    • Inpainting (Once you've got an image that's almost right, you can mark regions and re-render them, using the existing content as a biasing weight. Expertise deciding how to tweak the prompt applies. Inpainting may involve using the same model or a custom inpainting model more specialized to the task.)
    • etc. etc. etc.
    Note that both initial images and inpainting can also be used to turn blobs you draw into refined images, but that requires enough of a sense of proportion to be able to size and position the blobs such that you don't get a "garbage in, garbage out" effect where Stable Diffusion will choose to stick to the shape you drew, even if that means producing a deformed monstrosity. (I mainly use it to erase unwanted stuff in the background and then ask SD to please extend the background to naturally fill in the obvious-unless-you-squint-hard-enough copy-paste/paintbrush erasure; e.g., to erase a duplicate of the character I prompted for, because its conception of the requested setting included people in the background and, without regional prompting, it's likely to replicate the character description across each person it needs to draw.)
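    As a rough illustration of the prompt-then-inpaint workflow described above, here is a minimal sketch using the Hugging Face diffusers library. The model IDs, prompts, and mask file are placeholder assumptions for the example, not anything from the post itself.

    # Minimal sketch: generate an image from a prompt, then inpaint a masked
    # region (e.g. an unwanted background figure). Model IDs and file names
    # below are placeholders.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32

    # 1. Text-to-image: render a candidate from a (refined) prompt.
    txt2img = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=dtype  # placeholder checkpoint
    ).to(device)
    image = txt2img(
        prompt="portrait of a lone hiker on a misty ridge, oil painting",
        negative_prompt="blurry, deformed, extra people",  # steer away from unwanted traits
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]

    # 2. Inpainting: mark the region to redo (white pixels in the mask) and
    #    re-render only that area while keeping the rest of the image intact.
    mask = Image.open("mask.png")  # placeholder mask image
    inpaint = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=dtype  # placeholder inpainting checkpoint
    ).to(device)
    result = inpaint(
        prompt="empty misty ridge, natural background",
        image=image,
        mask_image=mask,
    ).images[0]
    result.save("result.png")

    Gradient/regional prompting and ControlNet are extra machinery layered on top of this basic loop (and are frontend- or extension-specific), so they are left out of the sketch.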

    Overall, I'd say that art-generating A.I. is barely out of the IMSAI/Altair build-it-yourself-kit phase right now, but it's very much not "push button, receive acclaim" either way. It just looks that way because of hype, and because, unlike traditional art tools, where having the vision is easy and making it look good is the hard part, A.I. leans more toward "it'll try to make even the dumbest prompts look good, but the results all quickly reveal themselves to be formulaic and boring unless you have a vision and the skill to successfully guide the A.I. into implementing it."

    (i.e. The skill isn't in making a polished-looking output, it's in making an output that matches your vision instead of just being a bland re-hashing of whatever traits occurred most often in the training set... you can see why it's so popular for I-dont-want-to-pay-for-my-stock-images top-of-article filler.)

    In a sense, what we're seeing is just a really fancy version of how, in the 90s, people would upload low-effort output from Photoshop plugins for generating things like marble/cloud/etc. textures.
    Last edited by ssokolow; 05 April 2024, 07:15 PM.



  • Anux
    replied
    Originally posted by Old Grouch View Post
    Your understanding of the concept of a 'philosophical zombie' differs from mine.
    I don't have any understanding of it apart from its being a philosophical theory. If I am talking to a real human I can easily verify that he understands what I say and that he is not a computer. And by talking about his life and the decisions throughout it, I can easily notice whether he has any critical thinking and self-reflection. And those are characteristics needed to produce any meaningful art.
    Otherwise you're just creating beautiful things at best, like colorful flowers.

    As a means of distinguishing an LLM pretending to be human from a human, your suggestion has merit, mainly because most LLMs' context windows are too small.
    It's not about context windows. It's about understanding what is being talked about rather than making statistics-based decisions. The latter will always throw together things that don't belong together simply because the statistics say it's probable.



  • Old Grouch
    replied
    Originally posted by Anux View Post
    Easy, just talk to them about their life.
    Your understanding of the concept of a 'philosophical zombie' differs from mine.

    As a means of distinguishing an LLM pretending to be human from a human, your suggestion has merit, mainly because most LLMs' context windows are too small.

