
More Than 80 Kernel Patches Were Made This Summer By Outreachy Developers


  • #11
    Originally posted by hussam View Post
    "Damn it! Now my kernel has cooties!" - Most Phoronix readers.
    This program is available only in the US and for US citizens, somewhat in opposition to Linux as an operating system, which in general is the result of an international effort. For example, Linus is Finnish.



    • #12
      Originally posted by hussam View Post
      "Damn it! Now my kernel has cooties!" - Most Phoronix readers.
      Why would that be? Does anyone here reject the work of Rear Admiral Grace Hopper? Some react to Outreachy's ethics, not to women in general.



      • #13
        Originally posted by onicsis View Post

        This program is available only in the US and for US citizens, somewhat in opposition to Linux as an operating system, which in general is the result of an international effort. For example, Linus is Finnish.
        It's probably meant to deal with specific issues inside the US. Those issues might not be unique to the US, but that doesn't mean they don't exist here.



        • #14
          Originally posted by GunpowaderGuy View Post
          I am going to say it again: AI that decides which applicants to pick (http://research.baidu.com/Research_A...dex-view?id=57), or that verifies the decisions made by humans, is the way to go to end unfair discrimination (on irrelevant qualities).
          I would 100% agree. Unfortunately, most companies have human hiring managers, and humans are very biased.



          • #15
            Originally posted by slideshow
            Who can apply for an internship?
            - Anyone who faces systematic bias or discrimination in the technology industry of their country is invited to apply
            No doubt the irony is lost on them.



            • #16
              Originally posted by schmidtbag View Post
              I would 100% agree. Unfortunately, most companies have human hiring managers, and humans are very biased.
              Including, and maybe even especially, those who train AIs.



              • #17
                Originally posted by bregma View Post

                Including, and maybe even especially, those who train AIs.
                That is why AI verification and interpretability are such a big deal.



                • #18
                  Originally posted by GunpowaderGuy View Post

                  That is why AI verification and interpretability are such a big deal.
                  So, who verifies and interprets the AIs? Given the toxic environment surrounding the whole of computer technology, it's going to be clones of the chads who trained them and, surprise surprise, those dudes are just as blind to their own biases.

                  And there you have the very essence of the systematic and institutionalized bias that is the root of the problem that things like Outreachy are trying to solve. Centuries of denial and wishing away haven't worked: maybe, just maybe, it's time to give a different approach a chance.



                  • #19
                    bregma The first, manual stages are crowdsourced by the peer reviewers, who check how the presented results of a scientific paper correlate with the software and the dataset it was trained on, to assess what merits and flaws, including dataset-induced or deliberate bias, the AI model has. This is the bog-standard method that scientific work, even far less mission-critical AI systems, goes through to eliminate pathological science and fraud.

                    Then auxiliary research would further check the results with automated systems that complement the ability of humans to reason about the decisions made by neural networks, to verify that the software actually meets its goal: discriminating on relevant characteristics, not ones that do not matter in a job or internship.

                    Finally, the wide public is able to check all of the above.
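
                    As a minimal sketch of what one such automated check could look like (not something from the thread or from any specific paper; the function names, sample data, and the 0.8 "four-fifths" threshold are illustrative assumptions), here is a disparate-impact audit over a model's accept/reject decisions for two applicant groups:

                    ```python
                    # Hypothetical audit: compare selection rates of a model's
                    # accept (1) / reject (0) decisions across two groups.

                    def selection_rate(decisions):
                        """Fraction of applicants accepted."""
                        return sum(decisions) / len(decisions)

                    def disparate_impact_ratio(group_a, group_b):
                        """Ratio of the lower selection rate to the higher one.
                        Near 1.0 suggests parity; below ~0.8 (the common
                        'four-fifths rule' threshold) is a red flag."""
                        ra, rb = selection_rate(group_a), selection_rate(group_b)
                        lo, hi = min(ra, rb), max(ra, rb)
                        return lo / hi if hi > 0 else 1.0

                    if __name__ == "__main__":
                        group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 62.5% accepted
                        group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25.0% accepted
                        ratio = disparate_impact_ratio(group_a, group_b)
                        print(f"disparate impact ratio: {ratio:.2f}")
                        if ratio < 0.8:
                            print("warning: decisions fail the four-fifths rule")
                    ```

                    A real audit would of course condition on legitimately job-relevant variables rather than compare raw rates; this only shows the shape of the check being described.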
                    Last edited by GunpowaderGuy; 26 October 2018, 01:38 PM.



                    • #20
                      Peer review is a positive feedback system that reinforces systemic and institutional bias. It cannot be used to correct for it. Consider the classic cases of Joseph Lister, or of Barry Marshall and Robin Warren, in the medical field.

                      Checking biased automated systems with biased automated systems is another potential positive feedback loop.

                      Checking that the results of an algorithm match your expectations doesn't mean your expectations were correct. It means you built a machine that meets your expectations. It does nothing to eliminate the systemic or institutional biases that fed your expectations, or wrote the algorithms, or trained the AI. The wrong answer correctly arrived at is still the wrong answer, but you can point to a machine that spits it out and say "but the machine said so!" in the same way some people say "but the Bible said so!". The Bible (1 Timothy 2:12) says men should ignore SJWs, so I guess if a machine says to hire only chads, it must be righteous.

                      And sure, the wide public is sometimes able to check. Assuming the wide public doesn't call out any systemic or institutional bias (like peers reviewing peers), because that would make them warriors for social justice instead, and we all know how anything they say should be dismissed because they question your biases.

                      For the most part, both the AI and the training data of commercial products are secret closed-source stuff and not open to public inspection.

