More Than 80 Kernel Patches Were Made This Summer By Outreachy Developers


  • GunpowaderGuy
    replied
    And that is without taking into account how sustainable the results of affirmative action (as currently defined) are with regard to its stated goal of "creating a positive feedback loop that supports more X participating in Y", when discrimination results in more discrimination, whether completely undeserved or less so. When more people of a group obtain their positions because of a crutch, they are less likely to be competent.

    Or whether all of this is fair to the people being discriminated against.

    Or that it neglects possibly the most important bias still standing in "developed countries": the personal one that dictates which type of career people choose and how much they will dedicate to it versus their family or personal life.
    Last edited by GunpowaderGuy; 26 October 2018, 08:13 PM.



  • GunpowaderGuy
    replied
    bregma AI verification is about understanding why an AI does something: to understand whether it is a good enough way to reach the goal, or whether it is based on unreliable assumptions that make it seem like it is fulfilling its job competently. It is not mostly about whether it fits certain expectations; it says as much in the title of the video I shared.

    Most AI systems (of public interest, the kind featured on Two Minute Papers at least) and many of their datasets are by and large open-sourced by their creators nowadays, to enlist help from the general public, instead of having people create their own re-implementations; which is still what happens most of the time code is not released.

    Given how unsound your arguments are, I would say (not an affirmation, just stating what happens to people with your line of thinking) that you are probably confusing having your ideas refuted, or being called out on the mob mentality of your group, with denial and being unfairly dismissed.

    E.g.: even if it were true that the way science is currently done is deeply maimed by personal biases and that Outreachy can help solve that (you have backed up neither statement), that would still not mean that events like Outreachy are worth pursuing. Doing so for that purpose would amount to trying to fight biases of one kind with others, instead of improving knowledge through actual methods that eliminate incorrect or fraudulent ideas, like the engineering design process, the scientific method, or deconstruction through formal logic.

    Regarding what I mean by subversive mob mentality making people not use logic: recap, the last one; both the video and the comment section show examples.
    Last edited by GunpowaderGuy; 26 October 2018, 08:18 PM.



  • bregma
    replied
    Peer review is a positive feedback system that reinforces systemic and institutional bias. It cannot be used to correct for it. Consider the classic cases of Joseph Lister, or Barry Marshall and Robin Warren, in the medical field.

    Checking biased automated systems with biased automated systems is another potential positive feedback loop.

    Checking that the results of an algorithm match your expectations doesn't mean your expectations were correct. It means you built a machine that meets your expectations. It does nothing to eliminate the systemic or institutional biases that fed your expectations, or wrote the algorithms, or trained the AI. The wrong answer correctly arrived at is still the wrong answer, but you can point to a machine that spits it out and say "but the machine said so!" in the same way some people say "but the Bible said so!". The Bible (1 Timothy 2:12) says men should ignore SJWs, so I guess if a machine says to hire only chads, it must be righteous.

    And sure, the wide public is sometimes able to check. Assuming the wide public doesn't call out any systemic or institutional bias (like peers reviewing peers), because that would make them warriors for social justice instead, and we all know how anything they say should be dismissed because they question your biases.

    For the most part, both the AI and the training data of commercial products are secret closed-source stuff and not open to public inspection.



  • GunpowaderGuy
    replied
    bregma The first, manual stages are crowd-sourced by the peer reviewers, who check how the presented results of a scientific paper correlate with the software and the dataset it was trained on, to assess what merits and flaws (including dataset-induced or deliberate bias) the AI model has. This is the bog-standard process that scientific work, even far less mission-critical AI systems, goes through to eliminate pathological science and fraud.

    Then auxiliary research would further check the results with automated systems that complement the ability of humans to reason about the decisions made by neural networks, to verify that the software actually meets its goal: discriminating on relevant characteristics, not ones that do not matter in a job or internship.

    Finally, the wide public is able to check all of the above.
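    The kind of automated check described above can be sketched as a permutation-importance test: shuffle one input feature at a time and measure how much the model's accuracy drops, revealing which characteristics its decisions actually rely on. This is a minimal sketch with synthetic data and a toy linear scorer; the feature layout, weights, and threshold are all illustrative assumptions, not any real hiring system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic applicant data: column 0 is a job-relevant skill score,
# column 1 is an irrelevant attribute the model should ignore.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

# Stand-in "model": a fixed linear scorer (weights are assumptions).
w = np.array([2.0, 0.05])

def predict(X):
    return (X @ w > 0).astype(int)

def permutation_importance(X, y, predict, n_repeats=10, seed=1):
    """Mean drop in accuracy when each feature is shuffled independently."""
    shuffler = np.random.default_rng(seed)
    base = (predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = shuffler.permutation(Xp[:, j])
            drops.append(base - (predict(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(X, y, predict)
# A model leaning on the irrelevant column would show high importance there;
# here imp[0] (relevant feature) should dominate imp[1] (irrelevant one).
```

    In this sketch a large `imp[0]` and near-zero `imp[1]` is the desired outcome; a verifier would flag the opposite pattern as the model discriminating on a characteristic that should not matter.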
    Last edited by GunpowaderGuy; 26 October 2018, 01:38 PM.



  • bregma
    replied
    Originally posted by GunpowaderGuy View Post

    That is why AI verification and interpretability are such a big deal
    So, who verifies and interprets the AIs? Given the toxic environment surrounding the whole of computer technology, it's going to be clones of the chads who trained them and surprise surprise surprise those dudes are just as blind to their own biases.

    And there you have the very essence of the systematic and institutionalized bias that is the root of the problem that things like Outreachy are trying to solve. Centuries of denial and wishing away haven't worked: maybe just maybe it's time to give a different approach a chance.



  • GunpowaderGuy
    replied
    Originally posted by bregma View Post

    Including, and maybe even especially, those who train AIs.
    That is why AI verification and interpretability are such a big deal



  • bregma
    replied
    Originally posted by schmidtbag View Post
    I would 100% agree. Unfortunately, most companies have human hiring managers, and humans are very biased.
    Including, and maybe even especially, those who train AIs.



  • brrrrttttt
    replied
    Originally posted by slideshow
    Who can apply for an internship?
    - Anyone who faces systematic bias or discrimination in the technology industry of their country is invited to apply
    No doubt the irony is lost on them.



  • schmidtbag
    replied
    Originally posted by GunpowaderGuy View Post
    I am going to say it again: AI that decides which applicants to pick ( http://research.baidu.com/Research_A...dex-view?id=57 ) or verifies the decisions made by humans is the way to go to end unfair discrimination (on irrelevant qualities).
    I would 100% agree. Unfortunately, most companies have human hiring managers, and humans are very biased.



  • Niarbeht
    replied
    Originally posted by onicsis View Post

    This program is available only in the US and for US citizens, somewhat in opposition to Linux as an operating system in general, which is the result of an international effort. As an example, Linus is Finnish.
    It's probably meant to deal with specific issues inside the US. Those issues might not be unique to the US, but that doesn't mean they don't exist here.

