More Than 80 Kernel Patches Were Made This Summer By Outreachy Developers
Originally posted by onicsis:
This program is available only in the US and only to US citizens, which is somewhat at odds with Linux as an operating system, since it is in general the result of an international effort. Linus, for example, is Finnish.
Originally posted by GunpowaderGuy:
I am going to say it again: AI that decides which applicants to pick (http://research.baidu.com/Research_A...dex-view?id=57), or that verifies the decisions made by humans, is the way to go to end unfair discrimination (discrimination on irrelevant qualities).
Originally posted by GunpowaderGuy:
That is why AI verification and interpretability are such a big deal.
And there you have the very essence of the systematic and institutionalized bias that is at the root of the problem things like Outreachy are trying to solve. Centuries of denial and wishing it away haven't worked: maybe, just maybe, it's time to give a different approach a chance.
bregma The first, manual stages are crowd-sourced by the peer reviewers, who check how the presented results of a scientific paper correlate with the software and the dataset it was trained on, in order to assess what merits and flaws the AI model has, including dataset-induced or deliberate bias. This is the bog-standard process that scientific work, even far less mission-critical AI systems, goes through to eliminate pathological science and fraud.
Then auxiliary research would further check the results with automated systems that complement the ability of humans to reason about the decisions made by neural networks, to verify that the software actually meets its goal: discriminating on relevant characteristics, not on ones that do not matter in a job or internship.
Finally, the wide public is able to check all of the above.
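As a rough illustration of the kind of automated check being described here (a sketch with hypothetical applicant data and a commonly cited threshold, not the system from the Baidu link above), one could compare a model's selection rates across a characteristic that is supposed to be irrelevant:

```python
# Illustrative sketch only: the applicant data, group labels, and the 0.8 threshold
# below are assumptions for the example, not anything from the linked research.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns per-group selection rate."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, picked in decisions:
        total[group] += 1
        if picked:
            selected[group] += 1
    return {group: selected[group] / total[group] for group in total}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model output: one (group, selected) entry per applicant.
decisions = [
    ("A", True), ("A", False), ("A", True), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)                                   # {'A': 0.75, 'B': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33

# The commonly cited "80% rule" flags ratios below 0.8 for human review.
if ratio < 0.8:
    print("flag for manual review: selection rates differ substantially across groups")
```

A check like this captures only one narrow notion of fairness, of course, which is part of what the reply below takes issue with.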
Peer review is a positive feedback system that reinforces systemic and institutional bias. It cannot be used to correct for it. Consider the classic cases of Joseph Lister, or Barry Marshall and Robin Warren, in the medical field.
Checking biased automated systems with biased automated systems is another potential positive feedback loop.
Checking that the results of an algorithm match your expectations doesn't mean your expectations were correct. It means you built a machine that meets your expectations. It does nothing to eliminate the systemic or institutional biases that fed your expectations, or wrote the algorithms, or trained the AI. The wrong answer correctly arrived at is still the wrong answer, but you can point to a machine that spits it out and say "but the machine said so!" in the same way some people say "but the Bible said so!". The Bible (1 Timothy 2:12) says men should ignore SJWs, so I guess if a machine says to hire only chads, it must be righteous.
And sure, the wide public is sometimes able to check. Assuming the wide public doesn't call out any systemic or institutional bias (like peers reviewing peers), because that would make them warriors for social justice instead, and we all know how anything they say should be dismissed because they question your biases.
For the most part, both the AI and the training data of commercial products are secret closed-source stuff and not open to public inspection.