News
Context reduces racial bias in hate speech detection algorithms. Date: July 7, 2020. Source: University of Southern California. Summary: When it comes to accurately flagging hate speech on social ...
They found that a single algorithm trained on three categories (hate speech, offensive speech, and ordinary speech), rather than two, did a better job of avoiding false positives.
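To illustrate why a third category can help, here is a minimal sketch, not the USC researchers' actual model: with only two labels (hate / not-hate), merely offensive posts tend to get pushed into "hate", inflating false positives; a third "offensive" class gives them somewhere else to go. The keyword lists and function names below are hypothetical, chosen purely for illustration; real systems learn features from labeled data rather than matching word lists.

```python
# Hypothetical marker sets for illustration only; real detectors use
# learned features, not keyword lists.
HATE_MARKERS = {"<slur-placeholder>"}
OFFENSIVE_MARKERS = {"idiot", "stupid"}

def classify_three_way(text: str) -> str:
    """Three-class scheme: hate / offensive / ordinary."""
    tokens = set(text.lower().split())
    if tokens & HATE_MARKERS:
        return "hate"
    if tokens & OFFENSIVE_MARKERS:
        return "offensive"
    return "ordinary"

def classify_two_way(text: str) -> str:
    """Two-class scheme: everything non-ordinary collapses into 'hate',
    which is one way a binary labeling over-flags offensive-but-not-hateful posts."""
    return "not-hate" if classify_three_way(text) == "ordinary" else "hate"
```

Under this toy scheme, a rude but non-hateful post like "you idiot" is labeled "offensive" by the three-way classifier but "hate" by the two-way one, mirroring the false-positive pattern the study describes.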
Instead, we found widespread bias in a variety of hate speech detection datasets; machine learning models trained on them will be biased against African American English ...
"Our group of Black Lives Matter activists actually met with Facebook representatives in February 2016, not 2014 as it says in The Washington Post article, to discuss the appalling levels of ...
Hate speech reported by users and others -- including hate speech directed at white people, Americans, and men -- will still be removed. Facebook uses a mix of human reviewers and technology to remove harmful content.
The EU rights watchdog said on Thursday that applications that use artificial intelligence (AI) to predict crime and moderate online hate-speech should be free of bias to avoid discrimination.
Researchers from Finland’s Aalto University have analyzed various anti-hate speech systems, including tools built by Google’s Counter Abuse team. Their findings demonstrate that the technology ...
The accuracy of those systems remains a mystery. Facebook doesn’t release, and says it can’t estimate, the total volume of hate speech posted by its 1.7 billion daily active users.