A new Cornell University study reveals that some artificial intelligence systems created by universities to identify “prejudice” and “hate speech” online might be racially biased themselves and that their implementation could backfire, leading to the over-policing of minority voices online.
According to the study abstract, the machine learning practices behind AI designed to flag offensive online content may actually “discriminate against the groups who are often the targets of the abuse we are trying to detect.”
In the study, researchers trained a system to flag tweets containing “hate speech” using several databases of tweets, some of which had been flagged by human evaluators for offensive content, in much the same way that other universities are developing systems for eventual online use.
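To give a concrete sense of what “training a system on flagged tweets” involves, here is a minimal sketch of that kind of pipeline: a simple text classifier fit to human-labeled tweets, then checked for how often it wrongly flags tweets the evaluators considered harmless. The file name, column names, and the choice of a bag-of-words logistic regression are illustrative assumptions, not details from the Cornell study.

# Minimal sketch of the kind of pipeline the article describes: train a
# classifier on human-labeled tweets, then see how often it flags benign ones.
# The dataset path, column names, and model choice are assumptions for
# illustration, not the study's actual method.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical file: one tweet per row, with a human-assigned 0/1 label.
df = pd.read_csv("labeled_tweets.csv")          # columns: text, offensive
train, test = train_test_split(df, test_size=0.2, random_state=0)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word/bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(train["text"], train["offensive"])

# The bias concern: among tweets humans judged NOT offensive, how often does
# the model still flag them? A higher false-positive rate on one group's
# dialect would mean that group's speech is over-policed.
benign = test[test["offensive"] == 0]
false_positive_rate = model.predict(benign["text"]).mean()
print(f"False-positive rate on benign tweets: {false_positive_rate:.2%}")

The researchers' point is that this false-positive rate can differ sharply across dialects if the human labels themselves were biased, since the classifier simply learns whatever patterns the labeled data contains.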
8 comments:
"AI systems to detect 'hate speech' could have 'disproportionate negative impact' on African Americans: Study"
DUH!!
Anonymous said...
Hate speech is free speech another Fascist maneuver
November 18, 2019 at 6:16 PM
Unless a Republican is the speaker, right, you Douche Bag Socialist.
Whatever happened with the hate incident at Salisbury University?
Probably discovered that it was a minority who actually did it and they swept it under the rug because it doesn't fit the narrative.
Whatever happened to "Sticks and stones may break my bones but words will never harm me"?
JRC
Free speech includes hate speech. They want 1A and 2A gone
AI means artificial intelligence, 9:10.