
Monday, November 18, 2019

AI systems to detect 'hate speech' could have 'disproportionate negative impact' on African Americans: Study

A new Cornell University study reveals that some artificial intelligence systems created by universities to identify “prejudice” and “hate speech” online might be racially biased themselves and that their implementation could backfire, leading to the over-policing of minority voices online.

The machine learning practices behind such AI, which are designed to flag offensive online content, may actually "discriminate against the groups who are often the targets of the abuse we are trying to detect," according to the study abstract.

In the study, researchers trained a system to flag tweets containing "hate speech," much as other universities are developing systems for eventual online use. They trained it on several databases of tweets, some of which human evaluators had flagged for offensive content.
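The general pipeline described here, fitting a classifier to tweets that human annotators have labeled, can be sketched with a toy Naive Bayes model. This is an illustrative sketch only: the data, labels, and model below are hypothetical and are not the study's actual datasets or method.

```python
import math
from collections import Counter

# Hypothetical labeled tweets (not from the study's datasets).
# Label 1 = flagged "offensive" by human annotators, 0 = not flagged.
# If annotators label one dialect's everyday speech as offensive more
# often, a model trained on those labels inherits that bias.
training_data = [
    ("you are awful and stupid", 1),
    ("i hate this so much", 1),
    ("what a terrible dumb take", 1),
    ("have a great day everyone", 0),
    ("lovely weather this morning", 0),
    ("congrats on the new job", 0),
]

def train_naive_bayes(data):
    """Count words per class and build the vocabulary."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter()
    vocab = set()
    for text, label in data:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Return the more probable class for `text` (Laplace-smoothed)."""
    total = sum(class_counts.values())
    scores = {}
    for c in (0, 1):
        # Log prior plus log likelihood of each word in the tweet.
        score = math.log(class_counts[c] / total)
        denom = sum(word_counts[c].values()) + len(vocab)
        for w in text.split():
            score += math.log((word_counts[c][w] + 1) / denom)
        scores[c] = score
    return max(scores, key=scores.get)

wc, cc, vocab = train_naive_bayes(training_data)
print(predict("what a stupid terrible day", wc, cc, vocab))  # 1 (flagged)
print(predict("have a lovely morning", wc, cc, vocab))       # 0 (not flagged)
```

The point of the sketch is that the model has no notion of offensiveness of its own; it simply reproduces whatever patterns the human labels contain, which is how annotator bias can become automated over-policing.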


8 comments:

Anonymous said...

Hate speech is free speech; another Fascist maneuver

Anonymous said...

"AI systems to detect 'hate speech' could have 'disproportionate negative impact' on African Americans: Study"


DUH!!

Anonymous said...

Anonymous said...
Hate speech is free speech; another Fascist maneuver

November 18, 2019 at 6:16 PM

Unless a Republican is the speaker, right, you Douche Bag Socialist.

Anonymous said...

Whatever happened with the hate incident at Salisbury University?

Anonymous said...

Probably discovered that it was a minority who actually did it, and they swept it under the rug because it doesn't fit the narrative.

Anonymous said...

Whatever happened to "Sticks and stones may break my bones but words will never harm me"?

JRC

Anonymous said...

Free speech includes hate speech. They want the 1A and 2A gone.

Anonymous said...

AI means artificial intelligence, 9:10