Monday, November 18, 2019

AI systems to detect 'hate speech' could have 'disproportionate negative impact' on African Americans: Study

A new Cornell University study reveals that some artificial intelligence systems created by universities to identify “prejudice” and “hate speech” online might be racially biased themselves and that their implementation could backfire, leading to the over-policing of minority voices online.

According to the study abstract, the machine learning techniques behind AI systems designed to flag offensive online content may actually "discriminate against the groups who are often the targets of the abuse we are trying to detect."

For the study, the researchers trained a system to flag tweets containing "hate speech," in much the same way that other universities are developing systems for eventual online use, drawing on several databases of tweets, some of which had been flagged by human evaluators as offensive.
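
To make the methodology concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of that kind of classifier: a model fit to tweets that human annotators have labeled as offensive or not. The example tweets, labels, and model choice are illustrative placeholders, not the datasets or architecture used in the Cornell study.

```python
# Hypothetical sketch: train a simple "offensive tweet" classifier on human-labeled data.
# The tweets, labels, and model below are placeholders for illustration only; they are
# not the datasets or the model used in the Cornell study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder "database" of tweets with human-assigned labels (1 = flagged as offensive).
tweets = [
    "have a great day everyone",
    "you are a wonderful person",
    "i can't stand people like you",
    "get out of here, nobody wants you",
]
labels = [0, 0, 1, 1]

# TF-IDF features feeding logistic regression, a common baseline for text classification.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

# Score a new tweet; in deployment, high scores would be routed for review or removal.
print(model.predict_proba(["nobody wants you here"])[:, 1])
```

The study's worry is precisely about what a classifier like this learns from its training labels: if the human-flagged examples correlate with particular dialects or communities, the model can end up scoring speech from those groups as offensive more often, which is the over-policing of minority voices the researchers warn about.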

8 comments:

  1. Hate speech is free speech; another Fascist maneuver

    ReplyDelete
  2. "AI systems to detect 'hate speech' could have 'disproportionate negative impact' on African Americans: Study"


    DUH!!

    ReplyDelete
  3. Anonymous said...
    Hate speech is free speech; another Fascist maneuver

    November 18, 2019 at 6:16 PM

    Unless a Republican is the speaker, right you Douche Bag Socialist.

    ReplyDelete
  4. Whatever happened with the hate incident at Salisbury University?

    ReplyDelete
    Replies
    1. Probably discovered that it was a minority who actually did it and they swept it under the rug because it doesn't fit the narrative.

      Delete
  5. Whatever happened to "Sticks and stones may break my bones but words will never harm me"?

    JRC

    ReplyDelete
  6. Free speech includes hate speech. They want 1A and 2A gone.

    ReplyDelete
    Replies
    1. AI means artificial intelligence, 9:10

      Delete
