Technology

AI mistakes ‘black and white’ chess chat for racism

Online discussions about black and white chess pieces are confusing artificial intelligence algorithms trained to detect racism and other hate speech, according to new research.

Computer scientists at Carnegie Mellon University began investigating the AI glitch after a popular chess channel on YouTube was blocked for “harmful and dangerous” content last June.

Croatian chess player Antonio Radic, who goes by the online alias Agadmator, hosts the world’s most popular YouTube chess channel, with more than 1 million subscribers.

On 28 June 2020, Radic was blocked from YouTube while presenting a chess show with Grandmaster Hikaru Nakamura, though no specific reason was given by the Google-owned video platform.

Radic’s channel was reinstated after 24 hours, leading him to speculate that he had been temporarily banned for a reference to “black against white”, even though he was talking about chess at the time.

YouTube’s moderation system relies on both humans and AI algorithms, meaning any AI system could misinterpret the comments if not trained correctly to understand context.

“If they rely on artificial intelligence to detect racist language, this kind of accident can happen,” said Ashiqur KhudaBukhsh, a project scientist at CMU’s Language Technologies Institute.


KhudaBukhsh tested this theory by using a state-of-the-art speech classifier to screen more than 680,000 comments gathered from five popular chess-focussed YouTube channels.

After manually reviewing a selection of 1,000 comments that had been classed by the AI as hate speech, the researchers found that 82 per cent of them had been misclassified due to the use of words like “black”, “white”, “attack” and “threat” – all of which are commonly used in chess parlance.
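The failure mode described above can be illustrated with a deliberately simplified sketch. This is not the CMU classifier or YouTube's moderation system; it is a toy keyword-based filter (with an assumed trigger-word list) showing how a context-free approach flags innocent chess commentary:

```python
# Toy illustration only: a naive, context-free "harmful content" filter.
# The trigger-word list is an assumption for demonstration, not a real
# moderation ruleset.
TRIGGER_WORDS = {"black", "white", "attack", "threat"}

def naive_flag(comment: str) -> bool:
    """Flag a comment if it contains any trigger word, ignoring context."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & TRIGGER_WORDS)

# Ordinary chess commentary trips the filter because "black", "white",
# "attack" and "threat" appear without any hostile meaning.
print(naive_flag("Black attacks the white bishop, a serious threat."))
print(naive_flag("Great endgame technique in this video."))
```

A classifier that models context (for example, one trained with chess commentary in its data, so it learns that “black attacks white” is a description of a board position) would avoid this kind of false positive, which is the gap the researchers' manual review exposed.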

The paper was presented this month at the Association for the Advancement of AI annual conference.

Source:

www.independent.co.uk
