Regulations Won’t Give AI A Conscience

Marc Böhlen is part of the team developing universal benchmarks for ethical artificial intelligence. He tells us why they won’t work.

From the algorithms used by social media platforms to the machine learning that powers home automation, artificial intelligence has quietly embedded itself into our lives. But as the technology grows more advanced and its reach widens, the question of how to regulate it has become increasingly urgent.

The pitfalls of AI are well documented. Race and gender prejudices have been discovered in a number of systems built using machine learning, from facial recognition software to internet search engines.

Last week, the UK’s Digital, Culture, Media and Sport select committee released its long-awaited fake news report. The committee lays significant blame on Facebook for fuelling the spread of false information, citing the tech giant’s reliance on algorithms over human moderators as a factor. But while governments are now moving to legislate for greater oversight of social platforms, there has been less focus on how we govern the use of AI at large.

A crucial first step could be the development of a set of industry standards for AI. In September, the Institute of Electrical and Electronics Engineers (IEEE), one of the largest industry standards bodies, is due to publish an ethical framework for tech companies, developers and policymakers on building products and services that use AI.

Read More at Open Democracy