How Far Should We Take AI?

As artificial intelligence starts to surpass our own intellect, robotics experts warn us of the risks of autonomous weaponry. So how human should we make machines and will we know when it is time to stop?

Killer robots are a staple of science fiction, but a recent letter signed by more than 100 robotics experts, including SpaceX founder Elon Musk, has warned the United Nations about the threat posed by lethal autonomous weapons, and requested that artificial intelligence (AI) used to control weaponry be added to the list of weapons banned by the UN Convention on Certain Conventional Weapons. So why is it now, when AI is being used for so much good, that they are so concerned about the threat it poses? To answer that, we need to understand how we arrived at where we are today, and the rise of conscious machines.

Read More at BBC Science Focus