On Simulation and the Singularity

Engineer and robo-ethicist Alan Winfield on the simulation (and energy costs) of human intelligence, the singularity, and simulationism

For many researchers the Holy Grail of robotics and AI is the creation of artificial persons: artefacts with general competencies equivalent to those of humans. Such artefacts would literally be simulations of humans. Some researchers are motivated by the utility of AGI; others have an almost religious faith in the transhumanist promise of the technological singularity. Others, like myself, are driven only by scientific curiosity. Simulations of intelligence provide us with working models of (elements of) natural intelligence. As Richard Feynman famously said, ‘What I cannot create, I do not understand’. Used in this way, simulations are like microscopes for the study of intelligence; they are scientific instruments.

Like all scientific instruments, simulation needs to be used with great care: simulations must be calibrated, validated and – most importantly – their limitations understood. Without that understanding, any claims to new insights into the nature of intelligence – or to the quality and fidelity of an artificial intelligence as a model of some aspect of natural intelligence – should be regarded with suspicion.

In this essay I have critically reflected on some of the predictions for human-equivalent AI (AGI); the paths to AGI (especially via artificial evolution); the technological singularity; and the idea that we are ourselves simulations in a simulated universe (simulationism). The quest for human-equivalent AI clearly faces many challenges. One (perhaps stating the obvious) is that it is a very hard problem. Another, as I have argued in this essay, is that the energy costs are likely to limit progress.

Read More at Alan Winfield’s Blog