AI Will ‘Never’ Rule the World

A new book argues that artificial intelligence will never surpass human intelligence.

Barry Smith, Ph.D., SUNY Distinguished Professor in the Department of Philosophy in UB’s College of Arts and Sciences, and Jobst Landgrebe, Ph.D., founder of Cognotekt, a German AI company, have co-authored Why Machines Will Never Rule the World: Artificial Intelligence without Fear.

Their book presents a powerful argument against the possibility of engineering machines that can surpass human intelligence.

Machine learning and all other working software applications, the proud accomplishments of those involved in AI research, are for Smith and Landgrebe far from anything resembling human capacity. Further, they argue that incremental progress in AI research will, in practical terms, bring the field no closer to the full functional capacity of the human brain.

Smith and Landgrebe offer a critical examination of AI’s unjustifiable projections, such as machines detaching themselves from humanity, self-replicating, and becoming “full ethical agents.” A machine cannot have a will of its own, they say. Every AI application rests on the intentions of human beings, including the intention to produce random outputs.

This means the Singularity, a hypothetical point at which AI becomes uncontrollable and its growth irreversible (a Skynet moment from the “Terminator” movie franchise), is not going to occur. Wild claims to the contrary serve only to inflate AI’s potential and distort public understanding of the technology’s nature, possibilities and limits.

Reaching across the borders of several scientific disciplines, Smith and Landgrebe argue that the idea of artificial general intelligence (AGI)—the ability of computers to emulate and go beyond the general intelligence of humans—rests on fundamental mathematical impossibilities, analogous in physics to the impossibility of building a perpetual motion machine. AI that would match the general intelligence of humans is impossible because of the mathematical limits on what can be modeled and is “computable.” These limits are accepted by practically everyone working in the field; yet researchers have thus far failed to appreciate their consequences for what an AI can achieve.

Read More at Tech Xplore