Amid an explosion of panic about artificial intelligence, ChatGPT, and runaway algorithms, the celebrated science fiction writer Ted Chiang considers how best to think about AI.
“The Evolution of Human Science” was written in response to an idea circulating around 2000, when people were talking about the singularity and the notion that we would transcend into something much greater. I was mostly thinking, well, why is everyone so certain they’re going to be the ones to transcend? Maybe transcendence isn’t going to be available to all of us, so what would it be like to live in a world where these incomprehensible things are going on, and you’re sort of on the sidelines?
But I don’t think that’s actually applicable to our current situation, because there are no superintelligent machines. There’s no software anyone has built that is smarter than humans. What we have created are vast systems of control. Our entire economy is this kind of engine that we can’t really stop. That’s a different thing than saying we’ve created machines smarter than us. We have built a giant treadmill that we can’t get off. Maybe.
It probably is possible to get off, but we have to recognize that we are all on this treadmill of our own making, and then we have to agree that we all want to get off. There are other countries that have a healthier relationship to the narrative of progress; there are countries where they have much healthier attitudes toward work than we have in the U.S. So I think those things are possible. But we have created a system, and now it is all we know. It’s hard for us to imagine life outside of it. And we are only building more tools that strengthen and reinforce that system.
Read More at Vanity Fair