The age of thinking machines will be a new age of mystics extracting truth from hidden worlds.
The new generative AI models like GPT-3 and DALL-E have amazing powers. They can generate essays, stories, scripts, summaries, paintings, drawings, renders, and photography, seemingly inferring the wishes of the user from simple text prompts. Trained on unimaginably huge libraries of text and images, they can simulate any combination of the knowledge they’ve seen. They can compose concepts, abstract and concrete, into higher-level concepts, which can themselves be composed.
But these models are not “intelligences.” People mistake them for entities with volition, even sentience. This is partly the anthropomorphic fallacy: people tend to treat almost anything as human if given half an excuse. But it is also a linguistic mistake: we call them AI, “artificial intelligence.”
The more accurate term is language models. They read a ton of text and use statistics to guess the next word at each step, like an enormous autocomplete engine. They attempt to predict what language-at-large will do, not what an intelligent agent would do.
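The “enormous autocomplete engine” idea can be made concrete with a toy sketch. The following is not how GPT-3 works internally (it uses a neural network over tokens, not word counts), but a minimal bigram model that captures the core move: count which words follow which, then predict the statistically most likely next word. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny stand-in for the huge libraries of text a real model trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram statistics: for each word, count the words that follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word, like a one-step autocomplete."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

A real language model differs in scale and mechanism, but the objective is the same shape: given the words so far, predict what language-at-large would put next.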
Language models are not beings, but world simulators. Your prompt sets the parameters of the simulation, of what word should come next given the situation described. The language model projects our world back to us, one word at a time.
By choosing the right prompts, you can direct this mirror world. The art is in crafting language, combining words in just the right way to express your desire to the model.
Read More at Return