Jack Gallant, a computational neuroscientist at the University of California, Berkeley, never set out to create a mind-reading machine.
Dr. Gallant worked for years to improve our understanding of how brains encode information — what regions become active, for example, when a person sees a plane or an apple or a dog — and how that activity represents the object being viewed.
Dr. Gallant and his colleagues showed movie clips to volunteers lying in fMRI machines. By matching patterns of brain activation to the moving images that prompted them, the researchers built a model of how each volunteer's visual cortex, the part of the brain that parses information from the eyes, worked. Then came the next phase: translation. As they showed the volunteers new movie clips, they asked the model what, given everything it now knew about their brains, it thought they might be looking at.
The results, published in 2011, were remarkable.
What was going to happen, Dr. Gallant wondered, when you could read thoughts the thinker might not even be consciously aware of, when you could see people’s memories?
Read More at NY Times