Artificial Neural Networks Provide a New Theory of Dreams


Machine learning experts struggle to deal with “overfitting” in neural networks. Evolution solved it with dreams, says a new theory.

Overfitting – building a model around irrelevant detail in the training data, so that it fails to generalize to new data – is the bane of machine learning experts, who have devised a wide range of techniques to get around it. The human brain also sometimes links events that have little or no causal connection. Erik Hoel, a neuroscientist at Tufts University in Massachusetts, thinks that the human brain prevents overfitting by dreaming. He says dreaming evolved specifically to deal with this problem, which is common to all neural networks. If his theory is correct, it answers one of the great unsolved problems in neuroscience: why we dream at all.
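To make the problem concrete, here is a minimal sketch in Python (the data and polynomial degrees are illustrative, not from the article): a degree-9 polynomial fitted to ten noisy points reproduces the training data almost perfectly, yet a plain straight line typically predicts fresh data better.

```python
import numpy as np

# The underlying relationship is a simple line, plus measurement noise.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.1, size=x_train.shape)

# Fresh, noise-free data to measure how well each model generalizes.
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)   # fit a polynomial
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: test error {test_mse:.4f}")
```

The degree-9 curve has enough freedom to thread through every noisy point – it has modeled the irrelevant detail – which is exactly the failure Hoel argues brains must guard against too.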

His new idea is that the purpose of dreams is to help the brain make generalizations from specific experiences, and that they do this in much the same way that machine learning experts prevent overfitting in artificial neural networks.

The most common way to tackle overfitting is to add some noise to the learning process, making it harder for the neural network to latch onto irrelevant detail. In practice, researchers add noise to images, feed the network corrupted data, or even remove random nodes from the neural network, a technique known as dropout.
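As a minimal sketch of what these techniques look like in code (assuming PyTorch; the layer sizes, noise level and dropout rate here are illustrative):

```python
import torch
import torch.nn as nn

# A small classifier with dropout: in training mode, each hidden unit is
# zeroed with probability 0.5, so the network cannot lean on any single
# (possibly spurious) feature.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),    # randomly remove nodes during training
    nn.Linear(256, 10),
)

def add_noise(x, std=0.1):
    # Input-noise regularization: corrupt the data slightly so the
    # network cannot memorize pixel-level detail.
    return x + std * torch.randn_like(x)

x = torch.rand(32, 784)   # a dummy batch of flattened images
model.train()             # dropout is active while learning
train_out = model(add_noise(x))

model.eval()              # dropout is switched off for predictions
test_out = model(x)
```

In Hoel's analogy, the noisy, corrupted quality of dream experience plays the part of this injected noise and dropped-out structure.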

The idea of dream substitutes is itself fascinating. Hoel says that fiction in general – books, plays, films and so on – might perform a similar role to dreams. “They are, after all, explicitly false information,” he points out.

Just why humans create and enjoy fiction has always been something of a puzzle. But Hoel has an answer: “The overfitted brain hypothesis suggests fictions, and perhaps the arts in general, may actually have an underlying cognitive utility in the form of improving generalization and preventing overfitting, since they act as artificial dreams.”

Read the rest at Discover