“If the doors of perception were cleansed every thing would appear to man as it is, Infinite.” – William Blake
The friendly headlamps and grill of a car. A sly electrical outlet. The full moon gazing back at you. A strangely anthropomorphic cloud. A house with personality.
The human brain is so good at recognizing faces that it sees them everywhere … even where they’re not. The perception of faces and other patterns in orderless “noise” is known as pareidolia, and it is a common, everyday experience for human beings. The Swiss psychologist Hermann Rorschach developed his eponymous inkblot test in hopes that pareidolia might offer insight into the mental state of individuals presented with ambiguous stimuli. Many types of pareidolia are possible – this is well known to the medical student who suddenly perceives spinal cord sections in tree stumps. While it is possible that the exact patterns perceived in pareidolia give us deep insight into ourselves, tests like the Rorschach inkblot test remain controversial. Perceiving Jesus in a burnt piece of toast speaks volumes about one’s religious inclinations, but the typical pareidolia evoking the perception of nondescript faces probably does little more than remind one that he or she is human. The ability to recognize faces is crucial for social beings like ourselves and has undoubtedly been strongly selected for in our ancestors. If human beings ever encounter alien intelligence, perhaps we should give them inkblot tests to see what shapes their brains have evolved to recognize!
It is perhaps no wonder that a large piece of cortical real estate known as the fusiform face area is devoted to facial recognition in the human brain. Magnetoencephalography (MEG) recordings of functional brain activity – localized to the cerebral cortex with the aid of magnetic resonance imaging (MRI) – show that pareidolia involves activation of the fusiform face area just 165 milliseconds after a face-like object is presented to a subject. This finding demonstrates that pareidolia can be an automatic and unconscious phenomenon, rather than the product of ruminating over an ambiguous stimulus. Knowing that a large piece of cortical anatomy is specialized for facial recognition tells us this function is important to the brain, but it says little about how facial recognition is actually accomplished in neural circuits. To go deeper, we need to understand the concept of autoassociative memory. You’re probably familiar with associative memory: the idea that one stimulus will retrieve the memory of another stimulus with which it is environmentally correlated. This is accomplished thanks to Hebbian learning, the principle that neurons which fire together wire together. Neural patterns of activation representing the color red co-occur with patterns of activation representing the concept fire truck, such that red in isolation will retrieve the memory of a fire truck and vice versa. Autoassociative memory is a similar principle by which part of a stimulus retrieves the memory of the whole. For instance, you can probably fill in the blank “Four score and ____ years ago” thanks to autoassociative memory.
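The fire-truck example above can be sketched in a few lines of code. This is a toy illustration of the Hebbian outer-product rule, with made-up binary patterns standing in for “red” and “fire truck” – it is not a model of real neural activity:

```python
import numpy as np

# Two hypothetical activity patterns that repeatedly co-occur.
rng = np.random.default_rng(0)
red = rng.choice([-1, 1], size=8)      # invented pattern for "red"
truck = rng.choice([-1, 1], size=8)    # invented pattern for "fire truck"

# Hebbian learning: neurons that fire together wire together, so the
# weight matrix stores the correlation between the two patterns.
W = np.outer(truck, red)

# Presenting "red" alone now retrieves "fire truck", and vice versa.
assert np.array_equal(np.sign(W @ red), truck)
assert np.array_equal(np.sign(W.T @ truck), red)
```

Because the weights simply record which pairs of neurons were active together, either pattern on its own is enough to pull up its partner.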
Autoassociative memory is the principle that allows humans to recognize wildly exaggerated caricatures and low-resolution images of faces. It is also the phenomenon by which we recognize faces in inanimate objects. The ability to reconstruct a face from so little information is often astonishing. Take a look at the image on the left: do you recognize the famous person? There’s a good chance you can, even though the image is only about 30 pixels tall! Autoassociative memory can be simulated in a computer and used to recognize patterns even when part of the pattern is missing or distorted by noise. Consider the Hopfield network, an artificial network of simplified neurons. The reciprocal connection strengths of neurons in the Hopfield network change according to Hebbian learning. The activity of each neuron is represented as either +1 (firing) or -1 (not firing); connections between pairs of neurons driven to the same state are strengthened with each training trial or stimulus presentation. After the network has been trained on a particular image, presenting the image again with a few corrupted pixels lets the neurons receiving the correct pixels “pop” the remaining neurons back into the learned activity pattern.
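The Hopfield scheme just described can be sketched in a few lines. This is a minimal illustration – the 5×5 “cross” image, the function names, and the update schedule are all invented for the example:

```python
import numpy as np

def train(patterns):
    """Set weights by the Hebbian outer-product rule (no self-connections)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)            # fire together, wire together
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, state, steps=10):
    """Repeatedly update all +1/-1 neurons until activity settles."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1          # break ties toward firing
    return state

# Store a tiny 25-pixel "image": a cross on a 5x5 grid.
cross = -np.ones((5, 5))
cross[2, :] = 1
cross[:, 2] = 1
pattern = cross.flatten()
W = train(pattern[None, :])

# Corrupt three pixels; the correct pixels "pop" the rest back into place.
noisy = pattern.copy()
noisy[[0, 6, 24]] *= -1
restored = recall(W, noisy)
print(np.array_equal(restored, pattern))   # prints True
```

Even with a single stored pattern the effect is clear: the flipped pixels are outvoted by the intact majority, and the network relaxes back to the learned image.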
If you have a knack for computer programming, you might be able to implement the Hopfield network on your own computer. You can download some Python code here and experiment for yourself! Artificial neural networks building on the Hopfield network have become quite sophisticated and have practical applications in machine learning and artificial intelligence. A spooky consequence of recent advances in this field is the discovery that machines can experience pareidolia, too! This is done by feeding a neural network an unfamiliar image; the activity of the network moves slightly toward a stored pattern which the image vaguely resembles. The network is then fed its own output as a new image, and on the second iteration it creeps even closer toward the learned patterns. This process of positive feedback is repeated many times until the output image features surreal imagery and vivid “hallucinations.” Clouds morph into pigs, buttons morph into eyes, trees turn into towers … the results are often unsettling and undeniably “trippy.”
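The feedback loop described above can be caricatured without any real trained network. In this toy sketch – every name, size, and number is illustrative, and it is emphatically not Google’s actual implementation – a fixed random layer stands in for learned feature detectors, and gradient ascent repeatedly feeds the output image back in as the next input:

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((16, 64))       # hypothetical "feature detectors"
image = rng.standard_normal(64) * 0.01  # faint, nearly featureless input

def activation(x):
    """Total response of the layer: a sum of ReLU feature detectors."""
    return np.maximum(W @ x, 0).sum()

before = activation(image)
for _ in range(100):
    # Gradient of the ReLU response with respect to the image.
    grad = W.T @ (W @ image > 0)
    # Positive feedback: nudge the image toward what the layer "sees",
    # then use that output as the next input.
    image = image + 0.1 * grad

print(activation(image) > before)       # prints True
```

Each pass amplifies whatever the detectors faintly responded to, which is exactly why the real systems turn clouds into pigs: the network’s own expectations are fed back and exaggerated until they dominate the image.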
In the title of the novel which inspired the film Blade Runner, science fiction writer Philip K. Dick asked, “Do Androids Dream of Electric Sheep?” To finally answer Dick’s haunting question, machines dream of illusory animals and electric landscapes beyond our wildest imagination. What we see in the dark, at the mercy of our own mechanistic perception, often terrifies us. Be glad then that you are not a neural network in Google’s research labs, doomed to forever repeat your own terrifying visions and hallucinate upon them further. But perhaps in these frightening computer visions, we as humans will find profound beauty. As the poet Rainer Maria Rilke once wrote,
“For beauty is nothing but the beginning of terror which we are barely able to endure, and it amazes us so, because it serenely disdains to destroy us.”
Hadjikhani, N., Kveraga, K., Naik, P., & Ahlfors, S. P. (2009). Early (M170) activation of face-specific cortex by face-like objects. NeuroReport, 20(4), 403–407. http://doi.org/10.1097/WNR.0b013e328325a8e1
Mordvintsev, A., Olah, C., & Tyka, M. (2015). Inceptionism: Going deeper into neural networks. Google Research Blog. bit.ly/1BkXP09
Solé, R., & Goodwin, B. (2008). Signs of Life: How Complexity Pervades Biology. Basic Books.