Curiously, most people don't actually want to know the future: only 1% say they would want to know absolutely everything. This may come from the mind intuitively saying, "Hey, that's my job. You can't do it for me." Predictive cognition illustrates the pass-through informational entropy of organic intelligence: there is no separation between information input, processing, storage, and expression. This pass-through entropy makes it impossible to establish an information hierarchy or processing map, and impossible to model the process with artificial intelligence, which makes it counterintuitive for information scientists. Not only is the input indistinguishable from the processing; the entropy of the information is itself indistinguishable, microscopically, from the chemical entropy of the ion channels in the nervous system. Only at the high level of decisions and conscious thought can the brain be described as a pure information system.

Like an ocean wave, a brain wave is macroscopically identifiable as a discrete bit of information but microscopically indistinguishable from the entropy of the ions that create its action potential. At some level, the success or failure of a thought depends on whether enough ions cross the neuronal membrane to keep the wave going. That, in turn, comes down to the distribution of ions around the membrane and, ultimately, to the uncertain position of an individual calcium ion. Even a perfect brain schematic could not tell you the final form of a thought, because it would be a map of a map, with all the uncertainty that entails. In other words, the formation of discrete bits of information remains shrouded in the infinitesimal uncertainty of the actual entropy of the physical brain.
Gary Lupyan and Andy Clark
Department of Psychology, University of Wisconsin–Madison, and School of Philosophy, Psychology and Language Sciences, University of Edinburgh
Abstract Can what we know change what we see? Does language affect cognition and perception? The last few years have seen increased attention to these seemingly disparate questions, but with little theoretical advance. We argue that substantial clarity can be gained by considering these questions through the lens of predictive processing, a framework in which mental representations—from the perceptual to the cognitive—reflect an interplay between downward-flowing predictions and upward-flowing sensory signals. This framework provides a parsimonious account of how (and when) what we know ought to change what we see and helps us understand how a putatively high-level trait such as language can impact putatively low-level processes such as perception. Within this framework, language begins to take on a surprisingly central role in cognition by providing a uniquely focused and flexible means of constructing predictions against which sensory signals can be evaluated. Predictive processing thus provides a plausible mechanism for many of the reported effects of language on perception, thought, and action, and new insights into how and when speakers of different languages construct the same “reality” in alternate ways.
Our findings suggest that information flow along this predictive hierarchy is crucially modulated by the engagement of attention, and is finely tuned by our prior expectations, or predictions, about future events. Successively more complex neural processes predict future events and thereby try to explain away the prediction errors generated at lower levels. Only information about failures of prediction flows up the hierarchy, resulting in the revision of expectations about the future. Ultimately, this research offers a broad, elegant, and neuroscientifically grounded explanation of how the brain deals with an ever-changing external world. It reasserts that cognition, and ultimately our consciousness, is fundamentally shaped not just by what is out there but also by our biases and expectations, which are themselves governed by our past experiences.
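The hierarchy described above can be caricatured in a few lines of code: a higher level holds an expectation about a hidden cause, a lower level turns that expectation into a prediction of the sensory input, and only the prediction error flows back up to revise the expectation. This is a minimal sketch; the linear generative model, weights, and learning rate are illustrative assumptions, not taken from the research itself:

```python
import numpy as np

# Toy two-level predictive hierarchy. All numbers are illustrative.
W = np.array([[1.0], [0.5]])      # generative weights: hidden cause -> 2 sensory channels
cause = np.array([0.0])           # top-level expectation, to be revised
sensation = np.array([2.0, 1.0])  # actual input (consistent with a cause of 2.0)

lr = 0.1
for _ in range(200):
    prediction = W @ cause              # top-down prediction of the input
    error = sensation - prediction      # only this failure of prediction flows up
    cause = cause + lr * (W.T @ error)  # expectation revised to explain the error away

print(np.round(cause, 3))  # settles near the true hidden cause, 2.0
```

Because the update is simply gradient descent on the prediction error, the top-level expectation settles on whichever hidden cause best "explains away" the input at the level below.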
By Anil Ananthaswamy
What’s your brain doing when you process information? Could it be producing a “controlled online hallucination”?
Welcome to one of the more provocative-sounding explanations of how the brain works, outlined in a set of 26 original papers, the second part of a unique online compendium updating us on current thinking in neuroscience and the philosophy of mind.
In 2015, the MIND Group, founded by philosopher Thomas Metzinger of the Johannes Gutenberg University of Mainz, Germany, set up the Open MIND project to publish papers by leading researchers. Unusually, the papers were published in an open-access electronic format, as an experiment in creating a cutting-edge online resource, and free of charge. The first volume, spanning everything from the nature of consciousness to lucid dreaming, was a qualified success.
The second volume, Philosophy and Predictive Processing, focuses entirely on the influential theory in its title, which argues that our brains are constantly making predictions about what’s out there (a flower, a tiger, a person) and these predictions are what we perceive.
To make more accurate predictions, our brains modify their internal models of the world or force our bodies to move, so that the external environment comes in line with predictions. This idea unifies perception, action and cognition into a single framework.
Some of the titles of the papers are playful, and maybe a tad over-the-top: “How to entrain your evil demon”, “How to knit your own Markov blanket” or “Of Bayes and Bullets”. But despite the titles, the content is serious and heavy-going: it’s written by some well-known proponents of predictive processing, including Andy Clark, based at the University of Edinburgh, UK, and Jakob Hohwy, at Monash University, Australia.
Perception isn’t passive
Lay readers will do well to start slowly, with the introduction to the field by Metzinger and Wanja Wiese, also based at Johannes Gutenberg, before dipping their toes into the deeper waters. Most of us will not get beyond sampling the introductory paragraphs in each paper, but even doing so can provide a flavour of the ideas they contain.
One of the keys to predictive processing is that it sets out to challenge our intuitive feeling that our brains passively receive information (via our senses) and create perceptions of what is actually out there – the so-called bottom-up approach.
Instead, predictive processing argues that perception, action and cognition are the outcome of computations in the brain involving both bottom-up and top-down processing – in which prior knowledge about the world and our own cognitive and emotional state influence perception.
As Metzinger and Wiese point out in their introduction, the idea of top-down processing is not new, but “dominant theories of perception have for a long time marginalized” its role. The novel contribution of predictive processing, they write, is that it emphasises the importance of top-down processing and prior knowledge as a feature of perception, one which is present all the time – not only when sensory input is noisy or ambiguous.
In a nutshell, the brain builds models of the environment and the body, which it uses to make hypotheses about the source of sensations. The hypothesis that is deemed most likely becomes a perception of external reality. Of course, the prediction could be accurate or awry, and it is the brain’s job to correct for any errors – after making a mistake it can modify its models to account better for similar situations in the future.
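That hypothesis-selection step is, at heart, Bayes' rule: prior expectations are combined with the sensory evidence, and the most probable hypothesis becomes the percept. A minimal sketch, assuming a hypothetical two-hypothesis world with made-up probabilities:

```python
# Hypothetical two-hypothesis world; all probabilities are illustrative.
priors = {"flower": 0.7, "tiger": 0.3}      # prior expectations
likelihood = {"flower": 0.2, "tiger": 0.8}  # P(sensation | hypothesis)

# Combine prior knowledge with sensory evidence (Bayes' rule).
unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

percept = max(posterior, key=posterior.get)  # the most likely hypothesis becomes the percept
print(percept, round(posterior[percept], 2))  # tiger 0.63

# After the fact, the revised belief can serve as the next prior,
# so similar situations are accounted for better in the future.
priors = posterior
```

Note how a weak prior ("tiger" starts at 0.3) is overturned by strong evidence, and how carrying the posterior forward as the next prior is the error-correction step the passage describes.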
But some models cannot be changed willy-nilly, for example, those of our internal organs. Our body needs to remain in a narrow temperature range around 37°C, so predictive processing achieves such control by predicting that, say, the sensations on our skin should be in line with normal body temperature. When the sensations deviate, the brain doesn’t change its internal model, but rather forces us to move towards warmth or cold, so that the predictions fall in line with the required physiological state.
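This "change the world, not the model" strategy can be sketched as a toy thermoregulation loop. The one-dimensional body, the gain, and the `act` helper are all illustrative assumptions; the point is only that the fixed prediction is cancelled by action rather than by model revision:

```python
SET_POINT = 37.0   # the fixed prediction: sensations consistent with 37 °C
body_temp = 35.0   # current sensed state

def act(temp, error, gain=0.5):
    """Move toward warmth or cold in proportion to the error (illustrative gain)."""
    return temp + gain * error

for _ in range(20):
    error = SET_POINT - body_temp      # prediction error the model refuses to absorb
    body_temp = act(body_temp, error)  # so action changes the sensation instead

print(round(body_temp, 3))  # sensations brought in line with the prediction: 37.0
```

The model never updates; the loop drives the sensed state toward the prediction, which is the required physiological set point.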
If you do get through enough of the papers to understand the computational principles behind predictive processing then, as Metzinger and Wiese put it, you get closer to understanding why “it is only a small step towards describing processing in the brain as a controlled online hallucination”.
Everything we perceive, including ourselves, is a simulacrum of reality. The takeaway here is this wild thought: we are always hallucinating.