Is anything real? And if not, whose dream are we in? Philosopher Daniel Dennett on the mystery of consciousness

Can virtual reality involve more than one dreamer? Keanu Reeves in the philosophically meretricious film The Matrix (1999) Credit: Reuters

In his new book From Bacteria to Bach and Back, atheist philosopher Daniel Dennett wonders how human consciousness arose and what it's for. Jane O'Grady examines his case

Water is H2O, heat is the movement of molecules, and in time, claim some philosophers and scientists, we will similarly be able to say that consciousness is xyz. The trouble is, we are sensorily conscious of water and heat, conceptually conscious of gases and molecules – but what sort of stuff would we be perceiving or conceiving in the case of consciousness?

In a sense, we already know the unconscious physical constituents of consciousness: neurons. What we don’t know, and conceptually balk at, is how their connecting and sparking get converted into a sensation of heat, a sense of indignation, a decision to leave the party, a visualisation of a donkey or the adding up of a bill – a transfiguration as magical as the djinn materialising from Aladdin’s lamp.

According to the American philosopher Daniel Dennett, the problem of neural-mental conversion does not arise. Or, rather, the unbridgeable “explanatory gap” is only a self-inflicted “Cartesian wound” which, almost 400 years later, Dennett can heal. Descartes doubted that anything existed, then deduced that something must be doing the doubting, and must therefore exist (himself). But in setting up the indubitable conscious mind, he severed it from the “external”, non-conscious physical world, and, says Dennett, skewed inquiry from the outset.

Philosopher Daniel Dennett Credit: Rex Features

So you know you’re not a zombie? he satirically demands. “No, you don’t. The only support for that conviction is the vehemence of the conviction itself.” Far from being “‘privileged’, our access to our own thinking is really no better than our access to our digestive processes”.

Mental events are “user-illusions”, like the icons on computer screens, of all the hardware operating in our brains. “We don’t see, or hear, or feel the complicated neural machinery churning away”, but have to settle for its “metaphorical by-product”.

Just as we learn about other people from seeing, hearing or reading what they do and say, so we learn about ourselves through observing our own actions and utterances. “Something it is like to be me” is, if at all, only elicited “because we tell other people and ourselves what it’s like to be us”.

As for “qualia” (those purported raw, inside feelings of what we perceive externally), they would, if they existed, only “double the cognitive work to be done”. They would require to be perceived in “a Cartesian theatre” in our brains, and perceived by what, asks Dennett. For him, “the true puzzle is ‘what is consciousness for (if anything)?’ ” Over aeons, the human has evolved from being a set of “merely competent parts” into being “a comprehender of sorts”. But consciousness is unnecessary for comprehension. Look at computers.

Is consciousness necessary for comprehension? A humanoid robot at this year's CeBit fair in Hanover Credit: Bloomberg

Dennett invokes Wilfrid Sellars’s distinction between the “scientific image” (what everything is really like) and the “manifest image” (the colours, sounds, sunsets, ash trees, money, marriage, free will and music that our perceptions and conventions construct). Eventually, our “manifest-image” identifications of mental states will be aligned with the “scientific-image” neural structures and events that actually generate them.

But who or what are (or is) “we”? “The end-user of an operating system,” answers Dennett. So at least one consciousness exists, then? And surely it must, for he says that “we” are better at predicting the behaviour of humans (and even of some non-human animals and machines) when we treat them as if they had meaningful reasons for their behaviour. It is part of his vaunted “depsychologising”, however, to make this “intentional stance” seem unjustified, merely pragmatic. Yet isn’t whoever adopts it necessarily as sentient and rational as she is confabulating others to be – and, on this picture, a solipsist?

Also, mustn’t there be many system operators in the “manifest image” that “we take ourselves to operate in”? The film The Matrix uses a meretricious premise, disingenuously ignoring that, like a dream, virtual reality can accommodate only one viewer; interactions with other “humans” would all be “in the head”; murders would have no corpses; all pregnancies would be phantom.

For Dennett, too, there would need to be a public space, or a communally conveyable medium, in which, as with computer games, the “user-illusions” can be visually accessed by external physical eyes. Of course, he says there is, that we inhabit both the scientific and the manifest images, but how does this work? How can the manifest image (which sounds suspiciously qualia-like) be communally manifest to us image viewers, even if we do have one foot in the scientific image?

A tattoo is an example of a 'meme' - an element that passes between humans by imitation Credit: Rex Features

Dennett claims that “the conditions for… a user interface” have evolved, thanks to “memes”. A meme (a concept coined by Richard Dawkins) is a cultural element (tattoos, songs, catchphrases) that passes from one individual to another either by contagion (imitation) or, in a way analogous to genes, by replicating and transmitting itself from one generation to another. Dennett takes memes to be words, which at first were uttered by humans without meaning anything (“competence without comprehension”), but which gradually, since a word “wants to get itself said”, invaded human brains just as viruses (often beneficially) invade human bodies, and reproduce themselves both within and across generations.

Yet genes are composed of DNA, which is an organic part of the human body and does not need to invade it, and viruses are organic entities. What is the chemical constitution of words? We know of “nothing like this (yet) in the physical representation of memes”, says Dennett. Which seems to assume that one day we will.

A word in itself is meaningless, though, which is why Dennett (like Dawkins) often makes memes ideas, rather than words. But ideas won’t work either. What sort of discrete, circumscribed units are they? Dennett scathingly reminds us that ideas don’t need to be experientially “had” by an individual; they are public and impersonal. Which is true when “idea” refers to an objective, conceptual “aboutness” (“Finches from different Galapagos islands have differently shaped beaks”) but untrue when it refers to a one-off, subjective mental event (Darwin’s thinking this thought).

Dennett makes the very Cartesian error of conflating “idea” as container and temporal occurrence with “idea” as the container’s potentially timeless, publicly available content. Or perhaps he trades on the ambiguity to make feasible the notion that “neuronal spike chains” constitute information sequences. But who, in this world of smoke and mirrors, does this information inform? Anyway, isn’t this whole notion of memes a meme itself?

Dennett is one of “the Four Horsemen of Atheism”, but his galloping is superfluous. Philosophy killed God ages ago. His savage attack on the self, however, seems to catch us in a dizzying, Escher-like, Möbius curve. We are, apparently, “guests in our own brains”, and he wants to evict us.

From Bacteria to Bach and Back by Daniel Dennett
445pp, Allen Lane, £25, ebook £12.99. To order a copy from the Telegraph for £20, visit books.telegraph.co.uk