Last month the philosopher J.J.C. Smart died, at age 92. A wonderful man, generous and modest, Smart was one of the most important figures in the development of “materialist” theories of the mind in the 20th century. He argued that the mind is identical to the brain; mental states (like sensations) are states of the brain and no more.
Smart accepted that things don’t seem that way. He saw this as a conclusion we have been led to by science. The identity of mind and brain is like the identity of heat and molecular motion, of lightning and electrical discharges.
Smart argued that the identity of mind and brain is a contingent one; it could have been otherwise. In the 1970s Saul Kripke argued that this can’t be right. If the mind and brain are really identical – the same thing – then this relationship is not contingent but necessary. But, Kripke went on, it is easy to see that there is no necessary relation between a particular pain (say) and any state of the brain. There could be one without the other – we can easily imagine this. So whatever their relationship is, it’s not identity.
Is it really possible to shoot down a claim like Smart’s identity theory with these mere imaginings? In the years since Kripke’s work, many philosophers have become more and more confident that subtle facts about contingency and necessity of this kind are real, can be known, and can carry a great deal of weight in arguments about how things are. This is usually done by considering huge arrays of “possible worlds.”
(When the 19th century mathematician Georg Cantor introduced a strange new universe of infinitely large sets, with some infinitely large sets larger than others, some people thought he had come up with a bizarre fiction. David Hilbert replied: “No one will drive us from the paradise which Cantor created for us.” This is how it has been in philosophy with possibility and necessity in the years since Kripke.)
However these issues are handled, something a materialist philosopher has to do is explain why it seems that the mind is so different from the activity of the brain. If we had a good account of how this works – an account of the psychology of the seeming – I think a lot of complicated arguments that try to show that the mind is something separate and extra, based on the “conceivability” of one without the other, would dissolve.
A paper that pushes in this direction was recently published by Brian Fiala, Adam Arico, and Shaun Nichols. (Google Books has the paper here; the ms is here.) Fiala and co. draw on “dual process” theories of human cognition. These theories hold that we engage in two different kinds of thought, using distinct systems in our brains. One kind (system 1) is fast, automatic, parallel, and unconscious. System 2 thinking is slower, conscious, does one thing at a time, is more rule-governed, and may be dependent on our internal use of language.
Fiala and co. think we have two very different ways of attributing conscious mental states to anything we might come across. One, the “low road” as they call it, relies on quick intuitive responses to how the system behaves (a system 1 pathway). Earlier research, some from back in the 1940s, has looked at which features of an object’s movement tend to make us interpret that object as an agent, rather than a mere object. These include apparently goal-directed motion and having things that look like eyes. Once something behaves like an agent, people tend to attribute mental states to it, including experiences.
An observer can also wonder about whether something is conscious in a way that uses “system 2” psychological processes – reasoning, and bringing explicit background knowledge to bear. When we look at, or think about, another whole human being, and wonder whether it is conscious, both ways of approaching the question give the same answer. When we look at a brain, though, or imagine a huge collection of neurons interacting, and ask whether it is conscious, the low-road, system-1 machinery in us gives a “no” answer even if we have beliefs that induce the system-2 part of us to say “yes.” This gives rise to the feeling of a deep “explanatory gap” between the mental and the physical.
In the Fiala-Arico-Nichols paper, the relationship between two different third-person perspectives is used to deal with the problem. I think this view should be combined with another piece of psychological diagnosis, one based on the relation between third-person and first-person perspectives. The main idea was sketched years ago by Thomas Nagel (who is an opponent of materialism) in a footnote to a famous paper, and developed further by Chris Hill.
When we wonder about the relation between mental and physical, and feel an apparent separation between them, we are combining two different kinds of imaginative act. We first imagine a brain or other physical system using a perceptual mode of imagination – we imagine looking at it. Then, when we consider some mental state such as an experience, we imagine that using what Nagel called our sympathetic imagination – we imagine being in that mental state. When we “splice together” these two acts of imagining, the two things seem clearly separable – it seems they could not be identical. But this is because of the special features of our vantage point, when we imagine mental states (which we imagine being in) in relation to physical set-ups (which we imagine seeing).
Put these two angles on the psychology of the mind/body problem together, and we have gone a fair way toward improving the situation. None of this shows that materialism is true, but it shows that some famous apparent obstacles to its being true are not obstacles at all. One way to put it: if materialism was true it would still seem false to us.
This is not really a post about cephalopods, but octopuses are – yet again – something of a special case here. Above I put some photos of a friendly and inquisitive octopus, reaching out and exploring me with a couple of arms, while watching closely and staying within reach of its den. When any animal does this, it is almost inevitable that we treat it as having an inner life – perhaps a very simple one, but some internal locus of this combination of curiosity and caution.
That animal is a Gloomy Octopus (Octopus tetricus). Among octopuses they are a bit distinctive, I understand, for being muscular and firm and having a reasonably definite shape, at least much of the time. Some other species are even more amorphous – in some cases almost liquid, both in appearance and to the touch. Here are a couple of pictures of an octopus seen at night off Belize, in the Caribbean, last year.
This is Octopus briareus. I’ve not touched any of these, but when they move they are much more fluid, and more otherworldly, than the Australian species. They are disconcerting to watch, with a less straightforward relationship to our habits of perception of moving animals.
As I put together this post I kept being tempted to excessively sharpen these images, to impose more shape on this semi-liquid creature.
____________
1. The octopus in the first photos above eventually decided that my camera might make good den-building material, or perhaps lunch.
2. An obituary of Jack Smart is here, and here is one of his best works. A colleague and co-developer of the mind-brain identity theory was U.T. Place. I am reviewing a new book by Nagel at the moment [and the review is now here – email me if you want a copy]. A criticism of the “footnote 11” approach to defending materialism is here.
3. Music for cephalopods:
From here.
Thanks for the great post (and blog). I look forward to reading the Nagel review—where will it be located?
Review will be in the London Review of Books. Nagel’s book is very interesting. He’s got far too much respect for the ‘intelligent design’ movement, and that is a weak point in the book. But on the mind-body problem and how that problem “seeps out” and affects other issues, much more thought-provoking.
(Review: http://www.lrb.co.uk/v35/n02/peter-godfrey-smith/not-sufficiently-reassuring)
Hi, Brian Fiala here—thanks for the mention! I’m in agreement with the vast majority of what you say. In particular, I think you’re quite right to focus on the importance of psychologically explaining certain seemings (or intuitions) that feed into dualist arguments. Dennett was always big on this way of talking about the mind-body problem. Other philosophers (such as Papineau (2011)) have adopted this sort of rhetoric in recent years, and it seems to me correct not only as a matter of rhetoric but also when taken as a substantive point.
I’d also agree that a complete explanation of the relevant seemings ought to include a psychological account detailing the contribution of first-person concepts or processes. While we make a big deal out of the “two third-person perspectives” in the Fiala et al (2011) paper, it is difficult to deny that the account would be strengthened by a complement re: introspection or other first-personal processes. However, I do think it is interesting/provocative that one can get quite a lot of explanatory mileage out of wholly third-person concepts here—and worth seeing how far that line can be pushed without help from the first-person perspective.
Like you, I’m also quite sympathetic to the Hill/Nagel account. What I’d really like to see is a development of that sort of account that connects more directly with empirical work on introspection and reasoning about minds in the first-person mode. Tony Jack has been doing some great work along these lines over the past several years, which he integrated into a promising overall picture at his Tucson 2012 keynote talk.
One thing I’d caution against, however, is the danger of assuming that merely by explaining the relevant seemings in psychological terms, one has automatically ‘dissolved’ conceivability arguments and the like. Besides explaining intuitions of contingency, one must further provide some argument that relevant intuitions are debunked—i.e. that the relevant intuitions are misleading, untrustworthy, or otherwise epistemically ‘bunk’. The way you put the basic idea seems right: “if materialism was true it would still seem false.” But one wonders whether the epistemology here mightn’t be more complicated—thus the debunking step of the argument seems to me to be a nontrivial one. At any rate, we probably shouldn’t assume that the seemings will be automatically debunked simply in virtue of being psychologically-explained (cf. everyday visual perception, which is psychologically explicable and error-prone in certain scenarios, yet eminently trustworthy in others). But, one can hope!
Regarding the last point, about how psychological explanations of beliefs need not be debunking of those beliefs, I think here the crucial fact is that “if materialism was true it would still seem false.” If someone gives an explanation of why it seems right now that the lights in this room are on, for example, this will be an explanation that implies that if the lights were off, they would not seem to be on. It will be an explanation that connects the facts about how the lights are with the facts about how things seem, via the way our eyes work. In the mind-body case, what we are getting is the beginning of an explanation of why an appearance of separation between the two will arise whether or not there is a separation. The seemings do not track the facts.