[Elster, Jon (1984), Managing to deceive ourselves (Review of D. Pears (1984), Motivated Irrationality), The Times Literary Supplement November 30, p. 1388]


Managing to deceive ourselves

[start of page 1388]

Jon Elster

Motivated Irrationality

285pp. Oxford: Clarendon Press. £14.95
0 19 824662 5

In The Terrible Secret Walter Laqueur set out to investigate why the eye-witness reports of Hitler's genocide of the Jews failed to have the impact one would expect them to have. Essentially, his argument is that the recipients of this news thought or acted irrationally. By late 1942, he writes, "while many Germans thought that the Jews were no longer alive, they did not necessarily believe that they were dead". Similar paradoxes arise when we try to understand the failure of the Jews in Nazi-occupied Europe to flee their country or otherwise resist what lay in store for them. Laqueur quotes from the diary of the Polish resistance leader Emanuel Ringelblum: "it will remain completely incomprehensible why Jews from villages around Hrubieszow were evacuated under a guard of Jewish policemen. Not one of them escaped, although all of them knew where and towards what they were going."

To explain irrational belief formation and irrational action - or failure to act - we may invoke the notions of self-deception and weakness of the will. This, however, would really be a case of obscurum per obscurius. When described in a certain way, these phenomena appear so paradoxical that doubts have been raised as to their very possibility. Before they can be used to explain other events, they must themselves be much more thoroughly understood. David Pears's Motivated Irrationality is a major step towards this goal. By careful conceptual analysis he argues for the view that, properly described, there is nothing impossibly paradoxical about these phenomena. People can and do entertain mutually exclusive beliefs, and can even have a motive for doing so; they can and do act against their own better judgement.

To convey the flavour of the book, one may cite some of the names that appear most frequently in its pages. Pears draws heavily on Freud, Aristotle and Donald Davidson: on Freud for setting up the problems and for some hints towards the correct solutions; on Aristotle for providing us with the right language in which to describe mental phenomena and their link to action; and on Davidson for his pioneering studies on weakness of will and "paradoxes of irrationality". On the other hand, Pears is more critical of the analyses of Sartre, Elizabeth Anscombe and R. M. Hare. Sartre's famous criticism of Freud's theory of the unconscious is shown, I believe conclusively, to be invalid. Anscombe is taken to task for the notion that actions are susceptible to being true and false, so that an agent acting against his own better judgment actually contradicts himself by so acting. Pears objects, finally, to Hare's arguments against the possibility of what Pears calls "last-ditch conscious akrasia" (of which more later). His comments on the substantive views of Anscombe and Hare are, unfortunately, embedded in extensive discussions of their interpretations of Aristotle. I fear that this will have the predictable effect of making the book less attractive to psychologists and psychiatrists, who like their arguments straight, with no chaser. Another feature of the book will contribute to the same effect. Although almost never obscure, it is quite dense and difficult to follow. Arguments that on a first or second reading appear elliptic can usually be unpacked, but with some effort. Although I believe that those who make the effort will find it worth their while, many readers will be deterred.

This is all the more to be deplored, since Pears is addressing issues of central importance to empirical psychology. He has, moreover, a more than casual acquaintance with cognitive psychology which helps him identify the province of the mind in which his subject-matter is located. By and large, he is also able to resist the philosopher's fallacy of armchair theorizing. He knows well enough that "when philosophers set up their examples, it is only too easy for them to project their own assumptions onto the characters that they create".

The book divides into two parts, of roughly equal length. The first deals with various forms of irrational belief formation. Here the emphasis is on motivated irrationality, ie, self-deception and related "hot" phenomena, but there is also much useful discussion of the "cold" perversions of reason that are at the centre of much recent cognitive psychology. The second half considers akrasia, or acting against one's better judgment. The two topics are interrelated, since self-deception about what is one's better judgment can facilitate akrasia. Yet the focus is on the cases of akrasia that cannot be explained by cognitive deficiencies. In my opinion, the chapters on irrational belief are more exciting, more controversial and more obscure than the treatment of akrasia. Other readers, whose interest is more squarely in conceptual analysis, may judge differently.

There is self-deception when a person thinks a certain belief is (inductively or logically) unfounded, yet holds that belief and does so for a motive. If only the first two conditions are satisfied, we have the well-known, non-paradoxical phenomenon that people may entertain incompatible beliefs simply because they belong to different realms of their life. If only the second and third conditions obtain, we have wishful thinking, a phenomenon whose existence Pears is committed to denying. This implication may count against his solution of the paradox of irrationality, as I shall argue. First, however, the bare bones of that solution must be indicated.

The paradox of irrationality is the following: how can a person adopt a belief in the teeth of the evidence, even if he has a motive for doing so? We may deceive other people, by hiding from them the evidence that would lead them in the correct direction, but we cannot similarly fool ourselves, or so it would appear. Yet everyday and clinical experience seems massively to demonstrate that self-deception is possible. How do we go about it? Pears, following Davidson, suggests that the wish to believe in the unsupported belief recedes from the main mental system and sets up a small subsystem which also includes the recognition that the belief is not supported by evidence. This subsystem behaves in a quasi-altruistic fashion towards the main system; it tries, as it were, to foist the belief on the main system, given its knowledge that the main system wants to believe it. Or, more precisely, the subsystem refrains from preventing the formation of the irrational belief in the main system, since it knows that the latter wants to believe it and wants its wishes to be satisfied.

The details of this solution are intriguing and intricate, and will no doubt be the subject of a great many comments in philosophical journals. Here I only want to consider two broader issues: the relation between self-deception and wishful thinking, and the nature of the subsystem in question.

Wishful thinking, if there is such a thing, would differ from self-deception in the lack of any mental division. When we deceive ourselves, we somehow, somewhere, remain in possession of the justified belief and yet, somehow, elsewhere, adopt a belief contrary to it. Wishful thinking (in my stipulated sense of the term) would involve going directly for the preferred belief, without pausing to see whether the evidence on the whole justifies it. Hence it might happen, accidentally, that the belief formed by wishful thinking is the very same belief that one would have formed by impartial consideration of the evidence, had that operation not been preempted. On Pears's theory this could not happen, since the wish to believe is permissive rather than productive. Its operation is a failure to intervene rationally, not an irrational intervention. His theory has no place for superfluous irrationality, of the kind that would occur if the wishful thinking just happened to produce the rational belief.

I wonder, however, whether it does justice to the phenomena. From my armchair, at least, it appears to me that we often form the belief before we consider the evidence for it. The parts of the evidence that support the belief spontaneously acquire particular salience or force, without any other part of the mind simultaneously evaluating them at their proper weight. Perhaps we know "deep down" that this way of forming beliefs is not rational, but this is not to say that we know the belief to be irrational - which, in fact, it need not be. A special, important case is the process of belief-formation that operates by making correct inferences from the evidence, but stopping the collection of evidence at the first point where the net balance of information favours the view one wishes to be true. One may have no grounds in this case for believing the view to be irrational, and it might well be quite unobjectionable; yet I submit that this is a case of motivated irrationality.

A more far-reaching question is that of the nature of the subsystem which is involved in self-deception. Pears stresses that this system must have its own internal rationality: it is an efficient, quasi-altruistic manipulator of the main system. Yet for his argument to go through it must also be endowed with a variety of features that would almost turn it into a homunculus - a consequence that, in my view, is strongly undesirable. The subsystem must have all sorts of attitudes - beliefs and desires - concerning the main system. It must, in other words, be capable of having representations of the main system. Moreover, it must be able to weigh and choose between alternative ways of satisfying the wishes of the main system. To my mind, these requirements almost inexorably imply that the subsystem must have some kind of consciousness. Now, Pears does not deny that the subsystem may be part of consciousness. On this point he explicitly departs from Freud. He also argues that sometimes the subsystem may be part of the Freudian preconscious, ie, not included with the main system in one self-monitoring system. In my opinion, both of these possibilities are quite unattractive. Since Pears believes that in the really difficult cases of self-deception we must locate the subsystem in the preconscious, I shall focus on this case.

Our notion of consciousness has, I believe, two main features. It includes both the capacity for having representations of absent objects, and a peculiar kind of self-transparency. By contrast, the Freudian unconscious, on one plausible reading, involves neither feature. It is essentially a mechanism for climbing pleasure-gradients, with no capacity for representing temporally or spatially distant objects. Also, its operation can be mechanical and unnoticed. A non-Freudian example would be the unconscious adaptation of what one wants to what one can possibly get. Pears suggests that there is room and need for a mental operation that has the first, but not the second of the defining features of consciousness: the capacity for representing and even choosing between abstract options, but not for monitoring its own operations. Or, if such a mental operation has some kind of internal consciousness, it would be "buried alive", and hence constitute an almost vacuous hypothesis.

To persuade us of the reality and power of the preconscious, Pears offers the following example: "a girl who persuaded herself that her lover was not unfaithful might avoid a particular café because she believed that she might find him there with her rival, and yet she might not be conscious of this belief". But he offers no evidence - beyond armchair theorizing - for believing in the existence of this phenomenon; or for thinking that cases that apparently conform to this description cannot be otherwise explained. Moreover, I think the concept of such a well-endowed preconscious is inherently implausible. It would be doubly detached from anything tangible: from its objects, since it relates to them only in the mode of representation; and from its subject, since it would not be within the scope of consciousness. True, this is more an expression of conceptual uneasiness than an actual argument against Pears's proposal. I simply do not believe that we can get very far in this inherently elusive domain by inferring unobservable mental entities from phenomena whose very description tends to involve a great deal of implicit theory.

The treatment of akrasia breaks less new ground, although the ground broken is covered more thoroughly. The central question is whether it is possible to act consciously against one's own better judgment, while remaining fully aware of the relevant features of the situation, fully committed to one's value-judgment and free of any compulsive urges. Here Pears parts company with a distinguished line of philosophers - from Socrates to Davidson and Hare - who have denied the very possibility of this phenomenon. In arguing for the view that it is not only possible, but not at all uncommon, he relies on a distinction between weak and strong valuations of action. Weak valuation is expressed in "mere preference", strong valuation in judgments about the long-term interest of the agent or the interest of people other than the agent. Pears claims that Davidson's argument against the possibility of last-ditch akrasia works only because he limits himself to weak valuation. Hare, on the other hand, explicitly considers strong valuation, but his argument fails.

According to Hare, an agent who is fully aware and fully committed, yet acts against his own better judgment, does so because he is unable to do otherwise. The force of strong valuation is such as "to overcome all internal obstacles except sheer psychological impossibility". This, of course, does not presuppose universal determinism, which, as Pears notes, "is a theory that produces overkill in this area". Rather the moral weakness is due to a specific kind of failing, an inability to resist temptation. It is worth quoting in full Pears's objections to this view, since they also serve as the pivotal arguments for his own theory:
First, weakness is not the only cause of such lapses . . . . It is easy to be misled by the assumption that weakness is the only cause, and to infer that an agent who is too weak to resist a temptation is psychologically unable to resist it, just as a Japanese wrestler, who is not strong enough to push his opponent out of the ring, is physically unable to push him out.

Second, although some addicts in some circumstances are literally unable to resist temptation, it does not follow that this is the explanation of all, or even of typical, apparent cases of conscious last-ditch akrasia. In fact, the claim is self-evidently implausible once its extreme character is clearly understood.

Third, even if we always had to believe the agent's excuse, "I could not resist temptation", there would be no need to suppose that it always means, "It was literally impossible for me to resist it". There is a common use of "I could not" in which it only means "Because it was difficult, I did not succeed", just as "I could" often means "I did succeed in spite of the difficulty".
This, to me, is distinctly unsatisfactory. The first argument relies on analogy, the second is mere assertion and the third a linguistic sleight-of-hand. I put more faith in Davidson's proposal that we must look at the way in which desires cause actions. In cases of akrasia it is the causal wiring between the desires and the action which is at fault: the desire which according to the agent's judgment is the weaker wins out because it somehow blocks the other desires from operating. To that extent it causes behaviour, not qua reason for action, but qua sheer psychic turbulence. At the moment of action, this is not within the control of the agent - although there are a number of times prior to the action at which he might have caused it to be in accordance with his better judgment. True, Davidson's account is not without difficulties of its own, notably his notorious and self-confessed inability to clarify what it means for a set of beliefs and desires to cause an action "in the right way". Yet I believe that his is surely the correct language for dissecting the problem, more useful than appeals to the way in which we use phrases like "I could" or "I could not". In particular, Davidson's approach holds out more promise for a collaboration between philosophy and psychology. In stark contrast to his treatment of self-deception, Pears's discussion of weakness of will does not at all consider the important work that psychologists and psychiatrists have done in this area.

It is inevitable that a book of this scope and ambition will be controversial, and give rise to objections of the kind put forward here. What ought not to be controversial, however, is that David Pears has given us an outstandingly lucid and intelligent account of matters of the highest importance. It is the first comprehensive and unified treatment of the paradoxes of irrational thought and irrational action. As I tried to indicate in my opening paragraph, these are not puzzles invented by philosophers, but problems of deep human significance. Even though we may never be able to get rid of our irrational propensities, knowledge of how they operate may at the very least enable us to take some rational precautions.

[end of page 1388]