IN PRAISE OF STUPIDITY

or

How to Stay Reasonably Enlightened

© Ronald de Sousa
University of Toronto

(This is a descendant of a talk originally delivered as a Keynote Address to the Ontario Philosophical Society, October 1991)
 

A profession of Stupidity

Philosophers make a profession of stupidity. A story is told that at a large psychology conference, a "lecturer", actually an actor schooled in mellifluous delivery, uttered a "lecture" that was carefully designed nonsense. As they left, the psychologists were polled: the vast majority thought the lecture very clear, and claimed to have understood it perfectly. Now that's one difference between psychologists and philosophers: if the same experiment were tried on philosophers (unless, of course, they were postmodernist philosophers), they'd just say: "Complete rubbish. Didn't understand a word." Other people who want to show they are smart pretend they understand everything; philosophers who want to show they are smart (unless they are postmodernists) pretend to understand nothing. This attitude, excesses aside, is the pride of our profession. Yet I don't think we've actually carried it far enough.

In fact, philosophy has put over almost as many unbelievable stories on us as religion. In too many ways, what we are taught has been rubbish, and the training that we have been given has been intended to get us to the point of not seeing that it was rubbish. This used to be true more than it is now, of course, or so I like to think. Ever since the Age of Enlightenment, it has become relatively easier for us to be stupid in the right way -- to refuse to accept clever arguments for what Russell called "intellectual rubbish". Yet there are many voices now that cry that the Enlightenment is dead, and that God is being resuscitated all over again. Enlightenment philosophers define, at least for the purposes of most Philosophy departments' curricula, what is to count as Modern. So what I want to do, in at least part of this essay, is to ask how much of the Enlightenment we have to give up; to ask what it is to be post-modern, not in the trendy sense (or senses, or lack of sense) so prevalent in English departments suffering from Philosophy envy, but in the straightforward literal sense of what comes after modernism as it was defined by the Enlightenment and its direct inheritors.
 

Irrationality as Evolutionary Leap

Man, we used to say, is a rational animal. Actually humankind seems to have made a leap into irrationality, just as it achieved bipedalism -- that is, just as we emerged from our condition as apes. Suddenly the rituals of apes, which can generally be given a plausible function, were replaced by others that are wholly irrational. Becoming human was, we like to think, a big leap forward for intelligence: but the price we had to pay for this is that it was also a great leap forward for irrationality. This is precisely, perhaps, where culture parts company with the genes. Many of the most distasteful human characteristics underlying the most diverse social arrangements are plausibly viewed as having been favoured by natural selection. But the great mad products of culture -- the specific doctrines of the grand religions, as opposed to everyday superstitions -- can hardly themselves have had much feedback influence on the genes. So it's unpromising to ask whether they might have had much detectable effect on adaptation. They seem to reflect the unconstrained gift for theoretical speculation that comes with human intelligence. But their manifestations cry out for a little more stupidity.

What then is stupidity? I want to define it, for the purposes of this essay, as the refusal to believe something on the basis of a clever argument one doesn't quite understand. By contrast, madness, or irrationality, is the insistence on believing and acting in the face of good arguments for not doing so.

Actually, stupidity and irrationality are both ambivalent concepts. Most philosophers, as non-philosophers will be quick to agree, were mad, and very honorably so. Nearly every great philosopher has had truly mad ideas, not at the periphery but at the core. Newton had mad ideas too, but his are considered to be at the periphery: we can ignore them easily enough. But Plato's mad ideas, Kant's mad ideas, Hegel's, Berkeley's ... they are all at the core, and we declare their authors to be great philosophers precisely because they have had the courage to follow the argument wherever it led, indifferent to the mad implausibility of their conclusions.

The ambivalence of irrationality begins early: it begins with the very notion of a rational animal. What that phrase means, if we are to give it a genuine sense, is not that humans are rational in an evaluative sense -- not rational, that is, in a sense that contrasts with irrational -- but rational in a categorial sense: that is, able to be both rational and irrational (in the "evaluative" sense). If we wanted to demonstrate that an animal was irrational, we would be reduced to attributing to it a set of beliefs and desires, on the basis of evidence which would have to include the "irrational" behaviour in question: but by hypothesis the irrational behaviour doesn't fit, and so any proper attribution must be one on which the animal counts as rational. A point rather like this, but with a different moral, has been made both by Quine in relation to the attribution of logical contradictions, and by Davidson in his adaptation and strengthening of the "principle of charity" into the claim that the basic belief set one attributes to anyone must be rational on pain of being a non-starter as an attribution. Now my point is that humans actually are rational in that categorial sense, and that it is possible to attribute irrationality to them. So human reason is, in a very strong, indeed analytic sense, born of irrationality.

The Janus character of rationality has a long history, which improbably links Plato to B.F. Skinner. Skinner demonstrated that pigeons too could be superstitious. What he meant by this is that some chance associations that are actually statistically non-significant can produce expectations in pigeons which we would retrospectively call "unreasonable" if the pigeons were human. This phenomenon fits a model offered in Plato's Theaetetus: some minds are made of soft clay, and therefore easily written on but as easily erased. Some are made of hard clay, and therefore not easily written on, but less easily erased. Plato's different tablets anticipate what statisticians call errors of type I (rejecting the null hypothesis when it is true) and errors of type II (accepting the null hypothesis when it is false). What this illustrates is that there is no ideal solution to the problem of defining optimal rationality: whatever you do to decrease your chances of committing one type of error automatically increases your chances of making the opposite error.
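
To put the trade-off in standard notation (my gloss, not part of the original text): write $H_0$ for the "null" hypothesis that an apparent association is mere chance, and consider the two error probabilities

$$\alpha = P(\text{reject } H_0 \mid H_0 \text{ true}) \qquad \text{(type I: crediting a spurious pattern)},$$
$$\beta = P(\text{accept } H_0 \mid H_0 \text{ false}) \qquad \text{(type II: missing a real one)}.$$

For a fixed body of evidence, relaxing one's standard of acceptance so as to shrink $\beta$ (the soft tablet) thereby inflates $\alpha$, while tightening it (the hard tablet) does the reverse; roughly speaking, only more or better evidence can reduce both at once.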

The "best" compromise in some circumstances isn't necessarily the best in other circumstances. Biologically, we have been left with a fairly latitudinous inductive policy: that makes us susceptible to superstition and to prejudice, rather like Skinner's pigeons. But it doesn't yet account for any of our inventions of really monumental irrationality: our construction of theories, on the basis of inductive evidence, which actually articulate hypotheses about what is causing all these strange phenomena of our experience.

I can now rephrase my characterization of the leap forward into irrationality as we turned from apes to humans. It's not that the ape isn't liable to prejudice and superstition, in the simple sense of being particularly susceptible to making errors of type I. But humans go beyond inductive error: they construct models to explain their alleged inductive discoveries. This goes much farther than mere errors of type I or II, because it involves positing actual entities, not just assessing the probability of some type of event or other.

But this too, of course, is necessary to the very existence of science: the leap into a realm that goes beyond the merely inductive into the positing of entities that cannot be directly experienced. So this aspect of reason too is marked by ambivalence. Irrationality is an intrinsically necessary price we pay for creative scientific thought. Still, my argument here is that it needs to be counterbalanced by a proper dose of philosophical stupidity.

The case for madness, from Erasmus on, has often been made. But there is more of a case to be made for stupidity too, and that is the case I would like to advance now.

Let's begin by considering some samples of honorable stupidity. To call them honorable is to recall that honour is precisely one domain in which it is hard to see what is smart and what is stupid. Surely Archilochos and Falstaff were in some sense right in being too stupid to understand the ethics of honour -- in valuing life above honour.
 

What is especially philosophical about stupidity?

Philosophers themselves have certainly been on both sides of the issue: Clifford, typically, wanted us to avoid errors of type I (he wanted us to have hard mental tablets, on which erroneous marks would be harder to make). James wanted to avoid errors of type II. (He thought we might as well have the wrong stuff on our tablets, since that's inherently more fun than having nothing at all.) My plea in this paper sides with Clifford's attitude.

Philosophical stupidity, professional and praiseworthy stupidity, began with the Milesians. Before them, cosmology and metaphysics were essentially what we now would call soap opera, and soap operas, otherwise known as myths, very effectively give people the illusion of understanding. It was the Milesians who taught us that stories don't explain anything: for after all if Chronos or Zeus did this or that, it doesn't tell us why. To understand why, we need some general formula that tells us that events of that kind need to be understood in the light of laws or generalizations or at least general properties of this and that sort. In general, philosophers from the Milesians to the Enlightenment are committed to the generality of reason. "Reason belongs to all," Heracleitus may have said, "and yet everyone thinks they have a private understanding."

The generality of reason is an enormously powerful idea. It lies behind the possibility of logic, and therefore of every kind of technology, notably computer technology, which makes essential use of formal methods. From a philosophical point of view, it grounds the twin pillars of Enlightenment modernity: on the epistemological side, the ideal of a unified, intersubjectively validated science, and on the ethical side, the idea of impartiality.*1

But the belief in the generality of reason is under threat. Self-styled postmodernists think reason is just a con game, that all that we can hope to have is stories after all, that all claims to universality are self-serving and phony. This marks a repudiation, we might say, of 25 centuries of civilized thought. Don't get me wrong: that could be good (notice the prejudice that initially caused you to interpret that sentence as pejorative. We are all progressive-conservatives.) This would also mean that history is useless as an endeavour to provide knowledge about human nature; for if stories are all "just so stories," as it were, there can be no lessons to be learned from any of them. Getting general information out of history is rather like recovering Newton's laws from the weather.

But to return now to reason and stupidity. I've said that the various ways in which being reasonable and clever can be ambivalent can be related to one rather simple consideration. This is the wholly general feature of the use of reason which might be called the multivalence of argument. It can most brutally be put thus: no argument ever compels*2 -- no argument, in fact, ever proves anything in particular. The reason is that any argument merely gives us a set of alternatives: believe the conclusion together with the premises, or continue to reject the conclusion, but then also reject one or more premises. And in this situation, what is the most reasonable thing to do?
The most reasonable thing to do is, surely, to believe the least incredible alternative.
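
Put schematically (an elementary logical gloss of my own, not the original's): a valid argument from premises $P_1, \dots, P_n$ to conclusion $C$ establishes only the conditional

$$ (P_1 \wedge \cdots \wedge P_n) \rightarrow C, $$

which is logically equivalent to its contrapositive

$$ \neg C \rightarrow (\neg P_1 \vee \cdots \vee \neg P_n). $$

So anyone who finds the denial of the conclusion more credible than the conjunction of the premises may, with perfect formal propriety, read the very same argument as a refutation of at least one premise.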

And that shows that at least sometimes the most rational thing to do is to be, in effect, irrational: it is to refuse to go along with the argument. What's more, if you think about the consequences of this simple feature of reason, it implies an interesting limitation on the universality of reason. Heracleitus was wrong after all: For since every argument has to be interpreted by an individual consciousness, there is less about it that is common, in a certain sense, than one might have thought.*3 In some sense, every argument must be made ad hominem. For although finding a bad argument that convinces you is not what I aspire to as a rational being, there's no point in my giving you an objectively good one that you are not able to follow.

In the face of these various manifestations of the ambivalence of reason and intelligence, I want to ask: what stance is it now reasonable to take towards reason?

One possible stance is that reason is merely negative. This view goes back to Socrates's eristic method: it is designed to show up views that must be false. (Socrates himself, to be sure, expresses the hope that the method might, first by elimination and then by the discovery of some inner coherence, discover positive truth. But this is explicitly said, in the Meno, to be merely a hope.)

This negative view of reason is not, actually, the minimal view. Some at least, notably Hume, have claimed even less for reason: have claimed, indeed, that reason can't actually be relied on even to eliminate falsehood. On the other hand, the negative view can seem a little more powerful when we notice that it doesn't require any further assumptions about reason to entrust it with the task of making means-end calculations. For means-end calculations really just consist in the computation of the consequences of various constraints. (Hence the formulation sometimes heard that if you will the end you will the means. This is actually a mistake, but that is due to a fact about willing and the consistency conditions on willing, not to any deficiency in the mediating or brokerage role that reason plays in such cases.) But though it may not be the minimal view, it is one which many philosophers have made it their business to show can be transcended. (My sometime colleague Fackenheim, sucking on his pipe, was once heard to offer this definition of philosophy: "The goal of philosophy is to find the limits of knowledge, and then transcend them.") In fact, the verdict most often heard about this view, which I will follow its critics in referring to as instrumental reason, is that it is stupid. The philosophers who think it is stupid regard the task of philosophy in a light that is perhaps not quite so cavalierly paradoxical as Fackenheim's boutade; but still they regard the point of being especially intelligent as offering a prospect of demonstrating things that merely negative or instrumental reason would be powerless to show.
 

Clever arguments

So various philosophers, following in this various theologians, have offered us enormously intelligent and ingenious arguments for propositions that certainly couldn't be established by a merely negative or instrumental form of reason. Things like what?

Well, here are a few items.

There are many others. What they all have in common is that if you are not initially inclined to believe their conclusion (admittedly a big if, since many people have been) the arguments offered in their favour are disconcertingly feeble.*5

I'm not sure whether we should put the argument from design on this list. I share with Richard Dawkins the view that Hume's arguments wouldn't, perhaps even shouldn't, have persuaded a reasonable person who had looked carefully at the biological evidence for design. On the other hand, Hume's arguments do show that the argument from design doesn't prove anything, and we now know spectacularly, because of Darwin, why this is true. Hume was here, as usual, I think, the great model of inspired stupidity. The cleverness of the argument from design is very real; but Hume's stupidity in the face of it is now spectacularly vindicated.

Let me call arguments for the above conclusions, and many others, by the technical term of Clever Arguments. I think one can give a general characterization of Clever Arguments, which would go somewhat as follows:

Clever Arguments typically (but perhaps not necessarily) involve a certain sort of transcendental inference. I used to be very attracted by this style of argument, because it seemed to me to vindicate the magic power of philosophy. I was inclined to think, for example, that Kant had demonstrated that morality in his sense was inescapable, because it was presupposed by the very concept of action. Here are three stages of an argument of this sort, versions of which are found in various places in the writings of rationalist and existentialist philosophers:

1. All action involves a description in terms of wants and desires, however minimal; but that in itself defines a (minimally) rational action. Therefore the structure of action is inevitably and intrinsically (minimally) rational; therefore we cannot escape our rationality.

2. If I am asked for a decision, I cannot refuse, for to refuse is to make a decision. Therefore, we are "forced to be free."

3. If we are forced to be free, there is necessarily something that must guide our decisions at any particular point.*6 Therefore, we are forced to adopt some sort of orientation in relation to the good. Therefore, morality in the broad sense seems to be entailed by the very idea of agency.

Charles Taylor adds -- but admits he doesn't argue for this -- that morality must therefore be grounded, and furthermore can be properly grounded only in theism. (Taylor, 342).

All these arguments are flawed, rather as most philosophical arguments of a foundational sort are flawed, by a sort of fallacy of four terms. If the terms are taken in precisely the same sense in the premises and in the conclusion, the conclusion is either trivial or doesn't follow; the interesting conclusion is reached only by an unacknowledged shift of sense. In the first argument, there is no warrant for the enlarged sense of rationality that we are asked to accept. (Rather as there is no warrant for the enlarged sense of self that Descartes asks us to accept as a result of the cogito.)

In the second, the equivocation is between a weak sense of decision, in which it is indeed analytic that we cannot avoid making a decision, and a stronger sense, which is the interesting one, in which deciding involves getting the instruments of reason to function in the formulation of some sort of practical argument.

Finally, in the third, which is a caricature, but I believe not an unfair one, of the argument central to Taylor's book, it is true that there must be something that determines our act, but it is not shown that what does so is a consideration of reason or morality, or even that it is a consideration at all, that is, that our action is determined by what we think our conscious deliberation produces. In fact, there is much in the literature from Freud to Nisbett to suggest that many of the determinants of our actions are not accessible to our deliberative consciousness at all.

Now some Clever Arguments are just those that were the targets of the Enlightenment. Some, but not all: for the curious fact is that some are themselves the product of Enlightenment thinking. For on this point Taylor is surely right: in spite of its opposition to dogmatic thinking, the Enlightenment did not escape producing dogmas of its own.

One such dogma, according to Taylor, is the dogma that an ethics of benevolence arises naturally out of naturalism.

According to Taylor, the problem of the Enlightenment is one which is also our problem: it is how to ground a naturalist ethic. I agree. But the problem that Taylor sees is this: any naturalist ethics is exposed to the charge that it commits the naturalistic fallacy. From some factual premise about what our actual inclinations are, there is a big gap to any general principle of benevolence. Taylor's view is that as far as naturalistic arguments are concerned, there is no justification for espousing the ethical views of Hume rather than those of the Marquis de Sade:

What Sade's views bring out, as a foil, is the usually invisible background of Enlightenment Humanism, ... the moral horizon of their thought. Just embracing some form of materialism is not sufficient to engender the full ethic of utilitarian benevolence. One needs some background understanding about what is worthy of strong evaluation: in this case, it concerns the moral significance of ordinary happiness and the demand of universal beneficence. (336)

Unfortunately the central argument in Taylor's large and wonderful book is, to put it bluntly, feeble. It is, in fact, a prime Clever Argument. I don't want to mislead you by exaggerating Taylor's own claims for his argument; in the last chapter, he claims to be simply presenting views which he admits he is unable to argue for. But the whole force and thrust of his book lies in its insistence that there can be no real and sufficient sources for morality outside of theism. And if there's one thing the Enlightenment had settled, it is that there can be no real and sufficient sources of morality in theism. (Darwin was needed to show why we didn't need the theist hypothesis for explanatory purposes; but the arguments of the Enlightenment surely sufficed to establish that God was not sufficient for the grounding of Ethics. Theism involves the naturalistic fallacy just as much as Naturalism.)

Taylor is good on the negative: in spite of what I would like to believe, atheists haven't proved that they are, as such, a whole lot nicer than believers. So far in world history, on a sheer quantitative scale, the atrocities committed by believers on behalf of their beliefs outweigh those committed by atheists on behalf of theirs. But atheists have hardly had much of a chance to show how beastly they can be. Besides, since atheists, historically, have perforce been in the minority, they have in general tended to come from a rather better class of persons. Atheists have not been a random cross section of humanity, likely to exemplify average human savagery. So experience has not taught us enough to be sure that religion makes people worse on the whole. On the other hand, theistic justifications of morality fall under the very same arguments as naturalistic ones: both are open to the charge of committing the naturalistic fallacy, if that is indeed a fallacy. This means that theistic justifications are not a whit more tenable than naturalistic ones just on the point where Taylor sees fit to attack naturalism. Perhaps, then, the Nietzsche-existentialist view is the right one: there is no transcendence except self-transcendence. And why should there be? Where is the burden of proof?

What, then, defines a stance to reason, consciousness, the human condition that we might properly call "post-modern"?
 

How might we be post-modern?

It would be ludicrous, of course, for me to attempt a serious answer to this question. But it is worth sketching some directions. The directions I will mention are all, in some way or other, motivated by the two most important revolutions of thought that have taken place since the 18th century: Darwinism, and Freudianism.*7

Curiously enough, neither of these intellectual revolutions has been sufficiently assimilated. Taylor himself, it seems to me, sometimes writes as if Darwinism hadn't really made any difference. He makes a lot of fun, for example, of Wilson's Human Nature with its talk of a naturalist ethics. (406-407). On the one hand, Taylor criticizes Wilson for having a moral position which is essentially reductionist; but in addition he thinks Wilson is inconsistent in saying that "Human nature bends us to the imperatives of selfishness and tribalism. But a more detached view of the long-range course of evolution should allow us to see beyond the blind decision making process of natural selection...."

But why is this so ridiculous? There is an important analogy here with other ways in which the consequences of evolutionary history can be viewed. Not everything about us is an adaptation; but whether something is an adaptation needn't determine its value even in an evolutionary context. Some side-effects of adaptation can later take on major intrinsic importance when viewed from the point of view of a human being as it is now, in its full gerrymandered nature. The two most striking examples I can think of are the capacity to do higher mathematics, and the female orgasm. Both are most probably side effects rather than direct objects of selection, but both are arguably among those things most worthy of being granted intrinsic value. So there is something, perhaps, to that idea of Russell's, that impartiality leads to truth in thought as in action, and to universal love in feeling. A capacity for impartiality may be a side effect of our evolutionary heritage; but if it is also at the core of our capacity for theoretical and for practical reason, it may be sufficiently important to our natures as humans to be viewed as the focus or root of some sort of evolutionary ethics. However that may be, my point here is only negative: Taylor's failure to take both sides of Wilson's view seriously argues that he does not take seriously the consequences of Darwin's discovery.

What about Freud? What is crucial about Freud, for my purposes, is his definitive undermining of the transparency of consciousness.

We might, perhaps, think of this as threatening the Enlightenment emphasis on benevolence and individual rights. The argument might go like this:

The idea of the individuality of consciousness is crucially linked to the ideal of the elimination of suffering. This is the crucial importance of benevolence: it recognizes the irreducible individuality of pleasure and pain: pleasure and pain simply can't be literally ascribed to a group or to a society as such. This is both the triumph of utilitarianism and the source of one of its greatest difficulties, the problem of distributive justice. The individualism of consciousness lies at the heart of political and ethical individualism. But the notion of unified individual consciousness, together with the notion of a phenomenology defining the whole of the mental, is not faring well under the scrutiny of either Freud or contemporary cognitive science. Can we retain enough of the notion of unified consciousness to justify this emphasis on benevolence?

Taylor offers no suggestion about how to answer this question, because he doesn't really take these cognitive science developments seriously. But though individualism in various forms has itself come under a great deal of pressure, it seems to me that political individualism is not threatened by any new developments in psychology. For if we find we must give up the unity and transparency of consciousness, this will not take us further into some sort of communitarian view, but rather lead us to mark distinctions within the traditional individual.

Moreover, the fragmentation of the self and the loss of the transparency of consciousness do not directly challenge the procedural commitment to reason -- what I have called the minimal or negative conception -- that I have attributed to the Enlightenment. If we are multiple and successive selves, as Parfit has argued, this is not an obstacle but a constraint on our capacity to organize our lives in some rational way.*8 And if consciousness is not unitary, that means we must be more suspicious of ourselves, it means that self knowledge becomes a regulative ideal rather than something to be taken for granted, but it does nothing to undermine the character of reason itself.

There are two other crucial respects in which our perspective is profoundly different from that of the Enlightenment:

1. the fragility of reason.

2. the irreconcilable multiplicity of goods.*9

Let me say a few words about each of these in turn.

Reason. Some Enlightenment figures, specifically of course the Rationalists, were inclined to provide clever arguments for the transcendence of reason. Nowadays we have become skeptical either of the claims of reason itself, or at least of the disinterestedness of the actual claims made in the name of reason. Nevertheless, it seems to me that something must remain. What must remain are negative claims: it is unlikely that the principles of rationality we shall be left with will be rich in substantive content. For part of what the Enlightenment was about was precisely a certain confidence in the power of science. But it is of the essence of science to change. In particular, we now understand something about the biological nature of reason itself: Cherniak, Wilson and Nisbett, Kahneman and Tversky, et al. all show the limitations of reason as a topic-neutral instrument, and that makes a big difference to what it's like for us to be rational.*10 It makes it possible, as I implied at the outset, to see some classic examples of irrationality as perfectly rational in a larger perspective: superstition, prejudice, perhaps even akrasia and self-deception.

But we still want to be able to say that some arguments are better than others, and that some of the ways in which they are better are rational and not merely rhetorical, in the sense that there is some sort of normative criterion associated with reason. That normative criterion itself, where does it come from? We can attempt to establish norms for arguments on the basis of the success of certain kinds of arguments (Descartes notoriously does that) but the Humean argument against this strategy is still devastating: the success of those forms of argument has always been in a given domain; we are now wanting to use them in another domain. We still need a leap of induction (Taylor 404: the "leap of faith" of scientism itself) to those other cases where the patterns of reasoning have not yet proven themselves. That is the grain of reasonableness in the rather irritating way that religious people have of reacting to the clear fact that they have lost the argument: namely, to claim that the domain of assessment appropriate to their beliefs is not accessible to reason. (Would anyone ever claim this who hadn't first lost the case before the court of reason?) This does mean, however, that from the strictly logical point of view the religious person hasn't altogether lost the argument, having resorted in extremis to that special kind of meta-level scepticism.

Does reason matter? What does it mean to retain allegiance to the Enlightenment's ideal of rationality? Obviously we realize that reason can't really do all that much. Darwin has taught us for real what the Baron d'Holbach already suspected: We are animals, and whatever else we think, we must come to terms with that.

In particular, our relatively recent knowledge of what that means confronts us with a need to demystify good and evil. And that is just what Taylor objects to. For he seems to think we can't. But that, as I have argued, is something he believes merely on the basis of a Clever Argument -- and in the face of that I prefer to remain stupid. For this is the real dilemma posed by the foundationalist quest in Ethics: either we base our ethics on something, or on nothing. But there is not a shred of argument in Taylor or anyone else for the positive idea that theist belief is true. So even if one granted Taylor's claim that "the significance of human life ... is [not] best explained in a quite non-theistic, non-cosmic, purely immanent-human fashion" it would be a hopelessly question-begging leap to the claim that "the most illusion-free moral sources ... involve a God" (342). To base our ethics on what doesn't exist may be comforting, but it is just a more high-minded form of the disposition to self-deception for the sake of feeling good which Taylor himself so despises.
 

The Multiplicity of Goods.

Many Enlightenment figures, certainly all the rationalist philosophers, were committed to the unity of the good. So giving that up changes things, and there is no doubt that we must give it up. Taylor expresses the multiplicity of goods admirably clearly:

We have to avoid the error of declaring those goods invalid whose exclusive pursuit leads to contemptible or disastrous consequences.... a dilemma doesn't invalidate the rival goods. On the contrary, it presupposes them. (511)

He may even be right in attributing some of the intellectual background climate of this to the Christian concept of sacrifice:

For the Stoic, what is renounced is, if rightly renounced, ipso facto not part of the good. For the Christian, what is renounced is thereby affirmed as good.... Christian renunciation is an affirmation of the goodness of the kinds of things renounced: health, freedom, life. (219)

But two things need to be noted about this multiplicity: one is that the multiplicity of goods is not sufficient to impugn the objectivity of value. The other is that some of the ways in which this multiplicity of goods is realized consists in going, as it were, to the meta-level: in taking indefinitely many steps backwards into self-scrutiny. Let me say a word about each of these points.

First, the multiplicity of (irreconcilable) goods does not necessarily imply that we must abandon the ideals of objective truth and of objective value. Unity is indeed, as Aristotle taught, a requirement of truth. Truth imposes consistency constraints that are of a different nature than those imposed by goodness: two truths cannot be incompatible: all (atemporal) truths must be capable of coexistence, else at least one of them isn't a truth after all. But two goods can be incompatible in the sense that they cannot simultaneously be realized, without that fact entailing that at least one of them must fail to be a genuine good.*11

Secondly, one of the ways in which the multiplicity of goods is realized is that some of us -- all of us philosophers, for example -- have become committed to at least some degree of irony. Richard Rorty has defined the ironist as "the sort of person who faces up to the contingency of his or her own most central beliefs and desires"*12 -- in particular, this means being self-conscious about the role of fantasy and imagination: to see art as art, and not as religion or as magic. The unattainability of so many of our aspirations drives us to take refuge in imagination at all stages of civilization; but in its primitive avatar this response is religion, in its civilized avatar it is the self-conscious pretending of art.

Now Taylor actually seems to be claiming that one can't be an ironist. His ground for this view is what he calls the inevitability of frameworks.

Taylor says you can't really put all objective values into question. It's not quite clear whether the reason one can't do this, as he thinks, is that it involves a logical incoherence, because one always has to speak from some moral point of view or other, some "strong evaluation" which escapes one's own skeptical thrust, or whether he thinks that one just psychologically can't bring oneself to do it, rather in the vein in which it used to be claimed that there are no atheists in foxholes. Sartre says that, on the contrary, that is precisely what the human condition amounts to: "we are forced to be free" means that we are condemned to commit ourselves to some "framework" or "hypergoods" without there actually being any such things in reality.

Now here I have the impression that both Sartre and Taylor are actually offering Clever Arguments. Sartre's argument seems to be of the suspect quasi-transcendental form described above; on the other hand, Taylor himself seems to be committed to a clever argument of his own:

He can say to Sartre: Ah, but that's just what I am saying. But that takes the framework offered by Sartre (the only framework is that there is no framework) too seriously in Taylor's terms. If that's all that's needed for Taylor to be right, then his thesis has very little content after all.

It's probably right ad hominem, in that Sartre "valorizes" this notion of freedom in his own clever argument. But Taylor remains wrong in substance, because his claim that there is no logical room for Sartre's position itself requires a Clever Argument, in the form of a reductio. The reductio fails because of a fallacy of four terms:

1. Suppose, with Sartre, that the real human condition is to be deprived of all frameworks.
2. Then this is precisely the framework that defines us and our moral sphere.
3. But then there is, after all, a framework.

But the framework that is offered in 3, if it is to suffice for Taylor's thesis, must be more substantive than that offered in 1: it has to be a framework of "strong evaluation". So Sartre is, after all, we might say, a genuine atheist in a foxhole.

But then so is Hume. Taylor himself characterizes Hume's position rather well:

...we can also explore a way of seeing our normal fulfilments as significant even in a non-providential world. The significance would lie simply in the fact that they are ours; that human beings cannot help, by their very make-up, according significance to them; and that the path of wisdom involves coming to terms with, and accepting, our normal make-up. (344)

This seems to me to render it pretty fairly, and to be as near a "solution" as one can get. It is not strictly a solution, since Hume's view of the grounds of morality is rather like his views on the grounds of logic, which are that all foundational arguments are bound to fail. Rather than asking why we should engage in certain reasoning practices, Hume enjoins us to change the subject, asking merely how we use reason and what the consequences are of doing so.

What is added here by the idea of irony is simply this: it is always possible to draw back and question any particular grounding belief. To think otherwise is to commit, in the honourable company of Aristotle, a fallacy roughly equivalent to thinking that because every number has some number greater than it, there must be some number greater than every number. Aristotle's version of this fallacy consists in inferring from "all action aims at an end" to "there must be some end at which all actions aim" (Nicomachean Ethics I.1); Taylor's version is to infer from "no one can value absolutely nothing at any time when one is asking oneself what is worth valuing" that "there must be something supremely worth valuing."
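
The structure of the slide, in quantifier notation (my formalization, not in the original):

$$ \forall x\, \exists y\, G(x,y) \quad \not\Rightarrow \quad \exists y\, \forall x\, G(x,y). $$

Every number has some number greater than it, but no one number is greater than all; every action aims at some end, but it does not follow that there is one end at which every action aims; everyone who deliberates values something, but it does not follow that there is some one thing everyone must value.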

There remains a quasi-historical question. I mean by that a question about the actual future instead of the past: Is it possible for a civilization to survive, and to espouse standards of public morality as well as ideals of life sufficiently high to impede the self-destructiveness that may be inherent to our race? I, of course, don't know: I can't claim to know, because I've already disavowed any claim that we could learn from history. On the other hand I've advocated ad hominem arguments, and since Taylor makes so much of history I feel entitled to use a historical argument against him. It seems extraordinarily arrogant, inexplicably so for someone as broad in his sympathies and deep in his knowledge of cultures beyond our own as Taylor, to claim that only (Christian) theism is capable of bearing the weight of an ethics of benevolence. To cite but one counterexample, but a counterexample that spans a world about as large as, and longer lasting than, the Christian world: the Confucian ethic was, for twenty-five centuries in China, just what Taylor claims we can't have: an atheistic ethics of benevolence whose sources are entirely to be found in a certain interpretation of the natural order.

What does seem above all obvious, is that whatever the fact of that matter may be, there is certainly nothing that philosophers can do about it. The ratchet of skepticism can be turned back by the power of anxiety to close minds. We've seen this happen many times, most shockingly in our century, and there is certainly no guarantee that it couldn't happen again. (My personal interpretation of the current religious revival is that it is exactly that: a broken tooth in the ratchet of reason.) But so what? I doubt if it can be turned back merely by clever argument, merely by the claim that we've found, after all, that there is order in the cosmos, that it was all meant for us, that benevolence really is inevitable if we just follow the argument. Besides, it is only in extreme circumstances, such as Arthur Koestler described in Darkness at Noon, that the ratchet of self-conscious skepticism can be turned back in any individual case. Society as a whole is a different matter: whether a society is relatively enlightened or not depends on how its children are brought up, and especially, as Aristotle pointed out, how they are taught to react emotionally. So morality across societies is going to depend hardly at all on anyone's arguments for the foundations of ethics (this would be the lesson of Confucianism, if history had lessons), and very much on what people are brought up to care about. The hubris of thinking that philosophy can change the world by devising sufficiently clever arguments is itself a form of stupidity that even I can't endorse.

And what, after all, does it mean if it turns out that the problem of grounding moral philosophy is really intractable? Moral philosophers have always proceeded, against a faint obligato of derision coming from such as Diogenes and de Sade, on the optimistic assumption that the problem of the justification of ethics was soluble, if not yet solved. But what if there is simply no solution? What if the problem of the justification of ethics were just absolutely intractable, as is, in most people's view, the problem of foundations for knowledge? Would that conclusion not be as legitimate and philosophical as any positive result? And why, actually, as philosophers, should we even deplore it? In the face of all the bad clever arguments that we have been offered, it is wisest to be stupid.

NOTES


1.  Taylor (p. 408) quotes this thought from Russell, that impartiality leads to truth in thought as in action, and to universal love in feeling. See below.

2.  Robert Nozick considers somewhere what it would be like for an argument really to be compelling: if you don't believe the conclusion, you die! See Robert Nozick, Philosophical Explanations (Cambridge, MA: Harvard University Press, 1981).

3.  It occurs to me now that I should have called this essay in praise of idiocy, rather than in praise of stupidity, and thus availed myself of the knowing approval one gets when one mentions that for the Greeks individuals were idiots. But that seems a bit cheap, although there is a vital connection between what I want to defend and the idea of individualism or intellectual anarchism.

4.  For the general argument for philosophical anarchism, see R. P. Wolff. For the specific question of attitudes to groups, and the difference between fidelity and loyalty, see Shklar, "Obligation and Loyalty", lecture delivered to the Philosophy Department, University of Toronto, October 17, 1991.

5.  Jonathan Bennett once characterized Kant's metaphysical deduction of the categories as an argument "in which a peerless philosopher offers reasons of unexampled feebleness for saying that there are twelve kinds of propositions or judgment all of which must be employed by any being who.... believes p for any p." (Jonathan Bennett, Rationality (London: Routledge & Kegan Paul, 1964), p. 1.) I disagree only about whether the feebleness is unexampled. Examples, it seems to me, are everywhere, usually in all the central doctrines of the great philosophers. We teach this to our students, of course, and then we feel sort of sorry, because they go at the great philosophers in such a coarse way, and we are dismayed to realize they've entirely missed the point of the great philosophers' greatness.

6. Charles Taylor, in his Sources of the Self, puts this argument thus: "Doing without frameworks is utterly impossible for us.... living within such strongly qualified horizons is constitutive of human agency." (Taylor 27).

7.  To say that these are major revolutions is not to say that we need to subscribe to the beliefs of either Freud or Darwin on every topic about which they had some opinion. Nor is it even to claim that they were solely or even mainly responsible for the ideas that I associate with their names. But even if it is a matter of mere association, these are the names that stick to those ideas. There are other names, of course: Nietzsche, and the movement known as American Pragmatism. But my incompetence in history suggests the modest course of leaving it to others.

8. See Derek Parfit, Reasons and Persons. Parfit argues for a fragmentation of the person into time slices. This provides him with a reason to reject the privileges of the individual in moral reasoning, but that is because one's own consciousness at different times is on a par with the consciousness of others, not because there is any such thing as collective consciousness. On the contrary, we are further away than ever from having any reason to take that concept seriously.

9.  I don't discuss some other ideas which Taylor plausibly offers as characteristic of Modernism, such as the thematization and valorization of everyday life or the emphasis on expressivism. This is not because I find them unimportant, but because they essentially cut across the ideals of the Enlightenment: those features of Modernism neither challenge any of the ideals of the Enlightenment, nor are entailed by them.

10.  In Without Good Reason (Oxford: Clarendon Press, 1996) Edward Stein concludes from a searching examination of the psychological evidence and philosophical arguments that the question whether humans are in some respects innately irrational admits of no definite answer.

11.  See "The Good and the True", Mind 1974.

12.  Richard Rorty, Contingency, Irony, and Solidarity, Cambridge: Cambridge University Press (1989) p. xv.
 
