I. Logic and Making Beliefs Explicit
Brandom’s magnum opus is without a doubt his massive Making It Explicit. In that book, he develops in great detail an alternative philosophy of language (and much else) to replace the traditional representationalism that has been the paradigm of most philosophical work for the last 400 years, if not longer. Whereas Brandom’s teacher, Richard Rorty, spent most of his time making it apparent that representationalism is a replaceable paradigm, Brandom has done what Rorty only ever suggested and pointed toward—developing an alternative.
I have not read that book, nor would I probably understand much of it if I tried, but thankfully Brandom has written an introduction to his systematic redescription of traditional philosophy, which he calls inferentialism: Articulating Reasons. Another helpful thing about Brandom is that he writes in such a way (befitting Rorty’s heir) that learning his philosophy is pretty much like simply “getting the hang of” a new way of speaking and thinking. He is also a brilliant expositor for what would otherwise be a tremendously difficult task—replacing one edifice with another. Since getting the hang of Brandom’s philosophy is synonymous with getting the hang of a new technical apparatus, one finds the tools he uses on almost every page, so there aren’t usually better or worse places to draw one’s attention to in Articulating Reasons. For that reason I will mainly explicate Brandom without much reference to particular places in his text (most of the tools and orientations I will be referring to, however, come from chapter five, “A Social Route from Reasoning to Representing”).
Brandom is a pragmatist, and as such he wishes to follow out the Deweyan suggestion that we take thought to be a kind of practice. When you are thinking you are doing something, and Brandom wishes to describe what it is we are doing. One of the obstacles to viewing thinking as a kind of doing is the idea that our minds are passive mirrors upon reality and that whatever thinking is, we do it naturally. Brandom suggests, however, that thinking is like riding a bicycle—it is a social practice that we learn.
Two main pictures fundamental to Brandom’s view come from Quine and Sellars. From Quine we get the picture of the self (our minds) as a web of beliefs. If we aren’t careful, however, beliefs are apt to be viewed as little static items that hang around unchanging. Following the pragmatist apotheosis of Bain, then, we can begin our analysis by construing beliefs as habits of action. For our purposes, which will eventually be to display belief change and the scope of rationality, there is a particular habit of action, called reasoning, that we want to understand. For that purpose, Sellars’ own spatial model, the space of reasons, comes in handy. This is the space in which we can say that we remain rational, for it is within this space that we play, in Sellars’ phrase, the “game of giving and asking for reasons.”
It is the explication of precisely what is involved in this game that Brandom is primarily involved in. I’ve previously reserved the “space of reasons,” or rationality, for Sellars’ picture (for reasons to become apparent later), but for Davidsonian reasons (which I will not go into), describing the space of reasons will be the same as describing how language functions. Thus, we get this description from Brandom:
“Specifically linguistic practices are those in which some performances are accorded the significance of assertions or claimings—the undertakings of inferentially articulated (and so propositionally contentful) commitments. Mastering such linguistic practices is a matter of learning how to keep score on the inferentially articulated commitments and entitlements [emphasis mine—MK] of various interlocutors, oneself included. Understanding a speech act—grasping its discursive significance—is being able to attribute the right commitments in response. This is knowing how it changes the score of what the performer and the audience are committed and entitled to.” (164–165)

What follows is an attempt to unpack part of the preceding, dense paragraph.
The basic idea is that Brandom redescribes a belief (a term he amusingly says he does not officially believe in) even further from Bain’s definition of a habit of action. A belief, for Brandom, is a kind of commitment. When you assert a belief, you are committing yourself to the content of that belief. If you say, “I believe in God,” you are committing yourself to the consequences of believing in God—which is to say, you are committing yourself to act accordingly. And because of the pragmatist housing of thinking under the broader category of doing, part of the habits of action you are committing yourself to are habits of speech/thinking. To put it another way, saying that belief is a kind of commitment is to make sense out of the form, “If you say X, you can’t say Y.”
As an example:
P1) Bob believes in God.
P2) God created the world and all of its inhabitants within seven days.
P3) Biological evolution created homo sapiens over millions of years.
P1, together with P2, commits Bob to rejecting P3—he cannot believe both. If Bob does happen to believe in evolution, then a friend of Bob’s, Carrie, could point out to Bob that, according to the scorecard she’s keeping of his commitments, he’s not entitled to believe in evolution—he has broken the rules of the game (called “rationality”).
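The Bob-and-Carrie example can be sketched as a toy scorekeeping program. Everything here (the Scorecard class, its method names, the incompatibility rule) is my own illustrative invention, not Brandom's formalism; it only shows the general shape of tracking commitments and checking entitlements against them:

```python
# Toy model of Brandom-style scorekeeping (an illustration of my own,
# not anything from Brandom's text): a scorekeeper records a player's
# commitments and withholds entitlement to claims that clash with them.

class Scorecard:
    def __init__(self):
        self.commitments = set()
        self.incompatible = set()  # frozensets of mutually exclusive claims

    def rule_out(self, claim_a, claim_b):
        """Record that these two claims cannot both be held."""
        self.incompatible.add(frozenset({claim_a, claim_b}))

    def assert_claim(self, claim):
        """Undertake a commitment by asserting it."""
        self.commitments.add(claim)

    def entitled_to(self, claim):
        """No entitlement to a claim that clashes with an existing commitment."""
        return not any(
            frozenset({claim, c}) in self.incompatible
            for c in self.commitments
        )

# Carrie's scorecard on Bob:
carrie_on_bob = Scorecard()
carrie_on_bob.rule_out("creation in seven days", "evolution over millions of years")
carrie_on_bob.assert_claim("creation in seven days")  # P2, which Bob is committed to via P1
print(carrie_on_bob.entitled_to("evolution over millions of years"))  # prints False
```

On this toy model, Carrie's move is just a lookup: Bob's new claim clashes with a commitment already on his scorecard, so she withholds entitlement.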
It is a common happening of the world that people aren’t always fully aware of cognitive dissonances in their web of beliefs, and that these can be pointed out. Because we have so many beliefs that we could be aware of, it is quite possible that Bob had simply never been confronted with the fact that he cannot believe both P2 and P3. Monday through Friday, Bob works as an evolutionary biologist attempting to discover when homo erectus gave way to homo sapiens. On Sunday, Bob nods approvingly when the preacher reads from Genesis. Bob didn’t notice anything wrong until Carrie said one day, “You know, Bob—if God created the world, including people, in seven days, then there’s no way evolution could be true.”
What Carrie has just done is make explicit an implicit tension in Bob’s web of beliefs. Every belief in our web gets its definition by its relationship to every other belief in our web (and implicit relationships are just as much real relationships as explicit ones, even if they look invisible to the naked mind’s eye). The tool Carrie used to make the tension explicit to Bob was logic (specifically the logical connective known as the “conditional”). Traditionally, logic has been understood as a canon of right reasoning, but for Brandom it is an auxiliary vocabulary of explication—it helps nonlogical vocabularies (like talk about rocks, or God) go from implicit to explicit relationships.
To sum up the pieces of Brandom’s vocabulary I’ve been deploying: beliefs are like point-masses in a web. As point-masses, beliefs have no definition outside of their relation to other points. This is to say that to know what a belief is (whether for others or yourself) is to know both what else that belief allows you to do/say (your entitlements) and what allows you to say it (your commitments). Logic helps you make explicit what other beliefs you are allowed to have and what beliefs you are committed to keeping.
II. A Spatial Model of Belief Change
Part of what upsets people with Rorty’s argumentation (which becomes more and more pronounced as the years go by) is that he seems to take positions, and make argumentative choices, for eristic reasons—“I would lose this debate if I didn’t take this position” (i.e. “if I didn’t take this position, I would have to agree with you—and abandon other positions”). Eristic reasons have traditionally been ruled illegal (i.e. sophistic) because they flout truth: one shouldn’t simply argue to gain ephemeral superiority over a transitory opponent, let alone take a position simply because you don’t want to agree with them—one should only take the best position available. However, part of Rorty’s philosophical point (which his practice mirrors) is that eristic reasons are legitimate reasons (though not the only kind of reasons).
The reason for this is the same reason that it’s tough to tell the difference between Sophistic eristic and Socratic dialectic. The Sophists were said to be simply scoring points on the opponents standing in front of them, but that’s what Socrates looked like he was doing, too. The positions exhibited by the characters in the dialogues take the shape they do because of the positions and intercessions of the others. This is to say, both exhibit argumentation spatially, as positions, where moving on the X-axis shifts—whether or not you yet know it explicitly—your relationship to Q (that’s what the “and abandon other positions” clause, usually hidden in people’s understanding of what Rorty means, conveys).
Fig. 1 pretty much represents the spatial model of belief that Quine had in mind with his web of belief. In addition to the panrelational quality of beliefs, on the spatial model we can also represent the tenability of a belief by the length of the line between the different points. We can think of the lines as rubber bands: the longer a line gets, the more likely it is to snap. Belief change is sometimes faced consciously, but even unconsciously it can be represented as a series of choices among various alternatives of what might happen given increased untenability.
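The rubber-band picture can be given a minimal numerical sketch. The coordinates, the connections, and the snap threshold below are arbitrary stand-ins of my own devising, not anything taken from the figures; the point is only that tenability is modeled as line length measured against a threshold:

```python
# Toy rendering of the rubber-band web (illustrative only): beliefs are
# points in a plane, connections are elastic lines, and a connection whose
# length exceeds a chosen threshold counts as untenable ("snapped").

import math

beliefs = {"P": (0.0, 0.0), "Q": (1.0, 0.0), "R": (4.0, 3.0)}
connections = [("P", "Q"), ("R", "Q")]
SNAP_LENGTH = 3.0  # arbitrary tenability threshold, chosen for the exercise

def tension(a, b):
    """Length of the line between two beliefs: the longer, the less tenable."""
    (x1, y1), (x2, y2) = beliefs[a], beliefs[b]
    return math.hypot(x2 - x1, y2 - y1)

for a, b in connections:
    status = "untenable" if tension(a, b) > SNAP_LENGTH else "holds"
    print(f"{a}-{b}: length {tension(a, b):.1f} -> {status}")
```

With these made-up coordinates the P-Q line holds while the R-Q hook-up stretches past the threshold, which is the situation the options below (believe R and drop Q, reverse course to P, and so on) are responses to.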
With Fig. 1, say you are faced with A and B:
A) you believe P and Q.
B) you are either 1) confronted through persuasion with believing R instead of P or 2) suddenly find yourself believing R instead of P (through sensation or other dramatic life event, like death of a loved one)
C) What do you do?
1) believe R and stop believing Q
2) faced with the loss of Q, reverse course to P
Both (1) and (2) are simple rankings of importance. In (1), you concluded explicitly on reflection (or we are to assume you must have thought, given exhibited unconscious belief dumping) that R was more important to you than Q, and vice versa in (2).
3) augment Q to Q´
By way of example, if Q is belief in God, you might change your relationship to God (in this case illustrated by the changing spatial inhabitation of your beliefs caused by the shift from P to R) without considering yourself to have shifted beliefs.
4) leave QR implicit, and thus unacknowledged and unfaced
This (illustrated by the dotted line) is the psychological option—which is not an explicit option (for if it was, it wouldn’t be the option it says it is)—that Brandom’s notion of logic helping make beliefs explicit sheds light on formally (which is to say explicitly). Beliefs, for Brandom, are not rocks you pick up and carry with you in a bag that you can inspect at a moment’s notice. Beliefs, following the pragmatists, are habits of action, which is to say for Brandom habits of linguistic articulation. And unlike rocks, habits aren’t something you can just look at and inspect—you can really only get a good look at them while in the act of performance. Adding a new belief isn’t like adding a new piece to a puzzle, where you can kind of glance over the whole puzzle and notice clashing colors. Belief tension is more like playing different sports: it isn’t apparent that any of the skills you learn playing various sports would clash with each other—until you try hitting a baseball and swinging a golf club. (Or think of Happy Gilmore’s importation of hockey habits into golf.)
The explicit way you articulate a belief shapes exactly what the belief is—which is to say, there’s nothing exact about anybody’s web of beliefs, because I know of no one who articulates themselves in exactly the same way twice, even when seemingly talking about the same thing they did the day before. Part of this is memory and part of it is actually being in slightly different contexts (which causes you to call up slightly different words to articulate just what you think). Either way, though, saying that Q is the same belief from moment of expression to moment of expression becomes a slightly trickier affair, even without the trouble of P and R.
I don’t want to meditate on this difficulty in the isolated belief (which Brandom accepts full, slightly paradoxical, responsibility for in his work), but simply massage it for now by remarking: 1) remember that there’s no such thing as an isolated belief, and 2) since beliefs are all hooked up to each other, like in a web, just ponder on how webs billow in the wind with sometimes great flexibility without losing their, shall we say, identity. I bring up the billowy nature of belief simply to make explicit the often lost possibility that it makes quite a lot of sense that people sometimes don’t face up to tensions in their web until some distance after the fact of belief change, like Bob and God in Part I. Tensions don’t arise until they are made explicit, which can be done by yourself (if you are a reflective person), by others (if you are making an ass of yourself and people are sick of your hypocrisy), or by situations (if you find yourself suddenly wanting to say two contradictory things). Since beliefs are habits of articulation, all thinking is an activity, and like all activities, it can be practiced.
III. What Are Private Beliefs?
There would seem to be a 5th option, based on Rorty’s work, which would be to:
5) make Q and R not connect
Rorty’s notion of a public/private distinction seems to suggest that we can make beliefs not connect to each other (as when he suggests that we keep God and poetry out of politics), but under this model, it is understood that all beliefs are connected in some fashion. If it’s a belief, it’s a dot, and every dot has a spatial distance from every other dot.
Rorty’s been criticized in several different fashions on this score, some saying it’s impossible to keep beliefs out of each other’s way, some that he’s suggesting something like lying to yourself (an explicit (4)). If the former means “all beliefs are connected,” then yes it’s impossible, and then the latter collapses into the impossibility of the former (you can’t lie to yourself, at least not explicitly). But if the former is not construed that way, it is difficult on its face to see why we can’t keep our beliefs out of each other’s way. Does my belief that “God loves me” get in the way of my belief that “peanut butter is brown”?
What, however, if my belief were that “God loves me because peanut butter is purple”? Then there is a tension if I encounter peanut butter (unless my friends don’t let me open my PB&J sandwiches, and I just see the jelly trickle). What Brandom’s vocabulary helps us see, however, is that these two God beliefs are actually different because they are actually articulated differently. Articulation counts, big time. Further, we see that Rorty’s suggestion is itself about connections between beliefs, though it typically presents itself as if it weren’t (“God has nothing to do with democracy”). And here we see that Rorty’s strategy is (3).
It might be useful to see that Fig. 4 and Fig. 7 are both useful ways of describing the same change in belief, depending on perspective or attitude. In one regard, “Q to Q´” highlights the similarity between believing in God before and after disconnecting it from your democratic citizenship. One does this facing a community, so that if your membership in a community is at stake, you could suggest that it is a minor alteration within the pale, not a drastic shift beyond it (one might do this with belief in God or disbelief in a war). In the other regard, “Q to S” highlights the fact that in the previous, “Q” and “Q´” are actually different beliefs, and therefore occupy different spatial positions, and therefore have different consequences to your other beliefs.
IV. Faith and Reason
One thing I dispensed with early is the notion that Reason is a thing that can tell you anything, as in the phrase, “What does reason tell you?” There isn’t a faculty called “reason,” but there is an activity called “reasoning.” But if we follow Brandom’s vocabulary for thinking about thinking, what do we do about faith, that traditional opponent of reason? What do we do about the person who says, “I believe in God based on faith, not a reason”? I do want to puncture the haughtiness of atheists, but we still must say something about it. The first thing to do is to realize that faith is—even if denied—a reason to believe, and so is an articulated reason. But that’s just a baby step: what kind of reason, what kind of seemingly homogenous, infinitely deployable reason is faith?
I think what we need to say about faith is similar to what Kant said about the transcendental ego—it’s that little “I think” that trails implicitly after every sentence. Now, in the case at hand, the reason known as Faith is something like a guardian angel attached to a belief. Remembering that beliefs are habits of articulation, a person is sometimes taught (as all beliefs are learned habits) that if asked for X’s commitments, to reply singularly with “Faith.” In other words, a person just simply learns that “faith” is the only commitment to some beliefs (though the entitlements are everlasting).
This is a fairly simple representation, but it doesn’t quite do full justice. While the above may be true for many simpler kinds of beliefs that some believers have, what are we to do with theology, or any of the sometimes massively articulated creeds of various religions? I think what we get is something like Fig. 9.
Remembering that a line is nothing more than an infinite number of point-masses, we might think of Faith as the shield that protects religious discourse from the entreaties of other discourses. In a way, this is very similar to Rorty’s public/private distinction. We could easily conceive it that way (see Fig. 10).
I’m not quite sure how to wind our way out of it, or what to say about dissimilarities between the two. One might be that the public/private distinction, while itself possibly based on faith (the faith of its succeeding, which is to say hope), is not itself a faith-shield. While the faith-shield in Fig. 9 is something like the ass-end of the outermost beliefs, showing off their commitments to incomers, mooning everyone else with their immunity, the public-shield isn’t exactly a commitment, but a prohibition, a stay-the-eff-out.
I’m not sure where this exactly plays out, but I do think that Brandom’s vocabulary helps us better understand what we are dealing with. We are dealing with people who think—just people who think a certain way. And it isn’t at all clear that we non-religious types, who out of force of habit exclude religious types from our “we’s” and “us’s” when talking about them, don’t ourselves have certain habits of thought similar to theirs with faith. I think that’s why Rorty talked so much about hope and ethnocentrism—what we do around here: both are conversation-stoppers, similar to faith, but it isn’t clear how, or why, one should continue a conversation on certain topics (like why yo’ moms so fat).
Thanks for drawing my attention, this was interesting.
I have some questions, I'll start with: is the magnitude of QR in fig 1 onwards to be taken to connote untenability?
Let me say this up front--I didn't use any of my figures that systematically, nor do I remember enough math to remember what "magnitude" refers to on a Cartesian graph. If you mean "length," then yes, yes I do basically assume for the exercise that the QR hook-up would be untenable.
Which does, in its way, bring up the issue of who's judging tenability--as if beliefs were so easily judged, as on a graph. They aren't, which is one of the wonderful things about Brandom's elaboration of what the game metaphor, which Wittgenstein made very popular, means--a heavy intersubjectivity, as people keep their own scorecards on the interrelationship of everyone's commitments and entitlements.
Yes, I was thinking in terms of vectors, the magnitude of QR is just its length.
The issue of a protocol to enable judgement of tenability is important I think. One salient feature of games that informs the metaphor in my view is this: if I play you at chess, the rules are publicly agreed and you can keep your own personal score card if you like, but the public one is likely to be relevant to both of us in a particular way, precisely because we both signed up to the protocol when we agreed to play the game.
I like the way you put that--"likely to be relevant to both of us in a particular way."
"Relevancy" is one of those underestimated terms in dialogue--if there's a disagreement between two people on relevancy, then big trouble is likely to emerge because it would be analogous to disagreeing on the rules of the game you both thought you had agreed to play.
I think the feature you point out is exactly right, and the large game we are talking about roughly is "rationality." To remain rational, on Brandom's technical account, is to keep score of commitments and entitlements. There will always be what we can count as rational disagreement about the score, but leaving the bounds of the game can happen in egregious cases, which most people will agree on.
The scary thing is if--and this was a fear labeled "irrationalism" earlier in the century--people stop being able to recognize a game called "rationality." Being rational is a social skill that needs to be taught, and it can be undone given certain kinds of socializing.
Matt,
"The scary thing is if--and this was a fear labeled "irrationalism" earlier in the century--people stop being able to recognize a game called "rationality.""
I searched the rest of your site and couldn't find anything else on this, would you care to elaborate? Couldn't one easily make the case that rationality has led to a great many terrible things happening in the world too? As we discussed in the other thread, if our most basic premises/beliefs/reasons are what we use in our reasoning, then it's not the fact of reasoning that leads to "good" or "bad" outcomes, but the premises/beliefs/reasons it was based on?
Thanks,
Nathan
Hi Nathan:
Yes, one could easily make that case. The Germans, especially, enjoyed doing that at the beginning of this century. The two most prominent, though different, stories of this type are the second-wave Marxists of the Frankfurt School (most especially, Horkheimer and Adorno's Dialectic of Enlightenment) and Heidegger's story, mainly later Heidegger (paradigmatically, in "The Letter on Humanism"). Those two kinds of stories have fit well with stories about the evils of imperialism and/or colonialism, as well as consumerism.
But, I don't think rationality led to these bad things, exactly. Those stories are premised on the same Enlightenment myth that convinced us that something substantive called "rationality" will banish superstition and improve our material well-being. (What did I call it the other day? The "Enlightenment bias"?) It is about, as you say, what initial premises the reasoning is based on, but I don't think we can just count out the "practice of reasoning"--if you are bad at it, you will make bad inferences based on your "correct" premises. Better to think that rationality is a practice, and that good premises and good reasoning come together in an evolving lump (like holism suggests).
If you want a short version of the story against "rationality," and a Rortyan perspective on it, you could see the close of "Posthumanism, Antiessentialism, and Depersonalization," esp. sec. 6.
Matt,
As I mentioned in the other thread, your reply is well put.
Nathan