Friday, August 01, 2014

The Legacy of Group Thinking

1.      In the ‘80s and ‘90s, two parallel discussions enveloped a good part of our time in American political discourse, their energies both sometimes denoted by “the culture wars.” At the national level, debate revolved around affirmative action practices and policies. To see the connection to the term “culture,” one must recognize how “political correctness” as a term of abuse was part of the same debate. While political correctness was attaining infamy, the less abusive “multiculturalism” denoted the more parochial “culture wars” in American universities. The idea behind affirmative action practices was that, for example, systemic forms of racism had become embedded in all kinds of American practices (e.g. in the education system or governmental hiring practices or university admissions processes) and that only by active affirmation of equity could these systemic forms of disadvantage based on racial classification be corrected. As an extension, political correctness was the idea that our language is a practice that performs some of this embedding. When minority groups began requesting (or demanding) semantic authority over themselves, the post-Civil Rights milieu was inclined to hear them. So, as an example, within a short space of time accepted parlance went from “colored” to “Negro” to “Negro-American” to “Afro-American” to “African-American,” with the stock of “black” rising and falling randomly.

Now, I said “accepted parlance,” but some instinct in most of us is going to prompt, “accepted by whom?” If I’d said “approved,” then the siren certainly would’ve gone off. Who is handing out this “approval,” judging the “correctness” of our language? I was talking to a friend recently when for some reason this issue of the shift in what to call black people came up. He’s black and was born in the early ‘60s, and so lived through some of these shifts. With impatience he said, “I grew up saying ‘negro,’ but then I was told ‘black.’ Fine. And then I was told ‘African-American,’ and I said, ‘fine,’ but who cares? Why does it matter? I was born on Long Island, not in Africa.” A little while after that, I was talking to an eminent scholar of African-American literature about Ralph Ellison, and we stumbled into that area as well. I told him about my friend, and he related an old quip that someone made in the ‘80s—that only an academic could’ve come up with “African-American.” We laughed. But this is a nexus of the two cultural wars. My friend is no academic by any means, and he votes Republican. His instinct comes out of the American self-reliant tradition. Who is anybody to tell me what I should call myself? And what does it matter? The scholar and I, however, laughed from ironic self-deprecation, at the pieties of academe. For the reason why “African-American” is ensconced in public discourse is in large part because of its enforcement in the cultural sphere of the university, which permeates laterally other intellectually-minded spheres and longitudinally multiple generations of the college-educated.

2.      There are many relics of the cultural wars, of which Allan Bloom’s The Closing of the American Mind (1987) is probably the most famous. But that book, like the right-wing hatchet jobs that abut it (Profscam, Tenured Radicals), doesn’t interest me in the long term. What do interest me are the books by those on the non-Marxist left. During this time period, the term “liberal” was used to refer to this left, just as “radical” was used for the kind of leftist that generally preferred a post-Marxist, highly theoretical vocabulary for talking about politics, a left that also had a very negative attitude toward America, sans phrase. Two of these books that I’ve kept close to heart for many years are Richard Rorty’s Achieving Our Country (1998) and Stanley Fish’s There’s No Such Thing as Free Speech (1994). Fish’s book has a more complex relationship to the attitudes and situation of that era, as of our own, but Rorty’s book simplifies the issue by splitting the two lefts into the liberal “reformist left” and the radical “cultural left.”

This latter term Rorty picked up at a conference at Duke on liberal education, in the midst of the wars, from a comment Henry Louis Gates, Jr. made about the “Rainbow Coalition of contemporary critical theory.” Rorty thought that this left deserved at least “two cheers,” as he put it in the title to his contribution to that conference. What they were doing in focusing our attention on cultural issues of racism, misogyny, and homophobia, and in particular how our language ramifies those things, was an important step in the history of moral progress. The only problem with this left is that it seemed as if they forgot about the money. Class, as a defining concept in one’s politics, seemed to get left behind, and it was hurting the politics of the left at the national level. And when one sees the culture wars against the background of the Nixon/Ford-Carter-Reagan-Bush-Clinton-Bush sequence, one can see the prescience of thinking that the parochial-level conversation was, perhaps not hijacking, but obscuring what was happening in national-level politics.

I have great sympathy for this point of view, for I tend to think—in my naifish way—that money would solve a lot of problems. [1] However, David Bromwich doesn’t seem to think that the cultural left even deserves two cheers. Bromwich, a friend of Rorty’s and an English professor at Yale, went after the cultural left, not on political grounds for forgetting about class and producing a skewed and losing political strategy, but on the cultural grounds itself. In Politics by Other Means: Higher Education and Group Thinking (1992), Bromwich argues that the forces at work in multiculturalism are undermining the liberal customs and traditions that support the practice of democracy. I have a lot of sympathy with this trajectory of thought as well, for debates in political theory at the time of the cultural wars turned on the thought that the very concept of tradition was at irreducible odds with liberalism. Thus there was that motley crew of “communitarians”: Michael Sandel’s trenchant attack on Rawls in Liberalism and the Limits of Justice (1982), Michael Walzer’s alternative model in Spheres of Justice (1983), Alasdair MacIntyre’s sweeping story of descent into moral unintelligibility in After Virtue (1981), and Charles Taylor’s equally sweeping story in Sources of the Self (1989).

A lot of the debate with communitarians was extremely productive—at the level of theory. The only thing they all have in common is that they are anti-Kantian, and what Rorty and Bromwich have in common is an equally anti-Kantian attitude toward politico-moral philosophy. [2] The master argument of the communitarians was that liberal political philosophy grew in the bosom of Kantian moral philosophy. Kant argued that “the moral” was produced only by a will that willed actions built on the categorical imperative. These were actions that came from no particular interest—interests are contingent features of your empirical self. Moral action emanates only from the transcendental self, which is a will not built out of any particular feature of yourself you may have picked up from your environment of individual growth. This is the form of argument Rawls translated into the “original position” argument: pretend you’re behind a veil of ignorance and know nothing about your own features—what kind of just society would you construct for everyone, including yourself?

Sandel suggested that the nature of the self this politico-moral philosophy imagines is peculiarly “unencumbered.” Thinking of yourself this way, as unencumbered by any relationships to the past, future, or the people around you, then dovetails quite nicely with a libertarian economics that has produced some really bad socioeconomic disparities. The communitarians, riding high on a crest of anti-Kantian argument, said that the philosophy is unworkable, and without that justification, liberalism must fall apart. Additionally, it has produced a uniquely introverted culture with no tradition of coherence to fall back on because it imagines itself without tradition. As Emerson put it, we are endless seekers with no past at our back. So when Rorty and Bromwich turn to the communitarians, their response is roughly: “No, you’re right—Kant produced a terrible philosophy for liberalism. But political liberalism is a practice and tradition, and it doesn’t stand or fall by its philosophical articulation. What we will do—and the grounds upon which you should debate us—is articulate both a better philosophy that agrees with all your anti-Kantian positions and a sense of what liberalism’s practices and traditions are to help repair what we agree is an increasingly introverted public culture.”

3.      What bound the communitarians together was the effort to work from a post-Hegelian tradition of philosophy. This brought them close to the wisdom post-Marxists wielded. Multiculturalism, however, had quite other sources than Hegel, or even Marx—what motivated and gave it shape was, not the experience of reading a certain tradition of books, but the life experience of individuals shaped by their categorization as an X. [3] By this I mean, not that a person is a woman, or black, or homosexual, but that the person is reduced to being only that category. If you were a typical white man in the South in 1830 and you saw a black man walking down a road, you knew all you needed to know about him as you approached. “Who do you belong to? Where are you going? Where’s your master?” Multiculturalism was the large-scale implementation of the tactic embedded in the slogan “black is beautiful.” The slogan gets its significance (and efficacy) by rubbing against the practices of treating “black” as if it weren’t—e.g., practices of hair straightening and skin lightening. Multiculturalism was the movement of saying, “it’s okay to be a member of the group you’re identified with.”

The trouble is that that wasn’t all multiculturalism turned out to be. What “multiculturalism” obscures, like every -ism, are the boots on the ground translating the -ism into practice. Bromwich retails a few of these actions, translating them—as every good intellectual must—into allegories for the theoretical and practical commitments at work. Bromwich is very effective in showing how what underlies both the cultural left and the cultural right (e.g., William Bennett and Bloom) is an authoritarian structure. The Hegelian conceptual priority of community to the individual, pace Popper, isn’t inherently authoritarian, but when translated from the arid sphere of political theory to the practical politics of the post-Civil Rights left, emphasis on the embeddedness of the individual in a community produced a line of thought Bromwich calls “group thinking.” An example of its linguistic habits might be seen in my earlier formulation of what happened beginning with the Civil Rights movement: “When minority groups began requesting (or demanding) semantic authority over themselves....” But the concept of a “group” here obscures an ambiguity, for it isn’t like all black people got together, signed a petition of request to be referred to as “African-American,” and then delivered it to white people (a parallel group-designation to go with the first). “Minority group” here is a hypostatization, a rhetorical device to cover the thoughts, feelings, and actions of a number of individuals. The problem is that, unlike the President of the United States who has the authority to speak for Americans in foreign affairs because he received the most votes in an election, there’s no equivalent method for determining who has the right to speak for these rhetorical groups. 
Thus, when individuals begin formulating their thoughts unreflectively with these kinds of locutions—using the rhetorical “we” as a proxy for oneself and an implicit usurpation of semantic authority—Bromwich says they stop real thinking. [4]

If a group of individuals all start speaking for the group, everything is fine as long as everyone says the same thing. But as soon as there’s dissension—“hey! that’s not what I think!”—then the group will talk amongst themselves about what the group thinks. The thinking, you’ll see, happens before the next speaking of “what the group thinks.” But “black people” isn’t a real group in the same practical sense because there’s no place they all meet on Fridays to decide what they think and what they’ll bind themselves to, take responsibility for. So what happens when there’s dissension in a rhetorical group? Implicit rejection—by dissenting, and individuating yourself with the “I,” you’re automatically on the outside from all the other voices still saying the same thing. Bromwich’s argument is that this kind of rhetorical “we”-ing produces a covert norm of conformity, because once the habit of chanting begins, you’ll notice when someone stops, and if those people with the habits take control of actual groups—i.e. institutional apparatuses with practical levers of control (e.g. firing a person)—then you’ll have incentive to beware of calling yourself out. Every “I” will become an affirmational “Aye!” [5]

The cultural left wanted to be antiauthoritarian, but its implementation in an institution—and an institution without authority is no institution at all—created the situation in which a black person can be told what they should call themselves because they are black. [6]

4.      But who are you to tell me what words I should use? Who am I, indeed—that line of thought cuts very deep, much deeper than we often allow it to. That question is antiauthoritarian in impetus and demands not only an account of authority, but an account of the moral stance generally—the question undermines our ability to use the word should or ought. Bromwich senses the practices of conformity underlying the emphasis on individuals being embedded in demarcated groups, and this is why he smartly suggests Emerson’s “Self-Reliance” as a spiritual antidote: “Whoso would be a man, must be a nonconformist.” But who is Bromwich, or Emerson, to tell us who we should, ought, must be? In the polemical context, this kind of Idiot Questioning can get old fast, but if we’re going to take Descartes’ idiocy seriously, why shouldn’t we take this seriously as well? In other words, just as Descartes demanded an account of knowledge, so do we still need an account of authority. [7]

This is the problem Bromwich faces: The political project of a democratic community, which the United States was formed to embody, values the individual as an end in itself. Political liberalism says that the point of a community is to produce individuals who are differentiated from the community that produced them. Multiculturalism thus seems regressive for seeing individuals as identified with communities (hence, “identity politics”). The problem isn’t that you shouldn’t identify with a community—Bromwich agrees with Rorty that the left’s inability to identify with the American liberal political tradition is harming their ability to be an effective force in American national politics. The problem is that people aren’t given a choice in which communities they can identify with—if you’re born black, then you just are part of the black community. You might be born in America, and thus be part of the American community, but the entire reason Bromwich and Rorty are compelled to argue that the cultural left should so act is that the left has obviously chosen not to. There’s no practical mechanism there to make a person identify with the country and its traditions the person was born into. And if there were—like taking a loyalty oath by affixing a flag pin to your lapel—then it would be as dumb or disastrous as it sounds. [8]

For political liberalism, the idea is that individuals can opt into communities if they wish, like being a cheerleader or going to church. The good point to respond with is that there are some communities you don’t get a choice in, and the analogy here is with family: you don’t choose your family. And likewise, you don’t choose what country you’re born into, what genitals you have, what color your skin is, who you like having sex with, or for that matter what church you go to growing up. The point of liberalism, however, is that part of becoming an autonomous adult is growing up and choosing whether to remain in the communities you were “born into” because of who your parents were. Maturity is identified on this scheme with autonomous choice.

5.      I find the identification of maturity with autonomy completely persuasive—after all, nobody on any side of American political debate believes in authoritarian political structures. But because socialization requires authoritarian structures, we differentiate between the rights and responsibilities afforded adults as opposed to children in any number of different contexts, thus endorsing a concept of maturation in the life of the democratic community. However, while I think that’s true, I also think that the history of treatment of individuals based on certain attributes (e.g. gender, race, sexual desire, genealogy) has left a mark on the processes of socialization still felt today. In an individual’s growth, this kind of mark is called “trauma” and I think that concept, as many have used it before, is well-suited for talking about the effects of misogyny, racism, homophobia, and hereditary elitism. When the individual is reduced to a group against their will, it traumatizes and arrests their growth into autonomous individuals.

The best way to see this is to recur to the imagined encounter in 1830s Alabama: one must see that one effect of the white man seeing the black man and knowing all he needs to know is that it produces a mirrored response in the black man—as soon as the black man, walking alone on a deserted thoroughfare in Alabama, saw a white man, he knew all he needed to know. For if he didn’t realize that he needed to hide in order to avoid those threatening and physically dangerous questions, then he wouldn’t survive 1830s Alabama. If he’d acted like an autonomous Kantian will, behind the veil of ignorance and unencumbered by the consciousness of being black-skinned, then he would’ve stumbled into the very real and very dangerous encumbrances of racism. So part of the practical wisdom that had to be passed from generation to generation for blacks was racial categorization—forgetting that the masters think of you in some respect as all alike could lead to death. Indeed, this racial wisdom becomes self-enforced as the community suffers the effects of one individual’s forgetting of it. [9]

This is why “black is beautiful.” It is an outgrowth of a community forced to be a community by the flimsiest of attributes—one. It doesn’t seem to matter which one; if there’s wisdom in the last 200 years of moral reflection on this, then it might be that the difference between “thin” and “thick” conceptions of moral community might be almost literally quantifiable, and that thin communities might not be durable enough to last and fragile communities might be dangerous to themselves. I’m not convinced of that line of thought, but it seems a profitable direction of inquiry. [10] “Black is beautiful” is the kind of slogan needed to give self-esteem to people who have been traumatized because of a flimsy but dangerous reduction of self. Racism and the other ugly reductions dug a hole for those it affected—and you can’t just levitate out of that hole or pretend you’re not in it; you have to fill it in.

Self-esteem has gotten a bad rap in the last 30 years because—and in fact during this same time period of the initial culture wars—Americans have been found to have too much of it. The favored statistic is the difference between how good we think we are and our test scores that are supposed to quantify and validate how good we are. It has become a consistent fact that we think we’re better than we are. The rugged individualists of America (and people who so self-identify are often on the right politically these days, for whatever reason, with a venerable exception for the late George Carlin) were right to laugh at and denigrate the “Everyone’s a winner!” movement. Their instinct is that a win isn’t really a win if you don’t earn it. But what they weren’t fully cognizant of is the depth of the problem they still face as parents (and citizens, for that matter) with respect to self-esteem. For self-esteem is in the same family as pride, courage, confidence, dignity, self-respect, self-trust, and self-reliance. These are needed for individual autonomy, and every person in a liberal democracy has a right to the instruments and conditions for autonomy; for every individual has a right to grow and mature into an adult. So this is a practical problem of balance. You have to trust yourself to stand on your own, but Emerson realized that true self-trust is difficult, and cannot be treated as easy: “And truly it demands something godlike in him who has cast off the common motives of humanity and has ventured to trust himself for a taskmaster” (“Self-Reliance”). But you can’t brutalize a growing self either—we’ve all seen the horrors of that in portrayals of competitive sports families. Shame is the mechanism at work in learning the difference between winning and losing, correct and incorrect, but you can’t shame a person into the Stone Age without destroying the fertile ground out of which autonomy can grow.

6.      Rorty understood these difficulties, and so began his Achieving Our Country with a brilliant summary of the relevant balances:
National pride is to countries what self-respect is to individuals: a necessary condition for self-improvement. Too much national pride can produce bellicosity and imperialism, just as excessive self-respect can produce arrogance. But just as too little self-respect makes it difficult for a person to display moral courage, so insufficient national pride makes energetic and effective debate about national policy unlikely. Emotional involvement with one’s country—feelings of intense shame or of glowing pride aroused by various parts of its history, and by various present-day national policies—is necessary if political deliberation is to be imaginative and productive. Such deliberation will probably not occur unless pride outweighs shame.
The relevant problem that Rorty confronts is: what do we do when shameful acts seem to outweigh meritorious ones? The title of Rorty’s book is from a famous line in James Baldwin’s The Fire Next Time (1963): “If we—and now I mean the relatively conscious whites and relatively conscious blacks, who must, like lovers, insist on, or create, the consciousness of the others—do not falter in our duty now, we may be able, handful that we are, to end the racial nightmare, and achieve our country, and change the history of the world.” During Baldwin’s meditation on America, he goes to meet Elijah Muhammad, prophet of the Nation of Islam. Muhammad is essentially a separatist, who cannot hope that America might be able to change. Rorty says of the two:
I do not think there is any point in arguing that Elijah Muhammad made the right decision and Baldwin the wrong one, or vice versa. Neither forgave, but one turned away from the project of achieving the country and the other did not. Both decisions are intelligible. Either can be made plausible. But there are no neutral, objective criteria which dictate one rather than the other.
I take this to mean that there is no answer to “Why hope?”—no knockdown argument to force people into the position of being unentitled to give up on a group loyalty. For as I intimated before, being a citizen of a nation is already a rhetorical grouping on all fours with the ones Bromwich is concerned about, of race, gender, or sexual identity. The problem Bromwich cogently faces is the interaction between these latter groupings and the former. For while they are all rhetorical groupings, the rhetorical grouping of national identity also has practical mechanisms for control. That makes an important difference.

The problem Rorty considers, however, is the role such separatism as Muhammad’s plays in the life of individuals negotiating a world in which all are not in control of how they are grouped. Rorty didn’t discuss this kind of problem very much in his work, but it shows up in his major essay on feminism, “Feminism and Pragmatism” (collected in Truth and Progress). [11] Taking a cue from Marilyn Frye’s book, The Politics of Reality, Rorty says that “individuals—even individuals of great courage and imagination—cannot achieve semantic authority, even semantic authority over themselves, on their own. To get such authority you have to hear your own statements as part of a shared practice. Otherwise you yourself will never know whether they are more than ravings, never know whether you are a heroine or a maniac” (TP 223, emphasis Rorty’s). This is where the interesting friction with Bromwich’s book occurs. The concept of “semantic authority” articulates “control over meaning.” We cannot just define words as we wish—words are public items that ping-pong between users, and thus can be imbued with significance a single individual has no control over. The problem for oppressed groups—individuals who are forced to belong to a rhetorical grouping because of the flimsiest of attributes: one—is that their language has been colonized. (And now you can see how these reflections can be extended even further.)

Language is the instrument of self-definition. The problem Bromwich skirts is that you cannot just declare yourself self-reliant. Self-reliance is earned, but in addition to being an attitude, it is also earned linguistically. Being reliant upon a self you have created from public linguistic materials poses the Idiot Question: are you really reliant upon a self you’ve created and not simply conforming, if unconsciously, to the movements of the herd? You can be confident of such authority when you can “hear your own statements as part of a shared practice.” But what if you’ve historically been disallowed from sharing in the practice? Can you be confident that the practice isn’t just foisting on you thoughts and feelings that are actually detrimental to your well-being, that the practice isn’t a confidence scheme, that you aren’t being conned?

7.      This is the existentialist motif of antiauthoritarian instincts, and teenagers often get to this point in their development. We adults say, “trust me: this is for your own good—you aren’t being conned.” And, in fact, adolescent rebelliousness is a requisite stage for autonomy. It might not always take the form we’re used to associating with rebellion—nose rings, tattoos, dyed hair—but if you don’t eventually rebel from an authority figure, then you won’t set off on the course of reflection required for making decisions on your own. [12] So demanding semantic authority looks like adolescent behavior to an adult facing an adult—“take it,” is the response, “I thought you already had it.” But the problem of semantic authority is more difficult than that. This is why the concept of trauma is useful. The problem isn’t “Why don’t you grow up?”; it’s “Who are you to tell me when my trauma is over?” No one can just wish it away, and everyone lives with the consequences. Who are we to tell people to grow up, when—as William James said in another connection—it feels like a fight? In the context of a family, growth and parental figures are part of a neutral, necessary structure of authority. But in the rest of life, treating someone like a child is infantilization and “Ah, grow up!” is fightin’ words. That’s the dilemma right there. Adults without confidence are a moral problem. Telling someone to grow up is cruel. Treating them like a child is equally cruel. Cruelty, as Rorty defined the liberal ethos echoing Judith Shklar, is the worst thing we can do. But we live in a world in which historical conditions have made it difficult to produce autonomy. Worse, even without the weight of history, we don’t know any sure-fire methods of education for producing it. Our only consolation is that the value of autonomy is a relatively recent invention—hopefully we can figure this out.




Endnotes

[1] For example, I still maintain to friends that money would pretty much solve our problems with K-12 education, something I became convinced of after reading Jonathan Kozol’s book, Savage Inequalities. My conviction remains unshaken, even after hearing some very good points from friends on the inside of the situation and debate. Whatever the utility of Horace Mann’s vision of education for the commercialist agenda of turning us into good little drones that mindlessly consume, books like Dumbing Us Down just don’t provide a viable long-term solution.

[2] Some of them were more anti-Kantian than others. Bromwich, for example, feels comfortable with enlisting Kant into the articulation of his point of view, whereas Rorty’s distrust of Kant was so deep that he would never do so, even when he could recognize Kant’s compatibility with his own views on a particular score.

[3] We shouldn’t, for that reason, underestimate the importance of especially Marx to the theoretical self-understanding of this movement, particularly given the importance of the Communist Party in Chicago and Harlem between the World Wars. (And that’s not to mention the importance of Marx to our current overtheorized left.) One should also mention the importance of Hegel to Frantz Fanon in Black Skin, White Masks.

[4] Anyone familiar with Rorty, and particularly Rorty criticism, will wonder how this fits together with Rorty’s practice of using the rhetorical “we”: “we pragmatists,” “we historicists,” “we liberal ironists.” The rhetorical “we” is a flexible device, I think; my instinct is to say that Rorty’s “we” does not occlude thought the way Bromwich suggests the “we” can—though people have implicitly suggested that Rorty’s “we” does just that. But as this is the most interesting and original line of argument in Bromwich’s book (that I’ve perhaps taken some liberty in reconstructing), I haven’t been able to think through all of its ramifications. For I also still believe, with Rorty, that you need to say “we” to construct a tradition and a community. (For a discussion of this facet of Rorty and the issues it involves, see my “Two Uses of ‘We.’”) So some serious thinking still needs to be put into how to say “we” without forming group thinking. How do we avoid that? What practices and habits do we need to have in place to make sure sheep don’t just start bleating back to us what we want to hear? It’s not enough to say “practices of self-reliance” because what are those? As long as power and authority are in play in the world, and on theoretical grounds I don’t think it’s possible to get rid of them, then the issue of telling between sheep, shepherds, wolves, and autonomous individuals will seem always to be in the air. Could this be the democratic equivalent of epistemological skepticism? Not the Problem of Other Minds, but the Problem of Autonomous Minds?

[5] I discuss some abstract problems with the “we” prompted by Rorty’s work in “Two Uses,” cited in note 4, but see especially Section 3, when I turn to the question I turn to below in the next section. Also, one might compare my discussion of Brandom’s Enlightenment notion of a “norm of commonality” that he invokes to distinguish Truth from the Good, which is at the base of his distinction between commitments to believe and commitments to act—see “On the Asymmetry,” esp. section 9. Perhaps I should add in this note that, despite my rhetoric in this paragraph about “real groups” and “actual groups,” when it comes to the metaphysics of this, rhetorical groups are as real and actual as these other kinds of groups. But we must make a distinction somewhere, rooted in differences in practice (in this case, practical control), even if it shouldn’t be at the level of ontology. And for my current purposes, we needn’t think it through any further. However, if one wanted a taste of the direction I would go, see an old discussion of Rorty and metaphysics in “Philosophy, Metaphysics, and Common Sense.” That paper moves through a discussion of Socrates, Plato, Robert Pirsig and Rorty on how to define philosophy, and what is distinctive about it, given my shift in thinking and approach, is that it tries to translate problems in metaphilosophy into practical problems of behaving in the world. The discussion of Rorty is toward the end, starting with a paragraph that begins “Rorty treats professional philosophers the same way.”

[6] The saddest story to my ears was, of course, about the professor: in this case, the black political scientist whose class on black politics was boycotted by the black chairman of the Black Studies department because the latter thought the former “might not sufficiently represent the Afro-American point of view.” See Politics by Other Means, 23-26. What’s sad about it, I think, is not that the chairman had a view about the class—the proliferation of opinions and views and their friction with each other is the essence of Milton’s hope that truth will win in a free and open encounter—but that he led the particular boycott he did, meaning he lobbied the undergraduates in his own class to drop out of the other or get involved in protesting and pressuring other undergrads. (And this happened in the environment for which we should have the highest expectations of creating a free and open encounter—if not the university, where?)

That’s the same kind of subtle coercion at work as we see at issue in cases like Town of Greece v. Galloway, the recent Supreme Court case where an atheist and a Jewish citizen of Greece, NY sued the town for opening every town meeting with a Christian prayer. During oral arguments, Douglas Laycock—arguing for Galloway and Stephens—suggested there was coercion involved when all are asked to bow their heads or rise to their feet for prayer. Justice Scalia scoffed, saying someone who didn’t want to participate could just stay seated. Laycock responded: “What’s coercive about it is it is impossible not to participate without attracting attention to yourself, and moments later you stand up to ask for a group home for your Down syndrome child or for continued use of the public access channel or whatever your petition is, having just, so far as you can tell, irritated the people that you were trying to persuade.” (See page 37 of the transcript of the oral arguments, found here. An audio version with background of the case can be found here. My knowledge of the case is indebted to a student of mine who did excellent research on it.)

Students in the university need to be able to trust that an instructor’s politics, or other opinions extraneous to the subject of the course, will not interfere with the student’s ability to take the course and do well. It’s one thing, I think, to let your views about such things filter in through the course in various ways; it’s quite another to begin persuading your students to act on your views. It’s the second that transforms the university space from one of inquiry into one of political persuasion—and political persuasion is coercive if you can be punished for not being persuaded. (I should be honest, though: I have from time to time made a plea for students to pay attention to politics, and to make sure to vote. It’s more or less extraneous to any courses I teach, but I figure I have some sort of civic responsibility to do so.)

[7] Since my concerns are philosophical and not polemical, I’ll add here that Bromwich understands this problem, though it was not his chief concern in the space of his book. Bromwich was concerned with the effect of multiculturalism on our practices of higher learning, and particularly the practices of literary study, and not with offering an abstract account of authority. He does, however, have all the resources for one in his chapter “Reflection, Morality, and Tradition” and mounts a short version of it in Ch. 5 with respect to aesthetic judgment, and otherwise does nothing to undercut the possibility of pulling a more elaborate one together. (I don’t here pull one together, but I think Robert Brandom has made available the conceptual resources to do so. In sections 2 and 3 of “On the Asymmetry” (cited in note 5) I give an outline of the main notions at work.) The line of thought I’m interested in is, taking for granted that an Emersonian account of authority is possible, how does that affect our assessment of the situation Bromwich faced? For Bromwich, it is clear from the tenor of his book, was deeply embedded in his polemical situation—i.e. he was very angry and concerned about literary study in America. But as we all know, emotion can fade—and it is helpful for it to fade for us to make reflective historical judgments about whether we should still be angry, or whether we should’ve been angry.

A case in point is Bromwich’s treatment of Barbara Herrnstein Smith in Ch. 5. I’ve grown to think of Smith as a pragmatist ally on the plane of epistemology, and Bromwich’s treatment of her Contingencies of Value is, perhaps not unfair, but at least unkind. It is in that chapter that Bromwich formulates the thrust of the argument of Smith’s book as a response to an expert community’s judgment: “who are we, after all ... who are we to dismiss the person who judges the game or the work quite differently?” I’m trying to give, in this brief space, a sketch of who the “we” is that produced multiculturalism and whose claims have a certain equal standing to the expert community. I think Bromwich is right about Smith, that if she turns her epistemological pragmatism into Idiot Questioning with a political point, then she’s undermining the very idea of expert communities—which is disastrous. However, I also think that a better understanding of just what the issue is that divides the Emersonian Bromwich from the Marxish multiculturalists can give us a better idea about what the real problem is.

[8] Bromwich discusses the equivalent of a loyalty oath in English departments on 26-29. I should say here that there’s another, slightly different point that Bromwich agrees with Rorty on here, and that’s the view that the cultural left seemed to think that by doing their academic work they were doing political work. So, if one spends one’s days deconstructing a text (in class or at the computer), exposing phallogocentrism by showing how Woman is in a marginal position, or if you spend it uncovering the capitalist ideology that is really the motivation of a character in a story—then you needn’t attend a rally protesting the very idea of “forced rape” (is there another kind?) or sign a petition for raising the minimum wage. In the words of David Hollinger’s slogan that Rorty liked, “you gave at the office.” See Bromwich’s discussion at 223-25.

[9] One of the best reminders of this historical experience with its attendant racial wisdom is Richard Wright’s “The Ethics of Living Jim Crow,” which appears as an introduction to his collection of short stories, Uncle Tom’s Children. One way to understand the differences between Wright and the two other major post-Harlem Renaissance writers, James Baldwin and Ralph Ellison, is by the differences in their geographical experience. Wright grew up in the deep South; Ellison in the marginally southern Oklahoma City; Baldwin in Harlem. Wright’s pessimism about being black in America—epitomized in his unforgettable description in Black Boy of it as the “essential bleakness” and “cultural barrenness of black life”—drew objections from Baldwin and especially Ellison. Both Baldwin and Ellison sound the polemical notes—Baldwin in his scorching “Everybody’s Protest Novel” and Ellison in “Richard Wright’s Blues”—of autonomous maturity as against what they take to be the short shrift Wright gives black prospects. But it’s possible to see this difference in perspective as one of different experiences—Ellison in particular never experienced the harshness of the Southern experience of being black. Growing up black anywhere in America produced trauma, but it’s important to distinguish between the kinds of experience the different regions produced. (I should also add that Ellison’s different experience didn’t stop him from producing an equally unforgettable literary epitomization of Southern black experience in the opening chapters of Invisible Man.) The fight that occurred between Ellison and Irving Howe in print about this issue in the early sixties is probably one of the most enduring polemical exchanges between great minds I know of.
Polemic usually causes writings to date themselves, but as Howe suggests in his wonderful reflections on the exchange, their attitudinal differences and the problems raised by both pieces have remained, and prove immensely useful to think through ourselves. Ellison’s piece, “The World and the Jug,” was collected in Ellison’s Shadow and Act, and was a response to Howe’s defense of Wright against Baldwin and Ellison, “Black Boys and Native Sons.” The latter should be read as it has been collected in Howe’s Selected Writings, 1950-1990, with its two retrospective addenda from 1969 and 1990.

[10] I’m not going to try to unpack the significance of the thin/thick distinction. It has played an increasingly prominent role among a series of thinkers and I think we’ve only begun to understand the distinction’s utility in conceiving the relationship between conceptual thought and politico-moral community. Thin/thick attempts to play the role once played by abstract/concrete, but in an attempt to avoid some of the dialectical seesaws of nominalism and platonism. The idea was seeded by Clifford Geertz in his famous essay, “Thick Description: Toward an Interpretive Theory of Culture” (1973; included in The Interpretation of Cultures). It has lived a life in many, but the most important for my purposes are Rorty’s use of the distinction in CIS to articulate the concept of “final vocabularies” (see 73) and Walzer’s usage in his little book Thick and Thin: Moral Argument at Home and Abroad (1994).

[11] I think this is one of his most visionary essays that we’ve yet to mine completely of insight. Much of Rorty’s later work, as he readily admits, was repackaging of earlier ideas for different audiences. Only occasionally does Rorty find himself in a position to formulate a new insight in this kind of work, since a good portion of it was also carrying further conversations with old interlocutors (e.g. Putnam, Habermas, etc.) or unimaginative ones (e.g. the many responses-to-critics books that Rorty took time to do). (I don’t mean to devalue either kind of work, particularly the latter; garden work is important to do for both sides—you can’t always be breaking new ground.) But the essay on feminism puts him into many interesting, new dialectical positions that produce some interesting reflections on pragmatism. One will find the general form of the argument I’ve made above about an individual’s self-esteem on TP 219.

[12] Against the background of his infamy as the supposed leader of a rebellion against analytic philosophy, Rorty once recalled that “my parents were always telling me that it was about time I had a stage of adolescent revolt. ... They were worried I wasn’t rebellious enough” (TCF 4).

Friday, July 25, 2014

Touchstones

1.     
Indeed there can be no more useful help for discovering what poetry belongs to the class of the truly excellent, and can therefore do us most good, than to have always in one’s mind lines and expressions of the great masters, and to apply them as a touchstone to other poetry. ... [W]e shall find them, when we have lodged them well in our minds, an infallible touchstone for detecting the presence or absence of high poetic quality, and also the degree of this quality, in all other poetry which we may place beside them. (Matthew Arnold, “The Study of Poetry”)
David Bromwich calls this Arnold’s “touchstone theory,” and it’s the locus classicus of the neo-cultural conservative’s idea of the “Western canon.” I don’t think there have been many actual mandarin cultural conservatives who have wielded this obviously silly theory—not even Allan Bloom, I should think—but one does find it in any number of the jingoistic defenders of “culture” that pass for American neoconservative intellectuals. I suppose it is not hard to tell how little use I have for them. The silliness is encapsulated in “infallible”—really? Do no wrong? Worse than thinking that anyone or anything (your child, say, or your guru) could be infallible is the epistemological problem scared up by the application of the concept: if infallibility is the prize, how does one get to be one of these touchstones? The trick is that if there were independent criteria for the touchstones, then we wouldn’t need the touchstones. So the problem is precisely how to pick out a touchstone when all you have as criteria are your touchstones. At best, that’s a deaf and dumb cultural conservatism that no one has ever followed, even if it’s what comes out of their mouth.

2.      Matthew Arnold’s legacy somehow got bound up with T. S. Eliot’s cultural and political conservatism, and both of them with what literary critics call New Criticism. Both Arnold and Eliot are more interesting than the Theory Revolution of the ‘60s and ‘70s gave them credit for, and New Criticism generally got a bad rap. The effect of this triple rejection was to make reading an impersonal activity. This is ironic, given that Arnold’s touchstone was there precisely to correct against personal taste in favor of disinterestedness. However, the Theory Revolution was in part an attempt to correct against the un-disinterested blocking of candidates to the “canon of great literature” such as, say, women, blacks, gays, etc., etc. It was Anti-Old White Dude. And because the touchstone theory so obviously blocked from view any actual criteria for induction into the canon, universalist and unhistorical criteria of all kinds became suspicious. This is how formalist New Criticism came to be rolled up in the rejection. New Criticism was supposed to be just you and a poem, performing an act called “close reading,” but the Theory Revolution said, quite rightly, that you and the poem are always against the backdrop of historical circumstances. Texts are always in contexts—we cannot shut our eyes to it as the New Critics advise.

But with the Theory Revolution, the combined effect that some onlookers have put in different ways was to subordinate the text to the context—to make the context the primary object of investigation. This was largely an unintended consequence, but it’s why when you obligatorily study literary theory now, you learn theories as different modes of unpacking texts. Many of these courses are taught with one central text, and then each section you learn and apply alternately Freud, Lacan, Derrida, de Man, Kristeva, Greenblatt, Foucault, etc. This is fine as far as it goes, but what happened in the journals was that the focus became the jockeying of modes. Battles over interpretations became implicitly battles over theories (or even, over one’s favored theory-guru), rather than battles over readings of texts. The texts themselves got lost. [1]

What happened with Arnold, Eliot, and New Criticism is a tempest in a teapot. The only reason it might be of concern outside of the discipline of literary studies is because of a phenomenon Harold Bloom crystallized years ago in a passage Richard Rorty liked to quote:
The teacher of literature now in America, far more than the teacher of history or philosophy or religion, is condemned to teach the presentness of the past, because history, philosophy and religion have withdrawn as agents from the Scene of Instruction, leaving the bewildered teacher of literature alone at the altar, terrifiedly wondering whether he is to be sacrifice or priest. (A Map of Misreading 39)
I take it Bloom means that the torch of humanism, or “liberal education”—i.e. the reason why you have to take General Education credits—is now being held by only denizens of English departments. The 400 meter relay has turned unexpectedly into a dash. Rorty quoted the passage in Philosophy and the Mirror of Nature in 1979 because he thought he was witnessing (or rather, had witnessed) the unfortunate abdication of this responsibility by his colleagues in philosophy. What he didn’t realize was that literary studies was heading in the same direction. [2]

3.      The problem with the state of literary studies as it relates to wider responsibilities to the community is that English departments teach students sets of reading practices. [3] They still do, but these practices are different than the ones they used to teach, and it seems to me that many of them are arcane. One of the bits of useful wisdom flattened out from Derrida—or really, the general milieu of mid-20th century French theory—was that the world is usefully seen as a text, and our relationship to it as one of reading. One thing this highlights is the importance of assumptions in guiding reasoning, for those assumptions can give rise to different interpretations of the same phenomena. Once we flatten reading out this way, however, then all the humanistic disciplines, and really, all the disciplines generally, teach students different sets of reading practices, practices with which to read the world around them, practices that help shape and give meaning to the world. [4]

My concern here is with something like a “touchstone practice of reading.” This is something I think was much more natural to previous generations, and is something like what we meant by “erudite.” We don’t get it anymore from our English departments because too often we aren’t encouraged to have a personal relationship with the text. It’s too analytical. Who cares about analyzing the (obviously sexist) gender dynamics of Richardson’s 18th-century novel Pamela? It becomes an exercise in telling you something you already know—though maybe you didn’t. One can find wisdom this way, but it’s an indirect search for historical distance that then becomes the true object of the exercise. And it’s that very indirectness that attenuates the probability of learning the lesson and of (impatient undergraduate) students having the patience required.

Having a personal relationship with a text means being allowed the space to not get anything out of it. This is the problem with teaching literary studies in the educational setting: you have to give grades, which means you need to have assignments, which means you need to assign them tasks to complete. That inherently turns the relationship with the text into a pragmatic, utilitarian relationship: “okay, text: I need something from you—so please hand it over.” That’s not a great way to begin a friendship. [5]

4.      What I have in mind as a touchstone practice of reading comes in three different species. The first is roughly the one Arnold professed: “have always in one’s mind lines and expressions of the great masters.” This species (unbelievably) is like what Bloom had in mind when he defined antithetical practical criticism as “the art of knowing the hidden roads that go from poem to poem.” [6] For this species, the practice of close reading is very important. I’ll call this monumental reading—passages fall off from the textual whole, fragmenting themselves, and landing like epitaphs to considerations one finds by the way-sides. Typically, these passages stick in your craw; they generally find you, not the other way around. Sometimes a great reader will show how a passage sticks; great readers help you read by making inroads, showing off the difficulty while at the same time making it easier. These aren’t keys to locks, but more like maps to difficult terrain—you are still called upon to travel the distance. The trick with great readers is that their fragments become yours, though the fragment feels as if it was always yours.

The practice of close reading is important because the only way to turn on a touchstone fragment is to read it. In the process of reading it one calls out the power one sees working in it—a thought, an image, a theme, a feeling, an argument. The reason one seeks such power, of course, is to use it. But this isn’t the pragmatic, externally imposed use one finds in exams (though great teaching plants the seeds of this internalized use). Touchstones help you think, often by antagonism. Take Emerson’s “one can’t spend the day in explanation” (“Self-Reliance”). This bugs me deeply. One can’t spend the entire day explaining oneself, but couldn’t this turn into a copout? The problem with the fragment is that it is covering just the bit that seems so deeply subversive: Emerson’s heroic plea for solitude—whim. “I shun father and mother and wife and brother when my genius calls me. I would write on the lintels of the door-post, Whim.” Nothing poses the problem of genius so distinctly. What you do best, your vocation, your calling, your inner self—all of your explanations and defenses of your withdrawal from your responsibilities could amount to nothing more than a whim, to caprice, to a conceited fantasy that is a fig leaf covering naked desire and mere wish-fulfillment. But what else is there to do? You can’t spend the day in explanation.

In monumental reading, the power that is pulled is put to use on either yourself or another fragment. All that is required for monumental reading is that the two touchstones cross the same commons. And this generally happens unexpectedly, and only after much practice. Do readings like the above riff on Emerson happen the first time around? Generally not, not on call, or a whim. It’s because the passage has been bouncing around my head, and against other passages, for so long. And this “bouncing” is writing. The transition from reading to writing is where the power occurs—the process of reading and writing, the friction between the two, is what generates the power. Otherwise the text will sit inert. It is the process of transitioning a text from something read to something written that allows other fragments to flow in. It is only by reading the fragment above that a break might open up and, in this case, a Freudianism might slip in. Will it pay off? Will Freud help me understand Emerson better, or the two of them to understand genius or explanation or the balance between solitude and society? Perhaps. But only if I move to read Freud.

5.      The second species of touchstone reading is located in the notion of the figure. I adapt the concept from crossing Rorty with Lionel Trilling. Here’s Rorty on the literary ironist’s contextual definition of a figure:
We ironists treat these people not as anonymous channels for truth but as abbreviations for a certain final vocabulary and for the sorts of beliefs and desires typical of its users. The older Hegel became a name for such a vocabulary, and Kierkegaard and Nietzsche have become names for others. If we are told that the actual lives such men lived had little to do with the books and the terminology which attracted our attention to them, we brush this aside. We treat the names of such people as the names of the heroes of their own books. We do not bother to distinguish Swift from saeva indignatio, Hegel from Geist, Nietzsche from Zarathustra, Marcel Proust from Marcel the narrator, or Trilling from The Liberal Imagination. We do not care whether these writers managed to live up to their own self-images. What we want to know is whether to adopt those images – to re-create ourselves, in whole or in part, in these people’s image. We go about answering this question by experimenting with the vocabularies which these people concocted. We redescribe ourselves, our situation, our past, in those terms and compare the results with alternative redescriptions which use the vocabularies of alternative figures. We ironists hope, by this continual redescription, to make the best selves for ourselves that we can.

Such comparison, such playing off of figures against each other, is the principal activity now covered by the term “literary criticism.” (CIS 79-80)
I think one can see how everything I said about reading and writing gets reapplied in this context. The reason why this form of abbreviation makes sense is because Rorty thinks of individuals as “incarnated vocabularies” (80). I want to call this species of touchstone reading dramatic reading. These are tales of the mighty dead doing battle with each other on the stage of your imagination. This is one reason why having a personal relationship with the book is important: it’s how the stage gets set up in the first place. And though we might always reserve the right to have personal favorites, our pets and darlings, Trilling’s criteria are important for rendering the imaginative space of the ironist:
Instructed and lively intellects do not make pets and darlings and dears out of the writers they admire but they do make them into what can be called “figures”—that is to say, creative spirits whose work requires an especially conscientious study because in it are to be discerned significances, even mysteries, even powers, which carry it beyond what in a loose and general sense we call literature, beyond even what we think of as very good literature, and bring it to as close an approximation of a sacred wisdom as can be achieved in our culture. [7]
“Spirit” seems to me a very precise word in this context. Turning an author, or character, or any individual for that matter, into a figure is very much a process of spectralization—our imaginations are haunted by these spectres. And very often we don’t know why at first. People are figures in another very precise sense as well—the names are figurative, they are symbols, tropes, metaphors. That’s why the author’s actual life matters little in this kind of reading, for their afterlife is in our minds—and in, I should add, how we use them to change the world.

6.      There are four things that speak against dramatic reading that I think are important to retail. (Some of these apply equally to monumental reading, though I won’t specify which and how.) Three of them are important. Two are tied to one’s opinion of the intellectual current we still call romanticism. The third is a pragmatic concern about the process. The fourth is technical, and important only in regard to how we are taught literature today, for it only makes sense in the context of some opinion about romanticism.

To disentangle what I regard as four separate objections, it will be helpful to note that Rorty’s notion of a “figure” is derived from Trilling. (So, I’ve really crossed Trilling with Trilling.) Rorty quotes the above passage from Trilling at the end of “Nineteenth Century Idealism and Twentieth Century Textualism” in Consequences of Pragmatism. The essay is an important precursor to Contingency, Irony, and Solidarity, and in it Rorty identifies the proclivity to figuralize with romanticism, and notes that Trilling distrusts this tendency. This distrust, Rorty says, comes from an instinctive democratic attitude. It’s the same Rousseauvian attitude that motivated Kant’s understanding of morality. Everyone, Kant thought, could act morally because morality was relatively simple. You didn’t need to study for it. So the idea that there’s “a sacred wisdom which takes precedence over the common moral consciousness” (CP 158) repulses people like Kant and Trilling. The elitism in the figuralizing process is located in the “conscientious study” bit, what Trilling calls “the redemptive strenuosities of the intellectual life.” This makes whatever it is the “instructed and lively intellect” is after an esotericism, the kind of thing that breeds priesthoods.

The democratic attitude’s anti-esotericism is one source for being suspicious of figuralizing, and thus dramatic reading. Another, slightly different source is suspicion of hero-worship. One can see in hero-worship a kind of elitism, but the suspicion of heroes is more general than anti-esotericism. The Founding Fathers aren’t priests, though their image seems sacred. Esoteric priesthoods arise in response to the sacralizing process. You need a hero first, then you can have a priest. Because this suspicion of heroes is more general it has manifested itself in different ways. One important way is the movement by historians of different grades to “expose” the Founding Fathers and the myths that have clothed them. Another way, typically found in the academy, is the attitude of knowingness. Rorty calls knowingness “a state of soul which prevents shudders of awe. It makes one immune to romantic enthusiasm.” [8] In the academy, this knowingness often attends having some theory of human motivation: utilitarian, Freudian, Marxist, Foucauldian, etc. You always know why people really do things when you have such a theory. But sometimes it manifests itself as ironic detachment—sometimes numbed, sometimes haughty, but always dry.

7.      The suspicion of heroes that attends the historian’s exposés is motivated by a good fear of whitewashing history. The figuralizing process does move you away from the “what happened” to something we can just call “significance.” Good history needs to make that move or else all one has is a chronicle, like the early tablets recording how much grain was taken in by taxes. But if you get caught up in the significance of events, you can sometimes lose your hold on those events, like a balloon having its tether cut. This is the third objection, which is pragmatic. Recall Rorty’s description of the figure as an abbreviation of a vocabulary. The pragmatic concern is that the figuralizing process tends to whitewash events and persons because you forget (or never learned) how you got to the abbreviation.

This is a real concern, just as the threat to our democratic attitudes and the prospect of heroes turning into gods are. The pragmatic concern, however, is where we see the first two concerns come into conflict. Because the only way to make sure heroes don’t turn into gods is to make sure that the historical tether stays in place. But we don’t have those tethers at our finger-tips—it requires historical work, conscientious study. [9] And so generally, returning to dramatic reading, the only way to fight against whitewashing is to read again. Rorty was often criticized for taking liberties with his figures, and I think rightly so sometimes. Sometimes you need to stop and unpack that abbreviation to make sure it’s the thing you say it is. Dramatic reading requires conscientious study because it’s the only way to form the historical sacred: heroes with warts, mysteries that aren’t mysterious because ineffable (and thus watched over by priests) but because they are difficult (and thus to be worked on by honest inquirers).

8.      Trilling’s concern was that books were being taken away from what Virginia Woolf called the “common reader.” This concern has been resurrected by anti-Theory Revolution literary critics like Harold Bloom, Robert Alter, Andrew Delbanco, and Bromwich. Trilling was worried that academic knowingness was creating distance between the books and nonacademic readers, the uninitiated. This is what’s behind my lament in section 3 that we aren’t encouraged to have personal relationships with books anymore. As scholars, we need, quite rightly, to know things about texts. But this can get in the way of why we should read the text in the first place. And this is how the unimportant technical objection to the figuralizing process comes up. It’s unimportant because it can be brushed aside, but it is important insofar as English professors are the only ones at the Altar of Humanism in the Scene of Instruction. Before I got into all this, I knew someone who’d dropped out of UW-Madison’s first-tier literature program. Why? “Because I didn’t enjoy reading anymore.”

The technical objection comes up when you recall Rorty saying that ironists “treat the names of such people as the names of the heroes of their own books. We do not bother to distinguish Swift from saeva indignatio, Hegel from Geist, Nietzsche from Zarathustra, Marcel Proust from Marcel the narrator, or Trilling from The Liberal Imagination.” Perhaps there is no problem with Swift, Hegel, or Trilling, but identifying an author with a narrator is a big mistake we teach all students in Literature 101. The author is not the narrator, the poet is not the speaker. An older style of criticism often didn’t care, and made such identifications willy-nilly. But, technically, we need to pause before making those inferences. “The speaker” is a technical device for referring to the voice out of which the poem emits, and it’s important because identifying traits of that voice is terribly important to figuring out what the poem is about. But if you too quickly assume that the voice is the poet’s, you might import all kinds of things you know about the poet into your understanding of the poem—and it might be wrong. What if a male poet wanted to write from a female point of view? What if a typically optimistic poet wanted to write a poem that is ironically tragic, but you miss the irony because you too quickly assimilate the poem to your assumed picture of the poet?

“Narrator” and “speaker” are useful devices to check your entitlement to inferences about the author or poet from material found in their books. Rorty says the ironist is blithe about this, but the devices are important to the figuralizing process insofar as the Hawthorne of The Scarlet Letter might be a different figure from the Hawthorne of The Marble Faun. They aren’t really, I would assert, but you won’t know that until you do the work. The trouble with many academicians, though, is that they use the technical point to implicitly block the process of figuralizing, to stop dramatic reading. Dramatic reading is difficult, and requires time, but its excesses need room to spill so that the seeds of a love of reading can be sown. Getting bogged down in technical details can strangle it in the crib. It’s attending to the how while forgetting the why, the inverse of the problem of figuralizing. This implicit blocking only makes sense in the context of an anti-romanticism. And indeed, all-too-knowing exposers of the past have run rampant over many of our studies, as if the reason to study Emerson were to expose some of the racist attitudes that still attended the prophet of self-reliance. [10] The knowing attitude wants a land without heroes, and perhaps only demons. Dramatic reading is antithetical to that.

9.      If monumental reading is about identifying with pieces and parts and dramatic reading is about identifying with the whole, then the last species of touchstone reading combines the two. I’ll call it afflictive reading. This is the kind of reading and writing that is marked by an obsessive return to the pieces and parts of a single figure. For whatever reason, returning again and again to read and relate the writings of this figure helps you identify who you are and where you are. Afflictive readers aren’t necessarily group-worshippers; they might never want or need to talk to others about their obsessions. Their imaginations are simply haunted, afflicted by the presence of this other imagination, for good or for ill, as demon or hero. Sometimes the obsession takes the form of an implicit inquiry, an extended, attenuated etiological investigation to discover just what the reason for the obsession is.

Afflictive readers, like all obsessives, can be really annoying. Talk about anything, and their obsession is bound to come up. When I was young, my high school Sunday school teacher asked us to think about what the difference was between a cult and a religion. His answer was that being in a cult meant, at the limit, being a member of only one group. If Erin—our representative cheerleader—was only a cheerleader, then you might say she was in a cheerleading cult. But no, she’s a cheerleader, a Christian, a member of the Hill family, etc., etc. The spiritual affliction of a single figure can become cult-like in this way—you might only have one hero, one affliction. The best remedy for this is to read widely. Maybe you continue to return over and over to the same person or thing, but you will at least have the perspective to see better why your obsession is worth obsessing over. Hero-worship is best if it’s pantheonistic. Even if you have a Zeus, you’ll only understand why he’s Zeus by comparison with Apollo and Athena.

I’d like to think that touchstone readers with their obsessions are in some way better off than those without such obsessions. But my democratic fiber resists the thought. In colleges, we like to think we’re there to teach “critical thinking,” but when it comes down to it, critical thinking isn’t ours alone. But still...there must be value in some people being capable of long chains of inference, in some people devoting themselves to conscientious study to make sure the tethers stay in place. Jefferson and Hamilton thought education to be massively important to the democratic process, and the laments of the academic class about our continued illiteracy have to be understood as Jeffersonian expressions. [11] The fear some of us have about literature is that a formerly important instrument of self-enlargement might be passing with no replacement. It isn’t all the English professors’ fault, but some of them aren’t helping.




Endnotes

[1] I’ll add that this made critical battles essentially philosophical battles. And philosophical battles fought by people untrained in philosophy are dangerous—like giving automatic weapons to people who haven’t been to the firing range, innocent bystanders and users alike are going to get hurt. The great always rise to the top—Stanley Fish, for example, is as philosophically sophisticated as they come, with no philosophical training. But so much of it is dreadful, and written with a brazen self-confidence that one finds awkward if one has read any philosophy at all.

[2] There are, however, hold-outs everywhere. I had several professors of history at UW-Madison as an undergrad whom I would count as having held that torch, but they were largely old. Indeed, there were several political theorists in the Poli-Sci department whom I would also count. The theorists were anomalous (and all fled or retired), but my experience with history professors leads me to think that history hasn’t quite left the altar. Bernard Bailyn, an eminent and aging historian, in a short pamphlet recording an extended interview, On the Teaching and Writing of History, gives me hope that undergraduate history education is happening as he teaches and thinks about it—after all, he has had many graduate students.

[3] There’s something also to be said about the production of K-12 English teachers from university English departments, and thus the relationship of what has happened at the university level to K-12 education. But my sense of this is dim, and it is further mediated by my sense that K-12 education has in general much bigger problems having to do with funding. In my experience in a School of Education, though, my sense was that the prospective English teachers—who were, after all, only going to have a B.A. in English—hardly knew who Derrida or Foucault were, and didn’t care if they did. But if you were going to get a job at the university level from 1980 on, you had better. The relationship between lower and higher education involves less trickle-down than some cultural conservatives seemed to fear in the Culture Wars of the 1980s and ’90s. (For more on the Culture Wars of recent memory, see my forthcoming “The Legacy of Group Thinking.”)

[4] For two good statements by Rorty of what happens when you treat everything, including fossils and other lumps, as a text (and how unradical at a disciplinary level such a redescription is), see the third section of “Method, Social Science, and Social Hope” (in CP) and “Texts and Lumps” (in ORT).

[5] There’s something further to be learned about the type of person that does, then, go in for academic, humanistic study. After all, if it’s the utilitarian and pragmatic reader fulfilling short-term goals that is rewarded at the lower levels (writing papers, finishing a class), then since there are no short-term goals fulfilled by going into it as a business (i.e. money), who is it that is going into humanistic study? What kind of profile do these graduate students have? I think one answer that goes a long way to explaining why all these kinds of post-Theory Revolution alternatives to humanistic reading took off is power—some people enjoy the feel of dominating a text. (It’s similar to the feel of winning an argument.) Power is the impulse at work in the strong poet of Rorty’s Contingency, Irony, and Solidarity, which for Rorty is the Shelleyan unacknowledged legislator of the world. So one explanation is that these theory-gurus started purveying strategies for dominating texts (some more unintentionally than others). The theories appear difficult on the outside, but once you get the hang of deconstruction or Foucault, it becomes easy to endlessly reapply to texts—thus, each time, giving you that high of domination. Combine that with the mid-range reward of a job for being able to do this easy thing, and you have what happened to English departments. It happened to philosophy as well, when logical positivists declared that philosophical problems were really linguistic problems—thinking that dissolving the philosophical problems would be really easy after that. However, in both cases, what gets you high easily at first becomes more difficult after a while if you take too much of it. If that isn’t precisely the case at the individual level, it is at the institutional level, with people increasingly wondering whether there’s a larger point to the activity besides the ephemeral power-high.

Power, too, seems antithetical to being friends, and that’s a deep question Rorty’s perspective opens up on the practices of reading. There’s nothing friendly about the strong poet’s approach to opening up new vistas of thought (though it is quite personal). But must we all be strong poets, and thus unfriendly? Must we be strong all of the time? The patterns I outline as touchstone reading below are, I believe, useful to all readers, and especially the amateur connoisseur who has no interest in power (or, very little at least). One might, however, in response to the impulse of power located in Rorty’s conception of the strong poet, deny the premise that power, and thus cruelty, is at the heart of our creators of new vistas of thought. The denial of this premise can be found in many post-Marxist utopic visions (those that turn away from Marx’s violence), but it is especially common in feminist thinkers. This is the line of thought in Dianne Rothleder’s book, The Work of Friendship: Rorty, His Critics, and the Project of Solidarity (1999). As I hope I’ve suggested, I take this to be a very fruitful line of thought to take to Rorty’s work. For example, Annette Baier’s feminism, praised by Rorty, could be brought into closer contact with Rorty’s work: in particular, her work on the concept of trust and its masculinization through its assimilation to the contract rather than to the personal relationships of a family.

However, Rothleder takes the easy route of criticism, rather than pushing Rorty to his limits. She says, “What we need to ask is why the ironist would want to redescribe others in terms that, if made public, would humiliate?” (64) The answer implicit in her discussion is power—we would do it because it makes us feel powerful. And we need that power, in Rorty’s vision, to overturn the bad in the world. Rothleder, I think, avoids saying “power” out loud, though, because she isn’t convinced that power is needed for the work of revolution (whether conceptual or political). The ironist would risk humiliating because what the ironist really wants to be is a strong poet. (Rothleder conflates the two, but that’s mainly Rorty’s fault for having deployed the terms somewhat inconsistently.) But Rothleder avoids facing the problem of utopic change without power by calling the strong poet’s impulse the “Bloomian desire to destroy otherness in order to be original” (64). That’s true, and self-centered, but that originality and cruelty might have great public utility is left unsaid. So Rothleder needs to deny the premise that change requires power, and is thus in some sense a cruel act. Instead, Rothleder focuses on the terms of cruelty and humiliation, and their forced semi-privacy in Rorty’s utopia, saying, “what is sad about the reasoning here is that the desire not to be cruel seems to come not from a goodness of heart, but from a fear of one’s own suffering” (64). Rothleder is correct here, and cites CIS 91 for evidence, but she again avoids the more promising avenue of reflection, which is that this idea isn’t Rorty’s, but the feminist political theorist Judith Shklar’s. Shklar didn’t think this was sad, but the centerpiece of liberal thinking about politics—the “liberalism of fear” (from the essay of that name, collected in her Political Thought and Political Thinkers). 
Rorty’s line of thought about this, whether heart or fear, is in fact more complicated than Shklar’s seems at times, because of Rorty’s commitment to moral sentimentalism. (See especially “Human Rights, Rationality, and Sentimentality” in Truth and Progress.)

What further stunts Rothleder’s take on Rorty is that she misses Rorty’s own tenderness over and against the power-mad strong poet. The image of Proust is juxtaposed with the power-strong Nietzsche-Heidegger-Derrida sequence in order to talk about the difference between a tender focus on “beauty” and a tough focus on “the sublime.” It is these passages in CIS that have gone underrecognized in attempting to understand Rorty’s strong poet and ironist, and they bear directly on practices of reading, for beauty is to friendship what sublimity is to power. Read this discussion about power against my discussion of Rorty at the beginning of “Asceticism and the Fire of the Imagination,” particularly keeping in mind the line “a lyric which you recite, but do not (for fear of injuring it) relate to anything else.” I don’t talk about power there, but the issues run perpendicular to one another across these central passages.

[6] Bloom, The Anxiety of Influence, 96. Bloom did not mean just allusion, or a narrow definition of it, but something much more pregnant and difficult to make out. Reading through Arnold is one way of putting this dark line to use, though I do not think it travels as far into the darkness as Bloom wished.

[7] “Why We Read Jane Austen” in The Moral Obligation to Be Intelligent, 519

[8] “The Inspirational Value of Great Works of Literature” in AOC, 126

[9] You might wonder about the rejection of heroes entirely, which came in two flavors. I regard the form of knowingness that attends theories of human motivation as anti-democratic esotericism (did you understand Foucault right off the bat?), and so disregard such theories here, because what I’m concerned with is the attempt to be anti-esoteric with regard to people. The problem, I’m suggesting, is that it takes quite a bit of work to get to know people.

[10] Edgar Dryden lamentingly retails some of these attitudes in the recent history of Americanist criticism at the beginning of his Monumental Melville.

[11] I’ve learned most about Jefferson and the other Founders from Judith Shklar. For a review of her use of Madison, Hamilton, Jefferson, and Adams, see “Shklar’s Vision of American Political Thought,” sections 5 and 6. What’s most interesting to me about the antithesis of Jefferson and Adams on education is how education comes out—as it has here—as anti-democratic in some manner, though in Hamilton’s sense education and information are what make things more democratic. And indeed, that seems a core American democratic value—the right to education as egalitarian.

Friday, July 18, 2014

Absurdity and the Claims of Others

1.      Every once in a while, I receive an email from people who have read one of my Pirsig essays at moq.org, each of which has an “author’s note” in the header that gives my email and a playful request for all forms of feedback. Often I have to explain that I wrote them many years ago, and no longer think many of the things they contain. [1] The essay, bar none, that I receive the most emails about is one I wrote on Camus for a philosophy class on existentialism, “Absurdity and the Meaning of Life.” Usually, the people have no interest in Pirsig because they found the essay by typing the relevant keywords into Google. I have gotten many different kinds of response to this essay. [2]

2.      I recently received a response that touched an interesting chord. It consisted simply of a quoted passage from the essay—a passage from Camus and then my response—and a series of rhetorical questions. (No salutation, no closing signature; one of the oddities of internet life I’ve had to grow accustomed to.) Like so:
“Rising, streetcar, four hours in the office or the factory, meal, streetcar, four hours of work, meal, sleep, and Monday Tuesday Wednesday Thursday Friday and Saturday according to the same rhythm—this path is easily followed most of the time. But one day the “why” arises and everything begins in that weariness tinged with amazement.”
In this way Camus seems to be merely pointing out the absurdity of some people’s lives. I can think of several people in my life that don’t fit into this simplistic mold. My sister, for example, prefers to live her life as something of a free spirit. She works when she needs money, sleeps rarely, and parties a lot. This isn’t quite Camus’ point, however. Camus would certainly argue that my sister is indeed in a pattern that could easily be questioned. Why party all the time? Why not work instead? Why anything at all? This last question is what Camus is driving at. [3]
What if we have to support our family?
What if we are bored of life?
What if we have no hope to live?
What if everything seems nothing but absurd?
What if you have no one to love you?
I found this very interesting, because it struck me that there was a tension in the questions. Some of the rhetorical questions point to an answer in the Cosmic Christ, and given my mysterious writer’s email handle, I suspect that was the intended effect. What if you’re bored, what if there’s no hope, what if everything seems absurd, what if you have no one to love you? Jesus Christ can do all of those things for you—He can love you, give meaning to your life, enter you into an exciting project to structure your earthly time. He is the substance of hope, the evidence for things unseen.

3.      The first question, however, doesn’t fit that pattern: “What if we have to support our family?” I like Louis CK’s version:
Whenever single people complain about anything, I really want them to shut the fuck up. First of all, if you’re single, your life has no consequence on the earth. Even if you’re helping people aggressively, which you’re fucking not, nobody gives a shit what happens to you. You can die, and it actually doesn’t matter. It doesn’t. Your mother will cry or whatever, but otherwise, nobody gives a shit.
I can’t die; I got two kids and my wife doesn’t fucking work, so I don’t get to die. [4]
I think what this points to is a different pattern of possible answers to those other rhetorical questions: finding a place in the web of life that makes up the connections we have with others is a way of giving meaning, finding hope, and structuring your earthly time. If you’re looking for love, you can find it in your fleshy neighbors.

And this, now, is the philosophical response I would give to Camus. The problem I took up in that paper was produced by the intersection between the seeming absurdity of our day-to-day processes of living and the death of God. Camus saw that there was a problem riven into modern life. God once played the role of framing life, of being the framework in which meaning was constituted—no longer. And the increasing mechanization of life, symbolized (and literalized) by the clock, produces the feeling of a succession of days without success, unending—up goes the boulder, and down it goes for tomorrow. In effect, these two patterns of answer—my interlocutor’s answer of Christ, my answer of fleshy neighbors—are simply two different attempts to grasp one horn over the other of the dilemma of modernity, so structured. He doubles down on Christ, denying God is dead; I double down on day-to-day life, denying it is absurd.

Camus thought the ultimate philosophical question was of suicide: why shouldn’t I commit suicide? Louis CK suggests quite plainly how taking seriously your obligations to others answers Camus’ question—his entire act is almost a concession to the absurdity, which heightens the sense of moral responsibility and obligation we continually flout but must find within ourselves if we are to be good people. Even if I wanted to die, Louis CK says, I don’t get to.

4.      Denying that life is absurd and finding meaning in the arms of others is a much more fragile pattern of answers, given that people can betray you, and no one is entitled to love from any particular person. We might be able to say that every person is entitled to love, but that does not mean that any person is entitled to fulfill that as their unique obligation.

Love from fleshy others takes work. Claiming the attention of others is as delicate a dance as the practice of reason, of the giving and asking for reasons, the exchange of assertions and claims about why you do or think a thing. Jesus’ love isn’t like that; He loves you no matter what. That, I think, speaks to the continued pull of Christian teachings. But it doesn’t speak against pouring your heart into others as opposed to the absurdly easy love of Christ.




Endnotes

[1] You can find a list of those essays and my grading of them here.

[2] One particularly weird one I had fun with I wrote about in “How Not to Start a Philosophical Conversation.” For an earlier autobiographical reflection on the “Absurdity” essay, in which I rain on my earlier self’s parade, see “Second Thoughts on Existentialism.” It has perhaps the greatest pun I’ve ever made (though “How Not to” has a good one, too).

[3] The Camus is from The Myth of Sisyphus, and the passage is from the beginning of the section of “Absurdity” entitled “Life Is Absurd.”

[4] This is from Shameless. You can listen to a clip of this joke here. I agree with the makers of that website—Louis CK is one of the most philosophically substantive comedians working today.

Friday, July 11, 2014

On the Asymmetry between Practical and Doxastic Discursive Commitments

1.      If you’re reading this, you have either a masochistic curiosity for jargon, or recognize the title as an allusion to Chapter 4, Section 4, Subsection 3 of Robert Brandom’s Making It Explicit. Either way, you need help. (Enjoy the pun.)

I intend for this to be a note on a hinge in Brandom’s systematic philosophy of language. There are many hinges, to be sure, as a quick look at the index will indicate: Brandom has a separate entry for “distinctions,” retailing 53 of them, from “acknowledging/attributing” to “weak/strong/hyper-inferentialism.” Brandom’s skill at systematization is somewhat breathtaking, requiring many stops along the way. But Brandom is also a genius of public relations. Brandom’s work, like Foucault’s, requires stubborn persistence. One reads either only by “getting the hang of it” through sustained practice, both in the reading and through writing—by trying to apply appropriately the concepts at work in the reading. Their vocabularies are strange enough to require a certain know-how through practice, but dense enough for a payoff. The only difference between the two is that Brandom was intentionally writing a system, whereas Foucault was simply a strange writer. Foucault’s vocabulary wasn’t created to fit entirely together, and many a critic has profitably broken off pieces for use. (Many more have experimented and failed, and far, far too many have emulated Foucault and ended up just writing thin ice.)

Brandom, however, has mastered the technique of imperceptible repetition—only if you’re paying very careful attention will you realize that he repeats himself quite a lot. I think one reason you don’t tend to notice this is that, given a sufficiently intricate and foreign vocabulary, you always welcome reminders of, e.g., what the importance of the difference between deontic statuses and attitudes is, or of how commitment and entitlement combine to produce a third deontic status of incompatibility. (Plus the book is 700+ pages long, so who’s going to remember?) What is particularly masterful about Brandom’s strategy is that it mimics the content of the theory. For Brandom, semantics is beholden to pragmatics, meaning to use. That means that “meaning” is a cream you skim off the top of the usage of a word, a pattern that forms after seeing a large quantity of uses of a series of marks or noises. Understanding someone’s meaning is, for Brandom, a matter of being able to deploy their vocables in just the way the other person would. Understanding a person is getting the hang of them. Reading systematic philosophy is, in other words, a lot like watching Weirdo Comedies—if you just hang in there with Zach Galifianakis, the movie promises, you’ll find him quite likable and a real nourishment to your soul.

2.      What I’m going to do in the next three sections is introduce and arrange a few of Brandom’s main ideas in order to reach the alluded-to moment in which Brandom shows an asymmetry between practical commitments to action and theoretical commitments to belief. While an analogy between the two allows for an elaboration of a number of key elements of how reason and agency hang together, Brandom hangs on this asymmetry a distinction between how we think about Truth (of our beliefs) and the Good (of what we do). In the final sections, I will speculate about motivations, philosophical and otherwise, and take issue with the asymmetry, suggesting that in the background is a dispute with his dissertation advisor, Richard Rorty, one whose consequences have to do with the soul of pragmatism. [1]

Brandom calls his philosophy of language inferentialism, and its primary benefactor is Wilfrid Sellars. The key thought here is that inference is the concept that needs priority in figuring out how language works, not reference. Reference has gotten a lot of play because of the relative triumph of empiricism over rationalism—since Kant essentially conceded that empiricism is the unguarded philosophy of science and regular life [2]—and the seemingly intuitive appeal of thinking that whatever is true corresponds (i.e. refers correctly) to the way the world is. Our understanding of how words refer, or as Brandom puts it, the representational dimension of language, has been unduly influenced by British empiricism. Sellars’ famous attack on the Myth of the Given is what results when one wants to displace reference in order to get a better picture of how reference works in concert with such other important dimensions and concepts as inference, meaning, language-use, truth, perception, and action. [3] The trick is to see that our distinctive form of negotiation with the world (as opposed to a beaver’s or a rock’s) is discursively constituted—necessarily mediated by our language-use. The world isn’t just given to us through the senses; one way we respond to the world is with the linguistic mechanisms programmed into us by socialization into a community.

Brandom’s pragmatism comes out in the form of his Wittgensteinian defense of the priority of pragmatics over semantics. Language-use is what gives rise to meaning, rather than meaning determining use. Our handle on a word, however, is normative—there are correct and incorrect uses of words and concepts. If there weren’t a pattern of conformity somewhere in the usage of a word, how would we communicate successfully? Since we obviously do successfully communicate from time to time, Brandom takes a central project of philosophy of language to be the explanation of how the trick is done (or rather, as he smartly reframes it, what would count as doing the trick—since, again, we obviously know how).

One important angle from which this pragmatism comes out, for our present purposes, is in Brandom’s adaptation of Michael Dummett’s two-aspect model of meaning. A word means what it does given “the inference from the circumstances of appropriate employment to the appropriate consequences of such employment” (MIE 117). [4] If you know how to use the word correctly and know what correctly follows from having used it, then you can be said to know what the word means. That means only saying “red” when in the presence of red (and not blue), and knowing that if you are in the presence of red, you are ipso facto in the presence of a color. In other words, understanding meaning is in the first place understanding a word’s inferential role—what inferences it licenses, what it commits you to, and what is incompatible with it.
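Brandom offers no formalism at this point, but the two-aspect picture can be caricatured in a few lines of Python. Everything below (the dictionary, the claim names, the transitive chase through consequence links) is my own toy illustration of the idea, not anything found in MIE:

```python
# Toy caricature of Dummett's two-aspect model as Brandom adapts it:
# a term's inferential role is the pair of its circumstances of appropriate
# application and the consequences of applying it. All entries are invented.

inferential_role = {
    "red": {"circumstances": {"sees_red_thing"}, "consequences": {"colored"}},
    "colored": {"circumstances": {"red", "blue"}, "consequences": {"visible_property"}},
}

def consequences_of(term):
    """Chase consequence links transitively: everything an application
    of the term commits you to, directly or downstream."""
    seen, frontier = set(), {term}
    while frontier:
        t = frontier.pop()
        for c in inferential_role.get(t, {}).get("consequences", set()):
            if c not in seen:
                seen.add(c)
                frontier.add(c)
    return seen

# Applying "red" commits you to "colored", and thence to "visible_property".
print(consequences_of("red"))
```

The sketch only captures the consequence half of the model; a fuller toy would also check that the circumstances of application obtain before licensing the term at all.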

3.      I’ve now deployed Brandom’s three major “deontic statuses”: entitlement (what you are entitled or licensed to infer from correct usage), commitment (what you are committed to inferring from previous, correct usage), and incompatibility (what you are barred from entitlement to given certain other commitments). [5] (Don’t ask what a “deontic status” is, for now—it doesn’t matter.) These statuses are like markers in the social scorekeeping game of giving and asking for reasons. We attribute to others commitments based on their behavior (in particular, saying sentences) and keep track of their entitlement to those behaviors. This is the same for our scorecard of ourselves: a “self-attribution” is simply the avowal or acknowledgment of a commitment of your own. The philosophical concept of belief—as in, the one that appears in the JTB formula for knowledge—is replaced in Brandom’s conception by commitment. This makes it clearer that, by and large, you are responsible for being justified or entitled to your commitments. [6] A belief or commitment is something you take to be true—that’s just what it is. Beliefs, your commitments, simply by being that status, are in what Wilfrid Sellars called “the space of reasons.” They must be justified...at some point.
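To fix the scorekeeping image, here is a minimal sketch of a deontic scorekeeper in Python. It is my own invention for illustration, not Brandom's (he gives no such machinery): it tracks, per speaker, which claims they are committed to, which they have given reasons for, and whether a commitment's entitlement is blocked by an incompatible commitment.

```python
# Toy deontic scorekeeper (invented for illustration; Brandom offers no code).
# Models the three statuses: commitment, entitlement, and incompatibility.

class Scorekeeper:
    def __init__(self, incompatible=()):
        # pairs of claims that cannot both be entitled
        self.incompatible = {frozenset(pair) for pair in incompatible}
        self.commitments = {}  # speaker -> set of claims undertaken
        self.justified = {}    # speaker -> set of claims reasons were given for

    def assertion(self, speaker, claim):
        """Making an assertion undertakes a commitment."""
        self.commitments.setdefault(speaker, set()).add(claim)

    def give_reason(self, speaker, claim):
        """Responding to a challenge (by inference or deference) justifies a claim."""
        self.justified.setdefault(speaker, set()).add(claim)

    def entitled(self, speaker, claim):
        """Entitled iff committed, justified, and not clashing with another commitment."""
        committed = claim in self.commitments.get(speaker, set())
        justified = claim in self.justified.get(speaker, set())
        clash = any(frozenset({claim, other}) in self.incompatible
                    for other in self.commitments.get(speaker, set())
                    if other != claim)
        return committed and justified and not clash

sk = Scorekeeper(incompatible=[("the dish is red", "the dish is blue")])
sk.assertion("Don", "the dish is red")
sk.give_reason("Don", "the dish is red")      # e.g. by deferring to testimony
print(sk.entitled("Don", "the dish is red"))  # True
sk.assertion("Don", "the dish is blue")       # an incompatible commitment
print(sk.entitled("Don", "the dish is red"))  # False
```

The point of the sketch is only structural: attributing commitments, tracking entitlements, and letting incompatibility revoke entitlement are three separable bookkeeping moves, which is all "deontic status" needs to mean for present purposes.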

This is the really important bit that distinguishes Brandom’s inferentialism from many rationalisms that are, like his, genealogically tied to the Enlightenment. Brandom thinks that the key moral concept of responsibility is interlocked with another concept that often has trouble fitting into post-Enlightenment ethical frames: authority. Authority has seemed to Enlightenment traditions a social concept that can only become part of our moral thought if it ties to a foundation made up of nonsocial concepts. God’s law, natural law, and Kantian principles are all ways of constituting this nonsocial space, a space unmediated by social activity. The social contract tradition that Hobbes initiated is an attempt to move, as J. B. Schneewind puts it, “toward a world on its own” by trying to create the nonsocial out of the social—rather than found the latter on the former, thus requiring one to find the former prior to the latter—but it got sidetracked by utilitarianism’s insistence that some hedonistic end is the only end possible, on the one hand, and Kantianism’s insistence that the only contract that can rise above the social is one universally applicable, on the other. [7] Brandom has tried to fill out our conceptual notions of responsibility and authority with an entirely social base: the space of reasons is the social game of giving and asking for reasons.

This filling out of what rationality is has meant a repolishing of the concept of authority. The best way to see this is in the role of testimony. Say Don claims that there’s a dog dish in the kitchen. Chris challenges the claim, asking, “how do you know?” Given the way we currently play the game of giving and asking for reasons, it is perfectly acceptable for Don to entitle his commitment by replying, “Because Dan just came from the kitchen and said that Fido was in there eating out of his bowl. What are you, deaf?” Don is justifying his claim, which he takes to be true, with Dan’s testimony: “Dude, your dog is totally chowing down in the kitchen...the food is everywhere around his bowl!” [8] In other words, Don is deferring to Dan’s authority as a reliable reporter of how things are via his perceptions.

Deference then goes together with inference. When justifying a claim, it is in general permitted to either display your inferential warrant or defer that warrant to another. This is also how citation works in intellectual matters. We permit people to cite the work of others in order to justify an inference. There are many (many) things to say about the particular reason-giving and consuming games we play—how sometimes it isn’t permitted to simply rest on the authority of another—but these are species of the larger genus in which deference is an intelligible possibility. This shows that people who like to deploy the saying “I have to see things for myself” are perhaps unduly restricting themselves and what they can claim to know.

4.      The last item to deploy before turning to Chapter 4.4.3 is the relationship of perception and action to inference, to the game of giving and asking for reasons. This is another area in which Brandom’s pragmatism comes out, given pragmatism’s concern with action and consequences. Brandom’s interest in Chapter 4 is to give an account of rational agency. In Section 2, above, I briefly alluded to the linguistic turn that Sellars and Brandom make with regard to thought—our negotiation with the world is inherently discursive, and hence linguistic. When Sellars demolished the Myth of the Given, however, he wasn’t suggesting that we don’t have nonlinguistic experiences—as if rocks couldn’t exist without “rock.” [9] Sellars was trying to get us to see that perception’s authority can only be constituted within the social game of giving and asking for reasons, and as such can only be articulated inferentially and linguistically, even if the perception itself isn’t linguistic.

To that end, Brandom follows Sellars in talking of “language-entries” and “language-exits” to describe how perception can begin an inferential chain and how action can end one. In the passage I will shortly focus on, Brandom also talks about the parallel between practical and doxastic discursive commitments. A practical commitment is a commitment to act in a certain way; a doxastic commitment is a commitment to (ahem) believe in a certain way. [10] I find it helpful to think of these two as belief-action relations and belief-belief relations. The former is modeled in the practical syllogism: given two premises related to each other, a certain action ought to follow.
A.   It’s raining outside.
B.   If it’s raining, then you shouldn’t go outside.
C.   You shouldn’t go outside.
Doxastic commitments, or belief-belief relations, then, are modeled on a regular syllogism whose outcome is a commitment to take a certain thing to be true: a taking-true, as Brandom sometimes puts it.
D.   It’s raining inside.
E.   If it’s raining inside, then you are crazy.
F.   You are crazy.
Why did I say “you are crazy” and not “you should think you are crazy”? After all, that would make the two syllogistic models parallel. This asymmetry, I think, matches the asymmetry Brandom highlights in the passage I will finally get to. Notice that all the premises and conclusions are takings-true as they stand, and that only the second premises (B and E) would remain takings-true if an “ought” were added; but if the “ought” is removed from the first syllogism, it doesn’t make any sense. “If it’s raining, then you do not go outside.” “You do not go outside.” You can’t just take the conclusion to be true; you have to verify whether or not it is true. And that’s because these are, at best, empirical claims.

5.      Now, I think, we are in a position to understand Chapter 4.4.3, “Asymmetries between Practical and Doxastic Discursive Commitments.” This is the first asymmetry that Brandom explicates:
The first way in which the structure governing the attribution of entitlements to practical discursive commitments differs from that governing the attribution of entitlements to doxastic ones is that there is nothing corresponding to the authority of testimony in the practical case. The issue of entitlement can arise for practical commitments, as for all discursive commitments. But the (conditional) responsibility to vindicate such commitments is, in the practical case, exclusively a justificatory responsibility. Default entitlements aside, it is only by exhibiting a piece of reasoning having as its conclusion the practical commitment in question that entitlement to such commitments can in general be demonstrated or secured. (MIE 239)
To discharge your responsibility for entitling yourself to an action, you have to infer, e.g. by explicitly producing a syllogism like the one above, not defer. The reason Brandom says so is that an action needs a desire to make sense. [11] That is missing from my examples above. It would read: Aʹ2 “If it’s raining, and if you don’t want to get wet, then you shouldn’t go outside.” If you press for symmetry, what’s the corresponding desire missing in premise Bʹ2 of “If it’s raining inside, then you are crazy”? It is this inability to produce a corresponding desire that leads Brandom to claim that “committing oneself to a claim is putting it forward as true, and this means as something that everyone in some sense ought to believe” (239), whereas putting something forward as good, because it is relative to desire, is not necessarily something for everyone. As he puts it later, “That there is no implicit normative commitment that plays the same role with respect to desire (and therefore intention and action in general) that truth plays with respect to belief consists simply in the absence (in the structure according to which entitlements to practical commitments are inherited) of anything corresponding to the interpersonal dimension of testimony and vindication by deferral” (240).

This is where things get clearer and inkier by the same measure, as it is this supposed inability to produce a corresponding desire in the B syllogism that will prove to be at issue. What began my whole rumination on this paragraph was re-reading it with a marginal comment I had left the last time I read the book (two years ago). Next to “exclusively a justificatory responsibility,” which is the bit about inference, I had written, in quotation marks, “she said I could.” Cryptic—how would that be a riposte? It seems like justification, but it doesn’t seem like “exhibiting a piece of reasoning.” Reading the passage again is when I realized what I was pointing to: doesn’t “she said I could” play the role of testimony?

This set me abuzz, and I immediately sat down to start writing this. Literally—for as I puzzled over Brandom’s terminology, making sure I was reasoning through it properly, I eventually read further on: “It is of course possible to add an interpersonal dimension of practical authority as a superstructure to the basic game of giving and asking for reasons for actions” (240). These are commands and permissions. Well, that set me adrift (though at least I was using Brandom’s terminology correctly), but it leaves the question: okay, so if we can do it (and obviously do), then why isn’t it part of the basic makeup, and instead merely an epiphenomenon?

6.      I’m inclined to think that there are asymmetries in the area, one of which, Brandom points out, is the fact that claims, assertions, takings-true, seem in general to be inheritable by anyone, though commands or permissions are not. You have to be a citizen of the United States to have permission to vote, but there aren’t similar restrictions on who can claim “There’s a dog dish in the kitchen because Dan said so.” Likewise, if Don makes the latter claim, there’s nothing that can stop Chris—who didn’t hear Dan, if you remember—from extending the chain further, inheriting Dan’s testimony: “There’s a dog dish in the kitchen because Don said Dan said so.” However, a deputized civilian cannot themselves deputize more civilians. If I’m the teacher, and I allow one kid to go to the bathroom, the kid then doesn’t have authorization to allow some second kid to go just because he was allowed. As Brandom says, “assertion ... is an egalitarian practice in a sense in which commanding and giving permission is not” (241-42).

This last comment, I think, reveals a little above the ankle of why Brandom doesn’t think commands and permissions are part of the basic makeup of the social game of giving and asking for reasons. Brandom perceives himself as an inheritor of a distinctively Enlightenment tradition of practical reason. By this I mean that, though he leaves behind the faculty psychologies that hypostatized “R”eason and the foundationalisms that reified “P”rinciples, he traces his Sellarsian inferentialism and Wittgensteinian pragmatism through not only key passages of the Kritiks, but most prominently (and repeatedly) through Kant’s “What Is Enlightenment?”—a short tract written for a periodical that summarizes in a brilliant piece of rhetoric pretty much what people have in mind when they talk about the Enlightenment. Its message is “think for yourself and build an egalitarian world-community.”

Does this mean Brandom’s Enlightenment hopes cause him to bias his account in favor of egalitarian practices? Has he cooked the books in this regard? Or, in another direction, does he use the egalitarian nature of assertion to entitle himself to assertions about how we are all beholden to this implicit egalitarianism at the political level, à la Habermas?

I don’t think he does either. Brandom’s account is too sane and intuitive at all the right places. For example, part of the extreme end of Enlightenment rationalism was the demand for justification—you always have to be able to justify your assertions. But if one actually pursues this injunction, it will produce an infinite regress, as each justification is an assertion, and so equally in need of justification. In order to soften this demand, Brandom introduces the notion of the “default-and-challenge structure of entitlement.” This structure embodies the notion that commitments are entitled by default unless challenged. And since challenges themselves are something like assertions, this means that challenges to produce entitlement need to be themselves entitled. You don’t always need to answer your child’s question, or Descartes’. I believe this is actually a quite radical innovation in theoretical philosophy, for not only does it seem to better describe the way we actually behave, it’s produced by the pressure of a theoretical consideration—the problem of the infinite regress. And as against Habermasian appeals to the nature of “communicative reason” as dictating to social institutions, the very fact that Brandom shows how commands and permissions can be constituted reveals how deep his pragmatist priority of taking (a thing to be a certain way) over being (a certain way) goes. [12] The world can be taken to be a pretty shitty place, and that’s on us—as Rorty loved to paraphrase Nelson Goodman, there is no Way the World Is to push back on us in this regard.

7.      For his part, Rorty did think his former student at Princeton sounded a little too sane, a little too close to Habermas. Rorty’s whipping post was Brandom’s reconstitution of the concept of fact as being a “true claimable.” Rorty thought this attempt was of a piece with Brandom’s attempt to recuperate the word “representation,” despite the fact that Brandom was the guy who coined the word “antirepresentationalism” in order to better whip the metaphysical realist that Rorty spent his whole career tirelessly running to ground. When Brandom says that part of his project is to show how pragmatism can incorporate the “representational dimension” of thought and talk, Rorty thinks he’s conceding too much to the realist, for all Brandom is showing is how we (have to) use the word “about.” In a lot of ways, Rorty concedes, this is just a verbal matter—but, he insists, “rhetoric matters.” [13]

I too think rhetoric matters, and I find significant Brandom’s occasional, ironic use of a highfalutin Platonic vocabulary. These moments—as when he calls us at the beginning of the book “speakers and seekers of truth”—are, I think, tweaks of Rorty’s nose, winks in his direction. Rorty was infamous for enjoying his ironic tweaking. Rorty used to love shocking the metaphysically-inclined with little aphoristic hyperboles; these, I think, were meant to shock him, the shocker. But how do you do that, how do you shock the impious? You act out by appearing reactionary, saintly. And at just the moment Brandom is making his case for asymmetry, there appears nose-tweaking.
Talk about belief as involving an implicit commitment to the Truth as One, the same for all believers, is a colorful way of talking about the role of testimony and challenge in the authority structure of doxastic commitment—about the way in which entitlements can be inherited by others and undercut by the incompatible commitments they become entitled to. The Good is not in the same way One, at least not if the focus is widened from specifically moral reasons for action to reasons for action generally, so as to include prudential and institutional goods. (MIE 240)
It is this that I think cues us to Brandom marking his territory as against Rorty. For Brandom tweaks at the same time as he makes a distinction that Rorty wanted to deny in order to work out the consequences of pragmatism on moral philosophy: Rorty denied the Kantian distinction between morality and prudence. [14] Why would Brandom suggest we can make that distinction at the same time he’s flouting Rorty?

8.      He does it because he’s too calm and sane. If we get our back up about this, Brandom can just calm us down by reminding us that we are pragmatists—hence, we assert the conceptual priority of taking over being. Thus, if we take a reason for action to be one that should be pressed on everyone universally, then we just mark off that quadrant of actions and reasons-for-actions as those distinctive of the “moral realm.” It’s not a difference in kind, just a difference of how far we are willing to extend the “should.” Prudence, then, is just the kind of thing that we go easy on people about—some people just like plain, old vanilla. No need to force chocolate on them and ruin the dinner party.

The problem with this bit of sanity is that it is a really good point. Why can’t we keep pushing that point about practical commitments over into the “ought” governing the doxastic commitments? The reason Brandom can’t see how to is the intuition he throws up in our face, the one that came up in section 5 and that I riffed on in the first paragraph of 6: the truth of claims seems obviously inheritable in a way actions are not. The other way I put this intuition, in section 5, is that there doesn’t seem to be anything corresponding to desire in a doxastic syllogism (viz. premise Bʹ2 of “If it’s raining inside, then you are crazy”).

It is at this point that we have to remind ourselves that philosophers, particularly radical game-changers like Plato, Newton, Jefferson, and Hegel, don’t have to give a damn about our “intuitions.” Intuitions are just conformity with the past, which is precisely what our radical intellectuals would like to change. [15] And look at the rhetoric I’ve been mirroring from Brandom in this whole discussion: “only by exhibiting a piece of reasoning...such [practical] commitments can in general be demonstrated” (239); “a claim is putting it forward as true...as something that everyone in some sense ought to believe” (239). What sense is that? Does the restriction of sense in which “truth” is something everyone ought to believe mirror the anomalous pocket we find in the sphere of reasons-for-action in general, which Brandom follows tradition in calling “moral reasons”?

9.      I think it must, and I’m compelled to push back against Brandom’s summoning of the Enlightenment spirit, for I think it is only that too-rationalist spirit that is operative in Brandom’s assigning of weight to the asymmetry between practical and doxastic commitments, Truth and the Good. Here’s Brandom’s most explicit conjuration:
We come with different bodies, and that by itself ensures that we will have different desires; what is good for my digestion may not be good for yours; my reason to avoid peppers need be no reason for you to avoid peppers. Our different bodies give us different perceptual perspectives on the world as well, but belief as taking-true incorporates an implicit norm of commonality—that we should pool our resources, attempt to overcome the error and ignorance that distinguish our different sets of doxastic commitments, and aim at a common set of beliefs that are equally good for all. (240)
This is the implicit commitment he thinks missing from practical commitments. And put this way, it almost seems like a slap in the face of the Enlightenment political project in favor of its ill-fated theoretical project to destroy superstition—after all, it is just the rhetoric of that hyper-rationalism that provided cover for Europe’s imperialist dominations: “let us help you overcome your ignorance and superstitions...just...let go...of the reins of....control, ah!, there—now, we’ll just run things until you figure all this out.”

Rhetoric matters, but it isn’t the rhetoric that concerns me here. I trust that Brandom’s on the side of the angels; he just feels the need, perhaps rightly, to fight the demons of Derrideans. [16] Rather, I want to know what Brandom would say to Oscar Wilde. Dear Brandom,—was Wilde in error or ignorance when blasphemy came up at his trial?
One can only refuse to employ the concept, on the grounds that it embodies an inference one does not endorse. (When the prosecutor at Oscar Wilde’s trial asked him to say under oath whether a particular passage in one of his works did or did not constitute blasphemy, Wilde replied, “Blasphemy is not one of my words.”) (MIE 126)
The Wilde anecdote is a favorite of Brandom’s whenever he discusses this point about the appropriate circumstances of concept-deployment. This point embodies his assent to Rorty’s point about the primacy of vocabularies—it is only in the context of a vocabulary that we can utter true sentences. [17] Brandom seems to clearly make the point that we do not have an implicit norm of commonality governing our choice in vocabularies or concepts. And religious vocabularies are merely the most obvious candidate to push back against Brandom’s asymmetry. Is committing yourself to a religious claim putting it forward as true, and thus in some sense as something that everyone ought to believe? Maybe; but that “in some sense,” it seems to me, is working very differently from the one I quoted in section 5.

10.      What I’ve been driving at is that I don’t think Brandom is entitled to think that the Truth is One in any sense in which the Good is not. In the abstract air of metaphilosophy, the reason we shouldn’t expect them to be different is that pragmatism gives explanatory priority to pragmatics over semantics, use over meaning, action over belief. Brandom’s lead way of working this out in Making It Explicit is to say that “norms implicit in practice” have priority over “norms explicit in rules” (see Ch. 1.3). One way to rewrite Chapter 4.4.3 to reflect this is to say that there is, appearances to the contrary, an implicit desire at work in our doxastic discursive commitments. Here’s the full practical syllogism with the implicit desire-commitment made explicit:
Aʹ1   It’s raining outside.
Aʹ2   If it’s raining, and if you don’t want to get wet, then you shouldn’t go outside.
Aʹ3   You shouldn’t go outside.
Here’s the doxastic syllogism with a corresponding implicit desire-commitment filled in:
Bʹ1   It’s raining inside.
Bʹ2   If it’s raining inside, and if you want to use the vocabulary of “crazy,” then you are crazy.
Bʹ3   You are crazy.
It’s not as intuitive as the implicit desire at work in the practical syllogism, but that’s why my reminder about what “intuitions” really are came up at the end of section 8. Rorty and Brandom both understand the awkward balance between old and new they are forced to straddle. Rorty understood that part of the appeal of pragmatism was its commonsensical attitude, almost folksy in the hands of James, but what attracted him was its prophetic, visionary side. This is the side that would rather toss away the old wineskins and let the new wine seep through our fingers than capitulate to our current ability to handle it. But there must be some sort of rapprochement made for the radical innovation to be more than eccentricity, let alone incomprehensible gibberish.

This is what Brandom is at work doing for the radical ideas of Wittgenstein, Sellars, and Rorty in the arena of philosophy of language. For years, (the later) Wittgenstein was received as making systematic philosophy impossible. Brandom says, No. For years, Sellars’ dense prose and historical breadth made his ideas impenetrable. Brandom says, See? And one of Rorty’s most important ideas was what Brandom calls “the vocabulary vocabulary.” Rorty was considered by analytic philosophers (among other things) a relativist. The vocab vocab is one site where that charge arises. Ch. 4.4.3 is one site where Rorty’s shadow gets riven with the shadow futurity casts. Brandom takes the future to need the half that falls toward intuition. I think it will need the other half.

11.      The radical idea is that we “choose” what vocabulary we use. If you read Contingency, Irony, and Solidarity—or have any sense of how education works—then that will seem silly. There’s a reason Robert Pirsig jokingly called education “mass hypnosis.” One no more chooses one’s vocabulary than one chooses one’s parents. We’re thrown into the world, as Heidegger would put it. So how do we find ourselves back with decisionism, the trace of which Rorty bemoaned in his earlier Philosophy and the Mirror of Nature? Decisionism is the name used to mark that terrible idea common to Sartre and the boot-strapping American neocon: you, and only you, make who you are, so if you’re a psychopath (or poor), you only need decide to be good to be good (or rich to be rich).

But life isn’t like that. Every choice we make feels like the right choice, and even if it feels like the wrong one, whatever is calling out that “feeling” is clearly the loser in the battle between whatever psychic entities one cares to spell out: reason, passion, conscience, id, better angels, vice, heads, tails. We only do what we ultimately feel compelled to do, in some sense. Even behaving as if for no reason is acting for that very reason. One doesn’t just, willy-nilly, choose.

Of course, you can blind yourself in various ways. With some choices, the assessment is so daunting that we go cross-eyed. Sometimes we forget why we did something, and fill in a different syllogism. But this isn’t the problem Brandom is after. Brandom is after a picture of how the mechanism must work to count as working. The most important Enlightenment idea he feels himself champion of is the idea that making the commitments of our ideas and actions explicit will allow them to be argued over and, following Milton, in a free and open encounter the Truth will take care of itself. The notion that vocabulary-choice fills the spot in a doxastic syllogism where desire operates in a practical one is simply one more explicitation mechanism. [18] It’s the moment where one can, and perhaps is made to by challengers, acknowledge one’s adopted vocabulary. And in acknowledgement, we have to take responsibility for it. And if one wasn’t conscious that there were other options available, then Win for Enlightenment. As the G. I. Joes say, “knowing is half the battle.”

(The other half is smacking Cobra Commander’s mask off once you’ve found him out. So even if you can’t coax old vocabs into early retirement, at least you will know where the bodies are buried.)




Endnotes

[1] This piece also got much longer than I anticipated. It is not breezy, particularly in some sections. But Brandom’s vocabulary is worth tussling with, and the general esoteric nature of most analytic philosophy causes me to respond with volume. For two earlier pieces that set the stage for the return to pragmatism in which Rorty is the primary protagonist, see my “Quine, Sellars, Empiricism, and the Linguistic Turn” and “Davidson’s ‘On the Very Idea of a Conceptual Scheme.’” For two earlier attempts to put Brandom’s vocabulary to work, see “A Spatial Model of Belief Change” and “Better and the Best,” sec. 5.

[2] “Unguarded” because Kant split the difference with rationalism by insisting against British empiricism that, as the formula goes, only a transcendental idealist can be an empirical realist. Empiricism wins the day with common sense, but philosophers must be more sophisticated than that. Rationalism turning into transcendental idealism provides the pre-history to inferentialism that Brandom retails in Tales of the Mighty Dead, while discussing the turning point of Kant and Hegel in Reason in Philosophy.

[3] I’ve tried to discuss Sellars’ Myth of the Given in the context of pragmatism and the linguistic turn in “Quine, Sellars.” I’ve also discussed Quine’s related attack on the Two Dogmas of reductionism and the analytic/synthetic distinction (as a prelude to understanding Davidson’s attack on the Third Dogma of the scheme/content distinction) in “Davidson’s” (for both see note 1).

[4] I’m trying to suppress as many unneeded aspects of Brandom’s philosophy as I can for this exposition, but two that are nonetheless useful to bear in mind are two more theses one can attribute to the Wittgenstein of the Philosophical Investigations. Beyond the priority of use over meaning, Brandom also endorses the notion that a concept is but a word and that sentences have priority over words in constituting meaning. While the former is linguistic turn common sense, the latter is a serious problem in the philosophy of language requiring quite a bit of theoretical machinery to explain, viz. how do words get meaning from sentences, since it seems so intuitive how sentences get meaning from words? Brandom’s two most technical chapters, “Substitution: What Are Singular Terms, and Why Are There Any?” and “Anaphora: The Structure of Token Repeatables,” aim to supply the detailed backbone for a Wittgensteinian approach to subsentential expressions. To an outsider to the discipline, they are very difficult, though surprisingly interesting. I suspect, though, that they are the most important chapters for actual analytic philosophers of language. Rorty for years had gotten a bad rap for dealing in atmosphere rather than nuts and bolts, but Brandom does the hard work Rorty could never convince himself needed to be done. (Since Making It Explicit was twenty years in the making—from dissertation to publication in 1994—and Rorty was an avid follower of his student’s career, it’s possible he was so relaxed just knowing Brandom was out there.)

[5] Though I won’t go into it, Brandom’s inferential status of incompatibility does all the work for him that the logical status of contradiction does for most people. What Brandom is able to make better sense of, to my mind, is the psychological capability of self-contradiction. You can be committed to two contradictory things; you just aren’t allowed to be. Logical contradiction is subsumed, or skimmed off the top of, the social impropriety of being committed to one thing that precludes entitlement to another commitment you avow.

[6] The JTB, or “justified true belief,” conception of knowledge derives from Plato’s Theaetetus and has been durably used since then, though its sufficiency was famously contested by Edmund Gettier, opening up a small subfield in epistemology for the enumeration of further criteria for knowledge. Brandom has a number of interesting things to say about that conception and his reconstrual of it. In particular, however, his discussion of the ambiguity of the concept of belief at MIE 195-96 is apropos.

[7] I’ve borrowed the phrase from the title of the third part of Schneewind’s The Invention of Autonomy, from which I’ve learned much about the history of what I just potted together. Schneewind’s story is about the rise by fits and starts of a morality of self-governance—culminating (though not ending) in Kant—as opposed to a morality of obedience, the special dispensation of moralities with God at the conceptual center. What Rorty would insist upon is that the nature of Kant’s first version of a morality that does not need God, and thus has humans in a world they govern themselves, is still a morality of obedience because of the way he constitutes the sphere of the moral (which importantly stands conceptually outside social behavior). Kantian principles have authority that is distinctly not a social authority.

[8] One might note my inconsistencies of expression: these illustrate Brandom’s notion of meaning as inferential role. Because if you didn’t know that “bowl” in that context was interchangeable with “dog dish,” then we would be entitled to think you don’t know what those words mean. (Brandom calls that a “substitution inference,” and it’s one of the cornerstones of his explication of the representational dimension of thought and talk.) However, if Stanley Cavell were here, he might wonder if I, acting as the philosopher and using one of the most common tools of the trade—the thought experiment—actually understood how a conversation works. Reflect on how the scene I constructed, properly sequenced, plays out:
[Dan walks into the living room, and sits down on the couch between the reading Chris and Don, who is staring at the ceiling]
Dan [to Don]: “Dude, your dog is totally chowing down in the kitchen...the food is everywhere around his bowl.”
Don [still staring at the ceiling]: “There’s a dog dish in the kitchen.”
Chris [looking up from the Meditationes de Prima Philosophia]: “How do you know?”
Don [rolling his eyes and turning his head halfway toward Chris’s end of the couch]: “Because Dan just came from the kitchen and said that Fido was in there eating out of his bowl. [looking back up toward the ceiling] What are you, deaf?”
Chris: “Screw you, Don. What kind of stupid non sequitur was it to say so in the first place?”
Indeed, why did Don make that claim in the first place? Was it to screw with Chris, who he knew had been reading too much Descartes lately? But if it was a trap, why did he roll his eyes? Just to be a dick? It would be like laying a bear trap and rolling your eyes at the howling bear, at his stupidity. (His stupidity—your what for laying it?) And why are Chris and Don on opposite sides of the couch?

Cavell has a unique, existential approach to philosophy, and took such inquiry into our examples and hypotheticals to reflect something about the nature of philosophy, which induces the philosopher to create such bizarrely remote, half-idiot conversations. (Wittgenstein, Cavell thought, was a genius at this kind of inquiry.) I’ve even thematized philosophy into the thought-experiment, here the emblem of philosophy, to make the inquiry more conducive to such generalizations.

[9] This was the trouble Derrida got linguistic-turn philosophers like Sellars into, thanks to what Sellars called his “psychological nominalism”: “all awareness is a linguistic affair.” For Derrida’s slogan that “there is nothing outside the text” sounded like pure, trapped-in-the-head idealism. Sellars’ position, however, is much closer to Kant’s, which abides by the slogan, “thoughts without content are empty, intuitions without concepts are blind.” For more on this, see my discussion in “Quine, Sellars,” cited in Note 1. A problem in this area is the concept of experience, which Brandom thinks, like belief, is ambiguous and overworked in the history of philosophy. Like “belief,” “experience” just isn’t one of Brandom’s words. I’ve dubbed “retropragmatists” those pragmatists, unlike Rorty and Brandom, who think the concept of experience is ineliminable. These pragmatists, like David L. Hildebrand, often criticize linguistic-turn pragmatists (and analytic philosophers generally, for that matter) for excising experience itself from philosophy. I consider this exceptionally misleading, and a red herring, for it creates a straw man—how could one possibly eliminate one’s experience of the world from one’s thinking? Retropragmatists who pursue this line too often think that that’s why the linguistic turn is an obvious reductio ad absurdum, but I think it’s equally obvious that reductios based on obvious facts mean that the premise at issue is elsewhere. For an attempt to reconstruct Rorty’s stance on this issue, see my “Some Notes on Rorty and Retropragmatism.” In it, I double down on the point I make in “Quine, Sellars”—that Sellarsian psychological nominalism is philosophically identical to Jamesian radical empiricism because it dissolves the same Platonic problem—by moving in a direction the retropragmatists often pshaw: the idea of specifically linguistic experiences, i.e. reading experiences, the experiences of reading books.
(For an earlier discussion of Hildebrand, which crosses through my involvement with Robert Pirsig and his coterie of philosophical readers, one in particular, one could read “Dewey, Pirsig, Rorty, or How I Convinced an Entire Generation of Pirsigians that Rorty Is the Devil: An Ode to David Buchanan.” The beginning is a narrative of my transference of power from Pirsig to Rorty, so one could skip down to the link that stands out in blue, “Prof. Hildebrand's short pieces about Rorty,” without much loss.)

[10] This infelicity of expression is the culmination of my pedagogically useful, though inaccurate, suggestion that Brandom replaces “belief” with “commitment.” Technically this isn’t true, precisely because he needs the distinction between (at least) these two different kinds of commitment, which the concept of belief obscures.

[11] You don’t, however, have to display the desire in a syllogism, as Brandom makes clear in 4.5.3. This banks on a number of issues I’ve left suppressed, namely the importance of the implicit/explicit distinction in Brandom’s quest to redescribe logic from a canon of rationality into a tool of expression. In fact, my syllogisms don’t even need the premises with the conditionals. Brandom follows Sellars in thinking that all one needs is “1. It’s raining outside. 2. You shouldn’t go outside.” They call this a material inference. Formalist logic understands such an inference from (1) to (2) as good only if one supposes there is a suppressed conditional premise. Expressivist logic understands the conditional premise as optional, as helping to make explicit the implicit reason why the inference from (1) to (2) is good. If this is your first time in the cow pasture, you’ll wonder at this point what the difference is between the formalist’s “suppression” and the expressivist’s “implicit.” It seems nit-picky, but Brandom makes an interesting case that a lot hangs on it. Brandom’s main discussion of the merits of his “inferential materialism” (a wonderful oxymoron if you remember your history of 19th century idealism) is at 2.4.2.

[12] Though I don’t discuss Habermas’s use of the nature of reason to justify egalitarian practices, in “Better and the Best” (cited in Note 1, and esp. sec. 3) I do show how the first point in this paragraph (about justification) is construed by Habermas to fill out the nature of reason (as “universal validity” or “transcendent moment”), followed by Rorty’s argument against it (the “Village Champion Argument”).

[13] TP 132. For Rorty’s criticism of Brandom, see that essay in Truth and Progress and his “Response to Robert Brandom” in Rorty and His Critics. Brandom continues that part of the conversation in “Pragmatism, Expressivism, and Antirepresentationalism” in his Perspectives on Pragmatism.

[14] The most forthright statement of this is in “Ethics without Principles” in PSH.

[15] Rorty’s first good defense of this point is in the introduction to Consequences of Pragmatism.

[16] This is oblique, but the only times Derrida and Foucault appear in Brandom’s work are when Brandom takes pot shots at them for being “irrationalists.” Brandom means this in a quite specific sense, and not in the usual flat-footed way many analytic philosophers wield the epithet at Continental philosophy, but it is another way in which he sends messages to Rorty.

[17] Rorty makes this point in the first chapter of Contingency, Irony, and Solidarity.

[18] Brandom, in fact, does make this point about the expressive role of broadly evaluative vocabulary like “prefer,” “obliged,” and “ought” in 4.5.3. What is missing, then, is just where the expression of our evaluation of the vocabulary we’re using occurs. This seems to me a central element in the basic make-up of the social game of giving and asking for reasons, as it was put at the end of section 5, even if commands and permissions are not. All the arguments Brandom gives for his modified Davidsonian notion of a “complete reason” should apply equally to the adoption of a vocabulary to express that reason.

To put it another way, Brandom seems to suggest in 4.5.4 that he would call my strategy a mode of supplying “suppressed” premises in order to assimilate, as I have, practical and theoretical reasonings. (It comes up in the context of his differentiation of different kinds of practical reasoning, but the point applies.) Brandom thinks this is a kind of optional reductionism. But I don’t think my strategy elicits a suppressed premise any more than Brandom’s treatment of unconditional or institutional ‘ought’s (which he suggests are different from prudential, desire-relative ‘ought’s). By using a vocabulary, any vocabulary, one is implicitly committing oneself to its inferential structure, and this implicit commitment is analogous to the role desire plays and is incompatible with an “implicit norm of common belief” (250), at least one unrestricted by choice in vocabulary. For if Rorty’s right, the only way to get some people on the same page—like Wilde and the Christian perse-, er, prosecutor—is to burn the pages they are holding so that the only ones remaining are the ones you’re holding. When Brandom says that inferences that are “truth-preserving are one, while those practical inferences that are underwritten by desires are many,” what he’s forgetting is that all inferences are underwritten by the vocabulary they are stated in. Vocabulary-choice is perhaps not best put in terms of desire, but it is something implicitly done, and it seems to vary people’s sense of what is a truth-preserving inference in the same way having different desires varies the practical inferences one would endorse.