Friday, August 30, 2013

Hiatus

With the beginning of another semester, my time is consumed, leaving little for experiments. I have a few almost ready, and they may be posted as they are finished over the coming weeks, but I won’t be able to return to this activity until December, at the earliest.

Friday, August 23, 2013

Davidson’s “On the Very Idea of a Conceptual Scheme”

1.     
Davidson’s and Wittgenstein’s writings are not easy for the nonspecialist to grasp. Neither are those of Kant and Hegel. But the work of original and imaginative philosophers such as these, in the course of generations, gradually comes to have an influence on the entire culture. Their criticisms of our intellectual heritage change our sense of what it is important to think about. A couple centuries from now, historians of philosophy will be writing about the changes in the human self-image that Donald Davidson’s writings helped bring about. [1]
Richard Rorty, who was a personal friend of Davidson’s, wrote this shortly after Davidson’s death. So this strong claim has two strikes against it: 1) friendship often colors judgment and 2) interesting prophecies are often wrong. But as amplification, and from someone deeply immersed in intellectual history, it’s a striking thing to say about a philosopher no one outside of small coteries of isolated individuals—called Anglophone Philosophy Departments—has heard of. Not so with Wittgenstein, who gets good press amongst other departments, even if there aren’t many who really know what he was on about. However, Wittgenstein was an eccentric person and carried out his philosophy eccentrically—he just seems more interesting, and so attracts the mind at a more superficial level. Davidson, so far as I know, led a quiet life carrying out a philosophical project amongst a group of fellows who increasingly separated themselves from the life of other disciplines.

I’m not a specialist, but I’ve picked up some things about anglophone philosophy, and I tend to think Rorty was probably right about Davidson’s significance. So what I’d like to do is just provide a little of the context that I understand to be at work to make one of Davidson’s most famous essays more accessible to the lay reader, especially students of literature. [2] Davidson once told Rorty “that he had never tried to solve anybody else’s problems, never discussed an issue simply because others were talking about it.” [3] This is a double-edged sword. On the one hand, it can help one avoid the entropy that sets in more quickly for those whose writings are too timely. Most of us are doomed to be only understood within the claustrophobic space of our immediate situation—you could understand what our remarks mean if you knew more about the history of the conversation we were taking part in, but why would you want to? In the hands of a thinker of genius, however, such ignoring of your context can give a freshness because if you know beforehand that these are just your problems, then you are usually more careful to make sure your reader knows what you’re concerned about. Davidson has this freshness, based on his idiosyncrasy, but he is also not completely idiosyncratic—he is taking part in the Conversation of Philosophy, and sometimes in setting up his problems, it’s not always apparent how to situate him in philosophical space. You have to know something about the history of a conversation to understand any conversation. These are just a few notes toward that end.

2.      The first thing to know is that if you haven’t already read Willard Van Orman Quine’s “Two Dogmas of Empiricism” (1951), there is no point in reading Davidson’s “On the Very Idea of a Conceptual Scheme” (1974). Davidson generally should be seen, in many aspects, as extending certain basic Quinean ideas, and in “Very Idea” he is pursuing an argument that extends the basic pragmatic holism that underlies that earlier paper. The basic gist of that extremely influential essay is that the empiricist project of analysis—“linguistic analysis” being built into the self-image of anglophone philosophers since Bertrand Russell and Rudolf Carnap—is fundamentally flawed because it is based on two very bad ideas. “One is a belief in some fundamental cleavage between truths which are analytic, or grounded in meanings independently of matters of fact, and truths which are synthetic, or grounded in fact. The other dogma is reductionism: the belief that each meaningful statement is equivalent to some logical construct upon terms which refer to immediate experience.” [4] The second dogma represents the specific manifestation of empiricism as a core philosophical project after the linguistic turn. If, as empiricists since Locke have maintained, we are born tabula rasa, a blank slate, then everything is learned and thus rooted in experience. This means you have to build backwards to experience if you think there is anything at a distance from it. This is a kind of atomism, where the goal is to give an account of our conceptual activity by constructing complexities out of simples that can be tied directly to their origin in our immediate experience of the world. Words are like stilts—they each are grounded in the world, and if they aren’t, you’re just floating in the air, unconnected.

The linguistic turn in anglophone philosophy, however, made an important move away from the realm of philosophical action that had animated 18th and 19th-century philosophy—the mind. Experience, for most empiricists, happened in the mind, which is also where conceptual activity occurred because before the linguistic turn, ideas were concepts, not words. [5] Tired of the Hegelian Absolute Idealism of the preceding generation of British philosophy, Russell and G. E. Moore said sucks to that—skip the ideas and go to the only manifestation we deal with. “Philosophical analysis is linguistic analysis” became a kind of fighting faith as this kind of metaphilosophical, methodological viewpoint swept Philosophy Departments. However, part of the motivation for rejecting the Bradleys and the McTaggarts was to move away from a wooly-headed spiritualism and towards an embrace of the natural sciences. The hard, tough natural sciences are empirical, and the founding of analytic philosophy also marked the resurgence of empiricism in philosophy. But empiricism—rooting everything in experience—poses a problem for a philosopher who also thinks that the natural sciences are the best at doing that kind of thing. What’s left for philosophy to do?

Linguistic analysis! We’ll study words and meaning, and how they mean. However, the only way for this to be a distinct project that can’t be taken over by a science is if some of our words aren’t rooted in experience in the requisite way. So the analytic/synthetic distinction took hold as a means to keep something distinctive for philosophers to do, helping the scientists with their words, if you will. The iconic example to establish the plausibility of analyticity—the notion of a statement that depends for its truth only on the meanings of the words composing it—is “A bachelor is an unmarried man.” That statement is true and the shenanigans of married and unmarried men matter not a whit to that judgment. So definitions are a paradigm of the analytic, as opposed to a synthetic statement like “That rock just fell on my foot,” where you’d have to check my foot for a bruise and the vicinity for a proximal and culpable rock.

Against the atomism of building complexes out of simples tied directly to experience, and of separating the “just language” parts from the “experience decides the truth” parts—a project that was foundering—Quine substituted a holistic picture of our interaction with our environment:
The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience. A conflict with experience at the periphery occasions readjustments in the interior of the field. … But the total field is so underdetermined by its boundary conditions, experience, that there is much latitude of choice as to what statements to reevaluate in the light of any single contrary experience. No particular experiences are linked with any particular statements in the interior of the field, except indirectly through considerations of equilibrium affecting the field as a whole. [6]
This is, essentially, the picture that holists like Davidson, Rorty, and Robert Brandom all wish to remain loyal to. The only difference is that all three are a lot less likely to conflate what Quine elsewhere calls the “web of belief” that is described here with “total science.” With that metaphor of the self as a web of belief, Rorty and Brandom in particular take the further step of thinking of belief as Alexander Bain did—as a habit of action. [7] In this case, one of the most relevant habits of action here is the action we take to articulate a belief in words. The core idea here is that our beliefs, and the sentences we use to articulate them, mean what they do both because of their relationship to each other and the whole of them to the environing world. The use of any particular word or sentence is underdetermined by the world by itself. “Snow is white” might be true, but why use it instead of “La neige est blanche”? Both are true, but only in relationship to their home language in English or French and the world doesn’t tell you which one to use. The picture here is of a self as a kind of amoeba, whose permeable sides can be crossed in specific ways. On the inside are linguistically articulable beliefs and on the outside is the organism’s environment. [8] What Brandom calls language-entries (i.e. “perception”) and language-exits (i.e. “action”) are fundamental to how this self interacts with its environment. [9]

3.      One problem this picture has incurred is that of avoiding the charge of idealism. The problem of idealism in philosophy over the last 200 years is the problem of avoiding Cartesian solipsism. Descartes inaugurated the problem by means of his radical doubt—if I can doubt everything but that I’m doubting, then at least I know that. The premise of this line of thought, of course, is that a certain primacy is given to knowing, or epistemology. Idealism, however, first got off the ground as a corollary of empiricism, not the rationalism of Descartes. Berkeley felt that a thoroughgoing empiricism, an effort to put experience first, led to the belief that the only thing you really know is your experience—the stuff happening in your own mind. This is the connection between empiricism and phenomenalism, the 20th-century manifestation of the notion that the appearances just are the reality.

This is troubling, for it seems like admitting that the only thing you can really be sure of is in your own mind, and that is reality. Kant’s transcendental argument, about what we think of as empirical reality needing the categories of the mind, hoped to avoid this problem of idealism, so that as the old formula has it, only a transcendental idealist can be an empirical realist. But given his descendants in German Idealism, who had other agendas, idealism has remained the bugbear of realists, those who demand a robust sense of a world out there. When you look at Quine’s holistic picture, it seems nice, but realists want to know an awful lot about why there’s such radical underdetermination and how one defuses the problem of connecting to reality attendant to the relative independence of the inside of one’s web of beliefs.

4.      These are the problems Davidson devoted some of his most original work to. For in the time between Quine’s “Two Dogmas” and Davidson’s “Very Idea,” a revolution had occurred in the philosophy of science, which given the fighting faiths of analytic philosophy was a very important battleground. Thomas Kuhn’s The Structure of Scientific Revolutions was published in 1962. In it Kuhn made the argument that scientific theories and experiments were elaborated within what he called paradigms. A paradigm for Kuhn is essentially the set of guiding assumptions that undergird the conceptual content of those theories and experiments. [10] Given the nature of the relationship between assumptions, inferences, and conclusions, however, this means that if you change your root assumptions, it doesn’t make sense to say the old conclusions are false—you simply can’t draw them from your new assumptions. [11] Kuhn drew from this kind of consideration the conclusion that different scientific paradigms were incommensurable, that paradigm shifts in scientific activity—such as the shift from geocentric, Ptolemaic astronomy to the heliocentric, Copernican—are not logical transitions driven by the falsification of assumptions, but rhetorical transitions that simply replace one set of assumptions with another. But this means, then, that proponents of alternative paradigms beg the question over each other in argument because they are working from different assumptions. Paradigms are incommensurable because from each standpoint one cannot attain a position from which to even judge whether the other is true. And thus Kuhn was led to his most regretted line, that “though the world does not change with a change of paradigm, the scientist afterward works in a different world.” [12]

“Works in a different world”—smells just like idealism. It’s against this background that Davidson intervenes, collecting together the linguist-anthropologists Sapir and Whorf, the calm Kuhn and the fiery Feyerabend, and his mentor Quine—quoting the passage I pointed to above as the holistic picture—as all suggesting a picture of different conceptual schemes organizing a world that is otherwise a blank without such organization. The criterion implicitly at work in all of them for telling when we actually have different conceptual schemes is failure of translation: where translation fails, this philosophical perspective concludes, we face genuinely different schemes with no way of moving between them.

Davidson doesn’t think there’s a “between” here. He thinks that whatever the picture of language is that has given rise to what Hilary Putnam called the “cookie-cutter view of reality” is false. He calls the scheme/content distinction, that between “organizing system and something waiting to be organized,” “a dogma of empiricism”—“the third, and perhaps the last, for if we give it up it is not clear that there is anything distinctive left to call empiricism.” [13]

5.      The above should put you in a better position to read the essay. (I’m not confident enough to think it’s sufficient, but it might be a good start.) The view of Davidson articulated above is pretty much that of Richard Rorty. [14] In what follows, I’d like to close read Davidson’s conclusion in order to bring out how the above interacts with his specific mode of articulation. The final paragraph of “Very Idea” runs like this:
In giving up dependence on the concept of an uninterpreted reality, something outside all schemes and science, we do not relinquish the notion of objective truth—quite the contrary. Given the dogma of a dualism of scheme and reality, we get conceptual relativity, and truth relative to a scheme. Without the dogma, this kind of relativity goes by the board. Of course truth of sentences remains relative to language, but that is as objective as can be. In giving up the dualism of scheme and world, we do not give up the world, but re-establish unmediated touch with the familiar objects whose antics make our sentences and opinions true or false. [15]
As I’ve tried to suggest above, in the program of empiricism as a motivation for analysis, the idea was to reduce words to nonwords, to get back behind the complex mechanics of semantic meaning to the atomistic simples directly tied to reality or experience, the individuated things that made some sentences true and not others. If there were not some “uninterpreted reality,” empiricists thought, then we’d have no hold on reality, for it would just be a morass of relativism—words pointing to words in a nightmarish Cartesian solipsism. This is empiricism itself motivated by realism. [16]

The thing to understand in this context is how this is empiricism through the looking glass of Kant. What Davidson is saying in the first two sentences might be paraphrased as “you might think that we are giving up on objectivity if we give up the idea of an uninterpreted reality and thus sink into relativism [first sentence], but it is that very idea of an ‘uninterpreted reality’ that produces the possibility of relativism.” That’s what Davidson means when he says “this kind of relativity goes by the board”—i.e., we can’t even make sense of this vulgar relativism without the dogma, so none of its considerations, objections, concerns, or arguments are relevant.

In the rest of the paragraph, then, Davidson is trying to reconstrue what we should mean by conceptual relativity and objectivity. At this point, it is important not to construe Davidson as having identified language with conceptual schemes. At the beginning of the essay, he says, “We may accept the doctrine that associates having a language with having a conceptual scheme.” [17] But what Davidson is doing at the beginning of the essay is leading us dialectically through the inside of this line of thought. We might paraphrase his mode this way: “we may accept this hypothesis about how to understand language, so if we do, this is how it would have to work...oh, it doesn’t work the way we need it to...guess we have to reject it.” (Obviously there are two points at which the mechanics of the argument might be criticized, then: whether Davidson has correctly gotten a handle on how it has to work and whether he’s identified the things we need the theory to do.) Everything at the beginning leads up to his famous line about the third dogma. He’s attempting to show how contemporaneous discussions of “incommensurability,” then white-hot because of Kuhn and Feyerabend’s explosive fight with the Popperians, run back through Quine’s gauntlet of the two dogmas, and how Quine alone isn’t enough to show what the problem is with radical incommensurability of languages/schemes/vocabularies.

6.      So here are the last two sentences again: “Of course truth of sentences remains relative to language, but that is as objective as can be. In giving up the dualism of scheme and world, we do not give up the world, but re-establish unmediated touch with the familiar objects whose antics make our sentences and opinions true or false.” To understand what Davidson is saying here I find it helpful to think of Alfred Tarski’s Convention T (which Davidson, I think, has in the back of his mind). “Claim ‘P’ is true if and only if P.” Here’s the famous example: “The claim ‘Snow is white’ is true iff snow is white.” (I’ve used “iff” as shorthand for “if and only if.”) This is now sometimes called the disquotational theory/definition of truth: take off the quotes to find out what needs to be the case for the sentence to be true. (Lately this is also called the deflationary theory of truth.)

Davidson is saying here, first, what Rorty repeats in Contingency, Irony, and Solidarity: truth is a property that only holds in languages, between sentences (“relative to language”). [18] So, there is a kind of relativity here, since I might say “Snow is white” or “I think snow is white” or “La neige est blanche,” all of which might say the same thing. However, these three claims are as objective as can be, Davidson thinks, because we already know how to judge their truth by the way the world is. Snow is white if snow is white; likewise, since the French use their sentence in exactly the same contexts as our English one, we can say the two sentences say exactly the same thing, talk about the same world. “I think snow is white” can express the same thing as the first two, but Tarski shows us that it might also tell us something slightly different: “The claim ‘I think snow is white’ is true iff I think snow is white.” And depending on who is using that sentence, you might come up with different answers, unlike “snow is white.” For why would we say that the claim “I think snow is white” is true tout court just because I happen to think so? (“I” is what they call an indexical, like “here” and “now”—they refer very differently depending on context, and thus can radically change truth-values.)
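One way to make the disquotational idea concrete is a toy model in code. This is my own sketch, not Tarski’s or Davidson’s formalism, and the names (`true_in`, `snow_is_white`) are invented for illustration: each “language” maps its sentences onto the worldly condition they express, and a sentence is true in its language just in case that condition obtains.

```python
# A toy model of truth relative to a language (illustrative only).
# The "world" records which conditions actually obtain.
world = {"snow_is_white": True}

# Each language pairs its own sentences with the condition expressed.
english = {"Snow is white": "snow_is_white"}
french = {"La neige est blanche": "snow_is_white"}

def true_in(sentence, language, world):
    """Disquotation, toy version: a sentence is true in a language
    iff the condition it expresses obtains in the world."""
    return world[language[sentence]]
```

Truth here is relative to a language in the trivial sense that only a language’s own sentences can be evaluated in it, yet the English and French sentences are made true by one and the same worldly condition; that is roughly the sense in which this relativity can still be “as objective as can be.”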

When Davidson says “unmediated touch,” what he’s saying is the same as when Rorty rejects the metaphor of thinking of language as a medium. [19] Davidson shouldn’t have said “reestablish,” for his real point is that we couldn’t possibly be in a position where we weren’t in contact with the world, such that our common modes of adjudicating the truth of sentences could be globally suspect. (This is the Cartesian threat of skepticism.) Rorty’s argument goes back to Hegel, whose attack on “immediacy” in the Phenomenology of Spirit is the first attack on the Kantian framework that Davidson is, we could say, rephrasing in the analytic idiom. [20] Rorty wants to say that language is more like an arm than it is a veil (or a map). That means “snow” is just as connected to snow as your hand.

There are a number of more general problems scared up by this discussion, most especially how we are to understand Davidson’s argument’s relationship to Kuhnian notions of radical conceptual shifts. Kuhn is ostensibly a target, so does Davidson’s argument require us to reject Kuhnian paradigm shifts? I will return to these problems.




Endnotes

[1] Richard Rorty, “Out of the Matrix,” Oct. 5, 2003 in the Boston Globe

[2] I should add that literary critics have not remained completely ignorant of Davidson’s work. (And putting it that way, I hasten to add, makes it sound like it’s their fault, when it isn’t. Really, it isn’t anybody’s fault—and that’s despite the tone in the body, above, where it sounds like I’m sliding blame in the direction of the philosophers. That’s a rhetoric I’ve more picked up from Rorty than I should feel entitled to. In the last few years, I’ve become increasingly tired of people complaining about how their particular hero has been “ignored” or someone else’s hero was “anticipated” by theirs—and you can probably find me making such complaints, but I’d really rather weed that out. They sound so uncouth in others, so why should I keep it up as well? In our lengthening age, who actually has time to stay abreast of everyone? Why can’t we learn to lay off the horn, and just show people why we happen to like so-and-so and leave omniscience to the gods? Scholars have an interest in promulgating a sense that there are certain things you need to know to be considered in the club—the problem is that there are a lot of different clubs now, and generally this is a good thing. I have more thoughts about this in “Do We Need a Center, or Generalities?.”) There is a very good collection of essays edited by Reed Way Dasenbrock, Literary Theory After Davidson, and, though its author is a philosopher, Samuel Wheeler III’s book, Deconstruction as Analytic Philosophy, is of inestimable value for literary critics desiring an approach to Davidson because of their general familiarity with Derrida and de Man.

[3] “In Memoriam,” 318 in the International Journal of Philosophical Studies, 2006

[4] Quine, “Two Dogmas of Empiricism,” 20 in his From a Logical Point of View

[5] I say “most empiricists” because many pragmatists would like to include pragmatism within the ranks of empiricism. In particular, for James and Dewey, everything was experience—they collapsed the distinction between experience and world that is needed to constitute the idea of a mind radically distinct from the world, and thus create the possibility of being out of touch with it, a problem hovering in the background of all of this. For a different version of this story about the foundation and demolition of analytic empiricism at the hands of Quine, Sellars, Davidson, and Rorty that discusses its relationship to the pre-linguistic turn maneuvers of James and Dewey, see my “Quine, Sellars, Empiricism, and the Linguistic Turn.”

[6] Quine, “Two Dogmas,” 42-43

[7] Brandom, as he likes to cutely point out, technically does not believe in belief. In his systematic philosophy of language, he avoids it in the official account, though provides the means of seeing how to move back and forth between various common ways of understanding the concept of “belief.” See in particular his Making It Explicit, 195-6. He’s also proud of the fact, and his Ph.D. dissertation advisor Rorty is as well, that “experience” does not appear once in that massive book, though that is tangential to the issues here.

[8] When did this web of belief become a biological construct? Sleight of hand, given that it would take me too far afield to justify the suitability of this picture for humans, though clearly it won’t work, say, for actual amoebae, who have an environment but no language. This is a problem that needs an account, and some of Brandom’s best work goes some way in justifying the analytic’s fighting faith in thinking that it’s a good idea to just avoid talking about “mind” and “experience” given how other animals probably have them and instead find a way of talking about the specific problems that arise for us language-users. Brandom calls this the demarcation problem and it is terribly important, particularly for various ecologically-minded intellectual movements, e.g. forms of “posthumanism.” (I apply this Brandomian insight to posthumanism and give the example of one particular treatment of intentionality in “Posthumanism, Antiessentialism, and Depersonalization,” sections 3-5.)

[9] Rorty gives a Davidsonian account of this kind of thing—with a picture!—in his “Non-reductive Physicalism” in Objectivity, Relativism, and Truth.

[10] They are much more interestingly complex than this—for example, the nature of an exemplar in Kuhn’s vocabulary points in a particularly pragmatic-attitudinal direction, as opposed to my reduction of the paradigm to a semantic-conceptual essence—but for my purposes this may suffice.

[11] Say you have an argument like this:
Assumptions: P and If P, then Q
1. P
2. If P, then Q
3. Q
This codifies in symbolic notation the inference from the two premises, (1) and (2), which are assumed to be true, to the conclusion (3). But what if you had a different set of assumptions?
Assumptions: P and R
1. P
2. R
3. …
There is no inference to be drawn from just P and R. Does that, then, make Q false? Nope—it doesn’t, in fact, say anything at all about Q or the conditional If P, then Q.
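The point can be checked mechanically. Here is a small Python sketch (my own illustration, not from the texts under discussion, with an invented helper `consistent_q_values`) that enumerates truth assignments: the assumptions {P, If P then Q} force Q to be true in every model satisfying them, while the assumptions {P, R} leave Q’s truth value entirely open.

```python
from itertools import product

def consistent_q_values(assumptions):
    """Return the set of truth values Q can take across all models
    (assignments to P, Q, R) that satisfy every assumption."""
    values = set()
    for P, Q, R in product([True, False], repeat=3):
        if all(a(P, Q, R) for a in assumptions):
            values.add(Q)
    return values

# Under {P, If P then Q}, every satisfying model makes Q true:
forced = consistent_q_values([lambda P, Q, R: P,
                              lambda P, Q, R: (not P) or Q])
# Under {P, R}, nothing constrains Q either way:
free = consistent_q_values([lambda P, Q, R: P,
                            lambda P, Q, R: R])
```

That Q is undetermined by {P, R} is exactly the sense in which swapping assumptions neither affirms nor denies the old conclusion.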

[12] Kuhn, The Structure of Scientific Revolutions, 3rd ed., 121. A new fourth edition, with a long essay by the most important philosopher-historian working after Kuhn, Ian Hacking, has recently come out, and the pagination is near-identical to the 3rd, but not quite. (Note: I’m not counting Foucault in that evaluation because he worked out of a different tradition than Kuhn. Interestingly, however, Hacking is the heir of both Kuhn and Foucault, being one of the earliest and still most cogent commentators and appreciators of Foucault’s project.) There are significant differences in pagination from the 3rd to the earlier ones.

[13] Davidson, “On the Very Idea of a Conceptual Scheme” in Inquiries into Truth and Interpretation, 189

[14] One of Rorty’s more infamous essays, “The World Well Lost” (in Consequences of Pragmatism), is in fact an excited extrapolation of the argument Davidson makes in “Very Idea,” except that Rorty published it a year before Davidson got around to publishing his. (Rorty, similarly to Kuhn, later regretted the rhetoric of that essay.) Rorty’s most thorough treatment of what he takes the importance of Davidson to be is his “Pragmatism, Davidson, and Truth” (in Objectivity, Relativism, and Truth).

[15] “Very Idea,” 198

[16] In Note 5, I suggested that James and Dewey considered themselves empiricists of a different stripe, and talking about motivation—the connections and relative priority given to different doctrines—is another way of articulating kinds of difference within camps marked out by isms. For example, one might think of James and Dewey as empiricists who aren’t realists and Davidson and Brandom as kinds of post-linguistic turn, analytic philosophers who are neither empiricists nor realists.

[17] “Very Idea,” 184

[18] See CIS 4-5

[19] See CIS, Ch. 1, esp. 10-13

[20] An earlier important attack on immediacy in the Hegelian tradition was Sellars’s in “Empiricism and the Philosophy of Mind.”

Friday, August 16, 2013

Autonomy and the Problem of Control in Moon

1.      Moon is not exactly a mystery, so I hesitate to preface by "spoiler alert!" For one would be hard pressed to describe what is meant to be discovered that doesn’t follow easily and conventionally from its defining, generic premises (like the powerful corporation doing whatever it can to reduce overhead and increase profit margins). It is, after all, only 20 minutes into the movie before we hear, albeit scrambledly, GERTY scolded for “los[ing] a harvester and employee,” which only precedes the second Sam showing up by 10 minutes. However, this is precisely what makes it a wonderful movie, for by a kind of philosophical austerity, the movie is able to call for a more subtle appreciation of what are subtle problems. By quite nearly resting its entire overt success on Sam Rockwell’s ability to charm the viewer (a good bet), the film eschews the troublesome pretensions that typically follow from auteurs who think they’re going to break new ground in a well-worn genre and instead encodes slivers of wedges into a few moments that can be opened to great profit.

This is what roughly happens: Sam Bell (played by Rockwell) is the lone human employee on a space station on the moon in charge of harvesting it—it seems the corporation he works for has found a way of turning moonrock into serviceable energy, which has solved a number of energy problems back home. Sam pines to return home, but before he can do so, he crashes his mooncruiser. He wakes up in the sickbay with no memory of what happened, but GERTY—the computer employee with limited robotic abilities—helps him get back on his feet. What with one thing and another, though, Sam finds the crashed cruiser with himself inside. Thus begins a series of discoveries about how the corporation runs its operation—but mainly a series of existential conversations between Sam-1 and Sam-2.

2.      In a classic representation of the evils of autocratic power—whether manifested in kings, fascists, or CEOs—it is Freedom that opposes Control. “They may take our lives, but they’ll never take our freedom,” as Braveheart puts it. Whether in the 1984 version of fear-control or the Brave New World version of pleasure-control, what is seen as being lost is our “negative liberty” to do what we want. [1] “Negative liberty” is Isaiah Berlin’s phrase for the kind of freedom realized when one is not blocked or hindered from doing what one wants. The flipside of negative liberty is positive liberty—the sheer ability to do something one wants. One registers a gain in positive liberty when an institution enables one to do something, while a gain in negative liberty is registered when an institution is hampered from getting in the way of what one wants to do.

When movies focus on negative liberty as the opposite of the evil of Absolute Control, they tend to represent the feature of the heroes that is the Evil Controller’s undoing to be their free will. “You can tell me what to do, but at any time, I can resist you because deep down, I am free.” This then represents the feature of the hero as a kind of unlawfulness, for freedom can only be flexed by breaking the despised rules. Moon can be understood on this model, but I think it treats us to a more subtle understanding of what the problem of corporate control really is. For the beginning of Sam-2’s self-consciousness is his breaking of GERTY’s corporate-dictated stricture against leaving the base (thus precipitating his discovery of Sam-1). However, the reason why Sam-2 breaks the stricture isn’t perfectly assimilable to his being hindered in doing what he wants. It is not as if Sam just wants to go outside—he wants to go outside to do his job. So instead of the hero’s free will being the downfall of the Evil Controller, it is more like a specifically Kantian-Hegelian sense of autonomy that proves the corporation’s downfall. The Kantian-Hegelian sense of autonomy is that one gains freedom by binding oneself to norms, rules, laws. [2] By binding himself to his job, Sam requires the freedom to do his job as is required by the job itself. That this is what’s going on in Sam-2’s rebellion against the stricture is evidenced by his initial reaction to being told he can’t go out to do his job: “I don’t appreciate being treated like a child” (0:24:10ish). Autonomy, on the Kantian-Hegelian model, is the freedom of adulthood, not the freedom of the child, who can play and do as they wish. [3] Autonomy, at its root, is positive liberty gained by taking on responsibilities. Autonomy requires trust, and Sam-2 did not appreciate the sudden distrust in his ability to cope with the problems he was asked to cope with as part of his job.

Seen this way, Kantian-Hegelian autonomy lies at the heart of the difference between mechanism and humanity. What the corporation wishes for is a mechanism that perfectly carries out its desires, but what it has at its disposal is basically a form of sub-contracting—a “job” is created in lieu of the creator’s ability to do a thing him- or herself. However, by subcontracting, one creates a role with responsibilities that must then be given control-free space for the subcontractor to carry them out. Hence, the more an institutional body itemizes the steps to success in a responsibility, the more automated, and therefore mechanical, the role becomes. [4]

3.      This is pretty much all basic, Marxist stuff. On the story Foucault tells about bio-power, it is our increasing ability to extend successfully our control into domains we previously did not know how to control that gives modern power its specific cast. [5] Increased technological control has made us able to itemize responsibilities more effectively. And it would be a lie to say that something like this mechanization isn’t at the heart of the Greek dream of reason—what was always wanted was increased control. However, one wouldn’t have guessed from Foucault’s story that we’ve had certain progressive gains at this same time that we’ve made ourselves more dangerous. The Greek dream of control was born of getting killed all the time by nature (which, as it has often been signified, might be just a stand-in for “something we can’t control yet”). The Greeks dreamed of reducing luck’s grasp on our lives. [6] That doesn’t, on the surface, seem like a bad idea. And part of our ability to reduce luck has been in exerting cultural control—getting people to believe in some ideas (e.g., gravity) rather than others (e.g., witches). So was it a good idea for us to believe in witches? If one thinks not and also concedes Foucault’s point that knowledge is power, then one will receive the full brunt of the crowning irony of Moon. Sam-2, responding to GERTY’s casual remark about rebooting himself and about the next Sam who will replace Sam-2 after he’s gone: “GERTY—we’re not programmed; we’re people” (1:28:35). Unlike almost every other sci-fi movie with the robot/human distinction in play, this line doesn’t go over like a lead balloon exactly because of its philosophical austerity. But what is the difference then? Programming gives us rights, and power is programming—does that invalidate the rights? So what does Foucault tell us about our humanity? Is the question confused, or outmoded?

There might be a sense in which the question is outmoded, so long as what Foucault did for knowledge is assimilated to Darwin—he shows us that we are just one more species doing its best to survive, knowledge being just one more power-grab against that which puts us at risk. [7] But what does Foucault tell us about Sam-2’s remark to GERTY before the final irony? When GERTY tells Sam-2 that for Sam-2 to succeed, GERTY will have to reboot himself, Sam-2 says endearingly, “you okay with that?” We might call that decency, or humaneness, but one of the remarkable things about Foucault’s story is how little efficacy he grants to the forces at work in the classic liberal story of progress. For example: “As soon as power gave itself the function of administering life, its reason for being and the logic of its exercise—and not the awakening of humanitarian feelings—made it more and more difficult to apply the death penalty.” [8] People being decent to each other plays no role in Foucault’s account, and here we get a sense of the disdain he feels toward typical liberal accounts. And indeed, there is something a little too pat about most upbeat stories about the triumph of liberal democracies (at least as they were told maybe 40 years ago, though still as you’ll hear them outside a university setting).

What’s missing from this articulation, however, is the cautiousness we found in The Order of Things. [9] Power is not only anonymous; it is also an agent. “How could power exercise its highest prerogatives by putting people to death, when its main role was to ensure, sustain, and multiply life, to put this life to order” (138)? One could substitute GERTY in that sentence and get a similar conundrum to the one GERTY found and Sam-1 and Sam-2 were able to take advantage of. But GERTY, as Sam-2 suggests, is more like a person—an agent, entrusted with powers and responsibilities, that makes decisions about how best to carry out those responsibilities. Is power like that? If, to adapt William James’ famous phrase, the trail of the power serpent is over all, then Power as an agent is as bad as Humanity in all those awful liberal stories of progress, where Providence is seen acting through all kinds of strange puppets. Power, in the sense Foucault seems to deploy it here, is more like the dream of Perfect Mechanical Control that the autocratic villain always wishes for to carry out perfectly its desires. In Foucault’s picture here, we’re just puppets in Power’s show.

But this is a bad historical account unless one is willing to dismiss the reasons why people do the things they do in favor of Hegel-like Hand-of-God treatments of the Real Actors in life’s drama. And all of this just leads us back to the question: what kind of actors are we? Part of the charm of Moon is the simple humanity displayed by the Sams—the little ways in which they behave, none of which have the hyperbolic magnitude needed for a hero, the kind usually hoped for in dystopias. It might be chilling to think that all Sam-2 wants to do is go to Hawaii when he returns to Earth—his lack of rage at the inhumanity with which he has been treated—but such a reduction of scope makes the problem equally manageable for us real, non-hero people. The brief torrent of media chatter we get at the close, most of which articulates outrage at the corporation, emblematizes how not all institutions are facing one way—Power’s way.




Endnotes

[1] I should add that what is at stake in both dystopias is our ability to want to do certain kinds of things, and that what makes Brave New World, in the end, scarier than 1984 is its more plausible account of how to eliminate fully one’s desire for certain kinds of goods (like reading Shakespeare). If the reason 1984 is scary is that it pictures “a boot stamping on a human face—forever,” then the horror of Brave New World is its ability to more plausibly actualize the “forever” (specifically by not being a “stamping”).

[2] I’ve learned how to understand Kantian-Hegelian autonomy most from Robert Brandom’s first three chapters of Reason in Philosophy.

[3] The interesting comparison to make in this regard is the veneration of youth in Emerson and American Romanticism generally. This has tended to make Emerson’s legacy on our moral atmosphere a kind of willful self-assertion, something more analogous to “knowing with your gut,” as Stephen Colbert put it in his parody of George W. Bush’s political rhetoric.

[4] This is why Annette Baier takes trust to be the most important virtue in modern liberalism, for trust is the social relation at the forefront of a relationship in which responsibilities and discretionary powers of action are conferred. And as she says, “one thing that can destroy a trust relationship fairly quickly is the combination of a rigoristic unforgiving attitude on the part of the truster and a touchy sensitivity to any criticism on the part of the trusted” (Moral Prejudices 103). Trust requires room for the entrusted to play the role that’s been asked of them. My emphasis here is on the Kantian tradition’s understanding of autonomy, and I take one very interesting line of investigation to be the rapprochement of the Hegelian and the Humean in their respective critical attitudes to the Kantian model of moral philosophy. For Baier conceives of herself as distinctively anti-Kantian, but as I’ve just intimated, there’s a certain space carved out conceptually that a Humean interest in social-psychological atmosphere can fill. I suspect Brandom, who was a colleague of Baier’s for many years at Pittsburgh, will move us some ways to not only a rapprochement of Hegel with Hume, but with Richard Rorty as well in his (long awaited) forthcoming book on Hegel’s Phenomenology, A Spirit of Trust. (The projected final chapter is entitled “From Irony to Trust: Modernity and Beyond.”)

[5] “Bio-power” is currently one of the hottest pieces of jargon on the market, and it does have significant conceptual resonance (bio is Greek for “life”), even if as a concept it just turns into a mush of noise in many of the attempts to handle it and put it to use. Bearing in mind the Baconian notion that knowledge is power, and how fond pragmatists are of that formulation, Foucault’s introduction of the term makes immediate sense: “one would have to speak of bio-power to designate what brought life and its mechanisms into the realm of explicit calculations and made knowledge-power an agent of transformation of human life” (The History of Sexuality, vol. 1, 143). Foucault’s theorization of the complex interactions between knowledge and practice, and how something new happened in the birth of the social sciences in the 19th century, dovetails interestingly with Judith Shklar’s suggestion that the genre of utopia, until the end of the 18th century, was largely an “intellectualist fantasy” that, while sometimes harshly criticizing, in no way reflected any sense that things could be changed. And likewise, “the end of utopian literature did not mark the end of hope; on the contrary, it coincided with the birth of historical optimism” (Political Thought and Political Thinkers 167).

[6] My understanding of the Greeks here is deeply indebted to Martha Nussbaum’s fascinating early book, The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy.

[7] “Just one more species doing its best” is the slogan-title of Rorty’s July 25, 1991 London Review of Books essay-review of a handful of books on Dewey.

[8] The History of Sexuality, vol. 1, 138.

[9] Compare the dismissal of sentiment as having a historical role to: “Can a valid history of science be attempted that would retrace from beginning to end the whole spontaneous movement of an anonymous body of knowledge? Is it legitimate, is it even useful, to replace the traditional ‘X thought that . . .’ by a ‘it was known that . . .’? But this is not exactly what I set out to do. I do not wish to deny the validity of intellectual biographies, or the possibility of a history of theories, concepts, or themes. It is simply that I wonder whether such descriptions are themselves enough, whether they do justice to the immense density of scientific discourse, whether there do not exist, outside their customary boundaries, systems of regularities that have a decisive role in the history of the sciences” (The Order of Things xiii-xiv). For a brief discussion of the downside of the rhetoric found here, see “Foucault’s Rhetoric and Posthumanism.”

Friday, August 09, 2013

“I’ve Never Been Modern? Why, that Changes Everything…”

1.      Latour finds himself in a very awkward position in We Have Never Been Modern: while emphasizing, at the close of his second chapter, the point of his title, he nevertheless wants to historically chart the birth of the modern. The two claims strike me as perfectly commensurable if one sorts out properly what one is doing and claiming (in order to avoid generating paradoxes in one’s own philosophical Constitution). “Constitution” is, of course, Latour’s idiom for developing his account of modernity. This is the gist:
The hypothesis of this essay is that the word “modern” designates two sets of entirely different practices which must remain distinct if they are to remain effective, but have recently begun to be confused. The first set of practices, by “translation,” creates mixtures between entirely new types of being, hybrids of nature and culture. The second, by “purification,” creates two entirely distinct ontological zones: that of human beings on the one hand; that of nonhumans on the other. Without the first set, the practices of purification would be fruitless or pointless. Without the second, the work of translation would be slowed down, limited, or even ruled out. The first set corresponds to what I have called networks; the second to what I shall call the modern critical stance. The first, for example, would link in one continuous chain the chemistry of the upper atmosphere, scientific and industrial strategies, the preoccupations of heads of state, the anxieties of ecologists; the second would establish a partition between a natural world that has always been there, a society with predictable and stable interests and stakes, and a discourse that is independent of both reference and society. (11)
Latour’s selection, in 1991, of climate change as a network to be studied is a good indication of why people have found Latour’s work in the philosophy of science increasingly useful. Latour’s formulations, however, strike me as still too paradoxical by half, as in this early formulation (admittedly rhetorical for his dramatic narrative): “And what if we had never been modern? Comparative anthropology would then be possible. The networks would have a place of their own” (10). If modernity is what disallowed a space for networks, and we’ve never been modern, then networks have always had a place of their own. That’s the central paradox Latour awkwardly confronts. He wants to say (rightly, I think) that modernity never existed and to say (also rightly) that realizing this will enable us to do something new, thus ushering in a new moment in our history. But at the close of Latour’s second chapter, he goes to some lengths to say that he is not claiming “that we are entering a new era” (47). Latour’s probably right: “era” would be too strong. Latour, however, is trying to usher something new in, but because of the state of philosophy and politics at the time, he finds it difficult to say so out loud. Latour’s formulation should be: “what we thought was modernity was never actually the case, and realizing this will enable us to do better things we’ve already been doing, and perhaps for some of us something new.”

2.      In order to isolate the context Latour finds himself awkwardly in, we might look at the four-part schematic in the book’s opening pages. After contextually defining the two modern desires as ending humanity’s exploitation of itself and increasing humanity’s control over nature, Latour lists the two different “antimodern” reactions: “We must no longer try to put an end to man’s domination of man, say some; we must no longer try to dominate nature, say others” (9). These are still two recognizable responses, and we might accord them the status of being the radical right and radical left, respectively. However, I think Latour is smart in not so doing. These responses should not be reduced to political orientation (like conservative/liberal, right/left) because political orientations are linked too directly to choices in political action—a “political orientation” can only manifest itself within some local politico-power grid (like the Beltway, or the Democratic Party, or the Arizona 2nd District, or the Printer Icon Department at Microsoft). So what we need are labels that express a guiding motive, or orienting principle, that can then be transformed into a political orientation (and thence action) given a planting in some local institutional context. [1] The two antimodern reactions might be described fairly easily as, respectively, “social Darwinism” and “primitivism.” [2]

The first two then quite easily slide into a right/left manifestation in most current “Western” political landscapes. But Latour’s second two responses are harder to place, both in their attitude and their potential political manifestation. The third possibility Latour calls “postmodernism” and says it is an “incomplete scepticism” that causes people to “remain suspended between belief and doubt, waiting for the end of the millennium” (9). We might call this “pyrrhonism,” after the ancient form of skepticism about the possibility of making knowledge claims (and thus the problem of basing action on them). But while some (certainly not all) postmodernist theorists behave like millennialists (there were probably more during the ‘70s and ‘80s, the context Latour was confronting), it’s certainly no requirement for a pyrrhonism to so compose itself. This makes Latour’s theoretical art less useful, for its utility is based on the sense of his having placed his finger on the pulse of the times—and so of his being better able to manifest a good alternative. This becomes worse, it seems to me, in his last option: those who double down on modernity and “carry on as if nothing had changed” (9). Latour shapes this option as that of those who put their heads in the sand, willfully plugging their ears against their better angels. But this seems terribly ill-suited as a description for any responsible intellectual. And this also seems to be the only option available for those who, while hearing the angels, don’t have any better ideas than striving for political emancipation and better technological control in order to improve man’s estate (in the Baconian phrase). But sticking your head in the sand seems a very different quality from being out of ideas. The first manifests fear; the second, a lack of imagination.
What I think Latour’s schematic in the last instance hinges on is his implicit definition of modernity as unmitigated, Chicago-style anarchic-market economic expansionism—the “clever trick” (9) the West hopes to export. [3] But this must be considered a narrow definition of the modern, for what do you do if you do think there is some interesting difference between “the West and the Rest” that you yet have an imperfect faith in?

3.      The difficulty here seems to be in denying a plausible position for those who are already behaving as if we’ve never been modern, but just didn’t realize it. Because we should ask ourselves of Latour’s schematic—who is he talking about? Theorists? Intellectuals? Politicians? Educated people? Car mechanics who don’t read newspapers? This is the conundrum for the intellectual—how do I attribute motivations to people who wouldn’t understand what I’m attributing to them? What I’m pointing at, I think, is just a matter of Latour’s rhetoric getting in the way of his real point—he’s staging himself as a revolutionary in a field (which he is) while at the same time saying that everyone is hooked up to this field where he is a revolutionary. But they are and are not—everyone’s hooked into networks, but not everyone is in Latour’s disciplinary field (studying networks). It seems like we must make a distinction between intellectual and plebeian, for this seems a corollary of denying we’ve ever been modern, which was after all an intellectual’s fiction—say, of Boyle and Hobbes (his primary emblem). [4] The “modern Constitution” Latour draws up fills a role in inferential patterns that not everyone might ever stumble down. We have to say this to deny the Platonism of principles ruling practices, rather than practices being summarized by principles: the former idea helps to produce the tripartite structure of separation in Latour’s Constitution (principles rule practices, meaning realms are separate and playing by their own rules, hence all you need to do is isolate a practice and formulate its rules, rather than studying the jumble of practices), and the latter helps to show how discourse, the social, and the real (one of Latour’s several formulations) are wrapped up into each other.

Latour’s graphical representation helps us see what the paradox amounts to:


The trick to seeing the paradox is asking what is being translated in the hybrid networks. In the Modern Constitution, it is Nature and Culture. But that means the Work of Translation requires the dichotomy of Nature and Culture. That’s why Latour marks it as “the first” (though if you look at his summarizing paragraph above he calls “purification” “the second”), but where does the dichotomy come from? The Work of Purification—that means all those hybrid networks conceptually require the work we’ve become suspicious of. So what, in the end, does it in fact mean to reject the Modern Constitution?

4.      The way through, I think, is to distinguish the metaphysics of modernity from the practices we are surrounded by. We might think of metaphysics as a kind of conceptual explanation of our practices and how they work. What Latour is saying is that the Metaphysics of Modernity is incoherent, and in fact needs to be incoherent to get on with the work that the Metaphysics is, in part, designed to explain. But we shouldn’t go that far—what we shouldn’t do is think that the work requires the metaphysics. It is an explanation and perhaps a justification, but nuclear bombs do not require any particular metaphysics to work. They only require that you do a certain thing, not think a certain thing.

The reason it appears that nuclear power also requires us to think a certain thing is that it seems absurd to think that hunters and gatherers could perform the requisite tasks to create a nuclear weapon without having, for example, learned that E = mc². Our ability to do certain things does require us to be able to think or say certain things—we shouldn’t deny that. But what certain things? That is the open question. Latour and many others wish to argue that there is something irrevocably paradoxical about modern life (and sometimes, just human life in general—like Camus). However, this claim requires that the only way to get on with some practice (say, the Work of Translation) is via this other particular practice that actually contradicts the first. But this claim will only ever be justified based on experiment—trying out other ways of getting the good practice to work. In other words, I see no reason to think that the bad metaphysical dualism between Nature and Culture is required to get the Work of Translation done. Much good philosophical work that goes under the banner of “pragmatism” has been trying to work out alternative idioms for understanding the entangled lives we find ourselves thrown into without trying to erase the entanglement. For what pragmatists have learned over the course of philosophical history—a history marked by attempts to purify our categories of understanding—is that some categories are entangled in each other, but that isn’t itself a paradox. Entanglement is okay. It’s only the ghost of Plato that would tell us otherwise.

5.      One way of applying the lesson above is to debates about humanism and posthumanism, especially as Latour has been picked up by many who consider themselves to be working out posthumanistic idioms. From a Latourian point of view, I would think the theoretical lesson is that we’ve never been human(istic)—“humanism” never existed in the way it thought itself to. The trouble here, however, seems to be that some posthumanism composes itself as, from this vantage, distinctly anti-humanistic—it wishes for the disappearance not just of humanism’s erroneous self-conceptions, but of vast tracts of the practices that “gave rise to humanism.”

My scare quotes denote the problem of identification that is at issue. For if practices precede conceptions of practices, then what we are talking about when we want to get rid of humanism is an expulsion of practices—but which are the practices that are at issue? The trouble of cultural evolution is that we are complex to a degree we sometimes forget in analysis. Our conceptions, built out of practices, are only a problem if they double back into bad practices. The problem of institutional reform, as Latour points out, is that if you uproot a practice without uprooting the real cause of the practice—which is another practice—then you will create a “return of the repressed” situation (see 8). [5] So the problem of conceptual reform is the same: identifying correctly a conceptual practice that really does enable a particular bad practice. The trouble of conceptual reform is that a lot of different premises can lead you to the same bad action. Conceptual reform is about disabling the premises in our practical syllogisms that lead us to perform some unwanted action. On this model, there’s a certain slipperiness in conceptual reform. Not only do we sometimes misidentify the culprit, but people have a tendency to replace other premises in their syllogisms if they don’t agree that the object-action is unwanted (the problem we’ve been facing in trying to reform racism out of existence).

6.      This means the real question for intellectuals is—how much time should we really be spending on haggling over conceptual reform? The stance of both Latour and Shapin is that practices come first, then concepts. [6] So how much dog should we have in the fight over concepts? Who, in fact, are we fighting? Intellectuals have a habit of attributing beliefs and concepts to people who have no idea what they are talking about. This is effectively the same maneuver a physicist would be making in asking: do rocks understand the laws of gravity? (If you want to say yes, then you’re simply using a slightly different notion of “understanding”—and how much should we really haggle over this definitional asymmetry?) The real problem is not exactly, it seems to me, bridging back and forth between science, politics, and discourse (which is something like how Latour pictures the problem). The problem is bridging back and forth between experts and non-experts. This isn’t simply a problem of non-experts not understanding technical vocabularies, it is more especially the problem of convincing non-experts that the technical vocabulary should be given authority over our conceptualization of problems (and the experts trusted to be working through the correct inferential consequences—after all, how could a non-expert provide a peer review, or as we used to say in math class, “check your work”?). Latour’s battle is with other experts in his field (which we might label as, in this particular book, philosophers). So how much time should Latour spend in convincing laypersons—scientists, sociologists, literary theorists, politicians, car mechanics—that if they take his perspective, the world will become a little better (i.e., we’ll be able to handle better some of our pressing problems, like climate change)?
Because just so long as there are experts resisting Latour, these on-looking laypersons will feel uneasy about giving their trust (this being the very obvious strategy in the debate about climate change). And if there are more direct paths of arguing about these pressing problems, more direct than arguing in a technical field before arguing about the bridge between that field and the public, then this makes Latour look like a detour to action.

So what is the point of all this for debates like those about posthumanism? It turns out not much, because if it is really true that all these systems and practices and networks (or whatever other name one wants to give to the layered and shifting sediment of nature-culture) have always been there, then it doesn’t exactly devalue what the technicians are doing in their respective spheres (even if it is from one vantage seamless). After all, as a layperson, I do tend to think Latour is right in nearly every way as a philosopher of science. If the above is right, then a Latourian revolution in conception just might make the world a little better. Likewise, whatever one thinks of as a “posthuman revolution” just might. But the farther one gets from actuating premises in practical syllogisms—the premise that then kicks out the requisite action—the more nebulous the possible effects changing those farther-back premises will have on the object-action one’s eye is really on. All that this suggests is that as scholar-intellectuals, we make a short-term/long-term distinction between the different practices we might take our time up with: practices that will have an immediate effect on the world (voting, teaching), practices a little more distant (writing for newspapers, rabble-rousing), and practices that may only have an effect on the world in the distant future (having babies, writing treatises about conceptual change). As a plea, it is simply about orientation, or about what to remind oneself occasionally while obsessing over one’s area of interest. But it may affect our work as well, for it is possible that a practical orientation like an engulfing messianic fervor might be an actuating principle behind some of our worst practical syllogisms—that our ism-work often looks like silver-bulletism, turning our theoretical work into a practical problem.
We have to remain aware of when our theory stuffs up our practical syllogisms, which then requires a theoretical decongestant like Latour.




Endnotes

[1] These labels are at a remove from this “local” context, but no less contextual for that. I think our theoretical discourse has still not yet come to terms with how to formulate the abstract/concrete distinction across the global/local distinction—for we still too often feel moved to call abstraction a “decontextualization,” when we should rather think of it as moving from one context to another. To think of abstraction as a process of decontextualization is to be modern—and we’ve never been modern.

[2] For a brevity I will already fail at, I won’t elaborate on my choice of these two labels. But suffice it to say that at other moments in history, these attitudes might have transformed into different orientations. For example, our great modern primitivist is Rousseau, and he mobilized his primitivism in order to emancipate humanity, not in order to stop dominating nature. Rousseau’s screeds against modern luxury and the like had as their priority his project of political emancipation, not the ecological project of nature emancipation.

[3] The “unmitigated” plays an important role here, for the American non-Marxist, Reformist Left’s hope is that European “social democracies” provide a model for how to have market economies whose tendencies for imperialistic expansion are curbed.

[4] Latour uses Steven Shapin and Simon Schaffer’s groundbreaking 1985 book, Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life, to discuss the theoretical implications of the historical digging done in that book: that Boyle, to create the material-scientific “object,” had to deny the framework of thought that allowed Hobbes to create the moral-political “subject,” and vice versa for Hobbes. I do something similar to Shapin’s later The Scientific Revolution in “A Lesson between the Lines: Teleology and Writing History.”

[5] What’s a “real cause,” especially in this complex game of interlocking practices? The same thing it is in the physical sciences—the condition that produces the effect. The great problem of cultural reform is that there are a multitude of conditions—in fact, given the problem of redescription in object-conceptualization (e.g., are you talking about the same object if you describe it differently?), there are an infinite number of conditions potentially. This makes our practices of description (i.e., “theory”) terribly important for picking out practices to be reformed, but it also means that a deep theoretical skepticism necessarily abides in all of our attempts at cultural reform—even if we successfully extirpate an unwanted effect (say, poverty or greed) by reforming or removing an isolated and defined practice, we will not ever know whether or not that removed practice was really the cause of the bad effect—unless, and only unless, the bad effect returns. Only when we are wrong do we attain certain knowledge (the Hegelian via negativa crossed with Popperian falsifiability—an amusing cross-fertilization). This form of (Cartesian) epistemological skepticism is only debilitating if one hasn’t previously put one’s faith in the “experimental method”—the defining quality of which is the belief that it is only by trial-and-error that we shall apprehend causes, i.e., a real cause can never be found by theoretical fiat (though it will only appear under a theoretical edifice). The trick for making sure this faith in experiment doesn’t turn back into the Platonism that dogmatically, and by fiat, declares it really has found the real cause, and we can therefore cease inquiry, is to combine Baconian experimentalism with fallibilism—the idea that inquiry will never end because we’d never know it if we’d reached it.
The trouble, here again, for this sort of orientation toward cultural reform, rather than physics reform, is that when you reform physics, the only losers are physics professors (until a physical discovery blips our universe out of existence—a real fear attaching to the activities at CERN for some). But for cultural reform, more lives are at stake, and hence “inquiry” into real causes of cultural problems has effects we might not always wish to risk. This, and only this, is what provides the intellectual justification for political conservatism, the attitude that a given change will put at risk things we wish to keep.

[6] Following Robert Brandom, I call this Shapin’s “fundamental pragmatism” in “A Lesson between the Lines: Teleology and Writing History.”

Friday, August 02, 2013

A Lesson between the Lines: Teleology and Writing History

1.      In The Scientific Revolution, Steven Shapin evinces what Robert Brandom has called “fundamental pragmatism,” “the idea that one should understand knowing that as a kind of knowing how.” [1] For philosophers, this has been a very difficult idea to assimilate ever since Plato distinguished the knowing of philosophers from the doing of artisans, thus making ideas—idea, eidos—unchanging elements to be contemplated, epitomized in Plato’s Realm of the Forms. Fundamental pragmatism’s inversion of the Platonic scale of methodology is associated in the analytic tradition with the later Wittgenstein and Wilfrid Sellars, now sometimes called social-practice theorists of language, who would applaud Shapin’s attempt to write “a history of concept-making practices” (4). [2]

One feature of practices is that they do not come into being randomly or without reason—practices are created for something, to do something. Shapin’s book is shaped to show how the efforts of the past were geared into the concerns of their present. [3] This is, itself, a historiographical lesson. History-writing has been the site of extraordinary theoretical pressures in the last 50 years. One effect has been a greater awareness of the dangers of writing Whiggish histories of progress. Such histories require that one posit in the past the seeds of a phenomenon one wishes to praise in the present. This gives a lopsided view of the past, however, of how it was in its wasness, as things unassociated with the contemporary telos you wish to explain the origins of are pushed from view, things that were possibly quite important to the actual historical actors. Such pessimism about stories of progress gives way, however, to a more pervasive doubt about one’s ability even to select the phenomena one wishes to write about without corrupting the data with contemporary concerns.

Shapin, in effect, shows how to defuse this worry, what he calls “the historian’s predicament” (10). His explicit answer is a brusque “it is foolish to think there is some method … that can extricate us from this predicament” (10), before talking about such theoretical shibboleths as “respect [for] the vast body of factual knowledge we now have about the past” (10) and “the desire to make endless qualification to any generalization” (11). These are beside the point to the predicament Shapin has scared up, particularly as any revisionist historian—as Shapin, in the end, does aim to be—bloody well better not respect “the vast body of factual knowledge” we have, at least not if they truly want to change what we think that knowledge is. What Shapin runs up against here is what the earlier philosopher of science Norwood Hanson called the “theory-laden” nature of facts. If a historical fact is only constituted as such within some historical story, then “respect for the facts” can’t play a role when the problem is competing stories. [4] The theory-laden nature of facts simply restates the historian’s predicament. Shapin’s implicit answer is that it won’t do us any good to pose our anxiety at the level of epistemology—at the level at which we attempt to understand our concepts of concept-making. And so his answer to the worry about contemporary interests running rampant over the selective process used to tell the story is to take seriously the interests of the historical actors themselves. This is the check on the historian—not “the vast body of factual knowledge,” but sensitive treatment of the historical actors as having concerns of their own, of doing things for reasons that might not be our reasons for developing and employing a practice. 
Shapin’s selection of the “Scientific Revolution,” indeed as it is commonly understood and talked about, is a mark of his own contemporary interests—“how we got from there to here” (7)—but it is his third chapter that checks the encroachment of too much here into there.

2.      What is interesting about the story that Shapin tells is how it parallels the lesson I’ve just pulled out of Shapin’s practice as a historian, thus making Shapin’s story an adjunct to a larger story of cultural development. [5] Shapin gestures toward this larger story at the very opening of his third chapter, whose presence is intended to largely account for the book’s “originality.” Shapin says,
Seventeenth-century mechanical philosophers attempted to discipline, if not in all cases to eliminate, teleological accounts of the natural world. Yet as ordinary actors they accepted the propriety of a teleological framework for interpreting human cultural action, and with some exceptions so do modern historians and social scientists: the very identity of human action—as action rather than behavior—embodies some notion of its point, purpose, or intention. (119)
The unspoken implication of this passage, which is simply a segue into Shapin’s check against his selection process, is that what he’s selected is somehow antithetical to contemporary wisdom about the selection process. I believe there is no contradiction here, not even for one who thinks that the mechanistic model of nature has been and still is a boon to scientific study, but it does invite the telling of another story.

The shape of the conceptual story is this: Shapin’s third chapter is the specifically backward-looking moment in his story. It doesn’t matter if we care about theological problems or not—they do, and that’s all that matters for getting the history right. [6] However, Shapin’s juxtaposition of Descartes and Boyle in that chapter gives us a picture of how to be forward-looking. Here is the crucial passage that sharpens the distinction:
On the one hand, Descartes proceeded by imagining a hypothetical natural world that God might have created, a world wholly amenable to mechanical explanation: this was the world the natural philosopher was to explain. On the other hand, such writers as Robert Boyle and John Ray were concerned to trace the evidence of God’s purpose and design in the world he did create. That is why they were comfortable with the philosophical propriety of giving explanations in terms of purpose when, as they reckoned, the evidence of nature unambiguously supported such conclusions. (156)
Shapin’s account of Descartes here displays, whether Descartes intended it or not, how to be a visionary, looking forward to a time not fully comprehended: “by imagining a hypothetical natural world.” The irony of this situation is that the English of this period are known as philosophical empiricists, as opposed to the rationalists of the Continent, [7] and it is empiricism that is so often thought synonymous with science and scientific method. However, in this juxtaposition we see Descartes the visionary pursuing the outer limits of this new metaphor’s effects on human understanding, whereas it is the English who stop at the limits of common sense, which is whatever is denoted as “obvious” or “unambiguous.”

What is at work to create this irony is what philosophers like Wittgenstein and Sellars have now told us about how language works. Underlying the empiricist framework is what Sellars called the “Myth of the Given”—that sense-perception gives us a conceptual content that is unaffected by our interpretive apparatus, e.g. words or theories. [8] If we follow Sellars in thinking that this is a philosophical myth, and that our facts are always already theory-laden, then the “evidence of nature” that Boyle and Ray think obvious and unambiguous is a function of loyalty to the past. For if social-practice theorists of language are right, then facts are a function of the language we’ve been taught to state them in and the force of facts is a function of the authoritative hold we still accord them.

3.      The convenience of Shapin’s language in the above passage on Descartes is that it bridges the gap between imagination and reason. Still to this day in philosophical stories, the two are opposed. The distance, however, is not as great as we might imagine. The notion of the “hypothetical” has a history, but the short way to my point is to note that Kant, in his time, still referred to the logical connective we call the conditional (the “if, then”) as “hypothetical judgment.” The conceptual shape of what Descartes was suggesting was the fantasy of supposing X to be true, and then working out its consequences (“if X, then Y”) without regard for whether X was actually true. The axe comes down on the distinction between imagination and reason when we note, with Sellars and Brandom, that the conditional is the basic unit of inference, of reasoning. [9]

What Shapin’s story suggests is why Boyle and Ray followed what we might call an “intellectually conservative” strategy toward mechanism. It is not that they cared more for preserving God’s province, though that is a plausible candidate and a viable motive in respecting the authority of past reasoning. [10] I take it that Shapin’s comment on Newton displays it: “it was not philosophical, but its opposite, to ‘feign’ (or imaginatively concoct) hypotheses, even and especially mechanical causal hypotheses, when the senses and the intellect could not securely discover them” (157). The enemy here, the foe to be curbed, is the Poet of Book 10 of Plato’s Republic. With the conceptual understanding I’ve unfolded in hand, one sees that neither “the senses” nor “the intellect” securely discover anything by themselves. For so gerrymandered in Shapin’s gloss on Newton, all the senses tell you is what you’ve already been programmed to say in response to perceptual stimuli, and all the intellect tells you is what you’ve already been programmed to concede as consequences once you’ve disallowed new possible premises in reasoning—which is what is ruled out by “‘feign’ (or imaginatively concoct) hypotheses.”

To put it in an idiom I can only suggest, the conservative intellectual impulse is here created by a fear of romanticism, a kind of generic romanticism we can trace back to the ancient Greeks. To try to pull together some of these threads, we can see Shapin’s stance toward science—as a sociological phenomenon, which is an outcome of what I called his fundamental pragmatism—as the outcome of an ongoing search for how to curb the vision of mechanism: not by how they in Chapter Three did it (God knows), but by how he views effective historical explanation. Effective explanation of action requires us to use a concept of purpose, telos. [11] And since the shape of the story of the Scientific Revolution Shapin tells is in broad terms designed to show the unresolved conflict between mechanical explanation and teleological explanation (which takes its final form in the beginning of modern philosophy, Descartes’ dualism between res extensa and res cogitans), there is a larger story to be told about just what we should do with the metaphor of mechanism. This larger story, I can only suggest, is that romanticism takes the place of religion in a conflict with scientific empiricism, and that romanticism’s successor on the plane of philosophy is first pragmatism and then social-practice theories of language. This makes Shapin’s theoretical views about how to write history the heir of Descartes’ rationalism, which is what saved him from the Boyle-Ray empiricist attempt to curb the visionary expanse of a new metaphor. [12]




Endnotes

[1] Brandom, Perspectives on Pragmatism, 9

[2] Social-practice theories of language also distinguish the kind of Hegelianism pragmatists like Brandom and Richard Rorty are willing to countenance. “Postmodern theory” has made a bad name for itself by promulgating such slogans as “all we have is discourse.” I think it’s important to distinguish what Shapin, Brandom, and Rorty are saying from this. This slogan has led to too many pratfalls by theorists (e.g., deconstructionists who want to say that meaning is impossible) and too many openings for hostile critics (e.g., “There is no thing other than a text? Really?”). The problem with both is poor communication, where whatever good point was meant to be made gets bogged down (e.g., the foolish idea that Derrida was some Berkeleyan idealist).

The good point of the slogan “all we have is discourse” is that meaning only occurs within language, so if you have a “thing,” you have a “discourse.” However, people like Shapin, especially, want to go one step further. They want to say that not only are “things” embedded in “discourses,” but “discourses” are embedded in practices. The reason why this is an important move to make, philosophically, is that it closes the loop between you, your community, and the world. Stopping at “discourse” makes one look like an idealist, which makes some people fear we’ve lost the world for the sake of philosophical laziness (for it is hard to make a realist theory of truth-as-correspondence work—i.e., no one’s done it satisfactorily). “Practices” puts us back in touch with the world in a very obvious sense. It also helps move us closer to why material emblems like clocks weird us out when we wonder whether the clock is a symbol for time or is time itself. Like chasing the horizon, such gestalt hiccups merely punch up how entwined meaning is with practice, semantics with pragmatics.

[3] Shapin says that “if there is any originality” in his book it “flows from its basic organization” (12).

[4] I take it that Shapin himself isn’t nearly this naïve, and this theoretical lacuna is a result of his writing a book suitable for non-professionals.

[5] Though, admittedly, size is relative to perspective.

[6] I use the literary present tense, “they do,” partly because I think it is helpful to think of the “other in the past” as a conversation partner, much like we do a contemporaneous other we wish to talk to and understand, such as an Australian aborigine, French political operative, or Arizona Diamondbacks fan. I suspect that this has been common sense for historians for some time.

[7] It might suffice for this old doxographical chestnut to point out that the Three Great Empiricists, Locke, Berkeley, and Hume, are all from the British Isles and that the Three Great Rationalists are Descartes (French), Spinoza (Dutch), and Leibniz (German).

[8] Sellars’ attack on the Myth of the Given is from his “Empiricism and the Philosophy of Mind,” which one can find in his collection Science, Perception, and Reality or as a standalone book from Harvard UP with an introduction by Richard Rorty and a study guide by Brandom.

[9] Technically the conditional is the basic unit of self-conscious reasoning, but that is part of a much different story.

[10] I think this is true in Descartes’ case—I take it that, even from just the evidence Shapin displays, Descartes was a true believer wanting to display loyalty to God as much as Boyle. However, in terms of possible motivations, concern about one’s relationship to established Church practices and patterns of thought is certainly a distinct possibility in a way that, for example, it wasn’t in Athens in the first century or most intellectual centers in the United States in this century.

[11] For an articulation of the difference between action and behavior in the context of a discussion of what intentionality is, see Section 4 of “Posthumanism, Antiessentialism, and Depersonalization.”

[12] This larger story isn’t as kooky as it seems when it’s just kind of thrown out there. It is roughly coordinate with the kind of story M. H. Abrams tells about romanticism in Natural Supernaturalism, Leo Marx tells about American romanticism in The Machine in the Garden, Rorty tells about pragmatism in Contingency, Irony, and Solidarity, and Brandom tells about social-practice theories of language in Tales of the Mighty Dead and Reason in Philosophy. The interesting conclusion of Brandom is that some of the roots of the pragmatist philosophy that Rorty espoused went back to not only Kant, but Leibniz and Spinoza. For a discussion of Brandom's reinsertion of rationalism into pragmatism and his relationship to romanticism, see “Pragmatism as Enlightened Romanticism.”

Friday, July 26, 2013

Foucault's Rhetoric and Posthumanism

“I am afraid that we have not got rid of God because we still have faith in grammar . . .”
          —Nietzsche [1]

1.      At the beginning of his introduction to the anthology Posthumanism, Neil Badmington makes a rhetorical maneuver that is more common than it should be. While saying “‘We’ cannot live with [-isms] (why else would ‘we’ need to keep inventing new ones?), but neither can ‘we’ live without them (why else would ‘we’ need to keep inventing new ones?),” [2] he adds a footnote to the first “we” that reads, “I place this term in quotation marks because, as William V. Spanos has pointed out, the ‘naturalised “we”’ is one of the hallmarks of humanism.” The implication of this footnote, I want to suggest, is that our language comes not only with philosophical presuppositions, but distasteful ones. For we (“we”?) should distinguish this case from the case of the philosopher of language who wishes to offer an account of how particular linguistic phenomena work. For example, Robert Brandom, in his large treatise Making It Explicit, attempts to give a reasonably comprehensive account of how language has to function for it to function the way it does. Given the problem of rhetorical presentation, Brandom’s book is filled with IOUs for lumber he hasn’t paid for yet, but will in some other section. This kind of theorist’s project is to build only out of conceptual resources one has justified warrant for. Badmington, however, appears to think there is never warrant for this kind of “we,” yet feels helpless to use it. So what is going on here?

Structurally, there is a similarity between the conceptual demands implied by Badmington’s footnote and Brandom’s IOUs which goes something like this: “my use of X presupposes Y.” In Badmington’s case, it is that “we” presupposes humanism. If Brandom had written that footnote, on the other hand, it would have been, “my use of ‘we’ requires an account of how the first-person plural works.” What is strange about Badmington’s footnote is that it suggests that humanism is the only account that validates that use of “we,” but that account is bunk. Someone like Brandom would be moved to offer an account that works, but Badmington isn’t so moved. Why not?

The short answer is that he feels licensed for the idea that some philosophical presuppositions are both inescapable and bad, in this case humanism (and thus the bad “we” he must use). [3] However, one of the specifically philosophical theses Brandom would wish to advance is that not all linguistic phenomena have philosophical presuppositions. [4] To display more easily the dialectical ground, we might take an extreme rendering of what Badmington might mean: he is suggesting that the first-person plural becomes suspicious once one finds humanism suspicious. For many, this is just plain absurd. Why on earth should “We are driving down the street towards the mall” become suspicious once I’ve begun thinking that the seemingly natural ethical priority of humans to non-human phenomena (animals, plants, ecosystems, etc.) should perhaps be rethought? This rebuke by a commonsensical attitude, however, is sometimes just taken to be evidence of the depth at which the presupposed phenomenon in question is embedded. And so it is, but any careful interlocutor will be quick to point out that while incredulity tells you about depth, it still doesn’t tell you what kind of deeply embedded phenomenon you’re facing—in this case, grammatical or philosophical.

2.      Since, as Thomas Kuhn taught us, part of what it is to work in a discipline is to use precursors as models of how to do your work, [5] I think Foucault licenses, in a manner, this rhetorical pattern in post-Foucauldian theorists. Foucault, of course, is far more circumspect than his less careful followers. His foreword to the English-language edition of The Order of Things is a model of rhetorical deflation and intellectual modesty. A good example for my purposes is:
Can a valid history of science be attempted that would retrace from beginning to end the whole spontaneous movement of an anonymous body of knowledge? Is it legitimate, is it even useful, to replace the traditional ‘X thought that . . .’ by a ‘it was known that . . .’? But this is not exactly what I set out to do. I do not wish to deny the validity of intellectual biographies, or the possibility of a history of theories, concepts, or themes. It is simply that I wonder whether such descriptions are themselves enough, whether they do justice to the immense density of scientific discourse, whether there do not exist, outside their customary boundaries, systems of regularities that have a decisive role in the history of the sciences. (The Order of Things xiii-xiv)
For my purposes, it does not matter whether Foucault had something very precise and exact in mind by “systems of regularities” and how that role hooks into all the other roles (played by people, theories, concepts, and themes). His rhetorical presentation is one of open-minded questioning—he is simply wondering whether there might not be another character being played in the background that our current researches are leaving untouched in their account of the drama of life (or perhaps more specifically, the history of the sciences). This gesture, made by the rhetorical questions, the wondering, the not-exactly, and the not-wishing-to-deny, creates the appearance of simply opening up a grayspace in which many may come and experiment. And in this, I think Foucault was very sincere and very successful. Foucault’s boldness of imagination combined with his hope for, if not a joint inquiry, then a space in which many can all inquire and help each other with their varied inquiries—this made Foucault the towering intellectual father he has become for many. And one thing intellectual fathers do, as we all are quite aware after Derrida, is disseminate their intellectual DNA via their progeny. To make this quasi-metaphor more concrete, we might think of the kind of intellectual dissemination that figures like Foucault perform as casting off dimly understood hypotheses that require more work to process and confirm. That Foucault must be a godfather for whatever is meant by “posthumanism” is obvious for the person whose book became synonymous with “man is only a recent invention” (xxiii), “and perhaps nearing its end” (387).

The question for us progeny must be: how are we to understand that? The drive of this reflection is that Foucault’s work opens up a number of possibilities, and I do believe it is pointless to look back to Foucault for definitive guidance (that would be a form of intellectual biography). One of these possibilities is that Foucault is describing the birth and (hopeful) death of a certain self-image that humanity has of itself, a self-image that expresses itself in many different activities, only one of which is the activity of articulating a self-image (call that activity “philosophy”). One activity just might be the use of the first-person plural, for it is difficult for fans of Foucault to forget Nietzsche’s remark that we’ll believe in God as long as we believe in grammar. One can find nourishment from Foucault for this experiment by triangulating the spaciously ambiguous “man is a recent invention” with “an anonymous body of knowledge” and his transition from the first-person variable “X” to the pronominal third-person in “it was known that.”

3.      Would Foucault have thought that “we” should get rid of the first-person plural, “we”? I doubt it, but he was, on the other hand, the philosopher “who writes in order to have no face.” [6] More especially, however, I’m not sure Badmington even thinks we should get rid of the “we.” I suspect that the problem with “the naturalized we” is the presumptive homogenization that occurs by saying that such-and-such is our problem, when the drift of leftist theorizing in the last 50 years has been to try not to be so presumptive about what community, what “we,” people come from, and therefore what problems they share. But is this presumption really tied to the self-image of humanism, which is at the very least a cultural complex that includes philosophical theses? [7] But more importantly, is every use of “we” presumptive? For if this were true, that would make the very idea of community presumptive, or at least the attempt to communicate what you think “we” in your community think. It is the slide between an attitude (“presumption”), a linguistic usage (“we”), and a set of quasi-philosophical theses (“humanism”) that, I think, produces loose talk about a humanism that “forever rewrites itself as posthumanism,” [8] which suggests that the ostensible problem doesn’t in fact have a solution. And this should just suggest we rethink what the problem is.




Endnotes

[1] Twilight of the Idols, “‘Reason’ in Philosophy,” sec. 5

[2] Posthumanism, ed. Badmington, 1

[3] I suspect this idea has been disseminated most by Derrida among our literary theorists, but I cannot develop the point here. Badmington evinces this idea when, after introducing Derrida, posthumanism ceases to be a historical phenomenon and instead becomes the necessary conceptual counterpart of humanism, which is forever rewriting itself.

[4] This is not the space to provide evidence for this claim about what Brandom thinks, for it only needs to be the case for my purposes that this is a coherent thing to think.

[5] This is part of what Kuhn meant by “paradigm” in The Structure of Scientific Revolutions. See Ian Hacking’s introduction to the 4th edition for an excellent historical review of the reception of that very important book, including the incredibly misunderstood notion of a “paradigm.”

[6] The Archaeology of Knowledge, 17

[7] Richard Rorty has developed this point about philosophical presuppositions in regards to political liberalism in “The Priority of Democracy to Philosophy” (in Objectivity, Relativism, and Truth), which I’m coasting behind in all of this. The harder problem for Rorty was, after disentangling the philosophical presuppositions of “we,” if you will, dealing with other attendant problems in using “we,” especially the presumption. See my “Two Uses of ‘We.’”

[8] Posthumanism, ed. Badmington, 9