Helping Mark Allan Ohm translate Garcia's magnum opus was probably the second hardest thing (after caring for a newborn) I've yet done, and it's tremendously validating to get a pat on the shoulder as well as to see that other people agree with me that it was not time wasted.
To be clear, if you had to translate a one hundred seventy thousand plus word metaphysics tome from its original French, you could not do better than to translate one by as preternaturally gifted a writer as Tristan Garcia. But there are still a lot of moving pieces to juggle for sustained periods.
I continue to think that American philosophers comprehensively missed their chance during the reign of Theory's Empire in the humanities. The fact that French Post-Structuralism was the lodestar of most humanists' work led most philosophers to a priori dismiss most humanists for a couple of decades. As a result, the crucial fact that non-philosophers were excited about using philosophy in their work did not lead to the bridge-building that it should've. And books like Samuel Wheeler's Deconstruction as Analytic Philosophy and Mark Okrent's Heidegger's Pragmatism showed that there was really no excuse not to bridge the fields. And after all these years there's still an inexcusable cultural divide between, for example, people working on the philosophy of fiction and people working on narratology.
This recent article in The Chronicle of Higher Education contains a nice overview of what's happened in the interregnum. It's pretty interesting, and (with Mark Ohm) I've argued elsewhere that the autonomy of texts demands surface reading (also check out the shout-out to Latour in the Chronicle article). This being said, I'd hate to see symptomatic reading completely die out, and I hate that the decline of symptomatic reading is being taken as synonymous with the decline of theory.
On the other hand, there's this:
Perhaps the key difference is politics. Symptomatic reading, usually associated with the sixties’ generation, often assumes that it performs "politics by other means." Best and Marcus point out that such a stance makes the critic a kind of glamorous hero who does work "more akin to activism and labor than to leisure." They propose "a sense of political realism about the revolutionary capacities of both texts and critics."
Though avoiding a polemical tone, the article on surface reading has touched a nerve, with responses from many prominent critics and, according to Google Scholar, 175 citations in just a few years (most academic articles in literary studies are lucky if they get six or eight). Best and Marcus told me that they had been surprised at the prolific and sometimes vehement response.
Their opponents complain that they discard the political vista of symptomatic reading too easily. And some question the theoretical premise that one can readily determine the surface of the text. In a riposte in PMLA, the journal of the Modern Language Association, Crystal Bartolovich, an associate professor of English at Syracuse University, defends criticism as an emancipatory project. Among examples, she cites the work of Edward Said, who used the tools of criticism, focusing on literary and other representations of the Middle East, to make serious political comments.
It is not that Best and Marcus advocate quietism. As Marcus writes in Between Women, "symptomatic reading is an excellent method for excavating what societies refuse to acknowledge." Moreover, she has made a foray into the public sphere, notably co-editing the online review Public Books. Affiliated with the scholarly journal Public Culture, it tries to redress the lack of forums for book reviews, covering work in the humanities and social sciences, as well as contemporary fiction. However, it doesn’t necessarily assume a politics; as its mission states, it focuses on "the brainy, bookish, or insatiably curious, who share our passion for connecting to the world through ideas."
As Marcus remarked in a talk at Carnegie Mellon University this past fall, if your aim is activism, literary criticism may not be the best way to do it.
Pace many of the poststructuralists' heirs hanging out in philosophy departments, the blogosphere, and Facebook, I think that this punchline holds of philosophy as well. It should be obvious that political liberation requires liberation from politics. But if all reading is symptomatic reading, it's impossible to acknowledge this.
I can't put this any better, so I'll just cut and paste bits of an e-mail from Mark Ohm about Mary Beth Mader's fantastic lecture to the Kingston University Centre for Research in Modern European Philosophy:
Check out this nice lecture by Mary Beth Mader. Her paper comes out in print in Gilles Deleuze and Metaphysics at the end of the year. She makes a nice distinction between extensive and intensive quality and quantity in Aristotle, Aquinas, Châtelet, Deleuze, et al. She seems to argue against any reductionist naturalistic account of intensity (against both Protevi's and Olkowski's takes on Deleuze's metaphysics of intensity, argues Mader). The weird thing is that Aristotelian talk of extensive and intensive quality and quantity is very close to Garcia's talk of the value of being something more or less itself (beauty).
I'm going to e-mail her and ask for a paper copy. Just the idea that Protevi and Olkowski share a common presupposition is worth the price of entry, but I also think the paper will be really helpful for anyone working through Garcia's Form and Object. As I've noted before, some of the first batch of critical comments on Garcia kind of botch the manner in which intensity is a metaphysical primitive for Garcia. It's an understandable error, given some of the claims about the primacy of objects married to his criticisms of process philosophy in the first part of his text. But denying that intensity can do the work that Deleuzians would have it do is not to turn Deleuze on his head and try to derive intensive notions from extensive ones. Book II of Garcia's text is very clear on this point, and in fact all (over sixteen) of Garcia's fascinating regional ontologies involve a notion of intensity which is neither prior to things or objects nor reducible to them.
It is easy to misconstrue speculative realism as a bunch of continental philosophers suddenly taking themselves to cotton on to the fact that there is an external world. And the responses are predictable: (1) Analytic philosophers have been talking about the external world for decades now. There's nothing new here. Move along. (2) Even Husserl talked about objects in "the external world" (the scare quotes here weirdly denoting the non-technical meaning of the phrase). There's nothing new here. Move along.
I've heard these kinds of dismissals multiple times at different conferences and paper presentations over the past three years. Unfortunately they involve both misreading Meillassoux's After Finitude as a manifesto for some kind of naive realism and not reading the essays in such books as The Speculative Turn* or journals such as Speculations.
The misreading goes like this: Meillassoux thinks that most continental philosophy is idealist in the Berkeleyan sense and he argues that it should not be. In reality, about half of his book is a detailed attack on the presumption that any philosophical methodology (though in particular he is concerned with phenomenology) could give one a perspective from which we can ignore the Berkeley/Fichte master argument for idealism.** Given this reality, the standard misconstrual is hugely ironic, because it is so often followed by gently reminding the anti-idealist Meillassoux that phenomenology gives one a perspective beyond realism and idealism, which was the very thing attacked in his anti-correlationist argument.
Again, Meillassoux's whole point in the "critique of correlationism" is to try to get the reader to not bracket (as in phenomenology) or set to one side (as in most analytic philosophy)*** the worry about how we can have knowledge of, or even talk about, reality given that our access to this reality is always our access.
The first upshot of Meillassoux's worry is that it is very hard not to fall into one of three camps: (1) some (well, actually a German) form of idealism, where the world itself must be construed in mentalistic terms, (2) skepticism, where we honestly admit we have no idea what's going on, or (3) quietism, where we just try to be better phenomenologists by being much clearer about that on which we refuse, on principle, to speculate (the "speculative" part of speculative realism is precisely a refusal of this refusal, cf. the Introduction to Hegel's Phenomenology of Spirit on just this point).
From an analytic perspective, what's so interesting about the constellation of speculative realist thinkers is the extent to which Meillassoux's critique crystallizes a post-phenomenological rethinking of major figures in the tradition, especially German Idealists and post-structuralists. Following Paul Livingston's notion of paradoxico-criticism, I think there is a tradition of paradoxico-metaphysics which fuses German Idealist views about Kant's mathematical antinomies and a characteristic take on the Berkeley/Fichte problematic. Here's the meta-metaphysical antinomy:
1. (A consistent, complete) metaphysics is impossible. Two reasons: (A) Since a metaphysics has to account for itself, it always generates Russell-type paradoxes. (B) Given the Berkeley/Fichte problematic, metaphysical knowledge is an attempt to know the unknowable.
On the other hand:
2. Metaphysics is unavoidable. Even when we try to retreat into our own minds, those minds are still things in the world that we are talking about, and fixing what we are talking about forces us to talk about the total world in which they occur (Priest calls this the Domain Principle).
Paradoxico-metaphysicians can be presented as embracing both of these horns.**** The project is to give an account of what reality must be like such that it prevents us from giving an account of what it is like. Harman, Tristan Garcia, and maybe Zizek can all profitably be presented as occupying this problem space. On Zizek, I am going to read this book by Joseph Carew as soon as possible and try to work out whatever homologies there are between Harman, Garcia, and Zizek.
There is clearly an epistemological side to all of this too. How might linguistic, conceptual creatures like ourselves get knowledge of a world that is radically non-linguistic and non-conceptual? The classical German Idealist answer is to say that we can't, and instead to see the world as itself conceptual (Pittsburgh Hegelians get as close as one can get to this while eschewing metaphysics, which is where their quietism comes in). Can one solve the problem while rejecting any simple form of predicate-equals-property idealism?
The original speculative realists all had in common an affinity for HP Lovecraft, and when one considers all of the above, one sees that this is not accidental. Lovecraft is the master of using language to paradoxically describe the indescribable. I think that Harman has been the most consistent at taking Lovecraft's aesthetic success as a test case for epistemology. His recent book on this is a blast to read as well.
[*When I searched the title on Amazon to get the link, the second and third entries were for the book Fifty Shades of Grey. A much more talented writer than me could probably make something funny of this. I went through the "Customers who bought this item also bought" list and there was no sado-masochistic stuff there, though if I were a masochist I would probably more joyfully subject myself to Lacan and Deleuze's literary styles.
**Joshua Heller and I have written a paper on Meillassoux explaining this. Our paper is under review, but if anyone wants a copy just e-mail me at my name 2 at gee mail dot com. Our paper builds on Graham Harman's great book that is about to be reissued in a revised and expanded version.
***Not always set aside. But when people like Plantinga or Nagel raise the question of whether or not naturalism has the resources to take on the issue, the naturalist's response is so ferocious that one has to dust off Freud to understand what's going on.
****Neal Hebert and I have a forthcoming paper where we argue that different stances among professional wrestling viewers correspond to different meta-metaphysical positions, the highest corresponding to paradoxico-metaphysics. The idea is a lot less crazy than it sounds. I can e-mail the paper to anyone who is interested.]
Peter Simons' 1998 "Does the Sun Exist? The Problem of Vague Objects" blows my mind. The technical trick to accommodate two fuzzy logical intuitions is brilliant. My student and sometimes co-writer* Joshua Heller (who actually understands the calculus way better than me) is probably going to do some cool stuff with it in his thesis. Here I'm interested in this argument:
A decided disadvantage to the supervaluational theory concerns identity. For while the sun exists (since all candidates do) nothing is definitely identical with the sun: since there is more than one candidate, none of them is determinately it. Hence if for no candidate c is it determinately true (or false) that it fulfills the propositional function x = s, then it follows that the proposition s = s has no truth-value. Yet since all candidates fulfill the propositional function x = x it would seem that s = s is true after all. If the sun exists in some sense we want to say it is definitely itself in the same sense. The problem is symptomatic of the havoc wreaked in our conceptual scheme by vague objects (p. 4).
When I first read this, I just glossed it as the Evans/Salmon argument,** but it's clear I was mistaken. I think Simons is actually saying something interesting about de dicto versus de re identity claims. The claim that it is de dicto determinately true that there exists an object identical to s can be formalized in this manner:
De Dicto- △∃x(x = s)
You get the corresponding de re claim by putting the determinacy operator inside of the existential quantifier:
De Re- ∃x△(x = s)
The first says that it is determinately the case that something is identical to s, while the second says that something is such that it is determinately the case that it is identical to s. Supervaluational as well as Barnes type (see here and here) approaches (if they treat identity like other two place predicates) make the de dicto claim true while rendering the de re claim either false or lacking truth value. At every acceptable precise model something will be identical to s, which renders the de dicto claim true (note: in my reconfigured version of Barnes, which saves her view from my and Jessica Wilson's criticisms, the de dicto identity claim can come out false). But the de re claim can fail to be true if there isn't something in the original model that is identical to s in every possible precisification of the original vague model.
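To see the difference concretely, here is a toy supervaluational check (my own sketch, not Simons' formalism): two candidate suns, two precisifications settling what "s" denotes.

```python
# Toy sketch (my construal) of the de dicto / de re contrast under
# supervaluation. Two candidate objects for "the sun", and two
# precisifications, each fixing the reference of "s" on one candidate.
candidates = ["c1", "c2"]
precisifications = [{"s": "c1"}, {"s": "c2"}]

def de_dicto():
    # △∃x(x = s): at EVERY precisification, SOME candidate is identical to s.
    return all(any(x == p["s"] for x in candidates) for p in precisifications)

def de_re():
    # ∃x△(x = s): SOME candidate is identical to s at EVERY precisification.
    return any(all(x == p["s"] for p in precisifications) for x in candidates)

print(de_dicto())  # True: each precisification supplies its own witness
print(de_re())     # False: no single candidate is the witness across all
```

The relative scope of the quantifier and the operator is doing all the work: the de dicto claim only needs a witness per precisification, while the de re claim needs one witness that works across all of them.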
[Note: Any time I post something this aggravating, I get lots of hostile e-mails. Please say it here (comments aren't moderated) or on your own blog instead. I'm sorry; I wish that I had time/emotional energy to answer, let alone read, all of the angry e-mail my blogging sometimes generates. But I don't.]
I just got an advance copy of Peter Wolfendale's Object-Oriented Philosophy: The Noumenon's New Clothes. I was pretty excited about it because I've been a fan of Wolfendale's blog for a long while now and also quite enjoyed large chunks of his dissertation on Heidegger as well as some of his takes on Robert Brandom. We have a pretty fundamental disagreement about the latter's quietism, but I continue to find Wolfendale to be an interlocutor from whom I can learn a lot.
Since what I'm about to say is negative, let me also preface it with a story of my own. My first publication was something I wrote with Roy Cook in graduate school. The first place we sent it was Analysis, and they accepted it with revisions. So this seemed easy enough. And it generally is for people, as long as they are co-writing with Roy Cook (the guy's phenomenal). But after that I moved to Louisiana, and it was over two years before I got anything else accepted. It was one of the darkest periods of my life. As I faced the possibility of instantiating the Bob Dylan line about thirty years of schooling and they put you on the day shift, it became a little bit maddening. Weirdly, the psychological issues had a pronounced negative effect on my prose, and my first two single-authored published papers (here and here) just have a nasty tone. I find looking at them today really unpleasant. The frustration of what I was going through as an early stage academic with zero job security is just too manifest. I've long since had a chance to apologize to the papers' targets for the tone of the papers, and both were mensches about it. But I can't go back and rewrite them.*
It's generally a mistake to delve into people's psychology too much, unless doing so helps you be more charitable. So I'm going to try to see the preface (available now online here) of Wolfendale's book in terms of what I myself went through. Moreover, in my case the reviewers and editors at Philosophical Studies and Synthese at least helped me tone down the papers a bit before publication. It's pretty clear that the suits at Urbanomic Press did not extend the same bit of professionalism to Wolfendale.
So now here is the weird thing. At the very outset of a four hundred page book ostensibly about Graham Harman's philosophy we get phrases like "the pathological dynamics typical of Harman's work" (xvi). Please pause and consider that. Why write a four hundred page book about something dynamically pathological? Seriously, polemics in philosophy never succeed, for the simple reason that only the already converted give the polemicist a pass on the uncharity needed for the polemic to be rhetorically effective. For a good example of this performative contradiction in action, check out Bertrand Russell's introduction to Ernest Gellner's old attack on "linguistic philosophy."
In this post I presented four problems for the account of metaphysical vagueness offered by Elizabeth Barnes in her wonderful 2010 paper "Ontic Vagueness: A Guide for the Perplexed" (Nous 44.4: 601-27). While starting to read Jessica Wilson's equally fantastic 2013 "A determinable-based account of metaphysical indeterminacy" (Inquiry 56.4: 359-385) last night, it occurred to me that a slight patch to Barnes' account could deflect Wilson's opening criticism of alternative approaches as well as my third criticism of Barnes.
A proposition is ontically vague if it is indeterminate, that is, if equally acceptable ersatz worlds disagree on its truth value. An ersatz world is acceptable if it represents as true all of the propositions that are determinately true in the actual world.
There is much to recommend this view. Since ersatz worlds are abstract representations, the view manages to capture the way in which ontic vagueness should cause problems for our attempts to represent it. This virtue is not to be sniffed at, as it is the main reason that many of us find Lewis' genuine realism (where possible worlds are not different in kind from the actual world) to be preferable to alternatives. Likewise, Barnes is able to preserve what is distinctive about supervaluationism, but with respect to the determinacy operator instead of the truth predicate. Since ersatz worlds themselves don't admit of indeterminate truth values, it is the case that every world makes any arbitrary proposition P true or false. But then, since a proposition is determinately true when it is true at all acceptable ersatz worlds, it follows that bivalence is determinately true. Thus is classical logic saved.
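To make the recovery vivid, here is a minimal toy model (my construal, not Barnes' own formalism): each ersatz world is classical, so excluded middle holds at every world, and hence determinately, even while P itself is indeterminate.

```python
# Minimal toy model of Barnes-style ersatz-world semantics for the
# determinacy operator (my schematic rendering).
worlds = [{"P": True}, {"P": False}]  # two equally acceptable ersatz worlds

def determinately(prop):
    # Determinate truth: true at every acceptable ersatz world.
    return all(w[prop] for w in worlds)

def indeterminate(prop):
    # Indeterminacy: the acceptable ersatz worlds disagree.
    return any(w[prop] for w in worlds) and not all(w[prop] for w in worlds)

# Each world is classical, so "P or not-P" holds at every world, and is
# therefore determinately true...
print(all(w["P"] or not w["P"] for w in worlds))  # True
# ...even though P itself is neither determinately true nor determinately false.
print(determinately("P"), indeterminate("P"))     # False True
```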
I argued that this view had four problems: (1) the Williamson problem for supervaluationism about narrow versus wide entailment,* (2) Etchemendy's problem of how to construe countermodels, (3) what I call the bait and switch problem (explained below), and (4) the consistency of Barnes' model with non-metaphysical accounts of indeterminacy.
Here's what I wrote about the third problem:
Early in the article Barnes differentiates ontic from other forms of vagueness by holding that in ontic vagueness there may be no acceptable precisification, but she gives us a model of multiply acceptable ones. If we avail ourselves of the notion of a God's eye view (which, for Batterman type reasons, we arguably have to; lots of predicates apply at an infinite limit and only there), we see that these are two different things. God might determine that it is indeterminate which of two or more ersatz worlds is actualized because an instance of a predicate can with equal acceptability be categorized as in the extension or anti-extension. Or the case might be one where God cannot do so because there is no acceptable such ersatz world (please see this earlier related blog post by me). Again, early in the article Barnes suggests that it is the latter that is distinctive of ontic versus other forms of vagueness.
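Schematically, the gap between the two characterizations looks like this (a toy rendering of mine, not Barnes' formalism):

```python
# The two scenarios that the bait and switch runs together.
case_a = ["w1", "w2"]  # several equally acceptable precise worlds; God can't pick one
case_b = []            # NO acceptable precise world exists at all

def barnes_indeterminacy(acceptable_worlds):
    # Barnes' official model: indeterminacy as disagreement among
    # multiple acceptable ersatz worlds.
    return len(acceptable_worlds) > 1

print(barnes_indeterminacy(case_a))  # True: the model captures this scenario
print(barnes_indeterminacy(case_b))  # False: the model can't even express this one
```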
I didn't think of how her view might be amended to deal with this until I started reading Jessica Wilson's piece. She writes:
Let me start by prefiguring, in a heuristic way, what is different about my approach. Previous accounts of MI [metaphysical indeterminacy] have supposed that what it is for there to be MI is for it to be indeterminate which of various determinate (maximally precise) states of affairs (henceforth, SOAs), typically involving an object's having some property, obtain. Here I present an account on which what it is for a SOA to be MI is for it to be determinate (or just plain true) that an indeterminate (less than maximally precise) SOA obtains.
First, let's see why one who agrees with Wilson on this would have to reject Barnes' theory as she presents it. Take the claim that it is determinate that an indeterminate SOA obtains. Where "△" is a determinacy operator and "▽" an indeterminacy operator, Wilson's claim entails that there are P such that △▽P holds. But that's impossible on Barnes' account. △▽P is true on her account iff ▽P is true at every acceptable precisification. But nothing is indeterminate at an acceptable precisification. Since her ersatz worlds are just classical models, it is not possible to further precisify them. But ▽P is true at a world only if there are incompatible acceptable precisifications of that world, at least one of which makes P true and one of which makes P false.
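Compressed into a few lines (my own schematic rendering of the clash, using the operators above, not a quotation from either paper):

```latex
\begin{align*}
\triangle\triangledown P &\iff \triangledown P \text{ holds at every acceptable ersatz world } w\\
\triangledown P \text{ holds at } w &\iff \text{acceptable precisifications of } w \text{ disagree on } P\\
w \text{ is classical} &\implies w \text{ admits no further precisification}\\
&\implies \triangledown P \text{ fails at every } w, \text{ so } \triangle\triangledown P \text{ never holds.}
\end{align*}
```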
Part of the mostly unrealized promise of Object-Oriented Ontology was the contribution it might make to historiography. The idea is to focus on the way things in themselves realize and fail to realize their teleology, and on how they systematically interact and fail to interact with other objects. While most of the metaphysical focus thus far (especially by Graham Harman) has been on using continental tropes from German Idealism and Phenomenology to make metaphysical sense of causal underdetermination, as well as of the ways that properties of objects fail to causally interact (e.g. the table's hardness and the various properties of the neutrino that allow it to pass through), the characteristic ways that objects are theorized by people like Ian Bogost and Levi Bryant (and possibly people who don't characterize themselves as OOO types but whose work is relevant, especially Paul Livingston and Adrian Johnston) also allow us to begin to theorize causal overdetermination in novel ways.
According to Lyotard, post-modernism was to be characterized by a general distrust of totalizing narratives, because they always end up being incomplete, in spite of their pretensions. Paul Livingston looks at Gödel's Incompleteness Theorems (which Lyotard also discusses in The Postmodern Condition) and Russell's Paradox to argue that this focus on incompleteness is itself, in a sense, incomplete. We also need to think about the possibility of complete, yet inconsistent theories.*
With respect to history, when we focus on the lives of objects we see that there are too many theories. I don't know if he's still writing this book, but for a while Levi Bryant was going to write a history of the world in terms of the battle between trees and grasses. Prior to humans there is a long history of evolutionary change as trees and grasses systematically beat one another back. Once humans came along, grasses developed a brilliant strategy to defeat the trees. They became edible to us. So we helped them out by cutting back all of the forest so that we could plant grain. As we look into the future we see that this is probably at best a partial victory, and perhaps a completely Pyrrhic one. For the grasses' technology (us) has gone Frankenstein, and now we threaten the very existence of both trees and grasses.
The metaphysical point is that this story is just fine. Contra Lyotard, we shouldn't avoid metanarratives, but should rather develop a tolerance for accepting multiple ones that are inconsistent with one another, and also develop a metaphysics that makes sense of why this is the case.**
I've recently become pretty interested in the interior lives of weed-eaters, washing machines, and mops. For the most part, they try their best to do their job. But they are systematically failing. Weed-eaters haven't demonstrably improved in thirty years. You still have the same choice of crappy gasoline and electric models, and all of them are a pain in the neck because the wire is either constantly jamming or breaking. Mops suffer from design flaws as well: exposure to water ends up either causing them to rust or their wood to rot. Washing machines are the worst at slacking at their jobs. Top of the line machines routinely develop unfixable mildew problems and also break down in ways that the hapless Sears repair person (whose own job has been ruined by a rent-seeking Randroid CEO) can't do anything about.
I've been thinking about Elizabeth Barnes' fantastic 2010 paper "Ontic Vagueness: A Guide for the Perplexed" (Nous 44.4: 601-27),* and I've got a few worries about her specific proposal in the latter part of the paper.
Briefly, here it is. A proposition is ontically vague if it is indeterminate, that is, if equally acceptable ersatz worlds disagree on its truth value. An ersatz world is acceptable if it represents as true all of the propositions that are determinately true in the actual world.
There is much to recommend this view. Since ersatz worlds are abstract representations, the view manages to capture the way in which ontic vagueness should cause problems for our attempts to represent it. This virtue is not to be sniffed at, as it is the main reason that many of us find Lewis' genuine realism (where possible worlds are not different in kind from the actual world) to be preferable to alternatives. Likewise, Barnes is able to preserve what is distinctive about supervaluationism, but with respect to the determinacy operator instead of the truth predicate. Since ersatz worlds themselves don't admit of indeterminate truth values, it is the case that every world makes any arbitrary proposition P true or false. But then, since a proposition is determinately true when it is true at all acceptable ersatz worlds, it follows that bivalence is determinately true. Thus is classical logic saved.
This being said, I don't think that Barnes' view quite gets it. Here are some problems in increasing order of seriousness:
Williamson's critique of supervaluationism- Barnes is clear that she does not think that higher order vagueness (the supposed indeterminacy of the distinction between being determinately true and not being determinately true) is a problem, and so doesn't need to worry about the way she recapitulates aspects of supervaluationism's three values. This is well worn territory. But Williamson's major criticism is that supervaluationism doesn't really recover classical logic. The semantics are sound and complete with respect to classical truth (so-called weak completeness), but not so clearly with respect to classical inference. I'll leave the focus on this to a future post, but Williamson distinguishes between internal and external entailment (definitions below), shows how one of these is not classical with respect to supervaluational semantics, and then argues that the non-classical one is the one that matters. I'm sure that an analogous argument can be made with respect to the behavior of Barnes' determinacy operator. On the other hand, I'm not sure that such a challenge would be dispositive; supervaluationists have been dealing with Williamson's critiques for fifteen or so years now, and there is probably a well developed set of responses to which the Barnesian could help herself.
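For concreteness, here is how the distinction is usually drawn in the supervaluationist literature (the notation is mine; Williamson frames it in terms of local versus global validity):

```latex
\begin{align*}
\Gamma \vDash_{\text{local}} \varphi &\iff \text{every precisification making all of } \Gamma \text{ true makes } \varphi \text{ true}\\
\Gamma \vDash_{\text{global}} \varphi &\iff \text{if every member of } \Gamma \text{ is true at all precisifications, so is } \varphi
\end{align*}
```

The stock trouble case: P globally entails △P, but the conditional P → △P is not valid, so conditional proof fails for global consequence (contraposition and reductio fail similarly). That is the precise sense in which the recovered logic isn't fully classical.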
Etchemendy's problem- Note first that the genuine realist has no problem here: she can use copies of the actual world, or just the actual world itself (this is related to the biggest problem below). But just from reading Barnes' paper, it's not at all clear whether we should see the acceptable ersatz worlds as being just like our world but represented differently (say, one world where having 456 hairs counts as bald and another where it does not) or as genuinely different worlds. This may just be a problem for ersatzism, since Etchemendy's problem in this context presses on the way that ersatz worlds are representations themselves. Someone must discuss this somewhere in the vast wilderness of modal actualist literature. In any case, I think Lewisian genuine realism has an advantage here. The genuine realist could just take Barnes' ersatz worlds to be duplicates of the actual world and say that we are shifting the representational machinery. Or would this be to reduce vagueness to a semantic matter? I don't think so, for the genuine realist could just follow Barnes and say that the acceptability of different representations itself holds in virtue of the actual world's ontic indeterminacy. But this leads us to the next two problems.
Bait and switch- Early in the article Barnes differentiates ontic from other forms of vagueness by holding that in ontic vagueness there may be no acceptable precisification, but she gives us a model of multiply acceptable ones. If we avail ourselves of the notion of a God's eye view (which, for Batterman type reasons, we arguably have to; lots of predicates apply at an infinite limit and only there), we see that these are two different things. God might determine that it is indeterminate which of two or more ersatz worlds is actualized because an instance of a predicate can with equal acceptability be categorized as in the extension or anti-extension. Or the case might be one where God cannot do so because there is no acceptable such ersatz world (please see this earlier related blog post by me). Again, early in the article Barnes suggests that it is the latter that is distinctive of ontic versus other forms of vagueness. This leads naturally to my main concern.
Fails to differentiate- Barnes' model is supposed to be a model of specifically ontic vagueness, but it could with equal justice be taken as a model of conceptual/semantic vagueness, Ungerian error theory,** and (maybe) Putnamian internal realism. The person who sees vagueness as a result of our human finitude/wretchedness*** could easily see the indeterminacy of which ersatz world is actualized as holding entirely in virtue of the limitations imposed by the concepts by which the ersatz worlds represent. This is consistent with the Russellian (of the vagueness paper era) view (to some extent recapitulated in Dennett type naturalism) that some idealized language admits no vagueness and is such that it allows a true theory that uniquely represents the world.****
I think the last one is the main kicker for Barnes' precise model as a model of ontic vagueness. This being said, the virtues that I presented above make me think it might still be useful. It's just that more needs to be said to make sense of genuinely metaphysical vagueness. I'm hoping that the work of Jessica Wilson will help fill this lacuna.
[*The LSU Philosophy Reading group is focusing on vagueness this semester. The above thoughts germinated in conversations with Joel Andrepont, Joshua Heller, and Deborah Goldgaber.
**My best titled paper is "Notes from the Ungerground." I presented it at a Central APA, but by the time I got it ready for submission, some bigger names had published a similar paper defending nihilism about vagueness. I should have rewritten the thing, but it's been a few years and I may never get around to it.
***From a historical standpoint, this is really weird. Prior to Russell's famed essay, philosophers tended to think that precision was the result of our finitude. For a brief account of Russell's inversion and defense of the earlier way of looking at things, see this paper by Frankie Worrell and me. Again, presented at the APA (Eastern) as part of the main program, but not accepted by a journal.
****It's weird to me that this literature divides accounts of vagueness into semantic/conceptual, epistemic, and metaphysical/ontic. Russell/Unger type nihilism doesn't really fit into any of these views as far as I can tell.]
I've just started reading material on ontological indeterminacy, so I'm certainly missing some relevant discussions. Nonetheless, my impression is that more needs to be done to get clearer about what exactly differentiates various varieties of indeterminacy from one another. For me this is particularly important, because (as I discussed here) I suspect that the Evans argument against metaphysical indeterminacy systematically fails for at least some sorites-susceptible forms of indeterminacy.
Here's a stab. Canonical vagueness involves: (1) indeterminate cases, (2) sorites susceptibility, and (3) impossibility of indeterminate cases becoming paradigm instances of the concept in question (without any change in meaning).
At least one other form of indeterminacy, let us call it underdetermination (since Mark Wilson's version of the underdetermination of meaning is for me and Robert Brandom (of this book) a canonical case) involves: (1) indeterminacy, and (2) the possibility of indeterminate cases becoming paradigm instances of the concept in question (without any change in meaning).
So a color patch in between red and orange is not going to become a paradigm instance of redness. But some novel yet at first hard to categorize building might come to be seen as paradigmatic of one or more distinct (and incompatible) extant architectural styles. Wilson gives examples of this kind of underdetermination from the history of science and Brandom argues that we cannot understand how stare decisis works in law courts without seeing it as a case of this. In virtue of Brandom's discussion, it is clear that some cases of underdetermination do admit sorites susceptibility.
One thing I'm interested in now is the extent to which Wilsonian underdetermination could ever be a case of ontic indeterminacy. In this paper, Frankie Worrell and I just sort of assume that it can, but this seems problematic to me now at least with respect to sorites susceptible cases of underdetermination.
One non-sorites susceptible case of underdetermination (as characterized above) is the indeterminate future. So we probably have at least three types of indeterminacy, all involving indeterminate cases, but differing in sorites susceptibility and in whether or not indeterminate cases can come to be non-indeterminate.
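Summing up the typology as a lookup on the two distinguishing features (my own schematic summary, using the examples above):

```python
# The three types of indeterminacy distinguished above, keyed on the pair
# (sorites susceptible?, can indeterminate cases become determinate/paradigm?).
typology = {
    (True, False): "canonical vagueness (the patch between red and orange)",
    (True, True): "Wilson/Brandom underdetermination (stare decisis, the novel building)",
    (False, True): "non-sorites underdetermination (the indeterminate future)",
}
```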
Some of my colleagues and I are trying to come up with a dialectical/historical map of panpsychism. In what follows, let us label as panpsychist any view that (without substantial change in meaning) attributes to the world categories that we might most naturally attribute to human minds. Panpsychism needn't be pantheism or panentheism (the view that the physical universe is part of God), though one might argue that pantheism should entail panpsychism. To the extent that Spinoza is fairly labelled a pantheist, one can actually tell the story of German Idealism in terms of this issue.
Anyhow, here are the three main strands I could think of:
Post-Cartesian Panpsychism- This form comes out of the difficulty of making sense of how the mind and its properties could exist if the non-mental world lacked those properties. The two people who have done the most to help the development of this kind of panpsychism are David Chalmers and Galen Strawson. Note that this kind of panpsychism is not a priori committed to the claim that we can't make sense of the mind given the way physics describes the world. To the extent that doing physics might itself require attributing mental properties to a world we previously thought was non-mental (and on some interpretations of quantum physics this is the case), physics might actually end up providing support to Post-Cartesian Panpsychism.
Early Schelling*/Hegelian/post-Heideggerian/post-McDowellian Panpsychism- Hegel can profitably be read as holding that one must either be a skeptic or attribute to the universe as it is in itself the kind of teleological properties that post-Cartesian science was locating entirely in the human (and God's) minds. Much of his Phenomenology can be understood as a deconstruction of skepticism, so that (given the dichotomy) the teleological conception of reality can be assumed when working out a positive metaphysics. Martin Heidegger's revolt against neo-Kantianism has been the biggest impetus for a return of this kind of panpsychism. For Heidegger, descriptive vocabulary (Sellars' "space of causes") is actually founded on normative vocabulary. There is some sense in which we can't help seeing the world as always already a site of deontic modals that are already there. But then, as soon as you ask if the world really is that way, you are confronted with Hegel's dichotomy between skepticism and metaphysical teleology. Note that this is precisely why there are quietist strands in Heidegger and McDowell. They accept the epistemic primacy of the space of reasons, but by undermining questions about the way the world really is they hope to avoid accepting its metaphysical primacy in the sense of the Hegelian strand of panpsychism.
Late Schelling/Schopenhauerian/Nietzschean Panpsychism- If Hegelian panpsychism concerns deontic modals, then Schopenhauerian panpsychism concerns alethic modals. Grossly simplified, the idea is like this. Contra Hume, our inner phenomenology reveals what it is like to be frustrated, and this gives us a qualitative experience of relative forms of impossibility and possibility. Then, once one agrees with Schelling that one is nature, one realizes that what happens in one's own body is happening in the world as it is, and a metaphysics of will is born. Strangely, philosophers who find psychologically rich forms of alethic modality in the world as it is in itself have tended to be hostile to philosophers who find psychologically rich forms of deontic modality in the world. This might just be because of personal issues between Hegel and Schopenhauer. I don't know. Nor do I know the extent to which Bergson, Whitehead, and Deleuze should be understood as being in this panpsychist tradition.
For me, the biggest contribution that Speculative Realism has made to contemporary philosophy is to open up questions about the latter two forms of panpsychism, and to renew work in traditions where they've already been opened up (guerilla readings of classical phenomenologists, Simondon not just as a precursor to Deleuze, late period Merleau-Ponty, new Spinozism, metaphysical readings of Deleuze, Derrida, etc., etc.). I hope to do a post about this in a couple of days. But I'd be really interested if anyone has anything to add to or subtract from the above typology.