[Note: All vagueness posts are archived HERE.]
I've been thinking about Elizabeth Barnes' fantastic 2010 paper "Ontic Vagueness: A Guide for the Perplexed" (Nous 44.4: 601-27),* and I've got a few worries about her specific proposal in the latter part of the paper.
Briefly, here it is. A proposition is ontically vague if it is indeterminate because equally acceptable ersatz worlds disagree on its truth value. An ersatz world is acceptable if it represents as true all of the propositions that are determinately true in the actual world.
There is much to recommend this view. Since ersatz worlds are abstract representations, the view manages to capture the way in which ontic vagueness should cause problems for our attempts to represent it. This virtue is not to be sniffed at, as it is the main reason that many of us find Lewis' genuine realism (where possible worlds are not different in kind from the actual world) to be preferable to alternatives. Likewise, Barnes is able to preserve what is distinctive about supervaluationism, but with respect to the determinacy operator instead of the truth predicate. Since ersatz worlds themselves don't admit of indeterminate truth values, every world makes any arbitrary proposition P either true or false. But then, since a proposition is determinately true when it is true at all acceptable ersatz worlds, it follows that bivalence is determinately true. Thus is classical logic saved.
This being said, I don't think that Barnes' view quite gets it. Here are some problems in increasing order of seriousness:
- Williamson's critique of supervaluationism- Barnes is clear that she does not think that higher-order vagueness (the supposed indeterminacy in the distinction between being determinately true and not being determinately true) is a problem, and so doesn't need to worry about the way she recapitulates aspects of supervaluationism's three values. This is well-worn territory. But Williamson's major criticism is that supervaluationism doesn't really recover classical logic. The semantics are sound and complete with respect to classical truth (so-called weak completeness), but not so clearly with respect to classical inference. I'll leave the focus on this to a future post, but Williamson distinguishes between internal and external entailment, shows how one of these is not classical with respect to supervaluational semantics, and then argues that the non-classical one is the one that matters. I'm sure that an analogous argument can be made with respect to the behavior of Barnes' determinacy operator. On the other hand, I'm not sure that such a challenge would be dispositive; supervaluationists have been dealing with Williamson's critiques for fifteen or so years now, and there is probably a well-developed set of responses to which the Barnesian could help herself.
- Etchemendy's problem- Note that the genuine realist has no problem here: she can use copies of the actual world, or just the actual world itself. (This is related to the biggest problem below.) But just from reading Barnes' paper, it's not at all clear whether we should see the acceptable ersatz worlds as being just like our world but represented differently (say, one world where having 456 hairs counts as bald and another where it does not) or as genuinely different worlds. This may just be a problem for ersatzism, since Etchemendy's problem in this context presses on the way that ersatz worlds are representations themselves. Someone must discuss this somewhere in the vast wilderness of the modal actualist literature. In any case, I think Lewisian genuine realism has an advantage here. The genuine realist could just take Barnes' ersatz worlds to be duplicates of the actual world and say that we are shifting the representational machinery. Or would this be to reduce vagueness to a semantic matter? I don't think so, for the genuine realist could just follow Barnes and say that the acceptability of different representations itself holds in virtue of the actual world's ontic indeterminacy. But this leads us to the next two problems.
- Bait and switch- Early in the article Barnes differentiates ontic from other forms of vagueness by holding that in ontic vagueness there may be no acceptable precisification, but she gives us a model of multiply acceptable ones. If we avail ourselves of the notion of a God's eye view (which, for Batterman-type reasons, we arguably have to, since many predicates apply at an infinite limit and only there), we see that these are two different things. God might determine that it is indeterminate which of two or more ersatz worlds is actualized because an instance of a predicate can with equal acceptability be categorized as falling in the extension or the anti-extension. Or the case might be one where God cannot do so because there is no such acceptable ersatz world (please see this earlier related blog post by me). Again, early in the article Barnes suggests that it is the latter that is distinctive of ontic versus other forms of vagueness. This leads naturally to my main concern.
- Fails to differentiate- Barnes' model is supposed to be a model of specifically ontic vagueness, but it could with equal justice be taken as a model of conceptual/semantic vagueness, Ungerian error theory,** and (maybe) Putnamian internal realism. The person who sees vagueness as a result of our human finitude/wretchedness*** could easily see the indeterminacy of which ersatz world is actualized as holding entirely in virtue of the limitations imposed by the concepts by which the ersatz worlds represent. This is consistent with the Russellian (of the vagueness paper era) view (to some extent recapitulated in Dennett-type naturalism) that some idealized language admits of no vagueness and allows a true theory that uniquely represents the world.****
I think the last one is the main kicker for Barnes' precise model as a model of ontic vagueness. This being said, the virtues that I presented above make me think it might still be useful. It's just that more needs to be said to make sense of genuinely metaphysical vagueness. I'm hoping that the work of Jessica Wilson will help fill this lacuna.
[*The LSU Philosophy Reading group is focusing on vagueness this semester. The above thoughts germinated in conversations with Joel Andrepont, Joshua Heller, and Deborah Goldgaber.
**My best-titled paper is "Notes from the Ungerground." I presented it at a Central APA, but by the time I got it ready for submission, some bigger names had published a similar paper defending nihilism about vagueness. I should have rewritten the thing, but it's been a few years and I may never get around to it.
***From a historical standpoint, this is really weird. Prior to Russell's famed essay, philosophers tended to think that precision was the result of our finitude. For a brief account of Russell's inversion and a defense of the earlier way of looking at things, see this paper by Frankie Worrell and me. Again, presented at the APA (Eastern) as part of the main program, but not accepted by a journal.
****It's weird to me that this literature divides accounts of vagueness into semantic/conceptual, epistemic, and metaphysical/ontic. Russell/Unger-type nihilism doesn't really fit into any of these views as far as I can tell.]
- Is the Evans/Salmon argument against metaphysical indeterminacy merely a case of Moorean paradoxicality?
- Vagueness versus (Wilsonian/Brandomian) Underdetermination