
The Epistemic Closure Principle

First published Mon Dec 31, 2001; substantive revision Tue Dec 27, 2005

Most of us think we can always enlarge our knowledge base by accepting things that are entailed by (or logically implied by) things we know. The set of things we know is closed under entailment (or under deduction or logical implication), which means that we know that a given claim is true upon recognizing, and accepting thereby, that it follows from what we know. However, some theorists deny that knowledge is closed under entailment, and the issue remains controversial. The arguments against closure include the following:

The argument from the tracking analysis of knowledge: the correct analysis of knowledge, which includes a tracking condition, suggests that knowledge is not closed, so it isn't.

The argument from nonclosure of knowledge modes: since the modes of gaining, preserving or extending knowledge, such as perception, testimony, proof, memory, indication, and information are not individually closed, neither is knowledge.

The argument from unknowable (or not easily knowable) propositions: certain sorts of propositions cannot be known (without special measures); given closure, they could be known (without special measures), by deducing them from mundane claims we know, so knowledge is not closed.

The argument from skepticism: skepticism is false but it would be true if knowledge were closed, so knowledge is not closed.

While proponents of closure have responses to these arguments, they also argue, somewhat in the style of G. E. Moore (1959), that closure itself is a firm datum — it is obvious enough to rule out any understanding of knowledge or related notions that undermines closure.

A closely related idea is that it is rational (justifiable) for us to believe anything that follows from what it is rational for us to believe. This idea is intimately related to the thesis that knowledge is closed, since, according to some theorists, knowing p entails justifiably believing p. If knowledge entails justification, closure failure of the latter might lead to closure failure of the former.


1. The Closure Principle

Precisely what is meant by the claim that knowledge is closed under entailment? One response is that the following straight principle of closure of knowledge under entailment is true:

SP: If person S knows p, and p entails q, then S knows q.

The conditional involved in the straight principle might be the material conditional, the subjunctive conditional, or entailment, yielding three possibilities, each stronger than the last (rendered formally below):

SP1: S knows p and p entails q only if S knows q.

SP2: If S were to know something, p, that entailed q, S would know q.

SP3: It is necessarily the case that: S knows p and p entails q only if S knows q.
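
The differences in strength can be displayed in notation that is not the entry's own (a rough rendering, writing $Kp$ for ‘S knows p’, $\vDash$ for entailment, $\rightarrow$ for the material conditional, $\Box\!\rightarrow$ for the subjunctive conditional, and $\Box$ for necessity):

$\mathrm{SP1}:\ (Kp \wedge (p \vDash q)) \rightarrow Kq$

$\mathrm{SP2}:\ (Kp \wedge (p \vDash q)) \Box\!\rightarrow Kq$

$\mathrm{SP3}:\ \Box[(Kp \wedge (p \vDash q)) \rightarrow Kq]$

On the standard semantics a strict conditional implies the corresponding subjunctive and, given centering, a subjunctive implies the corresponding material conditional, so SP3 implies SP2, which in turn implies SP1.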

However, each version of the straight principle is false, since we can know one thing, p, but fail to see that p entails q, or for some other reason fail to believe q. Since knowledge entails belief (according to nearly all theorists), we fail to know q. A less obvious worry is that we might reason badly in coming to believe that p entails q. Perhaps we think that p entails q because we think everything entails everything, or because we have a warm tingly feeling between our toes. Hawthorne (2005) raises the possibility that, in the course of grasping that p entails q, S will cease to know p. He also notes that SP1 is defensible on the (deviant) assumption that a thought, p, is equivalent to another, q, if p and q hold in all of the same possible worlds. Suppose p entails q. Then p is equivalent to the conjunction of p and q, and so the thought p is identical to the thought p and q. Hence in knowing p S knows p and q. Assuming that, in knowing p and q, S knows p and S knows q, then when S knows p S knows q, as SP1 says.
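
Hawthorne's last point can be set out schematically (a sketch, not Hawthorne's own wording), granting the deviant assumption that thoughts true in exactly the same possible worlds are one and the same thought:

1. Suppose $p \vDash q$. Then $p$ and $p \wedge q$ are true in exactly the same possible worlds.

2. By the deviant assumption, the thought $p$ just is the thought $p \wedge q$.

3. So whoever knows $p$ thereby knows $p \wedge q$.

4. Assuming that knowing a conjunction involves knowing each conjunct, whoever knows $p$ knows $q$, which is what SP1 says.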

The straight principle needs qualifying, but this should not concern us so long as the qualifications are natural given the idea we are trying to capture, namely, that we can extend our knowledge by recognizing, and accepting thereby, things that follow from our knowledge. The qualifications embedded in the following principle (construed as a material conditional) seem natural enough:

K: If, while knowing p, S believes q because S knows that p entails q, then S knows q.

As Williamson (2002) notes, the idea that we can extend our knowledge by applying deduction to what we know supports a closure principle that is stronger than K. It is a principle that says we know things we believe on the grounds that they are jointly implied by several separate known items. Suppose I know Mary is tall and I know Mary is left handed. K does not authorize my putting these two pieces of knowledge together so as to know that Mary is tall and left handed. But the following generalized closure principle covers deductions involving separate known items:

GK: If, while knowing various propositions, S believes p because S knows that they entail p, then S knows p.

Proponents of closure are likely to accept both K and GK, perhaps further qualified in natural ways. By contrast, Fred Dretske and Robert Nozick reject K and therefore GK as well. They reject any closure principle, no matter how narrowly restricted, that warrants our arriving at antiskeptical knowledge (e.g., I am not a brain in a vat) on the basis of mundane knowledge claims (e.g., I am in San Antonio). In addition to rejecting K and GK, they deny knowledge closure across instantiation and simplification, but not across equivalence (Nozick 1981: 227-229):

KI: If, while knowing that all things are P, S believes a particular thing a is P because S knows it is entailed by the fact that all things are P, then S knows a is P.

KS: If, while knowing p and q, S believes q because S knows that q is entailed by p and q, then S knows q.

KE: If, while knowing p, S believes q because S knows q is equivalent to p, then S knows q.

Let us turn to their arguments.

2. The Argument From the Tracking Analysis of Knowledge

Dretske and Nozick both defend analyses of knowledge that can be viewed as relevant alternatives accounts. According to Dretske (2003: 112-3; 2005: 19), any relevant alternatives account leads "naturally" but "not inevitably" to K failure, but in any case the analyses Dretske and Nozick defend are in tension with K. Therefore, we might speak of two versions of the argument from the analysis of knowledge. First, the correct account of knowledge, as developed, for example, by Dretske or Nozick, leads to K failure. Second, any relevant alternatives account, such as Dretske's and Nozick's, leads to K failure.

2.1 Closure Fails Due to the Tracking Condition on Knowledge

In rough outline, the first version involves defending say Dretske's or Nozick's tracking analysis of knowledge, then showing that it undermines K. We can skip the defense, which consists largely in showing that tracking does a better job than competitors in dealing with our epistemic intuitions about cases of purported knowledge. We may also simplify the analyses. According to Nozick, to know p is, very roughly, to have a belief p which meets the following condition (‘BT’ for belief tracking):

BT: were p false, S would not believe p.

That is, in the close worlds to the actual world in which not-p holds, S does not believe p. The actual world is one's situation as it is when one arrives at the belief p. BT requires that in all nearby not-p worlds S fails to believe p. (The semantics of subjunctive conditionals is clarified in Stalnaker 1968, Lewis 1973, and Nozick 1981 note 8.) On Dretske's view knowing p is roughly a matter of having a reason R for believing p which meets the following condition (‘CR’ for conclusive reason):

CR: were p false, R would not hold.

That is, in the close worlds to the actual world in which not-p holds, R does not. When R meets this condition, Dretske says R is a conclusive reason for believing p.
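
In the notation of subjunctive conditionals (a rough gloss, writing $\Box\!\rightarrow$ for ‘if ... were the case, ... would be the case’ and $Bp$ for ‘S believes p’), the two conditions come to:

$\mathrm{BT}:\ \neg p \Box\!\rightarrow \neg Bp$

$\mathrm{CR}:\ \neg p \Box\!\rightarrow \neg R$

On the Stalnaker-Lewis semantics cited above, each requires that in the nearest worlds in which p is false, the relevant item (S's belief that p, or the reason R) is absent.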

Dretske points out (2003, n. 9; 2005, n. 4) that his view does not face one of Kripke's objections to Nozick's account. Suppose I am driving through a neighborhood in which, unbeknownst to me, papier-mâché barns are scattered, and I see that the object in front of me is a barn. I also notice that it is red. Because I have barn-before-me percepts, I believe barn: the object in front of me is an (ordinary) barn (the example is attributed to Ginet in Goldman 1976). Our intuitions suggest that I fail to know barn. And so say BT and CR. But now suppose that the neighborhood has no fake red barns; the only fake barns are blue. (Call this the Kripkesque barn case.) Then on Nozick's view I can track the fact that there is a red barn, since I would not believe there was a red barn (via my (red)-barn-percepts) if no red barn were there, but I cannot track the fact that there is a barn, since I might believe there was a barn (via blue-barn-percepts) even if no barn were there. Dretske says this juxtaposition, in which I know something yet fail to know a second thing that is intimately related to the first (there being a red barn, which I know, entails there being a barn, which I do not), "is an embarrassment," and in this respect, he takes it, his view is superior to Nozick's. Let R, my basis for belief, be the fact that I have red-barn-percepts. If no barn were there, R would fail to hold, so I know a barn is there. Further, if no red barn were there, R would still fail to hold, so I know a red barn is there. So Dretske can avoid the objectionable juxtaposition. Still, it is surprising that Dretske cites the Kripkesque barn case as the basis for preferring his version of tracking over Nozick's. First, Dretske himself accepts juxtapositions of knowledge and ignorance that are at least equally bizarre, as we shall see. Second, Nozick avoids the very juxtaposition Dretske discusses by restating his account to make reference to the methods via which we come to believe things (Hawthorne 2005). On a more polished version of his account, Nozick says that to know p is, roughly, to have a belief p, arrived at through a method M, which meets the following condition (‘BMT’ for belief method tracking):

BMT: were p false, S would not believe p via M.

Third, the Kripkesque barn case is one about which intuitions will vary. It is not obvious that I do know there is a red barn in the circumstances Dretske sketches, which differ from those in the original Ginet-Goldman barn case (where I fail to know barn) only in the stipulations that I see a red barn and that none of the barn simulacra are red.

The tracking accounts permit counterexamples to K. Dretske's well known illustration is the zebra case: suppose you are at a zoo in ordinary circumstances standing in front of a cage marked ‘zebra’; the animal in the cage is a zebra, and you believe zeb, the animal in the cage is a zebra, because you have zebra-in-a-cage visual percepts. It occurs to you that zeb entails not-mule, it is not the case that the animal in the cage is a cleverly disguised mule rather than a zebra. You then believe not-mule by deducing it from zeb. What do you know? You know zeb, since, if zeb were false, you would not have zebra-in-a-cage visual percepts; instead, you would have empty-cage percepts, or aardvark-in-a-cage percepts, or the like. Do you know not-mule? If not-mule were false, you would still have zebra-in-a-cage visual percepts (and you would still believe zeb, and you would still believe not-mule by deducing it from zeb). So you do not know not-mule. But notice that we have:

(a) You know zeb

(b) You believe not-mule by recognizing that zeb entails not-mule

(c) You do not know not-mule.

In view of (a)-(c), we have a counterexample to K, which entails that if (a) you know zeb, and (b) you believe not-mule by recognizing that zeb entails not-mule, then you do know not-mule, contrary to (c).

In response to this first version of the argument from the analysis of knowledge, some theorists (e.g., Luper 1984, BonJour 1987, DeRose 1995) have offered what might be called the argument from closure, which says that K has great plausibility in its own right (which Dretske acknowledges in 2005: 18) so it should be abandoned only in the face of compelling reasons, yet there are no such reasons.

To show there are no compelling reasons to abandon K, theorists have provided accounts of knowledge that (a) handle our intuitions at least as successfully as the tracking analyses and yet (b) underwrite K. One such account is as follows (Luper 1984; Sosa 1999, 2003). Knowing p is roughly a matter of having a reason R for believing p which meets the following condition (‘SI’ for safe indication):

SI: if R held, p would be true.

SI requires that p be true in the nearby R worlds. When R meets this condition, let us say that R is a safe indicator that p is true. SI is the contrapositive of CR, but a subjunctive conditional is not equivalent to its contrapositive.
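
Schematically (a sketch in the notation just introduced):

$\mathrm{CR}:\ \neg p \Box\!\rightarrow \neg R \qquad \mathrm{SI}:\ R \Box\!\rightarrow p$

For material conditionals these would be equivalent, but subjunctive conditionals do not in general contrapose. The zebra case illustrates the gap: your zebra-in-a-cage percepts safely indicate not-mule (in the nearby worlds in which you have those percepts, no cleverly disguised mule is in the cage), yet it is not true that, were not-mule false, the percepts would be absent; were a disguised mule in the cage, you would have them still.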

Let us suppose without argument that SI handles cases of knowledge and ignorance as intuitively as CR. [The Kripke-style barn case discussed earlier might constitute an obstacle to the safe indication view, as it might to the tracking account: my red-barn percepts are safe indicators that the object in front of me is a barn and that it is a red barn, so no objectionable juxtaposition occurs, but some theorists will insist that, in the circumstances sketched, I know neither that the object is a barn nor that it is a red barn.] Why say SI underwrites K? The key point is that if R safely indicates that p is true, then it safely indicates that q is true, where q is any of p's consequences. Put another way, the point is that the following reasoning is valid (an instance of weakening the consequent):

1. If R held, p would be true (i.e., R safely indicates that p)

2. p entails q

3. So if R held, q would be true (i.e., R safely indicates that q)

Hence, if a person S knows p on the basis of R, S is in a position to know q on the basis of R, where q follows from p. S is also in a position to know q on the basis of the conjunction of R together with the fact that p entails q. Thus if S knows p on some basis R, and believes q on the basis of R (on which p rests) together with the fact that p entails q, then S knows q. Again: if

(a) S knows p (on the basis of R), and

(b) S believes q by recognizing that p entails q (so that S believes q on the basis of R, on which p rests, together with the fact that p entails q),

then

(c) S knows q (on the basis of R and the fact that p entails q),

as K requires. To illustrate, let us use Dretske's example. Having based your belief zeb on your zebra-in-the-cage percepts, you know zeb according to SI: given your circumstances, if you had those percepts, zeb would be true. Moreover, when you believe not-mule by first believing zeb on the basis of your zebra-in-the-cage percepts then deducing not-mule from zeb, you know not-mule according to SI: if you had those percepts not only would zeb hold, so would its consequence not-mule.

2.2 Closure Fails on a Relevant Alternatives Approach

The second version of the argument from the analysis of knowledge has it that any relevant alternatives view, not just tracking accounts, is in tension with K. An analysis is a relevant alternatives account when it meets two conditions. First, it yields an appropriate understanding of ‘relevant alternative.’ Dretske's approach qualifies since it allows us to say that an alternative A to p is relevant if and only if:

CRA: were p false, A might hold.

According to the second condition, the analysis must say that knowing p requires ruling out all relevant alternatives to p but not all alternatives to p. Dretske's approach qualifies once again. It says an alternative A is ruled out on the basis of R if and only if the following condition is met:

CRR: were A to hold R would not hold.

And, on Dretske's approach, an alternative A must be ruled out if and only if A meets CRA.

So the tracking account is a relevant alternatives approach. But why say that relevant alternatives accounts of knowledge are in tension with K? We will say this if, like Dretske, we accept the following crucial tenet: the negation of a proposition p is automatically a relevant alternative to p (no matter how bizarre or remote not-p might be) but often not a relevant alternative to things that imply p. For a relevant alternatives theorist, this tenet suggests that we can know something p only if we can rule out not-p but we can know things that entail p even if we cannot rule out not-p, which opens up the possibility that there are cases that violate K. For while our inability to rule out not-p stops us from knowing p it does not stop us from knowing things that entail p. And an example is ready to hand: the zebra case. Perhaps you cannot rule out mule; but that stops you from knowing not-mule without stopping you from knowing zeb. These points can be restated in terms of the conclusive reasons account. For Dretske, the negation of a proposition p is automatically a relevant alternative since condition CRA is automatically met; that is, it is vacuously true that:

were p false, not-p might hold.

Therefore mule is a relevant alternative to not-mule. Furthermore, you fail to know not-mule since you cannot rule out mule: you believe not-mule on the basis of your zebra-in-the-cage percepts, but you would still have these if mule held, contrary to CRR. Yet you know zeb in spite of your inability to rule out mule, for were zeb false you would not have your zebra-in-the-cage percepts.

According to the second version of the argument from the analysis of knowledge, then, any relevant alternatives view is in tension with K. How compelling is this argument? As Dretske acknowledges (2003), it is actually a weak challenge to K since some relevant alternatives accounts are fully consistent with K. For an example, we have only to adapt the safe indication view so as to make it clear that it is a relevant alternatives account (Luper 1984, 1987c, 2006).

The safe indication view can be adapted in two steps. First, we say that an alternative to p, A, is relevant if and only if the following condition is met:

SRA: In S's circumstances, A might hold.

Thus any possibility that is remote is automatically irrelevant, failing SRA. Second, we say that A is ruled out on the basis of R if and only if the following condition is met:

SIR: were R to hold A would not hold.

This way of understanding relevant alternatives upholds K. The key point is that if S knows p on the basis of R, and is thus able to rule out p's relevant alternatives, then S can also rule out q's relevant alternatives, where q is anything p implies. If R were to hold, q's alternatives would not.
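
The reasoning can be compressed as follows (a sketch, taking an alternative to q to be a proposition incompatible with q):

1. S knows p on the basis of R, so R safely indicates p: $R \Box\!\rightarrow p$.

2. Since $p \vDash q$, $R \Box\!\rightarrow q$, by the inference pattern of section 2.1.

3. Any alternative A to q is incompatible with q, so, by the same pattern, $R \Box\!\rightarrow \neg A$, which is just condition SIR.

4. Hence R rules out every alternative to q, relevant or not, and S is positioned to know q.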

Apparently, the relevant alternatives account can be construed in a way that supports K as well as a way that does not. Hence Dretske is not well positioned to claim that the relevant alternatives view leads "naturally" to closure failure.

2.3 Closure and Reliabilism

Reliabilism is the view that one knows p if and only if one arrives at (or sustains) the belief p via a reliable method. Is the reliabilist committed to closure? The answer depends on precisely how the relevant notion of reliability is understood. One of the first reliabilist theories, offered by Alvin Goldman, is very similar to the tracking view, for Goldman argued that knowing p entails having the capacity to discriminate between the situation in which p is true, on the one hand, and, on the other, alternative situations (in which p is false) that might arise given the circumstances at hand. If we understand reliability as tracking theorists do, we will reject closure. But there are other versions of reliabilism which sustain closure. For example, the safe indication account is a type of reliabilism. Also, we could say that a true belief p is reliably formed if and only if it is based on an event that usually would occur only if p (or a p-type belief) were true. Any event that, in this sense, reliably indicates that p is true will also reliably indicate that p's consequences are true.

3. The Argument From Nonclosure of Knowledge Modes

In recent publications (2003, 2005) Dretske has argued that we should expect K failure because none of the modes of gaining, preserving or extending knowledge are individually closed. Dretske makes his point in the form of a rhetorical question: "how is one supposed to get closure on something when every way of getting, extending and preserving it is open?" (2003: 113-4).

3.1 Knowledge Modes and Nonclosure

As examples of modes of gaining, sustaining and extending knowledge Dretske suggests perception, testimony, proof, memory, indication, and information. To say of these items that they are not individually closed is to say that the following modes closure principles, with or without the parenthetical qualifications, are false:

PC: If S perceives p, and (S believes q because S knows) p entails q, then S perceives q.

TC: If S has received testimony that p, and (S believes q because S knows) p entails q, then S has received testimony that q.

OC: If S has proven p, and (S believes q because S knows) p entails q, then S has proven q.

RC: If S remembers p, and (S believes q because S knows) p entails q, then S remembers q.

IC: If R indicates p, and (S believes q because S knows) p entails q, then R indicates q.

NC: If R carries the information p, and (S believes q because S knows) p entails q, then R carries the information q.

And, according to Dretske, each of these principles fails. We may perceive that we have hands, for example, without perceiving that there are physical things.

3.2 Responses to Dretske

There have been various rejoinders to Dretske's argument from nonclosure of knowledge modes.

First, failure of one or more of the modes closure principles does not imply that K fails. What matters is whether the various modes of knowledge Dretske discusses position us to know the consequences of the things we know. In other words, the issue is whether the following principle is true:

T: If, while knowing p via perception, testimony, proof, memory, or something that indicates or carries the information that p, S believes q because p entails q, then S knows q.

Second, theorists have defended some of these modes closure principles, such as PC, IC and NC. Dretske rejects these three principles because he thinks perception, indication and information are best analyzed in terms of conclusive reasons, which undermines closure. But the three principles (or something very much like them) may be defended if we analyze perception, indication and information in terms of safe indication. Consider IC and NC. Both are true if we analyze indication and information as follows:

R indicates p iff p would be true if R held.

R carries the information that p iff p would be true if R held.

A version of PC may be defended if we make use of Dretske's own notion of indirect perception (1969). Consider a scientist who studies the behavior of electrons by watching bubbles they leave behind in a cloud chamber. The electrons themselves are invisible, but the scientist can perceive that the (invisible) electrons are moving in certain ways by perceiving that the (visible) bubbles left behind are arranging themselves in specific ways. What we directly perceive positions us to perceive various things indirectly. Now assume that when we directly or indirectly perceive p, and this causes us to believe q, where p entails q, we are positioned to perceive q indirectly. Then we are well on our way to accepting some version of PC, such as, for example:

SPC: If S perceives p, and this causes S to believe q, where p entails q, then S perceives q.

4. The Argument From Not (Easily) Knowable Propositions

Another anticlosure argument is that there are some sorts of propositions we cannot know unless perhaps we take extraordinary measures, yet such propositions are entailed by mundane claims whose truth we do know. Since this would be impossible if K were correct, K must be false. The same difficulty is sometimes discussed under the heading problem of easy knowledge, since some theorists (Cohen 2002) believe that certain things are difficult to know — they cannot be known by deduction from banal knowledge. The argument has different versions depending on which propositions are said to be hard to know. According to Dretske (and perhaps Nozick as well), we cannot easily know that limiting propositions or heavyweight propositions are true. Another possibility is that we cannot easily know lottery propositions. A special case of the argument from unknowable propositions starts with the claim that we cannot know the falsity of skeptical hypotheses. We will consider this third view in the next section.

4.1 The Argument from Limiting Propositions

Dretske does not clearly delineate the class of propositions he calls "limiting" (in 2003) or "heavyweight" (in 2005). Some of the examples he provides are ‘There is a past,’ ‘There are physical objects,’ and ‘I am not being fooled by a clever deception.’ He appears to think that these propositions have a property we may call "elusiveness," where p is elusive for me if and only if p's falsity would not change my experiences. But being limiting does not coincide with being elusive. If there were no physical objects, my experiences would be changed dramatically, since I would not exist. So some limiting propositions are not elusive. As to whether all elusive claims are limiting, it is hard to say, because of the squishiness of the term ‘limiting’. Not-mule is elusive, but is it limiting?

Can't we know limiting propositions? If not, and if we do know things that entail them, Dretske thinks he has further support for his conclusive reasons view, assuming, as he does, that his view rules out our knowing limiting propositions (while allowing knowledge of things that entail them). However, this assumption is false (Hawthorne 2005, Luper 2006). We do have conclusive reason to believe some limiting propositions, such as that there are physical objects. Still, Dretske might abandon the notion of a limiting proposition in favor of the notion of elusive propositions, and cite, in favor of his conclusive reasons view, and against K, the facts that we cannot know elusive claims but we can know things that imply them.

In order to rule out knowledge of limiting/elusive propositions, Dretske offers two sorts of argument, which we may call the argument from perception and the argument from pseudocircularity.

The argument from perception starts with the claims that (a) we do not perceive that limiting/elusive claims hold and (b) we do not know, via perception, that limiting/elusive claims hold. Since it is hard to see how else we could know limiting/elusive propositions, (a) and (b) are good grounds for concluding that we just do not know that they hold.

There is no doubt that (a) and (b) have considerable plausibility. Nonetheless, they are controversial. To explain the truth of (a) and (b), Dretske counts on his conclusive reasons analysis of perception. His critics may cite the safe indication account of perception as the basis for rejecting (a) and (b). Luper (2006), for example, argues against both, chiefly on the grounds that we can perceive and know some elusive claims (such as not-mule) indirectly, by directly perceiving claims (such as zeb) that entail them.

Dretske suggests another reason for ruling out knowledge of limiting/elusive claims. He thinks we can know banal facts (e.g., we ate breakfast) without knowing limiting/elusive claims they entail (e.g., the past is real) so long as those limiting/elusive claims are true, but we cannot then turn around and employ the former as our basis for knowing the latter. Suppose we take ourselves to know some claim, q, by inferring it from another claim, p, which we know, but our knowing p in the first place depends on the truth of q. Call this pseudocircular reasoning. According to Dretske, pseudocircular reasoning is unacceptable, and yet it is precisely what we rely on when we attempt to know limiting/elusive claims such as denials of skeptical hypotheses by deducing them from ordinary knowledge claims that entail them: we will not know the latter in the first place unless the former are true. The problem Dretske here raises was pressed earlier by critics of broadly reliabilist accounts of knowledge, such as Richard Fumerton (1995, 178). Jonathan Vogel (2000) discusses it under the heading bootstrapping, the procedure employed when, e.g., someone who has no initial evidence about the reliability of a gas gauge comes to believe p on several different occasions because the gauge indicates p, and thereby knows p according to reliabilist accounts of knowledge, and then infers, by induction, that the gauge is reliable. By bootstrapping we may move — illegitimately, according to Vogel — from beliefs formed through a reliable process to the knowledge that those beliefs were arrived at through a reliable process. One may know p using a gauge in the first instance only if that gauge is reliable; hence, to conclude it is reliable solely on the basis of its track record involves pseudocircular reasoning.

Theorists have long objected to a knowledge claim on the grounds that it depends on a fact that itself has not been established. It is also standard to reject any knowledge claim whose pedigree smacks of circularity. Many theorists will reject pseudocircular reasoning on precisely these traditional grounds, and hence share Dretske's reservations about pseudocircular reasoning. But there is a growing body of work that breaks with tradition and defends some forms of epistemic circularity (this work is heavily criticized, in turn, on the grounds that it is open to versions of traditional objections). Max Black (1949) and Nelson Goodman (1955) were perhaps first; others include Van Cleve 1979 and 2003; Luper 2004; Papineau 1992; and Alston 1993. Dretske himself means to break with tradition, writing under the banner of ‘externalism.’ He explicitly says that most, if not all, of our mundane knowledge claims depend on facts we have not established. Indeed, he cites this as a virtue of his conclusive reasons view. Yet nothing in the nature of the conclusive reasons account rules out our knowing limiting propositions using pseudocircular reasoning, which leaves his reservations mysterious. A set of jar-ish experiences can constitute a conclusive reason for believing jar, a jar of cookies is in front of me. If I then believe objects, there are physical objects, because it is entailed by jar, I have conclusive reason for believing objects, a limiting proposition. (If objects were false, jar would be too, and I would lack my jar-ish experiences.)

Dretske might fall back on the view that the conclusive reasons account rules out knowing elusive, as opposed to limiting, claims through pseudocircular reasoning, because we lack conclusive reasons for elusive claims no matter what sort of reasoning we employ. But this does not put Dretske's account at odds with pseudocircular reasoning. And even this more limited position can be challenged (adapting a charge against Nozick in Shatz 1987). We might insist that p itself is a conclusive reason for believing q when we know p and p entails q. After all, assuming p entails q, if q were false so would p be. On this strategy we have a further argument for K: if S knows p (relying on some conclusive reason R), and S believes q because S knows p entails q, S has a conclusive reason for believing q, namely p (rather than R), and hence S knows q.

Another doubt about knowing elusive claims deductively via mundane claims is that this maneuver is improperly ampliative. Cohen claims that knowing the table is red does not position us to know "I am not a brain-in-a-vat being deceived into believing that the table is red" nor "it's not the case that the table is white [but] illuminated by red lights" (2000: 313). In the transition from the former to the latter, our knowledge appears to have been amplified improperly. This concern may be due at least in large part to lack of precision in the application of entailment or deductive implication (Klein 2004). Let red be the proposition that the table is red, white the proposition that the table is white, and light the proposition that the table is being illuminated by a red light. Red does not entail anything about the conditions under which the table is illuminated. In particular it does not entail the conjunction, light & not-white. The most we can infer is that the conjunction, white & light, is false, and that gives us no information whatever about the lighting conditions of the table. One could as easily infer the falsity of the conjunction, white & not-light. No amplification of the original known proposition, red, has come about.
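
Klein's point can be checked claim by claim (a sketch, granting that a uniformly red table is not white):

$\textit{red} \vDash \neg\textit{white}$, hence $\textit{red} \vDash \neg(\textit{white} \wedge \textit{light})$ and, equally, $\textit{red} \vDash \neg(\textit{white} \wedge \neg\textit{light})$;

but $\textit{red} \nvDash \textit{light}$, $\textit{red} \nvDash \neg\textit{light}$, and $\textit{red} \nvDash (\textit{light} \wedge \neg\textit{white})$.

So the only thing deducible from red about the skeptical alternative is the falsity of the conjunction white & light, which settles nothing about the lighting, and the deduction does not amplify what was already known.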

4.2 The Argument from Lottery Propositions

It seems apparent that I do not know not-win, I will not win the state lottery tonight, even though my odds for hitting it big are vanishingly small. But suppose my heart's desire is to own a 10 million dollar villa in the French Riviera. It seems plausible to say that I know not-buy, I will not buy that villa tomorrow, since I lack the means, and that I know the conditional, if win then buy, i.e., tomorrow I will buy the villa if I win the state lottery tonight. From the conditional and not-buy it follows that not-win, so, given closure, knowing the conditional and not-buy positions me to know not-win. As this reasoning shows, the unknowability of claims like not-win together with the knowability of claims like not-buy position us to launch another challenge to closure.
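
The deduction involved is a two-premise modus tollens (set out schematically below), which is why it calls on the generalized principle GK rather than the single-premise K:

1. If win then buy (the known conditional).

2. Not-buy (known).

3. Therefore not-win.

Given GK, knowing 1 and 2 and believing not-win because one sees that they jointly entail it would position one to know not-win.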

Let a lottery proposition be a proposition, like not-win, that (at least normally) is supportable only on the grounds that its probability is very high but less than 1. Vogel (1990, 2004) and Hawthorne (2005, 2006) have noted that a great number of propositions that do not actually involve lotteries resemble lottery propositions in that they can be given a probability that is close to but less than 1. Such propositions might be described as lotteryesque. The events mentioned in a claim can be subsumed under indefinitely many reference classes, and there is no authoritative way to choose which among these determines the probability of the subsumed events. By carefully selecting among these classes we can often find ways to suggest that the probability of a claim is less than 1. Take, for example, not-stolen, the proposition that the car you just parked in front of the house has not been stolen: by selecting the class, red cars stolen from in front of your house in the last hour, we can portray the statistical probability of not-stolen as 1. But by selecting, cars stolen in the U.S., we can portray the probability as significantly less than 1. If, like lottery propositions, lotteryesque propositions are not easily known, they increase the pressure on the closure principle, since they entail a wide range of mundane propositions which become unknowable, given closure.

How great a threat to K (and GK) are lottery and lotteryesque propositions? The matter is somewhat controversial. However, there is a great deal to be said for treating lottery propositions one way and lotteryesque propositions another.

As for lottery propositions: several theorists suggest that we do not in fact know that they are true because knowing them requires believing them because of something that establishes their truth, and we (normally) cannot establish the truth of lottery propositions. There are various ways to understand what is meant by “establishing” the truth of a claim. Dretske, as we have seen, thinks that knowledge entails having a conclusive reason for thinking as we do. David Armstrong (1973, p. 187) said that knowledge entails having a belief state that “ensures” truth. Safe indication theorists suggest that we know things when we believe them because of something that safely indicates their truth. And Harman and Sherman (2004, p. 492) say that knowledge requires believing as we do because of something “that settles the truth of that belief.” On all four views, we fail to know that a claim is true when our only grounds for believing it is that it is highly likely. However, the unknowability of lottery propositions is not a substantial threat to closure, since it is not obvious that there are propositions that are both known to be true and that entail lottery propositions. Consider the example discussed earlier: the conditional if win then buy together with not-buy. If I know these, then, by GK, I know not-win, a lottery proposition. But it is quite plausible to deny that I do know these. After all, I might win the lottery.

Now consider lotteryesque propositions. We cannot defend closure by denying that we know any mundane proposition that entails a lotteryesque proposition since it is clear that we know that many things are true that entail lotteryesque propositions. To defend closure we must instead say that lotteryesque propositions are knowable. They differ from genuine lottery propositions in that they may be supportable on grounds that establish their truth. If I base my belief not-stolen solely on crime statistics, I will fail to know that it is true. But I can instead base it on observations, such as having just parked it in my garage, and so forth, that, under the circumstances, establish that not-stolen holds.

5. The Argument From Skepticism

According to Dretske and Nozick, we can account for the appeal of skepticism and explain where it goes wrong if we accept their view of knowledge and reject K. Rejecting knowledge closure is therefore the key to resolving skepticism. Given the importance of insight into the problem of skepticism, they would seem to have a good case for denying closure. Let us consider the story they present, and some worries about its acceptability.

5.1 Skepticism and Antiskepticism

Dretske and Nozick focus on a form of skepticism that combines K with the assumption that we do not know that skeptical hypotheses are false. For example, I do not know not-biv: I am not a brain in a vat on a planet far from earth being deceived by alien scientists. On the strength of these assumptions, skeptics argue that we do not know all sorts of commonsense claims that entail the falsity of skeptical hypotheses. For example, since not-biv is entailed by h, I am in San Antonio, skeptics may argue as follows:

1. K is true; i.e., if, while knowing p, S believes q because S knows that p entails q, then S knows q.

2. h entails not-biv.

3. So if I know h and I believe not-biv because I know it is entailed by h then I know not-biv.

4. But I do not know not-biv.

5. Hence I do not know h.

Dretske and Nozick are well aware that this argument can be turned on its head, as follows:

1. K is true; i.e., if, while knowing p, S believes q because S knows that p entails q, then S knows q.

2. h entails not-biv.

3. So if I know h and I believe not-biv because I know it is entailed by h then I know not-biv.

4′. I do know h.

5′. Hence I do know not-biv.

Turning tables on the skeptic in this way was roughly Moore's (1959) antiskeptical strategy. However, instead of K, Moore presupposed the truth of a stronger principle:

PK: If, while knowing p, S believes q because S knows that q is entailed by S's knowing p, then S knows q.

Unlike K, PK underwrites Moore's famous argument: Moore knows he is standing; his knowing that he is standing entails that he is not dreaming; therefore, he knows he is not dreaming.

5.2 Tracking and Skepticism

According to Dretske and Nozick, skepticism is appealing because skeptics are partially right. They are correct when they say that we do not know that skeptical hypotheses fail to hold. For I do not track not-biv: if biv were true, I would still have the experiences that lead me to believe that biv is false. Something similar can be said about antiskepticism: antiskeptics are correct when they say we know all sorts of commonsense claims that entail the falsity of skeptical hypotheses. Having gotten this far, however, skeptics appeal to K, and argue that since I would know not-biv if I knew h, then I must not know h after all, while Moore-style antiskeptics appeal to K in order to conclude that I do know not-biv. But this is precisely where skeptics and antiskeptics alike go wrong, for K is false. Consider the position skeptics are in. Having accepted the tracking view — as they do when they deny that we know skeptical hypotheses are false — skeptics cannot appeal to the principle of closure, which is false on the tracking theory. We track (hence know) the truth of ordinary knowledge claims yet fail to track (or know) the truth of things that follow, such as that incompatible skeptical hypotheses are false.

One problem with this story is that it cannot come to terms with all types of skepticism. There are two main forms of skepticism (and various sub-categories): regress (or Pyrrhonian) skepticism, and indiscernibility (Cartesian) skepticism. At best, Dretske and Nozick have provided a way of dealing with the latter.

Another worry about Dretske's and Nozick's response to Cartesian skepticism is that it forces us to give up K as well as GK, and closure across instantiation and simplification. Given the intuitive appeal of these principles, some theorists have looked for alternative ways of explaining skepticism, which they then offer as superior in part on the grounds that they do no violence to K. Consider two possibilities, one offered by advocates of the safe indication theory and one by contextualists.

5.3 Safe Indication and Skepticism

Advocates of the safe indication theory (Sosa 1999, Luper 1987c, 2003a) accept the gist of the tracking theorists' explanation of the appeal of skepticism but retain the principle of closure. One reason skepticism tempts us is that we tend to confuse CR with SI. After all, CR (if p were false, R would not hold) closely resembles SI (R would hold only if p were true). When we run the two together, we sometimes apply CR and conclude that we do not know that skeptical scenarios do not hold. Then we shift back to the safe indication account, and go along with skeptics when they appeal to the closure principle, which is sustained by the safe indication account, and conclude that ordinary knowledge claims are false. But skeptics are wrong when they say we do not know that skeptical hypotheses are false. Roughly, we know skeptical possibilities do not hold since (given our circumstances) they are remote.

Skepticism might also result from the assumption that, if a belief formation method M were, in some situation, to yield a belief without enabling us to know the truth of that belief, then it cannot ever generate bona fide knowledge (of that sort of belief), no matter what circumstances it is used in. (M must be strengthened somehow, say with a supplemental method, or with evidence about the circumstances at hand, if knowledge is to be procured.) This assumption might rest on the idea that any belief M yields is, at best, accidentally correct, if in any circumstances M yields a false or an accidentally correct belief (Luper 1987b,c). On this assumption, we can rule out a method of belief formation M as a source of knowledge merely by sketching circumstances in which M yields a belief that is false or accidentally correct. Traditional skeptical scenarios suffice; so do Gettieresque situations. Externalist theorists reject the assumption, saying that M can generate knowledge when used in circumstances under which the belief it yields is not accidentally correct. In highly Gettierized circumstances M must put us in an especially strong epistemic position if M is to generate knowledge; in ordinary circumstances, less exacting methods can produce knowledge. The standards a method must meet to produce knowledge depend on the context in which it is used. This view, on which the requirements for a subject or agent S to know p vary with S's context (e.g., how exacting S's method of belief formation must be to yield knowledge depends on S's circumstances), might be called agent-centered (or subject) contextualism. Both tracking theorists and safe indication theorists defend agent-centered contextualism.

5.4 Contextualism and Skepticism

Theorists writing under the label "contextualism," such as David Lewis (1979, 1996), Stewart Cohen (1988, 1999), and Keith DeRose (1995), offer a related way of explaining skepticism without denying closure. These contextualists contrast themselves with agent-centered contextualists. For clarity, we might call them speaker-centered (or attributor) contextualists. According to (speaker-centered) contextualists, whether it is correct for a judge to attribute knowledge to someone depends on that judge's context, and the standards for knowledge differ from context to context. When the man on the street judges knowledge, the applicable standards are relatively modest. But an epistemologist takes all sorts of possibilities seriously that are ignored by ordinary folk, and so must apply quite stringent standards in order to reach correct assessments. What passes for knowledge in ordinary contexts does not qualify for knowledge in contexts where heightened criteria apply. Skepticism is explained by the fact that the contextual variation of epistemic standards is easily overlooked. Skeptics note that in the epistemic context it is inappropriate to grant anyone knowledge. However, skeptics assume — falsely — that what goes in the epistemic context goes in all contexts. They assume that since those who take skepticism seriously must deny anyone knowledge, then everyone, regardless of context, should deny anyone knowledge. Yet people in ordinary contexts are perfectly correct in claiming that they know all sorts of things.

Furthermore, the closure principle is correct, contextualists say, so long as it is understood to operate within given contexts, not across contexts. That is, so long as we stay within a given context, we know the things we deduce from other things we know. But if I am in an ordinary context, knowing I am in San Antonio, I cannot come to know, via deduction, that I am not a brain in a vat on a distant planet, since the moment I take that skeptical possibility seriously, I transform my context into one in which heightened epistemic standards apply. When I take the vat possibility seriously, I must wield demanding standards that rule out my knowing I am not a brain in a vat. By the same token, these standards preclude my knowing I am in San Antonio. Thinking seriously about knowledge undermines our knowledge.

6. Closure of Rational Belief

To say that justified belief is closed under entailment is to say that something like the following principle is correct:

J: If, while justifiably believing p, S believes q because S knows p entails q, then S justifiably believes q.

According to justificationism, as we may call the traditional view that knowledge entails justification, we know p only if we are justified in believing p. A great number of theorists have abandoned justificationism, and count, as known, those basic (noninferential) beliefs that are arrived at (or sustained) via reliable methods. Other theorists (e.g., Goldman 1979) accept an unorthodox form of justificationism, according to which even noninferential beliefs can count as justified so long as they are arrived at (sustained) via reliable methods.

Suppose, however, that justificationism were true. How would it bear on knowledge closure? The position that K holds only if J does may be called the linkage thesis. Does justificationism commit us to the linkage thesis, so that closure failure in the case of justification carries over to closure failure in the case of knowledge?

6.1 The Linkage Thesis

Even if justificationism were true, there would be ways to reject the linkage thesis. When S believes p upon seeing it is entailed by something S knows, let us say that p is knowledge secured. When S believes p upon seeing it is entailed by something S justifiably believes, let us say that p is justification secured. According to K, we know p if p is knowledge secured. By justificationism, we are justified in believing p if we know p. Hence we are justified in believing anything that is knowledge secured. Nonetheless, with some ingenuity, we can craft accounts of knowledge and justification by which knowledge security entails justified belief, but justification security does not entail justified belief, thus upholding K but not J. For example, consider the following stipulations:

1. S is justified in believing p iff either p is not knowledge secured and S tracks p, or else p is knowledge secured.

2. S knows p iff S has evidence that entails p.

By 2, knowledge security implies knowledge: evidence that entails p also entails anything that p entails, so if S has evidence that entails p, and believes q upon seeing it is entailed by p, then S's evidence entails q. By 1, knowledge security implies justification. But 1 and 2 undermine J. Suppose I track zeb but lack evidence that entails zeb, so that, by 1, I justifiably believe zeb, but, by 2, I fail to know zeb. Suppose, further, that I believe not-mule by deducing it from zeb. I am not justified in believing not-mule: it is not knowledge secured and I fail to track it. Hence J is false: not-mule is justification secured for me but not justified.

Variations on 1 and 2 yield the same result. Consider the following schemata:

1′. S is justified in believing p iff either p is not knowledge secured and _____, or else p is knowledge secured.

2′. S knows p iff _____.

For the blank in 1′ we may substitute various conditions that undermine J. For example, we could employ an account of justification based on Nelson Goodman's (1955) notion of selective confirmation. And for the blank in 2′ we could substitute one of many conditions, so long as it does not reduce to the condition we substitute into 1′.

Still, this way of resisting linkage seems ad hoc; justificationists are likely to accept linkage.

6.2 Justification Closure

How plausible is J? The matter remains controversial. Some argue against it using counterexamples like Dretske's own zebra case: because the zebra is in plain sight, you seem fully justified in believing, and know, zeb, but it is not so clear that you are justified in believing not-mule, even if you deduce this belief from zeb.

One response is that cases such as Dretske's do not count against J, but rather against the following principle (of the transmissibility of evidence):

E: If E is evidence for p, and p entails q, then E is evidence for q.

Even if we reject this principle, it does not follow that justification is not closed under entailment, as Peter Klein (1981) pointed out. Arguably, for justification closure, all that is necessary is that when, given all of our relevant evidence E, we are justified in believing p, we also have sufficient justification for believing each of p's consequences. Our justification for p's consequences need not be E. Instead, it might be p itself, which is, after all, a justified belief. And since p entails its consequences, it is sufficient to justify them. Moreover, any good evidence we have against a consequence of p counts against p itself, preventing us from being justified in believing p in the first place, so if we are justified in believing p, considering all our evidence, pro and con, we will not have overwhelming evidence against propositions entailed by p. (A similar move could be defended against the tracking theorists when they deny the closure of knowledge: if we track p, and believe q by deducing it from p, then we track q if we take p as our basis for believing q.) Looked at in this way, J seems plausible.

However, it must be understood that J applies only to the implications of individual propositions, not to conjunctions of propositions. We are not always justified in believing the conjunction of claims that are individually justified. We can reject:

GJ: If, while justifiably believing various propositions, S believes p because S knows that they entail p, then S justifiably believes p.

For GJ generates paradoxes. To see why, notice that if the chances of winning a lottery are sufficiently remote, I am justified in believing that ticket 1 will lose. I am also justified in believing that ticket 2 will lose, and that 3 will lose, and so on. However, I am not justified in believing the conjunction of these propositions. If I were, I would justifiably believe that no ticket will win. Yet I might know that some ticket will. If a proposition is justified when probable enough, lottery examples undermine GJ. No matter how great the probability that suffices for justification, short of certainty, in some lotteries we will be justified in believing, of an arbitrary ticket, that it will lose, and thus, by GJ, justified in believing all of the tickets will lose.
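
The arithmetic behind the last claim can be made explicit (a sketch, assuming for illustration that a proposition is justified when its probability is at least some threshold $t$, with $t < 1$):

1. In a fair lottery with $n$ tickets and exactly one winner, the probability that a given ticket loses is $1 - 1/n$.

2. Choose $n$ large enough that $n > 1/(1 - t)$; then $1 - 1/n > t$, so each proposition ‘ticket $i$ will lose’ is justified.

3. The conjunction of all $n$ of these propositions says that no ticket will win; its probability is $0$, yet GJ would count it justified, since it is believed by deducing it from the individually justified conjuncts.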

Some final observations can be made using Roderick Firth's (1978) distinction between propositional and doxastic justification. Proposition p has propositional justification for S if and only if, given the grounds S possesses, p would count as rational. That p has propositional justification for S does not require that S actually base p on these grounds, or even that S believe p. Whether S's belief has doxastic justification depends on S's actual grounds for believing p: if, on these grounds, p would count as rational, then p possesses doxastic justification. Consider the following principles:

JD: If p is doxastically justified for S, and p entails q, then q is doxastically justified for S.

JP: If p is propositionally justified for S, and p entails q, then q is propositionally justified for S.

Clearly JD faces two fatal objections. First, we might fail to believe some of the things implied by our beliefs. Second, we may have perfectly respectable reasons for believing something p, yet, failing to see that p entails q, we might not be aware of any grounds for believing q, or, worse, we might believe q for bogus reasons. But neither difficulty threatens JP. First, propositional justification does not entail belief. Second, S might be propositionally justified in believing q on the basis of p whether or not S fails to see that p entails q, and even if S believes q for bogus reasons. As further support for JP, we might cite the fact that, if p entails q, whatever counts against q also counts against p.


Related Entries

confirmation | evidence | knowledge: analysis of | skepticism