
Epistemic Paradoxes

First published Wed 21 Jun, 2006

Epistemic paradoxes are riddles that turn on the concept of knowledge (episteme is Greek for knowledge). Typically, there are conflicting, well-credentialed answers to these questions (or pseudo-questions). Thus the riddle immediately informs us of an inconsistency. In the long run, the riddle goads and guides us into correcting at least one deep error – if not directly about knowledge, then about satellite concepts such as justification, evidence, or rational belief.

Such corrections are of interest to epistemologists. Historians can date the origin of epistemology by the appearance of skeptics. As manifest in Plato's dialogues featuring Socrates, epistemic paradoxes have been discussed for twenty-five hundred years. Given their hardiness, some of these riddles about knowledge will be discussed for the next twenty-five hundred years.

This essay is structured as a wheel. The rim is a belt of generalizations about knowledge and paradoxes. The spokes are epistemic paradoxes. The hub is the surprise test paradox.


1. The Surprise Test Paradox

A teacher announces that there will be a surprise test next week. A student objects that this is impossible: “The class meets on Monday, Wednesday, and Friday. If the test is given on Friday, then on Thursday I would be able to predict that the test is on Friday. It would not be a surprise. Can the test be given on Wednesday? No, because on Tuesday I would know that the test will not be on Friday (thanks to the previous reasoning) and know that the test was not on Monday (thanks to memory). Therefore, on Tuesday I could foresee that the test will be on Wednesday. A test on Wednesday would not be a surprise. Could the surprise test be on Monday? On Sunday, the previous two eliminations would be available to me. Consequently, I would know that the test must be on Monday. So a Monday test would also fail to be a surprise. Therefore, it is impossible for there to be a surprise test.”
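
The student's argument is a backward induction, and it can be rendered as a procedure. The sketch below is my own illustration, not part of the original puzzle; it assumes only that a test on the last remaining candidate day would be foreseeable the evening before:

  def surprise_days(days):
      """The student's elimination: strike the last remaining day,
      since a test there would be predictable the evening before,
      and repeat until no day survives."""
      candidates = list(days)
      while candidates:
          candidates.pop()    # the last open day cannot host a surprise
      return candidates       # days on which a surprise test survives

  print(surprise_days(["Monday", "Wednesday", "Friday"]))   # []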

The riddle is: Can the teacher fulfill his announcement? We have an embarrassment of riches. On the one hand, we have the student's elimination argument. On the other hand, common sense says that surprise tests are possible even when we have had advance warning that one will occur at some point. Either of the answers would be decisive were it not for the credentials of the rival answer. Thus we have a paradox. But a paradox of what kind? ‘Surprise test’ is being defined in terms of what can be known. Specifically, a test is a surprise if and only if the student cannot know beforehand which day the test will occur. Therefore the riddle of the surprise test qualifies as an epistemic paradox.

The solution to a complex epistemic paradox relies on solutions (or partial solutions) to more fundamental epistemic paradoxes. For instance, the surprise test paradox is sometimes pictured as a Russian doll; inside the enigma of the surprise test is the preface paradox; inside the preface paradox is Moore's paradox. In addition to this depth-wise connection, there are lateral connections to other epistemic paradoxes such as the knower paradox and the problem of foreknowledge.

There are also connections to issues that are not clearly paradoxes – or to issues whose status as paradoxes is at least contested. Some philosophers find only irony in pragmatic paradoxes, only cognitive illusion in the lottery paradox, only an embarrassment in the “knowability paradox”. Calling a problem a paradox tends to quarantine it from the rest of our inquiries. Those who wish to dis-inhibit us will therefore deny that there is any paradox and scold us for not making use of all our evidence.

The surprise test paradox has yet more oblique connections to some paradoxes that are not epistemic, such as the liar paradox and Pseudo-Scotus' paradoxes of validity. They will be mentioned in passing, chiefly to set boundaries. A survey of solutions to the surprise test paradox, a remarkably gregarious riddle, is therefore also a survey of epistemic paradoxes.

We can look forward to future philosophers drawing surprising historical connections. The backward elimination argument underlying the surprise test paradox can be discerned in German folktales dating back to 1756 (Sorensen 2003a, 267). Perhaps medieval scholars explored these slippery slopes. But let me turn to commentary to which we presently have access.

1.1 Self-defeating prophecies and pragmatic paradoxes

In the twentieth century, the first published reaction to the surprise test paradox was to endorse the student's elimination argument. D. J. O'Connor (1948) regarded the teacher's announcement as self-defeating. If the teacher had not announced that there would be a surprise test, the teacher would have been able to give the surprise test. The pedagogical moral of the paradox would then be that if you want to give a surprise test, do not announce your intention to your students!

More precisely, O'Connor compared the teacher's announcement to sentences such as ‘I remember nothing at all’ and ‘I am not speaking now’. Although these sentences are consistent, they “could not conceivably be true in any circumstances” (O'Connor 1948, 358). L. Jonathan Cohen (1950) agreed and classified the announcement as a pragmatic paradox. He defined a pragmatic paradox to be a statement that is falsified by its own utterance. The teacher overlooked how the manner in which a statement is disseminated can doom it to falsehood.

Cohen's classification is too monolithic. True, the teacher's announcement does compromise one aspect of the surprise: Students now know that there will be a test. But this compromise is not itself enough to make the announcement self-falsifying. The existence of a surprise test has been revealed but there is surviving uncertainty as to which day the test will occur. The announcement of a forthcoming surprise aims at changing uninformed ignorance into action-guiding awareness of ignorance. A student who misses the announcement does not realize that there is a test about which to be ignorant. If no one passes on the intelligence about the surprise test, the student with simple ignorance will be less prepared than classmates who know they do not know the day of the test.

The value of knowing what you do not know is a favorite theme of spymasters. Defense Secretary Donald Rumsfeld's remarks on the topic were set to verse:

The Unknown
As we know,
There are known knowns.
There are things we know we know.
We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don't know
We don't know.
    Feb. 12, 2002, Department of Defense news briefing

Announcements are made to serve different goals simultaneously. Competition between accuracy and helpfulness makes it possible for an announcement to be self-fulfilling by being self-defeating. Consider a weatherman who warns ‘The midnight tsunami will cause fatalities along the shore’. Because of the warning, spectacle-seekers make a special trip to witness the wave. Some drown. The weatherman's announcement succeeds as a prediction by backfiring as a warning.

1.2 Predictive determinism

Instead of viewing self-defeating predictions as showing how the teacher is refuted, some philosophers construe self-defeating predictions as showing how the student is refuted. The student's elimination argument embodies hypothetical predictions about which day the teacher will give a test. Isn't the student overlooking the teacher's ability and desire to thwart those expectations? Some game theorists suggest that the teacher could defeat this strategy by choosing the test date at random.

This randomizing solution has a flaw. If, through the chance process, the last day happens to be selected, then abiding by the outcome means giving an unsurprising test. For as in the original scenario, the student has knowledge of the teacher's announcement and awareness of past testless days. So the teacher must exclude random selection of the last day. The student is clever. He will replicate this reasoning that excludes a test on the last day. Can the teacher abide by the random selection of the next to last day? Now the reasoning becomes all too familiar.

Another critique of the student's replication of the teacher's reasoning adapts a thought experiment from Michael Scriven (1964). To refute predictive determinism (the thesis that all events are foreseeable), Scriven conjures an agent, “Predictor”, who has all the data, laws, and calculating capacity needed to predict the choices of others. Scriven goes on to imagine “Avoider”, whose dominant motivation is to avoid prediction. Therefore, Predictor must conceal his prediction. The catch is that Avoider has access to the same data, laws, and calculating capacity as Predictor. Thus he can duplicate Predictor's reasoning. Consequently, the optimal predictor cannot predict Avoider. Let the teacher be Avoider and the student be Predictor. Avoider must win. Therefore, it is possible to give a surprise test.

Scriven's original argument assumes that Predictor and Avoider can simultaneously have all the needed data, laws, and calculating capacity. David Lewis and Jane Richardson object:

… the amount of calculation required to let the predictor finish his prediction depends on the amount of calculation done by the avoider, and the amount required to let the avoider finish duplicating the predictor's calculation depends on the amount done by the predictor. Scriven takes for granted that the requirement-functions are compatible: i.e., that there is some pair of amounts of calculation available to the predictor and the avoider such that each has enough to finish, given the amount the other has. (Lewis and Richardson 1966, 70–71)

According to Lewis and Richardson, Scriven equivocates on ‘Both Predictor and Avoider have enough time to finish their calculations’. Reading the sentence one way yields a truth: against any given avoider, Predictor can finish, and against any given predictor, Avoider can finish. However, the compatibility premise requires the false reading in which Predictor and Avoider can finish against each other.

Idealizing the teacher and student along the lines of Avoider and Predictor would fail to defeat the student's elimination argument. We would have merely formulated a riddle that falsely presupposes that the two types of agent are co-possible. It would be like asking ‘If Bill is smarter than anyone else and Hillary is smarter than anyone else, which of the two is the smartest?’.

Predictive determinism states that everything is foreseeable. Metaphysical determinism states that there is only one way the future could be, given the way the past is. Pierre-Simon Laplace used metaphysical determinism as a premise for predictive determinism. He reasoned that since every event has a cause, a complete description of any stage of history combined with the laws of nature implies what happens at any other stage of the universe. Scriven was only challenging predictive determinism in his thought experiment. The next approach challenges metaphysical determinism.

1.3 The Problem of Foreknowledge

Prior knowledge of an action seems incompatible with it being a free action. If I know that you will finish reading this article tomorrow, then you will finish tomorrow (because knowledge implies truth). But that means you will finish the article even if you resolve not to. After all, given that you will finish, nothing can stop you from finishing. So if I know that you will finish reading this article tomorrow, you are not free to do otherwise.

Maybe all of your reading is compulsory. If God exists, then he knows everything. So the threat to freedom becomes total for the theist. Since moral responsibility presupposes freedom, the problem of divine foreknowledge insinuates that theism precludes morality.

In response to the apparent conflict between freedom and foreknowledge, medieval philosophers denied that future contingent propositions have a truth-value. They took themselves to be extending a solution Aristotle discusses in De Interpretatione to the problem of logical fatalism. According to this truth-value gap approach, ‘You will finish this article tomorrow’ is not true now. The prediction will become true tomorrow. God's omniscience only requires that He knows every true proposition. God will know ‘You will finish this article tomorrow’ as soon as it becomes true – but not before.

The teacher has free will. Therefore, predictions about what he will do are not true (prior to the examination). Accordingly, Paul Weiss (1952) concludes that the student's argument falsely assumes he knows that the announcement is true. The student can know that the announcement is true after it becomes true – but not before.

W. V. Quine (1953) agrees with Weiss' conclusion that the teacher's announcement of a surprise test fails to give the student knowledge that there will be a surprise test. Yet Quine abominates Weiss' reasoning. Weiss breaches the law of bivalence (which states that every proposition has a truth-value, true or false). Quine believes that the riddle of the surprise test should not be answered by surrendering classical logic.

2. Intellectual suicide

W. V. Quine insists that the student's elimination argument is only a reductio ad absurdum of the supposition that the student knows that the announcement is true (rather than a reductio of the announcement itself). He accepts this reductio. Given the student's ignorance of the announcement, Quine concludes that a test on any day would be unforeseen.

Common sense suggests that the students are informed by the announcement. The teacher is assuming that the announcement will enlighten the students. He seems right to assume that the announcement of this intention produces the same sort of knowledge as his other declarations of intentions (about which topics will be selected for lecture, the grading scale, how long he will be absent for minor surgery, and so on).

There are extreme, philosophical premises that could yield Quine's conclusion that the students do not know the announcement is true. If no one can know anything about the future, as suggested by David Hume's problem of induction, then the student cannot know that the teacher's announcement is true. But this is like using a cannon to kill a fly.

In later writings, Quine evinces general reservations about the concept of knowledge. One of his pet objections is that ‘know’ is vague. If knowledge entails absolute certainty, then too little will count as known. Quine infers that we must equate knowledge with firmly held true belief. Asking just how firm the belief must be is like asking just how big something has to be to count as being big. There is no answer to the question because ‘big’ lacks the sort of boundary enjoyed by precise words.

There is no place in science for bigness, because of this lack of boundary; but there is a place for the relation of biggerness. Here we see the familiar and widely applicable rectification of vagueness: disclaim the vague positive and cleave to the precise comparative. But it is inapplicable to the verb ‘know’, even grammatically. Verbs have no comparative and superlative inflections … . I think that for scientific or philosophical purposes the best we can do is give up the notion of knowledge as a bad job and make do rather with its separate ingredients. We can still speak of a belief as true, and of one belief as firmer or more certain, to the believer's mind, than another (1987, 109).

Quine is alluding to Rudolf Carnap's generalization that scientists replace qualitative terms (tall) with comparatives (taller than) and then replace the comparatives with quantitative terms (being n millimeters in height).

It is true that some borderline cases of a qualitative term are not borderline cases for the corresponding comparative. But the reverse holds as well. A big man who stoops may stand less high than another big man who is not as lengthy. Both men are clearly big. Yet it is unclear whether ‘The lengthier man is bigger’ is true. Qualitative terms can be applied when a vague quota is satisfied, without any need to sort out the details. Only comparative terms are bedeviled by tie-breaking issues.

Science is about what is the case rather than what ought to be the case. This seems to imply that science does not tell us what we ought to believe. The traditional way to fill the normative gap is to delegate issues of justification to epistemologists. However, Quine is uncomfortable with delegating such authority to philosophers. He prefers the thesis that psychology is enough to handle the issues traditionally addressed by epistemologists (or at least the issues still worth addressing in an Age of Science). This “naturalistic epistemology” seems to imply that ‘know’ and ‘justified’ are antiquated terms – as empty as ‘phlogiston’ or ‘soul’.

Those willing to abandon the concept of knowledge can dissolve the surprise test paradox. But to epistemologists, this is like using a suicide bomb to kill a fly.

Our suicide bomber may protest that the flies have been undercounted. Epistemic eliminativism dissolves all epistemic paradoxes. According to the eliminativist, epistemic paradoxes are symptoms of a problem with the very concept of knowledge.

Notice that the eliminativist is more radical than the skeptic. The skeptic thinks the concept of knowledge is fine. We just fall short of being knowers. The skeptic treats ‘No man is a knower’ like ‘No man is an immortal’. There is nothing wrong with the concept of immortality. Biology just winds up guaranteeing that every man falls short of being immortal.

Unlike the believer in ‘No man is an immortal’, the skeptic has trouble asserting ‘There is no knowledge’. For assertion expresses the belief that one knows. That is why Sextus Empiricus condemns the assertion ‘There is no knowledge’ as dogmatic skepticism. Sextus often seems to prefer agnosticism about knowledge rather than skepticism (considered as “atheism” about knowledge). Yet it also seems inconsistent to assert ‘No one can know whether anything is known’. For that conveys the belief that one knows that no one can know whether anything is known.

The eliminativist has even more severe difficulties in stating his position than the skeptic. Some eliminativists dismiss the threat of self-defeat by drawing an analogy. Those who denied the existence of souls used to be accused of undermining a necessary condition for asserting anything. However, the soul theorist's account of what is needed to make an assertion begs the question against those who believe that a healthy brain is enough for mental states.

If the eliminativist thinks that assertion only imposes the aim of expressing a truth, then he can consistently assert that ‘know’ is a defective term. However, an epistemologist can revive the charge of self-defeat by showing that assertion does indeed require the speaker to attribute knowledge to himself. This knowledge-based account of assertion has recently been supported by a paradox that originated among philosophers of science rather than philosophers of language.

3. Lotteries and the Lottery Paradox

Lotteries pose a problem for the theory that we can assert whatever we think is true. Given that there are a million tickets and only one winner, the probability of ‘This ticket is a losing ticket’ is very high. If our aim were merely to utter truths, we should be willing to assert the proposition. Yet we are reluctant.

What is missing? Speakers will assert the proposition after seeing the result of the lottery drawing or hearing about the winning ticket from a newscaster or remembering what the winning ticket was. This suggests that knowledge is required for assertion (Williamson 2000, 249–255). Perception, testimony, and memory are reliable processes that furnish knowledge.

But, the skeptic asks, do these processes furnish certainty? When pressed, we admit there is a small chance that we misperceived the drawing or that the newscaster made a mistake or that we are misremembering. While in this conciliatory mood, we are apt to relinquish our claim to know. The skeptic generalizes from this surrender (Hawthorne 2004). For any contingent proposition, there is a lottery statement that is more probable and which is unknown. A known proposition cannot be less probable than an unknown proposition. So no contingent proposition is known.

Notice that the skeptic's mild suggestions about how we might be mistaken are not the extraordinary possibilities invoked by René Descartes' skeptic. The Cartesian skeptic tries to undermine vast swaths of knowledge with a single counter-explanation of the evidence (such as the hypothesis that you are dreaming or the hypothesis that an evil demon is deceiving you). These comprehensive alternatives are designed to evade any empirical refutation. The probabilistic skeptic, in contrast, points to pedestrian counter-explanations that are easy to verify: maybe you transposed the digits of a phone number, maybe the ticket agent thought you wanted to fly to Moscow, Russia rather than Moscow, Idaho, etc. You can check for errors, but any check itself has a small chance of being wrong. So there is always something to check, given that the issues cannot be ignored on grounds of improbability.

You can check any of these possible errors but you cannot check them all. You cannot discount these pedestrian possibilities as science fiction. For they are exactly the sorts of possibilities we check when something goes wrong. For instance, you think you know that you have an appointment to meet a prospective employer for lunch at noon. When he fails to show at the expected time, you begin a forced march backwards through your premises: Is your watch slow? Are you remembering the right restaurant? Could there be another restaurant in the city with the same name? Is he just detained? Could he have just forgotten? Could there have been a miscommunication?

Probabilistic skepticism dates back to Arcesilaus, who took over the Academy two generations after Plato's death. This mild kind of skepticism allows for justified belief. Many scientists are attracted to probabilism and dismiss the epistemologist's preoccupation with knowledge as old-fashioned.

Despite the early start of the qualitative theory of probability, the quantitative theory did not develop until Blaise Pascal's study of gambling in the seventeenth century (Hacking 1975). Only in the eighteenth century did it begin to penetrate the insurance industry (despite the fortune to be made). Only in the nineteenth century did probability make a mark in physics. And only in the twentieth century did probabilists make important advances over Arcesilaus.

Most of these philosophical advances are reactions to the use of probability by scientists. In the twentieth century, editors of science journals began to demand that an author's hypothesis be accepted only when it was sufficiently probable – as measured by statistical tests. The threshold for acceptance was acknowledged to be somewhat arbitrary. And it was also acknowledged that the acceptance rule might have to vary with one's purposes. For instance, we demand a higher probability when the cost of accepting a false hypothesis is high.

In 1961 Henry Kyburg pointed out that this policy conflicted with a principle of doxastic logic (the logic of belief). Logicians thought that rational belief should agglomerate: If you should believe p and should believe q then you should believe both p and q. Little pictures should sum to a big picture. These logicians also required that rational belief be consistent. But if rational belief can be based on an acceptance rule that only requires a high probability, there will be rational belief in a contradiction! Suppose the acceptance rule permits belief in any proposition that has a probability of at least .99. Given a lottery with 100 tickets and exactly one winner, the probability of ‘Ticket n is a loser’ licenses belief. Symbolize propositions about ticket n being a loser as pn. Symbolize ‘I rationally believe’ as B. Belief in a contradiction follows:

  1. B~(p1 & p2 & … & p100),
       by the probabilistic acceptance rule.
  2. Bp1 & Bp2 & … & Bp100,
       by the probabilistic acceptance rule.
  3. B(p1 & p2 & … & p100),
       from (2) by the principle that belief agglomerates.
  4. B[(p1 & p2 & … & p100) & ~(p1 & p2 & … & p100)],
       from (1) and (3) by the principle that belief agglomerates.

Since belief in an obvious contradiction is a paradigm example of irrationality, Kyburg poses a dilemma: either reject agglomeration or reject probabilistic acceptance rules. Kyburg chooses to reject agglomeration. He promotes toleration of joint inconsistency (having beliefs that cannot all be true together) to avoid belief in contradictions. Reason forbids us from believing a proposition that is necessarily false but permits us to have a set of beliefs that necessarily contains a falsehood. Kyburg's choice was soon supported by the discovery of a companion paradox.
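
Kyburg's dilemma can be made concrete. In the sketch below (my own illustration, using the .99 threshold from the text), the acceptance rule licenses each statement individually, yet the accepted set cannot be jointly true:

  tickets = 100
  threshold = 0.99

  # Each 'ticket n loses' has probability (tickets - 1)/tickets = .99,
  # so the acceptance rule licenses believing it.
  accepted = [f"ticket {n} loses" for n in range(1, tickets + 1)
              if (tickets - 1) / tickets >= threshold]

  # 'Some ticket wins' has probability 1, so it is accepted as well.
  accepted.append("some ticket wins")

  # All 101 statements clear the threshold, yet no outcome makes them
  # all true: agglomerating them yields the contradiction in (1)-(4).
  print(len(accepted), "accepted statements, jointly inconsistent")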

4. Preface Paradox

In D. C. Makinson's (1965) preface paradox, an author believes each of the assertions in his book. But since the author regards himself as fallible, he believes the conjunction of all his assertions is false. If the agglomeration principle holds, (Bp & Bq) → B(p & q), the author must both believe and disbelieve the conjunction of all the assertions in his book.

The preface paradox does not rely on a probabilistic acceptance rule. The preface belief is generated in a qualitative fashion. The author is merely reflecting on his similarity to other authors who are fallible, the imperfections of fact checking, and so on.
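
Although Makinson's author reasons qualitatively, a back-of-the-envelope computation conveys why his modesty is sensible. The figures below are my own assumptions for illustration:

  per_claim = 0.99    # assumed credibility of each individual assertion
  claims = 300        # assumed number of assertions in the book

  # Probability that every assertion is true, assuming independence:
  print(per_claim ** claims)    # about 0.049, so the book is probably
                                # mistaken somewhere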

At this point many philosophers join Kyburg in rejecting agglomeration and conclude that it can be rational to have jointly inconsistent beliefs. Kyburg's solution to the preface paradox raises an interesting question about the nature of paradox. How can paradoxes change our minds if joint inconsistency is permitted?

A paradox is commonly defined as a set of propositions that are individually plausible but jointly inconsistent. Paradoxes force us to change our minds in a highly structured way. For instance, much epistemology responds to a riddle posed by the regress of justification, namely, which of the following is false?

  1. A belief can only be justified by another justified belief.
  2. There are no circular chains of justification.
  3. All justificatory chains have a finite length.
  4. Some beliefs are justified.

Foundationalists reject (1). They take some propositions to be self-evident. Coherentists reject (2). They tolerate some forms of circular reasoning. For instance, Nelson Goodman (1965) has characterized the method of reflective equilibrium as virtuously circular. Charles Peirce rejected (3). He believed that infinitely long chains of justification are no more impossible than infinitely long chains of causation. Finally, the epistemological anarchist rejects (4).

Very elegant! But if joint inconsistency is rationally tolerable, why do these philosophers bother to offer solutions? Why is it not rational to believe each of (1)–(4), in spite of their joint inconsistency?

Kyburg might answer that there is a scale effect. Although the dull pressure of joint inconsistency is tolerable when diffusely distributed over a large set of propositions, the pain of contradiction sharpens as the set gets smaller (Knight 2002). And indeed, paradoxes are always represented as a small set of propositions.

If you know that your beliefs are jointly inconsistent (and I think you do know), then you should reject R. M. Sainsbury's definition of a paradox as “an apparently unacceptable conclusion derived by apparently acceptable reasoning from apparently acceptable premises” (1995, 1). Take the negation of any of your beliefs as a conclusion and your remaining beliefs as the premises. You should judge this jumble argument as valid, and as having premises that you accept, and yet as having a conclusion you reject (Sorensen 2003b, 104–110). If the conclusion of this argument counts as a paradox, then the negation of any of your beliefs counts as a paradox (because for each of your beliefs, there is a jumble argument against it).

The preface paradox also pressures Kyburg to extend his tolerance of joint inconsistency to the acceptance of contradictions (Sorensen 2001, 156–158). Consider a logic student who is required to pick one hundred truths from a mixed list of tautologies and contradictions. Although the modest student believes each of his answers, A1, A2, …, A100, he also believes that at least one of these answers is false. This ensures he believes a contradiction. If any of his answers is false, then the student believes a contradiction (because the only falsehoods on the question list are contradictions). If all of his test answers are true, then the student believes the following contradiction: ~(A1 & A2 & … & A100). After all, a conjunction of tautologies is itself a tautology and the negation of any tautology is a contradiction.

If paradoxes were always sets of propositions or arguments or conclusions, then they would always be meaningful. But some paradoxes are meaningless (Sorensen 2003b, 352) and some have answers that are backed by a pseudo-argument employing a meaningless “lemma”. Kurt Grelling's paradox, for instance, opens with a distinction between autological and heterological words. An autological word describes itself, e.g., ‘polysyllabic’ is polysyllabic, ‘English’ is English, ‘noun’ is a noun, etc. A heterological word does not describe itself, e.g., ‘monosyllabic’ is not monosyllabic, ‘Chinese’ is not Chinese, ‘verb’ is not a verb, etc. Now for the riddle: Is ‘heterological’ heterological or autological? If ‘heterological’ is heterological, then since it describes itself, it is autological. But if ‘heterological’ is autological, then since it is a word that does not describe itself, it is heterological. The common solution to this puzzle is that ‘heterological’, as defined by Grelling, is not a genuine predicate (Thomson 1962). In other words, “Is ‘heterological’ heterological?” is ill formed (and so meaningless on syntactic grounds).
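
The common solution can be dramatized computationally. In the toy model below (my own illustration; the lambda definitions are crude stand-ins for word meanings), giving ‘heterological’ a dictionary entry stated in terms of its own rule produces a procedure that never terminates when applied to itself, a procedural analogue of Thomson's verdict that it is not a genuine predicate:

  # Words modelled as string -> bool predicates (hypothetical proxies).
  meaning = {
      "polysyllabic": lambda w: len(w) > 9,     # crude length proxy
      "monosyllabic": lambda w: len(w) <= 4,
  }

  def heterological(word):
      return not meaning[word](word)

  print(heterological("monosyllabic"))   # True: it fails to describe itself

  # The paradox: give 'heterological' an entry defined by its own rule...
  meaning["heterological"] = lambda w: not meaning[w](w)

  # ...then heterological("heterological") unwinds endlessly (a
  # RecursionError in practice) instead of returning a truth-value.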

The eliminativist, who thinks that ‘know’ or ‘justified’ is meaningless, will diagnose the epistemic paradoxes as questions that only appear to be well-formed. For instance, the eliminativist about justification would not accept proposition (4) in the regress paradox: ‘Some beliefs are justified’. His point would not be the anarchist theme that ostensible authorities fail to meet a minimal standard of legitimacy. The eliminativist unromantically diagnoses ‘justified’ as a pathological term; like ‘heterological’, declarative sentences that apply the word fail to express a proposition. Just as the astronomer ignores ‘Are there a zillion stars?’ on the grounds that ‘zillion’ is not a genuine numeral, the eliminativist ignores ‘Are some beliefs justified?’ on the grounds that ‘justified’ is not a genuine adjective.

In the twentieth century, suspicions about conceptual pathology were strongest for the liar paradox: Is ‘This sentence is false’ true? Philosophers who thought that there was something deeply defective with the surprise test paradox assimilated it to the liar paradox. Let us review the assimilation process.

5. Anti-expertise

In the surprise test paradox, the student's premises are self-defeating. Any reason the student has for predicting a test date or a non-test date is available to the teacher. Thus the teacher can simulate the student's forecast and know what the student is expecting.

The student's overall conclusion, that the test is impossible, is also self-defeating. If the student believes his conclusion then he will not expect the test. So if he receives a test, it will be a surprise. The event will be all the more unexpected because the student has deluded himself into thinking the test is impossible.

Just as someone's awareness of a prediction can affect the likelihood of it being true, awareness of that sensitivity to his awareness can also affect its truth. If each cycle of awareness is self-defeating, then there is no stable resting place for a conclusion.

Suppose a psychologist offers you a red box and a blue box (Skyrms 1982). The psychologist can predict which box you will choose with 90% accuracy. He has put one dollar in the box he predicts you will choose and ten dollars in the other box. Should you choose the red box or the blue box? You cannot decide. For any choice becomes a reason to reverse your decision. (Newcomb's problem also turns on a predicted decision.)
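
The instability can be exhibited as a failed search for a stable choice. The model below is my own sketch, assuming the 90% accurate predictor and the $1/$10 payoffs just described:

  def best_response(inclination):
      """Return the better box, given your current inclination; the
      predictor matches your inclination 90% of the time and puts $1
      in the predicted box, $10 in the other."""
      p_red_predicted = 0.9 if inclination == "red" else 0.1
      ev_red = p_red_predicted * 1 + (1 - p_red_predicted) * 10
      ev_blue = p_red_predicted * 10 + (1 - p_red_predicted) * 1
      return "red" if ev_red > ev_blue else "blue"

  choice = "red"
  for _ in range(6):
      choice = best_response(choice)
      print(choice)    # blue, red, blue, red, ... no stable resting place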

Epistemic paradoxes affect decision theory because rational choices are based on beliefs and desires. If the agent cannot form a rational belief, it is difficult to interpret his behavior as a choice. You cannot rationally choose an option that you believe to be inferior. So if you make a choice, then you cannot really believe that you were doing so as an anti-expert, that is, someone whose opinions on a topic are reliably wrong  (Egan and Elga 2005).

The medieval philosopher Jean Buridan gave a starkly minimal example of such instability:

(B) You do not believe this sentence.

If you believe (B) it is false. If you do not believe (B) it is true. You are an anti-expert about (B); your opinion is reliably wrong. An outsider who monitors your opinion can reckon whether (B) is true. But you are not able to exploit your anti-expertise.

5.1 The Knower Paradox

David Kaplan and Richard Montague (1960) think the announcement by the teacher in our surprise exam example is equivalent to the self-referential

(AKM) Either the test is on Monday but you do not know it before Monday, or the test is on Wednesday but you do not know it before Wednesday, or the test is on Friday but you do not know it before Friday, or this announcement is known to be false.

Kaplan and Montague note that the number of alternative tests can be increased indefinitely. Shockingly, they claim the number of alternatives can be reduced to zero! The announcement is then equivalent to

(KC) This sentence is known to be false.

If (KC) is true then it is known to be false. Whatever is known to be false is false. Since no proposition can be both true and false, we have proven that (KC) is false. Given that proof produces knowledge, (KC) is known to be false. But wait! That is exactly what (KC) says — so (KC) must be true.

The (KC) argument stinks of the liar paradox. Subsequent commentators sloppily switch the negation sign in the formal presentations of the reasoning from K~p to ~Kp. Ironically, this garbled transmission results in a cleaner variation of the knower:

(K) No one knows this very sentence.

Is (K) true? On the one hand, if (K) is true, then what it says is true, so no one knows it. On the other hand, that very reasoning seems to be a proof of (K). Proving a proposition is sufficient for knowledge of it, so someone must know (K). But then (K) is false! Since no one can know a proposition that is false, (K) is not known.

The skeptic could hope to solve (KC) by denying that anything is known. This remedy does not cure (K). If nothing is known then (K) is true. Can the skeptic instead challenge the premise that proving a proposition is sufficient for knowing it? This solution would be particularly embarrassing to the skeptic. The skeptic presents himself as a stickler for proof. If it turns out that even proof will not sway him, he looks more like the dogmatist he so frequently chides.

But the skeptic should not lose his nerve. A student taking a logic examination can be surprised that he soundly deduced a theorem. The student did not know the conclusion because it seemed implausible and he was only guessing that a key inference rule was valid. His instructor might have trouble getting the student to understand why his answer constitutes a valid proof (rather than merely a desperate bid for partial credit).

The logical myth that “You cannot prove a universal negative” is itself a universal negative. So it implies its own unprovability. This implication of unprovability is correct but only because the principle is false. For instance, exhaustive inspection proves the universal negative ‘No adverbs appear in this sentence’. Reductio ad absurdum proves the universal negative ‘There is no largest prime number’.

Trivially, false propositions cannot be proved true. Are there any true propositions that cannot be proved true?

Yes, there are infinitely many. Kurt Gödel demonstrated that any system that is strong enough to express arithmetic is also strong enough to express a formal counterpart of the self-referential proposition ‘This statement cannot be proved in this system’. If the system cannot prove its “Gödel sentence”, then this sentence is true. If the system can prove its Gödel sentence, the system is inconsistent. So either the system is incomplete or inconsistent.

Of course, this result concerns provability relative to a system. One system can prove another system's Gödel sentence. Kurt Gödel thought that mathematical intuition gave him knowledge that arithmetic is consistent. Human knowledge is not restricted to what human beings can prove.

J. R. Lucas (1964) claims that this reveals human beings are not machines. A computer is a concrete instantiation of a formal system. Hence, its “knowledge” is restricted to what it can prove. By Gödel's theorem, the computer will be either inconsistent or incomplete. However, Lucas draws an invidious comparison: a human being with a full command of arithmetic can be consistent (even if he is actually inconsistent due to inattention or wishful thinking).

Other philosophers defend the parity between people and computers. They think we have our own Gödel sentences (Lewis 1999, 166–173). In this egalitarian spirit, G. C. Nerlich (1961) models the student's beliefs in the surprise test example as a logical system. The teacher's announcement is then a Gödel sentence about the student: There will be a test next week but you will not be able to prove which day it will occur on the basis of this announcement and memory of what has happened on previous exam days. When the number of exam days equals zero, the announcement is equivalent to sentence (K).

Several commentators on the surprise test paradox object that interpreting surprise as unprovability changes the topic. Instead of posing the surprise test paradox, it poses a variation of the liar paradox. Other concepts can be blended with the liar. For instance, mixing in alethic notions generates the possible liar: Is ‘This statement is possibly false’ true? (Post 1970) (If it is false, then it is false that it is possibly false, so it is necessarily true. But if it is necessarily true, then it cannot be possibly false.) Since the semantic concept of validity involves the notion of possibility, one can also derive validity liars such as Pseudo-Scotus' paradox: ‘Squares are squares, therefore, this argument is invalid’ (Read 1979). If Pseudo-Scotus' argument is valid then, since its premise is true, its conclusion is true – which means it is invalid. If Pseudo-Scotus' argument is invalid, it is possible for the premise to be true and the conclusion false. But if an argument is invalid, it is necessarily invalid. A similar predicament follows from ‘The test is on Friday but this prediction cannot be soundly deduced from this announcement’.

One can mock up a complicated liar paradox that resembles the surprise test paradox. But this complex variant of the liar is not an epistemic paradox. For the paradoxes turn on the semantic concept of truth rather than an epistemic concept.

5.2 The “Knowability Paradox”

Frederic Fitch (1963) reports that in 1945 he first learned of this proof of unknowable truths from a referee report on a manuscript he never published. Thanks to Joe Salerno's archival research, we now know that the referee was Alonzo Church.

Assume there is a true sentence of the form ‘p but p is not known’. Although this sentence is consistent, modest principles of epistemic logic imply that sentences of this form are unknowable. In particular, the uncontroversial KE ("Knowledge implies truth") and KD ("Knowledge distributes over conjunction") suffice for a simple proof that some truths are unknowable.

  1. K(p & ~Kp)   (assumption)
  2. Kp & K~Kp   from 1, since knowledge distributes over conjunction
  3. ~Kp   from 2, since knowledge implies truth (applied to the second conjunct)
  4. Kp & ~Kp   from the first conjunct of 2 together with 3
  5. ~K(p & ~Kp)   from 1–4 by reductio ad absurdum

Since all the assumptions are discharged, the conclusion is a necessary truth.
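
The derivation is simple enough to machine-check. Here is a sketch in Lean, my own formalization, with factivity and distribution posited as axioms for an abstract operator K:

  axiom K : Prop → Prop
  axiom factive : ∀ p, K p → p                   -- knowledge implies truth
  axiom distrib : ∀ p q, K (p ∧ q) → K p ∧ K q   -- K distributes over ∧

  theorem fitch (p : Prop) : ¬ K (p ∧ ¬ K p) := by
    intro h                        -- step 1: assume K(p ∧ ¬Kp)
    have hk := distrib _ _ h       -- step 2: Kp ∧ K¬Kp
    exact (factive _ hk.2) hk.1    -- steps 3 and 4: ¬Kp applied to Kp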

The cautious will draw a conditional moral: If there are actual unknown truths, there are unknowable truths. A theist who believes that it is contingently true that an omniscient being exists will accept this conditional as vacuously true (on the grounds that the antecedent is actually false). A theist who believes that an omniscient being necessarily exists will also accept the conditional as vacuously true (on the grounds that the antecedent is a necessary falsehood).

But many idealists and virtually all logical positivists and other secular verificationists concede that there are some actual unknown truths while also maintaining that all truths are knowable.  Astonishingly, they seem refuted by this pinch of epistemic logic.

Timothy Williamson doubts such astonishment is enough for the result to qualify as a paradox:

The conclusion that there are unknowable truths is an affront to various philosophical theories, but not to common sense. If proponents (and opponents) of those theories long overlooked a simple counterexample, that is an embarrassment, not a paradox. (2000, 271)

The rhetorical intent of denying that the result is a paradox is to remove an inhibition. Williamson does not want us to quarantine the theorem with such suspicious characters as the liar paradox.

Those who believe that the Church-Fitch result is a paradox can respond to Williamson with examples of paradoxes that accord with common sense. For instance, since the quantifiers of standard logic (first order predicate logic with identity) have existential import, the logician can prove that something exists from the principle that everything is identical to itself. Most philosophers balk at this simple proof because they feel that the existence of something cannot be proved by sheer logic. Likewise, many philosophers balk at the proof of unknowables because they feel that such a profound result cannot be obtained from such limited means.

5.3 Moore's problem

Church's referee report was composed in 1945. The timing and structure of his argument for unknowables suggest that Church may have been inspired by G. E. Moore's (1942, 543) sentence:

(M) I went to the pictures last Tuesday, but I don't believe that I did.

Moore's problem is to explain what is odd about declarative utterances such as (M). This explanation needs to encompass both readings of (M): ‘p & B~p’ and ‘p & ~Bp’. (This scope ambiguity is behind my favorite joke about René Descartes: Descartes is sitting in a bar, having a drink. The bartender asks him if he would like another. “I think not,” he says, and disappears.)

The common explanation of Moore's absurdity is that the speaker has managed to contradict himself without uttering a contradiction. So the sentence is odd because it is a counterexample to the generalization that anyone who contradicts himself utters a contradiction.

There is no problem in third person counterparts of (M): ‘Camels have three eyelids but Roy Sorensen does not believe it’. (M) can also be embedded unparadoxically in conditionals: ‘If those membranes are eyelids, then camels have three eyelids but I do not believe it’. The past tense is fine: ‘Camels have three eyelids but I did not believe it’. The future tense, ‘Camels have three eyelids but I will not believe it’, is a bit more of a stretch (Bovens 1995). We tend to picture our future selves as better informed. Later selves are, as it were, experts to whom earlier selves should defer. When an earlier self foresees that his later self believes p, the prediction is a reason to believe p. Bas van Fraassen (1984, 244) dubs this “the principle of reflection”: I ought to believe a proposition given that I will believe it at some future time.

Robert Binkley (1968) anticipates van Fraassen by applying the reflection principle to the surprise test paradox. The student can foresee that he will not believe the announcement if no test is given by Thursday. The conjunction of the history of testless days and the announcement will imply the Moorean sentence:

(A′) The test is on Friday but you do not believe it.

Since the weaker element of the conjunction is the announcement, the student will not believe the announcement. At the beginning of the week, the student foresees that his future self will not believe the announcement. So he will not believe the announcement when it is first uttered.

Binkley fortifies this reasoning with doxastic logic. The principles of this logic of belief can be understood as idealizing the student into an ideal reasoner. In general terms, an ideal reasoner is someone who infers what he ought and refrains from inferring any more than he ought. Since there is no constraint on his premises, we may disagree with the ideal reasoner. But if we agree with the ideal reasoner's premises, we appear bound to agree with his conclusion. Binkley specifies some requirements to give teeth to the student's status as an ideal reasoner: the student is perfectly consistent, believes all the logical consequences of his beliefs, and does not forget. Binkley further assumes that the ideal reasoner is aware that he is an ideal reasoner. According to Binkley, this ensures that if the ideal reasoner believes p, then he believes that he will believe p thereafter.

Binkley's account of the student's epistemic state on Thursday is compelling. But his argument for spreading the incredulity from the future to the past is open to three challenges.

The first objection is that it delivers the wrong result. The student is informed by the teacher's announcement, so Binkley ought not to use a model in which the announcement is as absurd as ‘Canada extends to the North Pole but I do not believe it’.

Second, the future mental state envisaged by Binkley is only hypothetical: If no test is given by Thursday, the student will find the announcement incredible. At the beginning of the week, the student does not know (or believe) that the teacher will wait that long. A principle that tells me to defer to the opinions of my future self does not imply that I should defer to the opinions of my hypothetical future self. For my hypothetical future self is responding to propositions that are not actually true.

Third, the principle of reflection may have more qualifications than Binkley anticipates. Binkley realizes that an ordinary agent foresees that he will forget details. That is why we write reminders for our own benefit. An ordinary agent foresees periods of impaired judgment. That is why we limit how much money we bring to the bar. Binkley correctly regards these qualifications as distractions that should be ignored. He idealizes them away with the assumption that the agent never loses the knowledge he accumulates. As we shall see, this idealization is too repressive, suppressing relevant qualifications along with the irrelevant.

5.4 Blindspots

A blindspot is a consistent but inaccessible proposition. Blindspots are relative to the means of reaching the proposition and the person making the attempt. Although I cannot know the epistemic blindspot ‘There is intelligent extra-terrestrial life but no one knows it’, I can suspect it. Although I cannot rationally believe ‘Polar bears have black skin but I do not believe it’, you can. This means there can be disagreement between ideal reasoners (even under strong idealizations such as Binkley's). The anthropologist Gontran de Poncins begins his chapter on the arctic missionary, Father Henry, with a prediction:

I am going to say to you that a human being can live without complaint in an ice-house built for seals at a temperature of fifty-five degrees below zero, and you are going to doubt my word. Yet what I say is true, for this was how Father Henry lived; … . (Poncins 1988, 240)

Gontran de Poncins's subsequent testimony might lead the reader to believe someone can indeed be content to live in an ice-house. The same testimony might lead another reader to doubt that Poncins is telling the truth. But no reader ought to believe ‘Someone can be content to live in an ice house and I doubt it’.

If Poncins believes a proposition that is a blindspot to his reader, then he cannot furnish good grounds for his reader to share his belief. This holds even if they are ideal reasoners. So one implication of blindspots is that there can be disagreement among ideal reasoners because they differ in their blindspots.

This is relevant to the surprise test paradox. The students are the surprisees. Since the date of the surprise test is a blindspot for them, non-surprisees cannot persuade them.

The same point holds for intra-personal disagreement over time. Evidence that persuaded me on Sunday that ‘My new locker combination is 18–36–14 but on Friday I will not believe it’ should no longer persuade me on Friday (given my belief that the day is Friday). For that proposition is a blindspot to my Friday self.

Although each blindspot is inaccessible, a disjunction of blindspots is normally not a blindspot. I can believe that ‘Either the number of stars is even and I do not believe it, or the number of stars is odd and I do not believe it’. The author's preface statement that there is some mistake in his book is equivalent to a very long disjunction of blindspots. The author is saying he either falsely believes his first statement or falsely believes his second statement or … or falsely believes his last statement.

The teacher's announcement that there will be a surprise test is equivalent to a disjunction of future mistakes: ‘Either there will be a test on Monday and the student will not believe it beforehand or there will be a test Wednesday and the student will not believe it beforehand or the test is on Friday and the student will not believe it beforehand.’

The points made so far suggest a solution to the surprise test paradox (Sorensen 1988, 328–343). As Binkley asserts, the test would be a surprise even if the teacher waited until the last day. Yet it can still be true that the teacher's announcement is informative. At the beginning of the week, the students are justified in believing the teacher's announcement that there will be a surprise test.  This announcement is equivalent to:

(A) Either
  (i) the test is on Monday and the student does not know it before Monday, or
  (ii) the test is on Wednesday and the student does not know it before Wednesday, or
  (iii) the test is on Friday and the student does not know it before Friday.

Consider the student's predicament on Thursday (given that the test has not been on Monday or Wednesday). If he knows that no test has been given, he cannot also know that (A) is true, because that would imply

(iii) The test is on Friday and the student does not know it before Friday.

Although (iii) is consistent and might be knowable by others, (iii) cannot be known by the student before Friday. (iii) is a "blindspot" for the students but not for, say, the teacher's colleagues. Hence, the teacher can give a surprise test on Friday because that would force the students to lose their knowledge of the original announcement (A). Knowledge can be lost without forgetting anything.

This solution makes who you are relevant to what you can know. In addition to compromising the impersonality of knowledge, there will be compromise on its temporal neutrality.

Since the surprise test paradox can also be formulated in terms of rational belief, there will be parallel adjustments for what we ought to believe. We are criticized for failures to believe the logical consequences of what we believe and criticized for believing propositions that conflict with each other. Anyone who meets these ideals of completeness and consistency will be unable to believe a range of consistent propositions that are accessible to other complete and consistent thinkers. In particular, they will not be able to believe propositions attributing specific errors to them, and propositions that entail these off-limit propositions.

Some people wear T-shirts with “Question Authority!” written on them. Questioning authority is generally regarded as a matter of individual discretion. The surprise test paradox shows that it is sometimes mandatory. The student is forced to doubt the teacher's announcement even though the teacher has not given any evidence of being unreliable. Indeed, the student can foresee that his change of mind opens a new opportunity for surprise.

Another consequence is that there can be disagreement amongst ideal reasoners who agree on the same impersonal data. Consider the colleagues of the teacher. They are not amongst those the teacher targets for surprise. Since ‘surprise’ here means ‘surprise to the students’, the teacher's colleagues can consistently infer that the test will be on the last day from the premise that it has not been given on any previous day.

6. Dynamic Epistemic Paradoxes

The above anomalies (losing knowledge without forgetting, disagreement amongst equally well-informed ideal reasoners, rationally changing your mind without the acquisition of counter-evidence) would be more tolerable if reinforced by separate lines of reasoning. The most fertile source of this collateral support is in puzzles about updating beliefs.

The natural strategy is to focus on the knower when he is stationary. However, just as it is easier for an Eskimo to observe an arctic fox when it moves, we often get a better understanding of the knower dynamically, when he is in the process of gaining or losing knowledge.

6.1 Meno's Paradox of Inquiry: A puzzle about gaining knowledge

Socrates says that he knows only that he knows nothing. But this is a contradiction. If he knows only that he knows nothing, then he knows something (the proposition that he knows nothing) and yet does not know anything (because knowledge implies truth).

At first blush, Socrates' ignorance nicely explains why he is asking the questions. But eventually, Meno discerns a conflict between Socratic ignorance and Socratic inquiry. How would Socrates recognize the correct answer even if Meno gave it?

The general structure of Meno's paradox is a dilemma: If you know the answer to the question you are asking, then nothing can be learned by asking. If you do not know the answer, then you cannot recognize a correct answer even if it is given to you. Therefore, one cannot learn anything by asking questions.

The natural solution to Meno's paradox is to characterize the inquirer as only partially ignorant. He knows enough to recognize a correct answer but not enough to answer on his own. For instance, dictionaries are useless to six-year-old children because they seldom know more than the first letter of the word in question. Ten-year-old children have enough partial knowledge of the word's spelling to narrow the field of candidates. Dictionaries are useless to those with (perfect) knowledge of spelling and those with (perfect) ignorance of spelling. But most of us have an intermediate amount of knowledge.
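
This recognition-without-production structure is easy to model. The sketch below is a toy of my own, with hypothetical candidate spellings; partial knowledge acts as a filter that singles out the right answer once candidates are supplied:

  candidates = ["Febuary", "February", "Februery"]   # hypothetical list

  def recognizes(word):
      # the child's partial knowledge of the spelling
      return word.startswith("Feb") and "ruar" in word

  print([w for w in candidates if recognizes(w)])    # ['February']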

It is natural to analyze partial knowledge as knowledge of conditionals. The ten-year-old child knows that ‘If the dictionary spells the month after January as F-e-b-r-u-a-r-y, then that spelling is correct’. Consulting the dictionary gives him knowledge of the antecedent of the conditional.

Much of our learning from conditionals runs as smoothly as this example suggests. Knowledge of the conditional is conditional knowledge (that is, conditional upon learning the antecedent and applying the inference rule modus ponens: If P then Q, P, therefore Q). But the next section is devoted to some known conditionals that are repudiated when we learn their antecedents.

6.2 Dogmatism paradox: A puzzle about losing knowledge

Gilbert Harman attributes this paradox to Saul Kripke:

If I know that h is true, I know that any evidence against h is evidence against something that is true; I know that such evidence is misleading. But I should disregard evidence that I know is misleading. So, once I know that h is true, I am in a position to disregard any future evidence that seems to tell against h. (1973, 148)

Dogmatists accept this reasoning. For them, knowledge closes inquiry. Any “evidence” that conflicts with what is known can be dismissed as misleading evidence. Forewarned is forearmed.

This conservativeness crosses the line from confidence to intransigence. To illustrate the excessive inflexibility, here is a chain argument for the dogmatic conclusion that my reliable colleague Doug has given me a misleading report (from Sorensen 1988b):

(C1) My car is in the parking lot.

(C2) If my car is in the parking lot and Doug reports otherwise, then Doug's report is misleading.

(C3) If Doug reports that my car is not in the parking lot, then his report is misleading.

(C4) Doug reports that my car is not in the parking lot.

(C5) Doug's report is misleading.

By hypothesis, I am justified in believing (C1). Premise (C2) is a certainty because it is analytically true. The argument from (C1) and (C2) to (C3) is valid. Therefore, my degree of confidence in (C3) must be at least as great as my degree of confidence in (C1). Since we are also assuming that I gain sufficient justification for (C4), it seems to follow that I am justified in believing (C5) by modus ponens. Similar arguments will lead me to dismiss further evidence, such as a phone call from the towing service and my failure to see my car when I confidently stride over to the parking lot.
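
The transmission of confidence can be made explicit in probabilistic terms. The following rendering is mine rather than Sorensen's: since a valid argument cannot leave the conclusion less probable than the conjunction of its premises,

    \Pr(C3) \ge \Pr(C1 \wedge C2) = \Pr(C1) \quad (\text{since } \Pr(C2) = 1)
    \Pr(C5) \ge \Pr(C3 \wedge C4) \ge \Pr(C3) + \Pr(C4) - 1

So if I am, say, 95% confident in (C1) and 95% confident in (C4), I am committed to at least 90% confidence in the dogmatic conclusion (C5).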

Gilbert Harman diagnoses the paradox as follows:

The argument for paradox overlooks the way actually having evidence can make a difference. Since I now know [my car is in the parking lot], I now know that any evidence that appears to indicate something else is misleading. That does not warrant me in simply disregarding any further evidence, since getting that further evidence can change what I know. In particular, after I get such further evidence I may no longer know that it is misleading. For having the new evidence can make it true that I no longer know that new evidence is misleading. (1973, 149)

In effect, Harman denies the hardiness of knowledge. The hardiness principle states that one knows only if there is no evidence such that, if one knew about that evidence, one would not be justified in believing one's conclusion. On this principle, new knowledge cannot undermine old knowledge. Harman disagrees.

Harman's belief that new knowledge can undermine old knowledge may be relevant to the surprise test paradox. Perhaps the students lose knowledge of the test announcement even though they do not forget the announcement or do anything else incompatible with their credentials as ideal reasoners. A student on Thursday is better informed about the outcomes of test days than he was on Sunday. He knows the test was not on Monday or Wednesday. But he can only predict that the test is on Friday if he continues to know the announcement. Perhaps the extra knowledge of the testless days undermines knowledge of the announcement.

The dogmatism paradox shows how new knowledge can undermine old knowledge. The next paradox shows how old knowledge can self-destruct like a time bomb – and then, to underscore the miracle, the old knowledge reassembles itself as if in a reversed explosion.

6.3 Sleeping Beauty

Sleeping Beauty is an ideal reasoner who knows she will be given a sleeping pill that induces limited amnesia. She knows that after she falls asleep a coin will be flipped. If it lands heads, she will be awakened on Monday and asked: “What is the probability that the coin landed heads?”. She will not be informed which day it is. If the coin lands tails, she will be awakened on both Monday and Tuesday and asked the same question each time. The amnesia ensures that, if awakened on Tuesday, she will not remember being awakened on Monday. What will her answer be to the question? Some say Sleeping Beauty will answer 1/2. After all, the coin is fair and nothing new has been learned. Others say she will answer 1/3 (Elga 2000). After all, there are three possibilities: the coin landed heads and she is now being asked on Monday, the coin landed tails and she is now being asked on Monday, and the coin landed tails and she is now being asked on Tuesday. Sleeping Beauty has no more evidence for one possibility than for any other. So she should regard the three possibilities as equally likely.
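
The thirder's counting can be checked by simulation. The following Python sketch is mine, not Elga's; it counts how often the coin shows heads among all awakenings rather than among all flips:

    import random

    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(100_000):
        if random.random() < 0.5:
            # Heads: Sleeping Beauty is awakened once, on Monday.
            heads_awakenings += 1
            total_awakenings += 1
        else:
            # Tails: she is awakened twice, on Monday and on Tuesday.
            total_awakenings += 2

    print(heads_awakenings / total_awakenings)  # approximately 1/3

The halfer will reply that the relevant reference class is flips rather than awakenings, and that counting awakenings is precisely what is in dispute.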

Let us now adapt Sleeping Beauty to the surprise test paradox. Sleeping Beauty gets the news: On Sunday morning she will witness a coin flip. If the coin lands heads, she will receive a test on Friday. If the coin lands tails, then there will be no test, but she will be given a sleeping pill on Sunday evening that induces a pseudo-memory of the coin landing heads (along with amnesia about seeing the coin land tails and about taking the pill). This pseudo-memory will be indistinguishable from a real memory.

On Monday, Sleeping Beauty wakes knowing that she either is genuinely remembering that the coin landed heads or she is pseudo-remembering the coin landing heads. Since she cannot tell which, Sleeping Beauty does not know whether there will be a test on Friday. If there is a test on Friday, it will be a surprise in the sense that immediately before the test, Sleeping Beauty will not know that there will be a test on Friday.

Given that she witnessed the coin land heads, the test is foreseen on Sunday afternoon. For on Sunday morning Sleeping Beauty was informed that the test would take place on Friday. Sleeping Beauty possesses this knowledge until she goes to sleep on Sunday night. Before bedtime on Sunday, Sleeping Beauty knows that she will not be given a pill. Sleeping Beauty knows that her memory is operating normally. She is not going to forget that the test is on Friday. Yet she also foresees that on Monday she will no longer know that the test is on Friday.

Given that the coin landed heads, Sleeping Beauty's knowledge has been lost without forgetting and without the acquisition of new evidence that undermines the old opinion. Sleeping Beauty is completely normal and attentive. Yet she becomes ignorant.

Sleeping Beauty also loses her belief that there will be a test on Friday. For on Monday, Sleeping Beauty thinks it is no more likely that there will be a test than that there will not be. She is neutral. Before bedtime on Sunday, Sleeping Beauty anticipates all this. She worries that she risks not studying or, in any case, not studying as hard as she would if she knew or believed there was a test on Friday.

6.4 The Future of Epistemic Paradoxes

Jon Wynne-Tyson attributes the following quotation to Leonardo da Vinci: "I have learned from an early age to abjure the use of meat, and the time will come when men such as I will look upon the murder of animals as they now look upon the murder of men." (1985, 65) By predicting this progress, Leonardo shows that he already believes that the murder of animals is the same as the murder of men.

There would be no problem if Leonardo thought the moral progress lay in the moral preferability of the vegetarian belief rather than in the truth of the matter. One might admire vegetarianism without accepting the correctness of vegetarianism. But Leonardo is endorsing the correctness of the belief. His prediction embodies a Moorean absurdity. It is like saying ‘Leonardo took twenty-five years to complete The Virgin of the Rocks but I will not believe so until tomorrow’. (This absurdity will prompt some to object that I have uncharitably interpreted Leonardo; he must have intended to make an exception for himself and only be referring to men of his kind.)

I cannot specifically anticipate my first acquisition of the true belief that p. For that prediction would show that I already have the true belief that p. The truth cannot wait. The impatience of the truth imposes a limit on the prediction of discoveries.

Related Entries

fatalism | Fitch's paradox of knowability | logic: epistemic | logic: of belief revision | probability, interpretations of | prophecy | Simpson's paradox | skepticism | vagueness