Stanford Encyclopedia of Philosophy

Supplement to Inductive Logic

Proof of the Falsification Theorem

Likelihood Ratio Convergence Theorem 1—The Falsification Theorem:
Suppose the evidence stream cn contains precisely m experiments or observations on which hj is not fully outcome-compatible with hi. And suppose that the Independent Evidence Conditions hold for evidence stream cn with respect to each of these hypotheses. Furthermore, suppose there is a lower bound δ > 0 such that for each ck on which hj is not fully outcome-compatible with hi, P[∨{oku : P[oku | hj·b·ck] = 0}   |   hi·b·ck]  ≥  δ; i.e., hi (together with b·ck) says, via a likelihood with value no smaller than δ, that one of the outcomes will occur that hj says cannot occur. Then,

P[∨{en : P[en | hj·b·cn]/P[en | hi·b·cn] = 0}   |   hi·b·cn]

    =     P[∨{en : P[en | hj·b·cn] = 0}   |   hi·b·cn]

    ≥     1 − (1−δ)^m,

which approaches 1 for large m.
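To see how quickly the lower bound 1 − (1−δ)^m approaches 1, here is a small numerical sketch in Python (the value δ = 0.1 and the sample sizes m below are illustrative choices, not from the text):

```python
# Lower bound from the Falsification Theorem: the probability (on h_i) that
# the evidence stream yields an outcome falsifying h_j is at least
# 1 - (1 - delta)**m, which approaches 1 as m grows.

def falsification_bound(delta: float, m: int) -> float:
    """Return 1 - (1 - delta)**m, the theorem's lower bound."""
    return 1.0 - (1.0 - delta) ** m

# Illustrative values: delta = 0.1 and a few counts m of falsifying experiments.
for m in (1, 10, 50, 100):
    print(m, falsification_bound(0.1, m))
```

Even with a likelihood gap as small as δ = 0.1, fifty such experiments already push the bound above 0.99.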

Proof

First notice that, according to the supposition of the theorem, for each of the m experiments or observations ck on which hj is not fully outcome-compatible with hi we have

(1−δ)   ≥   P[∨{oku : P[oku | hj·b·ck] > 0}   |   hi·b·ck]
   =   ∑{oku ∈ Ok : P[oku | hj·b·ck] > 0}   P[oku | hi·b·ck],

since the disjunction of outcomes to which hj assigns likelihood 0 is given likelihood at least δ by hi (together with b·ck).

And for each of the other ck in the evidence stream cn—i.e. for each of the ck on which hj is fully outcome-compatible with hi,

P[∨{oku : P[oku | hj·b·ck] > 0}   |   hi·b·ck]  =  1.

Then we may iteratively decompose P[∨{en : P[en | hj·b·cn] > 0}   |   hi·b·cn] into its components as follows, where en−1 is the sequence of the first n−1 outcomes of en, on is its nth outcome, and cn−1 is the stream consisting of the first n−1 experiments of cn:

P[∨{en : P[en | hj·b·cn] > 0}   |   hi·b·cn]
=   ∑{en : P[en | hj·b·cn] > 0}   P[en | hi·b·cn]
=   ∑{en : P[on | hj·b·cn·en−1] × P[en−1 | hj·b·cn] > 0}   P[on | hi·b·cn·en−1] × P[en−1 | hi·b·cn]
        (by the product rule)
=   ∑{en : P[on | hj·b·cn] × P[en−1 | hj·b·cn−1] > 0}   P[on | hi·b·cn] × P[en−1 | hi·b·cn−1]
        (by the Independent Evidence Conditions)
=   ∑{en : P[on | hj·b·cn] > 0 & P[en−1 | hj·b·cn−1] > 0}   P[on | hi·b·cn] × P[en−1 | hi·b·cn−1]
=   ∑{en−1 : P[en−1 | hj·b·cn−1] > 0}   ∑{onu ∈ On : P[onu | hj·b·cn] > 0}   P[onu | hi·b·cn] × P[en−1 | hi·b·cn−1]
=   ∑{en−1 : P[en−1 | hj·b·cn−1] > 0}   P[∨{onu : P[onu | hj·b·cn] > 0}   |   hi·b·cn]  ×  P[en−1 | hi·b·cn−1]
≤   (1−δ) × ∑{en−1 : P[en−1 | hj·b·cn−1] > 0}   P[en−1 | hi·b·cn−1],
        if cn is an observation on which hj is not fully outcome-compatible with hi,
or
=   ∑{en−1 : P[en−1 | hj·b·cn−1] > 0}   P[en−1 | hi·b·cn−1],
        if cn is an observation on which hj is fully outcome-compatible with hi.
Continuing this process of decomposing terms of the form ∑{ek : P[ek | hj·b·ck] > 0}   P[ek | hi·b·ck] (in each disjunct of the ‘or’ above, using the same decomposition process shown in the lines preceding that disjunction), and noting that, according to the supposition of the theorem, this decomposition produces a factor of the form of the first disjunct exactly m times, we get

P[∨{en : P[en | hj·b·cn] > 0}   |   hi·b·cn]   ≤   Π(k = 1 to m) (1−δ)   =   (1−δ)^m.
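The decomposition can be checked exactly by brute-force enumeration on a toy model. In the sketch below (a hypothetical example: the three-outcome space and the particular likelihood values are assumptions, not from the text), hj assigns likelihood 0 to one outcome of each of m independent experiments while hi assigns that outcome likelihood δ; summing the hi-likelihoods over the hj-compatible outcome sequences then yields exactly (1−δ)^m:

```python
# Exact check of the decomposition on a toy model (illustrative assumptions).
# Each of m experiments has outcomes {0, 1, 2}; h_j says outcome 2 cannot
# occur, while h_i gives it probability delta on every trial.
from itertools import product

delta = 0.2
m = 5
p_hi = {0: 0.5, 1: 0.3, 2: delta}   # likelihoods under h_i (sum to 1)
p_hj = {0: 0.6, 1: 0.4, 2: 0.0}     # likelihoods under h_j (outcome 2 impossible)

# h_i-probability of the disjunction of sequences e^n with P[e^n | h_j.b.c^n] > 0
prob_compatible = 0.0
for seq in product((0, 1, 2), repeat=m):
    p_j = 1.0
    p_i = 1.0
    for o in seq:
        p_j *= p_hj[o]
        p_i *= p_hi[o]
    if p_j > 0:
        prob_compatible += p_i

print(prob_compatible)       # equals (1 - delta)**m
print(1 - prob_compatible)   # equals 1 - (1 - delta)**m
```

Here every experiment is a falsifying one, so the inequality of the theorem holds with equality: the hj-compatible sequences get hi-probability (1−δ)^m, and the falsifying disjunction gets 1 − (1−δ)^m.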

So,

P[∨{en : P[en | hj·b·cn] = 0}   |   hi·b·cn]
  =  1 −  P[∨{en : P[en | hj·b·cn] > 0}   |   hi·b·cn]   ≥   1 − (1−δ)^m.

We also have,

P[∨{en : P[en | hj·b·cn]/P[en | hi·b·cn] = 0} | hi·b·cn]
  =  P[∨{en : P[en | hj·b·cn] = 0} | hi·b·cn],

because

P[∨{en : P[en | hj·b·cn]/P[en | hi·b·cn] > 0}   |   hi·b·cn]
=   ∑{en : P[en | hj·b·cn]/P[en | hi·b·cn] > 0}   P[en | hi·b·cn]
=   ∑{en : P[en | hj·b·cn] > 0 & P[en | hi·b·cn] > 0}   P[en | hi·b·cn]
=   ∑{en : P[en | hj·b·cn] > 0}   P[en | hi·b·cn]
        (since terms with P[en | hi·b·cn] = 0 contribute nothing to the sum)
=   P[∨{en : P[en | hj·b·cn] > 0}   |   hi·b·cn].
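This final identity can likewise be checked by enumeration on a toy model (again with assumed, illustrative likelihoods): sequences en with P[en | hi·b·cn] = 0 contribute nothing to either sum, so the set of sequences where the likelihood ratio is well defined and equal to 0 gets the same hi-probability as the full set where P[en | hj·b·cn] = 0:

```python
# Toy check (illustrative probabilities) that, under h_i, the disjunction of
# sequences with likelihood ratio P[e|h_j..]/P[e|h_i..] = 0 has the same
# probability as the disjunction with P[e|h_j..] = 0: sequences to which
# h_i itself assigns probability 0 contribute nothing to either sum.
from itertools import product

p_hi = {0: 0.7, 1: 0.3, 2: 0.0}   # h_i rules out outcome 2
p_hj = {0: 0.5, 1: 0.0, 2: 0.5}   # h_j rules out outcome 1

def seq_prob(seq, p):
    """Likelihood of an outcome sequence as a product of per-trial likelihoods."""
    out = 1.0
    for o in seq:
        out *= p[o]
    return out

n = 4
# Ratio defined (h_i-likelihood positive) and equal to zero:
ratio_zero = sum(seq_prob(s, p_hi) for s in product((0, 1, 2), repeat=n)
                 if seq_prob(s, p_hj) == 0 and seq_prob(s, p_hi) > 0)
# Numerator zero, regardless of the h_i-likelihood:
numer_zero = sum(seq_prob(s, p_hi) for s in product((0, 1, 2), repeat=n)
                 if seq_prob(s, p_hj) == 0)
print(ratio_zero, numer_zero)   # the two sums agree
```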
