
Inductive Amnesia:
The Reliability of Iterated Belief Revision
A Table of Opposites

Even                    Odd
Straight                Crooked
Reliability             "Confirmation"
Performance             "Primitive norms"
Correctness             "Coherence"
Classical statistics    Bayesianism
Learning theory         Belief Revision Theory
The Idea

Belief revision is inductive reasoning.
A restrictive norm prevents us from finding truths we could have found by other means.
Some proposed belief revision methods are restrictive.
The restrictiveness is expressed as inductive amnesia.
Inductive Amnesia

No restriction on memory...
No restriction on predictive power...
But prediction causes memory loss...
And perfect memory precludes prediction!
Fundamental dilemma

Outline

I. Seven belief revision methods
II. Belief revision as learning
III. Properties of the methods
IV. The Goodman hierarchy
V. Negative results
VI. Positive results
VII. Discussion
Points of Interest

Strong negative and positive results
Short-run advice from limiting analysis
a = 2 is magic for reliable belief revision
Learning as cube rotation
Grue

Part I
Iterated Belief Revision
Bayesian (Vanilla) Updating

Propositions are sets of "possible worlds".
New evidence E shrinks the prior belief state B to B'.
Perfect memory
No inductive leaps

B' = B * E = B ∩ E
Epistemic Hell

Surprise! New evidence E inconsistent with B leaves vanilla updating with B ∩ E = ∅: epistemic hell.

Where belief-contravening evidence matters:
Scientific revolutions
Suppositional reasoning
Conditional pragmatics
Decision theory
Game theory
Databases
Ordinal Entrenchment
Spohn 88

Epistemic state S maps worlds to ordinals.
Belief state of S: b(S) = S^-1(0).
Determines "centrality" of beliefs.
Model: orders of infinitesimal probability.
[Figure: plausibility levels 0, 1, 2, ..., ω, ω+1, with B = b(S) at level 0]
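A minimal Python sketch of this representation (ours, not the talk's; it assumes finitely many worlds and natural-number ranks, whereas the talk allows arbitrary ordinals):

    # Epistemic state S: a dict from worlds to ranks; 0 = most plausible level.
    def b(S):
        """Belief state of S: the worlds at rank 0, i.e. S^-1(0)."""
        return {w for w, r in S.items() if r == 0}

    S = {"w0": 0, "w1": 1, "w2": 1, "w3": 3}
    assert b(S) == {"w0"}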
Belief Revision Methods

* takes an epistemic state and a proposition to an epistemic state:
S, E  ->  S' = S * E, with new belief state b(S * E).
Spohn Conditioning *C
Spohn 88

New evidence E may contradict b(S); *C conditions an entire entrenchment ordering.
Perfect memory
Inductive leaps
No epistemic hell on consistent sequences
Epistemic hell on inconsistent sequences
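A sketch of *C in the same toy representation (our reading of the picture: non-E worlds are discarded, E-worlds drop rigidly to the bottom):

    def condition(S, E):
        """Spohn conditioning *C: keep only E-worlds, shifted down to rank 0."""
        live = {w: r for w, r in S.items() if w in E}
        if not live:
            raise ValueError("epistemic hell: evidence refutes every world")
        m = min(live.values())
        return {w: r - m for w, r in live.items()}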
Lexicographic Updating *L
Spohn 88, Nayak 94

Lift refuted possibilities above non-refuted possibilities, preserving order.
Perfect memory on consistent sequences
Inductive leaps
No epistemic hell
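A sketch of *L under the same assumptions; each cell keeps its internal order, with the refuted cell stacked on top:

    def lexicographic(S, E):
        """*L: E-worlds drop to the bottom; non-E worlds are lifted above them."""
        Ew   = {w: r for w, r in S.items() if w in E}
        notE = {w: r for w, r in S.items() if w not in E}
        out = {}
        if Ew:
            m = min(Ew.values())
            out = {w: r - m for w, r in Ew.items()}
        top = 1 + max(out.values(), default=-1)
        if notE:
            m = min(notE.values())
            out.update({w: r - m + top for w, r in notE.items()})
        return out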
Minimal or "Natural" Updating *M
Spohn 88, Boutilier 93

Drop the lowest possibilities consistent with the data to the bottom and raise everything else up one notch.
Inductive leaps
No epistemic hell
But...
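A sketch of *M (assuming "one notch" means every other rank increases by 1):

    def minimal(S, E):
        """*M: only the most plausible E-worlds drop to rank 0."""
        m = min(r for w, r in S.items() if w in E)
        return {w: 0 if (w in E and S[w] == m) else S[w] + 1 for w in S}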
Amnesia

What goes up can come down.
Belief no longer entails past data.
[Figure: revising by E and then E' with *M lets a world refuted by E sink back toward the bottom]
The Flush-to-a Method *F,a
Goldszmidt and Pearl 94

Send non-E worlds to a fixed level a and drop E-worlds rigidly to the bottom.
Perfect memory on sequentially consistent data if a is high enough
Inductive leaps
No epistemic hell
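A sketch of *F,a (assumes some world satisfies E):

    def flush(S, E, a):
        """*F,a: E-worlds drop rigidly to rank 0; non-E worlds all go to level a."""
        m = min(r for w, r in S.items() if w in E)
        return {w: S[w] - m if w in E else a for w in S}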
Ordinal Jeffrey Conditioning *J,a
Spohn 88

Drop E-worlds to the bottom. Drop non-E worlds to the bottom and then jack them up to level a.
Perfect memory on consistent sequences if a is large enough
No epistemic hell
Reversible
But...
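A sketch of *J,a; each cell drops rigidly to the bottom, then the refuted cell is raised to level a:

    def jeffrey(S, E, a):
        """*J,a: condition each cell to the bottom, raise the non-E cell to a."""
        inE  = [r for w, r in S.items() if w in E]
        outE = [r for w, r in S.items() if w not in E]
        mE, mN = min(inE, default=0), min(outE, default=0)
        return {w: S[w] - mE if w in E else S[w] - mN + a for w in S}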
Empirical Backsliding

Ordinal Jeffrey conditioning can increase the plausibility of a refuted possibility: a refuted world that sat above level a is pulled back down toward a.
The Ratchet Method *R,a
Darwiche and Pearl 97

Like ordinal Jeffrey conditioning, except refuted possibilities move up by a from their current positions (from b to b + a).
Perfect memory if a is large enough
Inductive leaps
No epistemic hell
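A sketch of *R,a; unlike *J,a, the refuted cell ratchets upward from wherever it sits, so it never backslides:

    def ratchet(S, E, a):
        """*R,a: E-worlds drop rigidly; each non-E world moves up by a."""
        m = min(r for w, r in S.items() if w in E)
        return {w: S[w] - m if w in E else S[w] + a for w in S}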
Part II
Belief Revision as Learning
Iterated Belief Revision

S0 * () = S0
S0 * (E0, ..., En, En+1) = (S0 * (E0, ..., En)) * En+1

[Figure: S0 -> S1 -> S2 under E0, E1, with belief states b(S0), b(S1), b(S2)]
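In the toy representation, iterated revision is just a fold of a one-step operator over the evidence sequence:

    from functools import reduce

    def iterate(revise, S0, evidence):
        """Apply a one-step revision operator along a sequence of propositions."""
        return reduce(revise, evidence, S0)

    # e.g. iterate(lambda S, E: ratchet(S, E, 2), S, [E0, E1, E2])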
A Very Simple Learning Paradigm

A mysterious system emits an outcome sequence: 0 0 1 0 0 ...
e ranges over the possible infinite trajectories; e|n is the initial segment of e up to stage n.
Empirical Propositions

Empirical propositions are sets of possible trajectories.
Some special cases:
[s] = the proposition that s has occurred (the "fan" of trajectories extending s)
[k, n] = the proposition that k occurs at stage n
{e} = the proposition that the future trajectory is exactly e
Trajectory Identification

(*, S0) identifies e iff for all but finitely many n,
b(S0 * ([0, e(0)], ..., [n, e(n)])) = {e}

[Figure: the belief states b(S0), b(S1), b(S2), ... shrink around e; convergence to {e}]
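A toy version of the definition (assumptions ours: worlds and trajectories are finite tuples, and [n, e(n)] is read as the set of worlds agreeing with e at stage n):

    def identifies(revise, S0, e, horizon):
        """Return the stages n < horizon at which belief has collapsed to {e}."""
        S, hits = dict(S0), []
        for n in range(horizon):
            E = {w for w in S if w[n] == e[n]}   # the proposition [n, e(n)]
            S = revise(S, E)
            if b(S) == {e}:
                hits.append(n)
        return hits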
Reliability

Let K be a set of possible outcome trajectories.
(*, S0) identifies K iff (*, S0) identifies each e in K.

Identifiability Characterized

Proposition: K is identifiable just in case K is countable.
Completeness and Restrictiveness
* is complete  each ientifiable K is
identifiable by (*, S0), for some choice of
S 0.
 Else * is restrictive.

Part III
Properties of the Methods
Timidity and Stubbornness

timidity: no inductive leaps without refutation
stubbornness: no retractions without refutation
"Belief is Bayesian in the nonproblematic case"
All the proposed methods are timid and stubborn
Vestige of the dogma that probability rules induction
Local Consistency

Local consistency: the updated belief must always be consistent with the current datum.
All the methods under consideration are designed to be locally consistent.
Positive Order-invariance

Positive order-invariance: rankings among worlds satisfying all the data so far are preserved.
All the methods considered are positively order-invariant.
Data-Retentiveness

Data-retentiveness: each world failing to satisfy some datum is placed above each world satisfying all the data.
Data-retentiveness is sufficient but not necessary for perfect memory.
*C, *L are data-retentive.
*R,a, *J,a are data-retentive if a is above the top of S.
Enumerate and Test

A method enumerates and tests just in case it is:
locally consistent,
positively order-invariant,
data-retentive.
Enumerate-and-test methods: *C, *L; also the methods with parameter a, if a is above the top of S0.
[Figure: an epistemic dump for refuted possibilities sits above a preserved entrenchment ordering on the live possibilities]
Completeness

Proposition: If * enumerates and tests, then * is complete.
Proof: Let S0 be an enumeration of K, and let e be in K.
Feed successive data along e: [0, e(0)], [1, e(1)], ..., [n, e(n)], ...
By data-retentiveness, each refuted rival is lifted into the dump; by local consistency and positive order-invariance, the surviving order is preserved, so e eventually stands uniquely at the bottom. Convergence.
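A toy run of the construction, reusing the sketches above: enumerate a finite stand-in for a countable K as the initial ranking and revise by conditioning.

    K = [(0,0,0,0), (1,0,0,0), (1,1,0,0), (1,1,1,0)]   # toy stand-in for countable K
    S0 = {e: n for n, e in enumerate(K)}               # n-th trajectory at rank n
    print(identifies(condition, S0, (1,1,0,0), horizon=4))   # -> [1, 2, 3]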
Question

What about the methods that aren't data-retentive?
Are they complete?
If not, can they be objectively compared?
Part IV:
The Goodman Hierarchy
The Grue Operation
Nelson Goodman

A way to generate inductive problems of ever higher difficulty.
e ‡ n = (e|n) followed by the reversal of the rest of e: agree with e up to stage n, flip every outcome thereafter.
The operation iterates: e ‡ n, (e ‡ n) ‡ m, ((e ‡ n) ‡ m) ‡ k, ...
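A sketch of the operation, treating a trajectory as a function from stages to outcomes in {0, 1}:

    def grue(e, n):
        """e ‡ n: agree with e before stage n, reverse e from stage n on."""
        return lambda k: e(k) if k < n else 1 - e(k)

    e = lambda k: 0                    # the all-zeros trajectory
    g = grue(grue(e, 3), 5)            # (e ‡ 3) ‡ 5
    print([g(k) for k in range(8)])    # [0, 0, 0, 1, 1, 0, 0, 0]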
The Goodman Hierarchy

Gn(e) = the set of all trajectories you can get by gruing e at up to n positions.
Gn^even(e) = the set of all trajectories you can get by gruing e at an even number of distinct positions up to 2n.
[Figure: the nested classes G0(e), G1(e), G2(e), G3(e) and G0^even(e), G1^even(e)]
The Goodman Limit
Gw(e) = n Gn(e)
 Gweven(e) = n Gneven (e)
 Proposition: Gweven(e) = the set of all finite
variants of e

The Goodman Spectrum

              Min   Flush      Jeffrey   Ratch    Lex   Cond
Gω(e)         no    a = ω      a = 2     a = 2    yes   yes
Gn(e), n > 2  no    a = n + 1  a = 2     a = 2    yes   yes
G2(e)         no    a = 3      a = 2     a = 2    yes   yes
G1(e)         no    a = 2      a = 2     a = 1    yes   yes
G0(e)         yes   a = 0      a = 0     a = 0    yes   yes
The Even Goodman Spectrum

              Min   Flush      Jeffrey   Ratch    Lex   Cond
Gω^even(e)    no    a = ω      a = 1     a = 1    yes   yes
Gn^even(e)    no    a = n + 1  a = 1     a = 1    yes   yes
G2^even(e)    no    a = 3      a = 1     a = 1    yes   yes
G1^even(e)    no    a = 2      a = 1     a = 1    yes   yes
G0^even(e)    yes   a = 0      a = 0     a = 0    yes   yes
Part V:
Negative Results
Epistemic Duality

[Spectrum: Popperian "conjectures and refutations" at one extreme, Bayesian "tabula rasa" at the other]

Epistemic Extremes

[Spectrum: at small a, a method projects the future but may forget; at large a, it has perfect memory but makes no projections; *J,2 (a = 2) sits in between]
Opposing Epistemic Pressures

rarefaction for inductive leaps
compression for memory
Identification requires both.
Is there a critical value of a for which they can be balanced for a given problem K?
Methods *S,1; *M Fail on G1(e)
(see the Goodman spectrum above)
Proof: Suppose otherwise.
Feed e until e is uniquely at the bottom (possible by the well-ordering condition, else...).
Now feed e' forever. By stage n, the picture is the same (positive order-invariance; timidity and stubbornness).
At stage n + 1, e stays at the bottom (timid and stubborn).
So e' can't travel down (definitions of the rules).
e'' doesn't rise (definitions of the rules).
Now e'' makes it to the bottom at least as soon as e', so the method fails on one of the two.
Method *R,1 Fails on G2(e)
(see the Goodman spectrum above)
with Oliver Schulte

Proof: Suppose otherwise.
Bring e uniquely to the bottom, say at stage k.
Start feeding a = e ‡ k.
By some stage k', a is uniquely down. So between k + 1 and k', there is a first stage j when no finite variant of e is at the bottom.
Let c in G2(e) be a finite variant of e that rises to level 1 at j. So c(j - 1) ≠ a(j - 1).
Let d be a up to j and e thereafter, so d is in G2(e). Since d differs from e, d is at least as high as level 1 at j.
Show: c agrees with e after j.
Case j = k + 1: then c could have been chosen as e, since e is uniquely at the bottom at k.
Case j > k + 1: then c wouldn't have been at the bottom if it hadn't agreed with a (disagreed with e), so c has already used up its two grues against e.
Feed c forever after. By positive invariance, the method either never projects c or forgets the refutation of c at j - 1.
The Internal Problem of Induction

Necessary condition for success by positively order-invariant methods: no data stream is a k-limit point of data streams as low as it after it has been presented for k steps.
[Figure: a "bad" configuration and a "good" one]
Corollary: Stacking Lemma

Necessary condition for identification of Gn+1(e) by positively order-invariant methods:
If e is at the bottom level after being presented up to stage k, then some data stream e' in Gn+1(e) - Gn(e) agreeing with the data so far is at least at level n + 1.
Even Stacking Lemma

Similarly for Gn+1^even(e).
Method *F,n Fails on Gn(e)
(see the Goodman spectrum above)
Proof for a = 4: Suppose otherwise.
Bring e uniquely to the bottom.
Apply the stacking lemma: let e' be in G4(e) - G3(e) at or above level 4.
Let e'' be the same except at the first place k where e' differs from e.
Feed e' forever after.
Timidity, stubbornness and positive invariance hold the picture fixed up to k.
Ouch! Positive invariance, timidity and stubbornness, and a = 4.

Method *F,n Fails on Gn^even(e)

Same, using the even stacking lemma.
Part VI:
Positive Results
How to Program Epistemic States

Hamming distance:
D(e, e') = {n: e(n) ≠ e'(n)}
ρ(e, e') = |D(e, e')|
[Figure: two trajectories with ρ(e, e') = 9]
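A sketch on a finite horizon, reusing e and grue from the sketch in Part IV (for the finite variants at issue, the count stabilizes):

    def hamming(e, f, horizon):
        """|{n < horizon : e(n) != f(n)}|."""
        return sum(1 for n in range(horizon) if e(n) != f(n))

    print(hamming(e, grue(e, 3), 10))   # e flipped from stage 3 on -> 7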
Hamming Algebra

a ≤H b mod e iff D(e, a) ⊆ D(e, b)
[Figure: the Hamming ordering of the strings 000, 001, ..., 111 over e = 000: a cube with 000 at the bottom and 111 at the top]
Epistemic States as Boolean Ranks

[Figure: the levels of the Hamming algebra rooted at e, taken as ranks, exhaust Gω^even(e)]
Advantage of Hamming Rank

No violations of the limit point condition over finite levels: nobody lower can match this.
*R,1, *J,1 Can Identify Gω^even(e)
(see the even Goodman spectrum above)
Proof: Let S be generated by the finite ranks of the Hamming algebra rooted at e.
Let a be an arbitrary element of Gω^even(e), so a is at a finite level of S.
Consider the principal ideal of a: these are the possibilities that differ from e only where a does.
So these are the possibilities that have just one difference from the truth for each level below the truth.
Induction = hypercube rotation.
[Figure: as data along a arrive, the cube rotates until a sits at the bottom; convergence]
Other possibilities have more than enough differences to climb above the truth.
*J,1 doesn't backslide, since the rotation keeps refuted possibilities rising.
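A toy check with the earlier sketches (assumptions ours: 6-bit tuples stand in for trajectories, ranked by Hamming distance from the all-zeros truth):

    from itertools import product

    worlds = list(product((0, 1), repeat=6))
    S0 = {w: sum(w) for w in worlds}        # Hamming rank rooted at e = 000000
    a = (1, 1, 0, 0, 0, 0)                  # an even (two-flip) variant of e
    print(identifies(lambda S, E: ratchet(S, E, 1), S0, a, horizon=6))  # -> [1, 2, 3, 4, 5]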
*R,2 is Complete

Proposition: *R,2 is a complete function identifier.
Proof: Let K be countable.
Partition K into finite variant classes C0, C1, ..., Cn, ...
Impose the Hamming distance ranking on each equivalence class, and raise the nth Hamming ranking by n; else we might generate horizontal limit points.
Data streams in different columns differ infinitely often from the truth: zoom!
Data streams in the same column just barely make it, because they jump by 2 for each difference from the truth.
Convergence at least by the stage when 2m differences from e have been observed for each ei below e that is not in the column of e.
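A sketch of the initial state this construction builds (representation ours: one representative per finite-variant class):

    def r2_initial_state(classes):
        """classes: list of (representative, members) pairs, one per variant
        class; Hamming-rank each class, then raise the n-th class by n."""
        S = {}
        for n, (rep, members) in enumerate(classes):
            for w in members:
                S[w] = n + sum(1 for x, y in zip(w, rep) if x != y)
        return S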
How about *J,2?

The same thing works, right?
The Wrench in the Works

Proposition: Even *J,2 can't succeed if we add ¬e to Gω^even(e) when S extends the Hamming ranking.
Proof: Suppose otherwise.
Feed ¬e until it is uniquely at the bottom, say by stage k.
Let n exceed k and the original height of ¬e; put a = ¬e ‡ n and b = ¬e ‡ (n + 1).
By positive invariance, timidity and stubbornness, the picture stays fixed while ¬e is fed.
By positive invariance, timidity, stubbornness and the fact that ¬e was alone in the basement... Ouch!!!
Solution: A Different Initial State

Goodman distance:
G(e, e') = {n: e' grues e at n}
g(e, e') = |G(e, e')|
[Figure: two trajectories with g(e, e') = 6]
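A sketch on a finite horizon: count the stages where agreement with e flips (a disagreement at stage 0 counts as a grue at 0):

    def goodman(e, f, horizon):
        """Number of tail reversals of e needed to reach f, on a finite horizon."""
        grues, agree = 0, True
        for n in range(horizon):
            now = (e(n) == f(n))
            if now != agree:
                grues += 1
            agree = now
        return grues

    print(goodman(e, grue(grue(e, 3), 5), 10))   # two tail reversals -> 2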
Hamming vs. Goodman Algebras

a ≤H b mod e iff D(e, a) ⊆ D(e, b)
a ≤G b mod e iff G(e, a) ⊆ G(e, b)
[Figure: the Hamming and Goodman orderings of the strings 000, ..., 111]
Epistemic States as Boolean Ranks

[Figure: the Hamming algebra rooted at e exhausts Gω^even(e), with Gω^odd(e) out of reach at the top; the Goodman algebra rooted at e exhausts Gω(e), with level n comprising the n-grue variants of e from G3(e), G2(e), G1(e), G0(e)]
*J,2 Can Identify Gω(e)
(see the Goodman spectrum above)
Proof: Use the Goodman ranking as the initial state.
Show by induction that the method projects e until a grue occurs at n, then projects e ‡ n, until another grue occurs at n', etc.
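A toy check (assumptions as before: 6-bit tuples; initial rank = Goodman distance from the all-zeros truth):

    from itertools import product

    def goodman_rank(w, rep):
        """Count agreement flips of w against rep (finite Goodman distance)."""
        agree, flips = True, 0
        for x, y in zip(w, rep):
            now = (x == y)
            if now != agree:
                flips += 1
            agree = now
        return flips

    e6 = (0,) * 6
    S0 = {w: goodman_rank(w, e6) for w in product((0, 1), repeat=6)}
    target = (0, 0, 1, 1, 0, 0)          # e grued at stage 2 and again at 4
    print(identifies(lambda S, E: jeffrey(S, E, 2), S0, target, horizon=6))  # -> [4, 5]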
Part VII:
Discussion
Summary

Belief revision as inductive inquiry
Reliability vs. intuitive symmetries
Intuitive symmetries imply reliability for large a
Intuitive symmetries restrict reliability for small a
Sharp discriminations among proposed methods
Isolation of fundamental epistemic dilemma
a = 2 as fundamental epistemic invariant
Learning as cube rotation
Surprising relevance of tail reversals