[likelihood_bayes] Update Two Likelihood Lectures (#506)
* Tom's July 31 edits of likelihood and bayes law lecture
* Tom's edits of Blume Easley section July 31
* updates
* minor updates
* Tom's Aug 2 edits of likelihood ratio lecture, especially Blume Easley model
* minor typo fixes
* fix minor typos
* update labels
---------
Co-authored-by: thomassargent30 <[email protected]>
We thus conclude that the likelihood ratio process is a key ingredient of the formula {eq}`eq_Bayeslaw1033` for
- a Bayesian's posterior probabilty that nature has drawn history $w^t$ as repeated draws from density
+ a Bayesian's posterior probability that nature has drawn history $w^t$ as repeated draws from density
$f$.
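To make the role of the likelihood ratio process concrete, here is a minimal Python sketch, assuming the posterior takes the standard form $\pi_t = \pi_0 L_t / (\pi_0 L_t + 1 - \pi_0)$ with $L_t = \prod_{i=1}^{t} f(w_i)/g(w_i)$ (formula {eq}`eq_Bayeslaw1033` itself is not reproduced in this hunk); the Beta densities are illustrative assumptions, not necessarily the lecture's choices.

```python
import numpy as np
from scipy.stats import beta

# Illustrative densities -- these Beta parameters are our own assumption.
f = beta(1, 1).pdf
g = beta(3, 1.2).pdf

def posterior(w_hist, pi_prior=0.5):
    """Posterior probability that nature drew w^t i.i.d. from f, assuming the
    standard form pi_t = pi_prior * L_t / (pi_prior * L_t + 1 - pi_prior),
    where L_t is the cumulative likelihood ratio prod_i f(w_i) / g(w_i)."""
    L_t = np.prod(f(w_hist) / g(w_hist))
    return pi_prior * L_t / (pi_prior * L_t + 1 - pi_prior)

w_hist = beta(1, 1).rvs(size=50, random_state=0)   # pretend nature drew from f
print(posterior(w_hist))
```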
@@ -425,8 +446,11 @@ Until now we assumed that before time $1$ nature somehow chose to draw $w^t$ as
Nature's decision about whether to draw from $f$ or $g$ was thus **permanent**.
- We now assume a different timing protocol in which before **each period** $t =1, 2, \ldots$ nature flips an $x$-weighted coin and with probability
- $x \in (0,1)$ draws from $f$ in period $t$ and with probability $1 - x $ draws from $g$.
+ We now assume a different timing protocol in which before **each period** $t =1, 2, \ldots$ nature
+
+ * flips an $x$-weighted coin, then
+ * draws from $f$ if it has drawn a "head"
+ * draws from $g$ if it has drawn a "tail".
Under this timing protocol, nature draws permanently from **neither** $f$ **nor** $g$, so a statistician who thinks that nature is drawing
i.i.d. draws **permanently** from one of them is mistaken.
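A minimal sketch of this per-period timing protocol; the Beta densities and the mixing weight $x$ below are illustrative assumptions of ours rather than the lecture's.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(seed=1234)

# Assumed densities f and g, for illustration only.
f_dist, g_dist = beta(1, 1), beta(3, 1.2)

def draw_mixture(T, x):
    """Before each period t, flip an x-weighted coin: draw w_t from f on a
    'head' (probability x) and from g on a 'tail' (probability 1 - x)."""
    heads = rng.random(T) < x
    w_f = f_dist.rvs(size=T, random_state=rng)
    w_g = g_dist.rvs(size=T, random_state=rng)
    return np.where(heads, w_f, w_g)

w = draw_mixture(T=1000, x=0.5)   # x = 0.5 is an arbitrary illustrative choice
```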
@@ -479,7 +503,7 @@ Let's generate a sequence of observations from this mixture model with a true mi
We will first use this sequence to study how $\pi_t$ behaves.
```{note}
- Later, we can use it to study how a statistician who knows that an $x$-mixture of $f$ and $g$ could construct maximum likelihood or Bayesian estimators of $x$ along with the free parameters of $f$ and $g$.
+ Later, we can use it to study how a statistician who knows that nature generates data from an $x$-mixture of $f$ and $g$ could construct maximum likelihood or Bayesian estimators of $x$ along with the free parameters of $f$ and $g$.
Since $KL(m, f) < KL(m, g)$, $f$ is "closer" to the mixture distribution $m$.
Hence by our discussion on KL divergence and likelihood ratio process in
- {doc}`likelihood_ratio_process`, $log(L_t) \to \infty$ as $t \to \infty$.
+ {doc}`likelihood_ratio_process`, $\log(L_t) \to \infty$ as $t \to \infty$.
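As a numerical check on the claim that $f$ is "closer" to the mixture $m$ when $KL(m, f) < KL(m, g)$, here is a small sketch; the mixture weight and Beta densities are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad

# Assumed ingredients, for illustration only.
x = 0.5
f, g = beta(1, 1).pdf, beta(3, 1.2).pdf
m = lambda w: x * f(w) + (1 - x) * g(w)        # the mixture density

def kl(p, q):
    """KL(p, q) = integral of p(w) * log(p(w) / q(w)) over (0, 1)."""
    integrand = lambda w: p(w) * np.log(p(w) / q(w))
    return quad(integrand, 1e-8, 1 - 1e-8)[0]

# Whichever divergence is smaller identifies the density that m is "closer" to;
# when KL(m, f) < KL(m, g), log(L_t) diverges to +infinity.
print(kl(m, f), kl(m, g))
```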
Now looking back to the key equation {eq}`eq_Bayeslaw1033`.
@@ -611,7 +635,7 @@ The worker's initial beliefs induce a joint probability distribution
Bayes' law is simply an application of laws of
probability to compute the conditional distribution of the $t$th draw $w_t$ conditional on $[w_0, \ldots, w_{t-1}]$.
- After our worker puts a subjective probability $\pi_{-1}$ on nature having selected distribution $F$, we have in effect assumes from the start that the decision maker **knows** the joint distribution for the process $\{w_t\}_{t=0}$.
+ After our worker puts a subjective probability $\pi_{-1}$ on nature having selected distribution $F$, we have in effect assumed from the start that the decision maker **knows** the joint distribution for the process $\{w_t\}_{t=0}$.
We assume that the worker also knows the laws of probability theory.
@@ -632,7 +656,7 @@ $$
Let $a \in \{ f, g\} $ be an index that indicates whether nature chose permanently to draw from distribution $f$ or from distribution $g$.
After drawing $w_0$, the worker uses Bayes' law to deduce that
- the posterior probability $\pi_0 = {\rm Prob}({a = f | w_0}) $
+ the posterior probability $\pi_0 = {\rm Prob}({a = f | w_0}) $
that the density is $f(w)$ is
$$
@@ -691,7 +715,7 @@ Because $\{\pi_t\}$ is a bounded martingale sequence, it follows from the **mart
Practically, this means that probability one is attached to sample paths
$\{\pi_t\}_{t=0}^\infty$ that converge.
- According to the theorem, it different sample paths can converge to different limiting values.
+ According to the theorem, different sample paths can converge to different limiting values.
Thus, let $\{\pi_t(\omega)\}_{t=0}^\infty$ denote a particular sample path indexed by a particular $\omega
\in \Omega$.
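A hedged sketch of this point: under the worker's subjective distribution, nature first chooses $f$ with probability equal to the prior and then draws i.i.d. from the chosen density; updating by Bayes' law via the recursion $\pi_t = \pi_{t-1}\ell(w_t)/(\pi_{t-1}\ell(w_t) + 1 - \pi_{t-1})$ with $\ell = f/g$ then produces sample paths that each converge, but to limits that can differ across paths. The densities and prior below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(seed=7)
f, g = beta(1, 1), beta(3, 1.2)              # assumed densities
ell = lambda w: f.pdf(w) / g.pdf(w)          # one-period likelihood ratio

def pi_path(T=500, pi_prior=0.5):
    """One sample path of the posterior under the subjective joint distribution:
    nature picks f with probability pi_prior, then draws i.i.d. from its choice."""
    truth = f if rng.random() < pi_prior else g
    w = truth.rvs(size=T, random_state=rng)
    path = np.empty(T)
    p = pi_prior
    for t in range(T):
        l = ell(w[t])
        p = p * l / (p * l + 1 - p)          # recursive Bayes update
        path[t] = p
    return path

# Each path converges, but different omegas can deliver different limits.
print([round(pi_path()[-1], 3) for _ in range(5)])
```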
@@ -908,7 +932,7 @@ $w_t$'s and the $\pi_t$ sequences that gave rise to them.
Notice that one of the paths involves systematically higher $w_t$'s, outcomes that push $\pi_t$ upward.
- The luck of the draw early in a simulation push the subjective distribution to draw from
+ The luck of the draw early in a simulation pushes the subjective distribution to draw from
$F$ more frequently along a sample path, and this pushes $\pi_t$ toward $0$.
```{code-cell} ipython3
@@ -938,7 +962,7 @@ In the following table, the left column in bold face reports an assumed value of
The second column reports the fraction of $N = 10000$ simulations for which $\pi_{t}$ had converged to $0$ at the terminal date $T=500$ for each simulation.
- The third column reports the fraction of $N = 10000$ simulations for which $\pi_{t}$ had converged to $1$ as the terminal date $T=500$ for each simulation.
+ The third column reports the fraction of $N = 10000$ simulations for which $\pi_{t}$ had converged to $1$ at the terminal date $T=500$ for each simulation.
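A hedged sketch of how such a table could be produced; the Beta densities, grid of prior values, and convergence tolerance below are illustrative assumptions, not the lecture's actual choices.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(seed=42)
f, g = beta(1, 1), beta(3, 1.2)                  # assumed densities

def terminal_pi(pi_prior, T=500):
    """Posterior at date T under the subjective distribution: nature picks f
    with probability pi_prior, draws i.i.d., and Bayes' law is applied in log space."""
    truth = f if rng.random() < pi_prior else g
    w = truth.rvs(size=T, random_state=rng)
    logL = np.sum(np.log(f.pdf(w)) - np.log(g.pdf(w)))
    log_num = np.log(pi_prior) + logL
    return np.exp(log_num - np.logaddexp(log_num, np.log(1 - pi_prior)))

N, tol = 10_000, 1e-3
for pi_prior in (0.25, 0.5, 0.75):               # assumed grid of priors
    ends = np.array([terminal_pi(pi_prior) for _ in range(N)])
    print(pi_prior, np.mean(ends < tol), np.mean(ends > 1 - tol))
```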