1. Bayes' Theorem
1.1. Bayes' theorem

$$p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)}$$

1.2. Prior predictive distribution

$$p(y) = \int p(y \mid \theta)\, p(\theta)\, d\theta$$

1.3. Posterior predictive distribution

$$p(\tilde{y} \mid y) = \int p(\tilde{y} \mid \theta)\, p(\theta \mid y)\, d\theta$$
2. Fundamental Distributions
| Name | PDF/PMF | Mean | Variance | Mode |
|---|---|---|---|---|
| Bernoulli$(p)$ | $p^y (1-p)^{1-y}$ | $p$ | $p(1-p)$ | $1$ if $p > 1/2$, else $0$ |
| Binomial$(n, p)$ | $\binom{n}{y} p^y (1-p)^{n-y}$ | $np$ | $np(1-p)$ | $\lfloor (n+1)p \rfloor$ |
| Geometric$(p)$ | $(1-p)^{y-1} p$ | $1/p$ | $(1-p)/p^2$ | $1$ |
| Poisson$(\lambda)$ | $\lambda^y e^{-\lambda} / y!$ | $\lambda$ | $\lambda$ | $\lfloor \lambda \rfloor$ |
| Uniform$(a, b)$ | $1/(b-a)$ | $(a+b)/2$ | $(b-a)^2/12$ | any in $[a, b]$ |
| Normal$(\mu, \sigma^2)$ | $\frac{1}{\sqrt{2\pi}\,\sigma} e^{-(y-\mu)^2 / 2\sigma^2}$ | $\mu$ | $\sigma^2$ | $\mu$ |
| Exponential$(\lambda)$ | $\lambda e^{-\lambda y}$ | $1/\lambda$ | $1/\lambda^2$ | $0$ |
| Gamma$(\alpha, \beta)$ | $\frac{\beta^\alpha}{\Gamma(\alpha)} y^{\alpha-1} e^{-\beta y}$ | $\alpha/\beta$ | $\alpha/\beta^2$ | $(\alpha-1)/\beta$ for $\alpha \ge 1$ |
| Inverse-Gamma$(\alpha, \beta)$ | $\frac{\beta^\alpha}{\Gamma(\alpha)} y^{-\alpha-1} e^{-\beta/y}$ | $\frac{\beta}{\alpha-1}$ for $\alpha > 1$ | $\frac{\beta^2}{(\alpha-1)^2(\alpha-2)}$ for $\alpha > 2$ | $\frac{\beta}{\alpha+1}$ |
| Beta$(\alpha, \beta)$ | $\frac{y^{\alpha-1}(1-y)^{\beta-1}}{B(\alpha, \beta)}$ | $\frac{\alpha}{\alpha+\beta}$ | $\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$ | $\frac{\alpha-1}{\alpha+\beta-2}$ for $\alpha, \beta > 1$ |
| Chi-squared$(k)$ | $\frac{y^{k/2-1} e^{-y/2}}{2^{k/2}\,\Gamma(k/2)}$ | $k$ | $2k$ | $\max(k-2,\, 0)$ |
| Student-$t(\nu)$ | $\frac{\Gamma(\frac{\nu+1}{2})}{\sqrt{\nu\pi}\,\Gamma(\frac{\nu}{2})} \left(1 + \frac{y^2}{\nu}\right)^{-\frac{\nu+1}{2}}$ | $0$ for $\nu > 1$ | $\frac{\nu}{\nu-2}$ for $\nu > 2$ | $0$ |
| Laplace$(\mu, b)$ | $\frac{1}{2b} e^{-\lvert y - \mu \rvert / b}$ | $\mu$ | $2b^2$ | $\mu$ |

Table 1: Single Variate Distributions

3. Functions
3.1. Beta Function

$$B(\alpha, \beta) = \int_0^1 t^{\alpha-1} (1-t)^{\beta-1} \, dt$$

Properties:

- $B(\alpha, \beta) = \dfrac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}$
- $B(\alpha, \beta) = B(\beta, \alpha)$ (symmetry)
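The identity $B(\alpha, \beta) = \Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha+\beta)$ can be checked numerically against the integral definition of the Beta function. A minimal sketch using only the Python standard library (the midpoint-rule helper is illustrative):

```python
import math

def beta_fn(a, b):
    # B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def beta_integral(a, b, n=200_000):
    # Midpoint rule for the integral definition of B(a, b) on (0, 1).
    h = 1.0 / n
    return h * sum(
        ((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
        for i in range(n)
    )

print(beta_fn(3, 4))        # 1/60 ≈ 0.01667
print(beta_integral(3, 4))  # close to the same value
```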
4. Conjugate Prior
The idea of a conjugate prior is that, for a given likelihood, we choose a prior distribution such that, after observing data and applying Bayes' theorem, the posterior distribution belongs to the same family as the prior.
That is, if $p(\theta)$ and $p(\theta \mid y)$ have the same distributional form, then the prior is called a conjugate prior for the likelihood model.
This is useful because it makes Bayesian updating analytically tractable. Instead of performing difficult integration or numerical approximation, we can often derive the posterior parameters in closed form.
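For example, a binomial likelihood with a Beta prior gives a Beta posterior: observing $k$ successes in $n$ trials updates Beta$(\alpha, \beta)$ to Beta$(\alpha + k,\; \beta + n - k)$. A minimal sketch (function and variable names are illustrative):

```python
def beta_binomial_update(alpha, beta, k, n):
    """Conjugate update: Beta(alpha, beta) prior + binomial likelihood
    with k successes in n trials -> Beta posterior, in closed form."""
    return alpha + k, beta + (n - k)

# Start from a uniform Beta(1, 1) prior and observe 7 successes in 10 trials.
post_a, post_b = beta_binomial_update(1, 1, k=7, n=10)
post_mean = post_a / (post_a + post_b)  # Beta mean: a / (a + b)
print(post_a, post_b, post_mean)  # 8 4 ≈ 0.667
```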
5. Conjugate Prior for Exponential Families
Note the general exponential family form:

$$p(y \mid \theta) = h(y)\, g(\theta) \exp\!\left(\eta(\theta)^\top T(y)\right)$$

Likelihood of a sequence of $n$ i.i.d. samples:

$$p(\mathbf{y} \mid \theta) = \left(\prod_{i=1}^{n} h(y_i)\right) g(\theta)^{n} \exp\!\left(\eta(\theta)^\top \sum_{i=1}^{n} T(y_i)\right)$$

So a conjugate prior for that likelihood is

$$p(\theta \mid \chi, \nu) \propto g(\theta)^{\nu} \exp\!\left(\eta(\theta)^\top \chi\right)$$

The posterior is

$$p(\theta \mid \mathbf{y}, \chi, \nu) \propto g(\theta)^{\nu + n} \exp\!\left(\eta(\theta)^\top \left(\chi + \sum_{i=1}^{n} T(y_i)\right)\right)$$
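As a concrete instance, the Poisson likelihood is an exponential family with sufficient statistic $T(y) = y$, and a Gamma$(a, b)$ prior is conjugate: the posterior is Gamma$(a + \sum_i y_i,\; b + n)$, so the shape absorbs the summed sufficient statistic and the rate absorbs the sample size. A sketch (names are illustrative):

```python
def gamma_poisson_update(a, b, ys):
    """Gamma(a, b) prior + Poisson likelihood -> Gamma posterior.
    Shape gains the summed sufficient statistic, rate gains the count."""
    return a + sum(ys), b + len(ys)

post_a, post_b = gamma_poisson_update(2.0, 1.0, [3, 0, 2, 4])
post_mean = post_a / post_b  # Gamma mean: a / b
print(post_a, post_b, post_mean)  # 11.0 5.0 2.2
```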
6. Proper and Improper Prior Distributions
A prior is called proper if it is a valid probability distribution:

$$\int p(\theta) \, d\theta = 1$$

and improper if

$$\int p(\theta) \, d\theta = \infty$$

- If the prior is proper, then the posterior is proper as well.
- If the prior is improper, the posterior may be either proper or improper.

In theory, any prior is acceptable as long as the resulting posterior is proper.
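For instance, the flat prior $p(\theta) \propto 1$ on the mean of a normal likelihood (known unit variance) is improper, yet the posterior is proper because the likelihood itself is integrable in $\theta$. A crude numeric check (data and grid are illustrative):

```python
import math

ys = [1.2, 0.8, 1.5]  # observed data (illustrative)

def unnorm_post(theta):
    # Under p(theta) ∝ 1 the unnormalized posterior is just the likelihood.
    return math.exp(-0.5 * sum((y - theta) ** 2 for y in ys))

# Grid-integrate over a wide range; a finite positive value means the
# posterior can be normalized, i.e. it is proper.
h = 0.001
Z = h * sum(unnorm_post(-50.0 + i * h) for i in range(100_000))
print(Z)  # finite and positive -> proper posterior
```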
7. Fisher Information Matrix

$$\mathcal{I}(\theta)_{jk} = \mathbb{E}\!\left[\frac{\partial \log p(y \mid \theta)}{\partial \theta_j} \frac{\partial \log p(y \mid \theta)}{\partial \theta_k}\right] = -\,\mathbb{E}\!\left[\frac{\partial^2 \log p(y \mid \theta)}{\partial \theta_j \, \partial \theta_k}\right]$$

where the second equality holds under the usual regularity conditions.
8. Jeffreys' Prior

$$p(\theta) \propto \sqrt{\det \mathcal{I}(\theta)}$$

Jeffreys' prior is invariant under reparameterization of $\theta$.
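For a Bernoulli likelihood the Fisher information is $\mathcal{I}(p) = 1/(p(1-p))$, so Jeffreys' prior is $p(p) \propto p^{-1/2}(1-p)^{-1/2}$, the Beta$(1/2, 1/2)$ density. A small check of these identities (function names are illustrative):

```python
import math

def fisher_info_bernoulli(p):
    # I(p) = E[(d/dp log f(y|p))^2], averaging over y in {0, 1}.
    score1 = 1.0 / p           # score at y = 1
    score0 = -1.0 / (1.0 - p)  # score at y = 0
    return p * score1 ** 2 + (1.0 - p) * score0 ** 2

def jeffreys_unnorm(p):
    # Jeffreys' prior is proportional to sqrt(I(p)).
    return math.sqrt(fisher_info_bernoulli(p))

p = 0.3
print(fisher_info_bernoulli(p))  # 1 / (0.3 * 0.7) ≈ 4.762
print(jeffreys_unnorm(p))        # matches (p * (1 - p)) ** -0.5
```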
9. Pivotal Quantities
For the binomial and other single-parameter models, different principles give (slightly) different noninformative prior distributions. But for two cases, location parameters and scale parameters, all principles seem to agree [1].
9.1. Location Parameter
$$p(\theta) \propto 1$$

9.2. Scale Parameter

$$p(\sigma) \propto \frac{1}{\sigma}$$

10. Predictive Accuracy
Predictive accuracy matters in two different ways. The first is to assume that the model is all we know and to check its posterior predictions. The second is to compare several candidate models. Even if all of the models being considered have mismatches with the data, it can be informative to evaluate their predictive accuracy, compare them, and consider where to go next [2].
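One simple version of the second use is to score candidate models by their average log predictive density on held-out data, preferring the model with the higher score. A toy comparison (the two models and the data are illustrative):

```python
import math

def normal_logpdf(y, mu, sigma):
    # Log density of N(mu, sigma^2) evaluated at y.
    return -0.5 * math.log(2.0 * math.pi * sigma ** 2) - (y - mu) ** 2 / (2.0 * sigma ** 2)

held_out = [0.9, 1.1, 1.4]  # held-out observations (illustrative)

# Model A predicts N(1, 1); model B predicts N(0, 1).
lpd_a = sum(normal_logpdf(y, mu=1.0, sigma=1.0) for y in held_out) / len(held_out)
lpd_b = sum(normal_logpdf(y, mu=0.0, sigma=1.0) for y in held_out) / len(held_out)
print(lpd_a, lpd_b)  # model A scores higher on this data
```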
11. KL Divergence

$$D_{\mathrm{KL}}(p \,\|\, q) = \int p(y) \log \frac{p(y)}{q(y)} \, dy \ge 0$$

with equality if and only if $p = q$ almost everywhere.
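For discrete distributions the divergence $D_{\mathrm{KL}}(p \,\|\, q) = \sum_i p_i \log(p_i / q_i)$ is a short sum; it is nonnegative, zero only when $p = q$, and not symmetric. A minimal sketch:

```python
import math

def kl_divergence(p, q):
    # D_KL(p || q) = sum_i p_i * log(p_i / q_i); 0 * log(0) is taken as 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, q))  # positive: p and q differ
print(kl_divergence(q, p))  # also positive, but a different value (asymmetric)
print(kl_divergence(p, p))  # 0.0
```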
12. Linear Algebra
12.1. Convex Combination
A subset $A$ of a vector space $V$ is said to be convex if

$$\lambda x + (1 - \lambda) y \in A$$

for all vectors $x, y \in A$ and all scalars $\lambda \in [0, 1]$.

Via induction, this can be seen to be equivalent to the requirement that

$$\lambda_1 x_1 + \lambda_2 x_2 + \cdots + \lambda_n x_n \in A$$

for all vectors $x_1, x_2, \ldots, x_n \in A$ and all scalars $\lambda_1, \lambda_2, \ldots, \lambda_n \ge 0$ such that $\sum_i \lambda_i = 1$.
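Convex combinations are easy to verify numerically; for example, probability vectors form a convex set, so any convex combination of them is again a probability vector. A small sketch (function name is illustrative):

```python
def convex_combination(points, weights):
    # Mix points with nonnegative weights that sum to 1.
    assert all(w >= 0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-12
    dim = len(points[0])
    return [sum(w * p[d] for w, p in zip(weights, points)) for d in range(dim)]

# Three probability vectors mixed with weights (0.2, 0.5, 0.3).
mix = convex_combination([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]], [0.2, 0.5, 0.3])
print(mix)  # ≈ [0.45, 0.55]: entries nonnegative, summing to 1
```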
Bibliography
- [1] A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and others, *Bayesian Data Analysis*, 3rd ed. Boca Raton, FL: CRC Press, 2013. [Online]. Available: https://stat.columbia.edu/~gelman/book/
- [2] A. Gelman, J. Hwang, and A. Vehtari, "Understanding predictive information criteria for Bayesian models," *Statistics and Computing*, vol. 24, no. 6, pp. 997–1016, Nov. 2014, doi: 10.1007/s11222-013-9416-2.