Bayesian probability and prior probabilities

Conditional probabilities

Conditional probabilities are an essential part of statistics because they allow us to describe how information changes our beliefs. The joint probability of two events can be written in terms of a conditional probability:

$$P(A, B) = P(A) \times P(B | A)$$
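As a quick illustration of this product rule, here is a minimal sketch in Python; the events and the probability values are invented for the example.

```python
# Hypothetical example: A = "it rains today", B = "the bus is late".
p_a = 0.3          # P(A): probability of rain
p_b_given_a = 0.5  # P(B | A): probability the bus is late, given rain

# Product rule: P(A, B) = P(A) * P(B | A)
p_a_and_b = p_a * p_b_given_a
print(p_a_and_b)  # 0.15
```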

Bayes' theorem

One of the most amazing things we can do with conditional probabilities is reverse the condition to calculate the probability of the event we are conditioning on: we can use $P(B|A)$ to arrive at $P(A|B)$.

$$P(A|B) = \frac{ P(A)P(B|A) }{ P(B) }$$
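A minimal sketch of applying the theorem, with made-up numbers; $P(B)$ is expanded with the law of total probability so the example is self-contained.

```python
# Hypothetical example: A = "has condition", B = "test is positive".
p_a = 0.01              # P(A): prior probability of the condition
p_b_given_a = 0.95      # P(B | A): probability of a positive test if A holds
p_b_given_not_a = 0.10  # P(B | not A): false-positive rate

# P(B) via the law of total probability
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

# Bayes' theorem: P(A | B) = P(A) * P(B | A) / P(B)
p_a_given_b = p_a * p_b_given_a / p_b
print(round(p_a_given_b, 3))  # ~0.088
```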

Our beliefs describe the world we know, so when we observe something, its conditional probability represents the likelihood of what we have seen given what we believe:

$$P(\text{observed} | \text{belief})$$

Bayes' theorem allows us to reverse $P(\text{observed} | \text{belief})$ and obtain $P(\text{belief} | \text{observed})$; this is fundamental to understanding how we can use data to update what we believe about the world. Treating our observations as data, the terms are (illustrated in the sketch after this list):

  • $P(\text{belief} | \text{data})$ is the posterior probability
  • $P(\text{data} | \text{belief})$ is the likelihood
  • $P(\text{belief})$ is the prior probability

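To make the three terms concrete, here is a small sketch that computes the unnormalized posterior as likelihood × prior over a handful of candidate beliefs and then normalizes; the hypotheses, prior, and data are invented for illustration.

```python
# Hypothetical beliefs about a coin's probability of heads,
# updated after observing data: 6 heads in 10 flips.
from math import comb

hypotheses = [0.3, 0.5, 0.7]  # candidate values for P(heads)
priors = [0.25, 0.50, 0.25]   # P(belief): prior over the hypotheses

heads, flips = 6, 10

# P(data | belief): binomial likelihood of 6 heads in 10 flips
likelihoods = [comb(flips, heads) * p**heads * (1 - p)**(flips - heads)
               for p in hypotheses]

# Unnormalized posterior: likelihood * prior
unnormalized = [lk * pr for lk, pr in zip(likelihoods, priors)]

# Normalize so the posterior sums to 1 (this divides by P(data))
total = sum(unnormalized)
posteriors = [u / total for u in unnormalized]
print([round(p, 3) for p in posteriors])  # most weight moves toward 0.5 and 0.7
```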
If both the likelihood and the prior are represented using beta distributions, it is really easy to calculate the normalized posterior:

$$\text{Beta}(\alpha_{\text{posterior}}, \beta_{\text{posterior}}) = \text{Beta}( \alpha_{\text{likelihood}} + \alpha_{\text{prior}}, \beta_{\text{likelihood}} + \beta_{\text{prior}} )$$
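A sketch of that update in Python, using scipy.stats for the resulting distribution; the particular α and β values are invented, with the likelihood's α and β taken as the observed counts of successes and failures.

```python
from scipy.stats import beta

# Hypothetical prior belief about a coin's probability of heads
alpha_prior, beta_prior = 2, 2

# "Likelihood" counts from observed data: e.g. 6 heads and 4 tails
alpha_likelihood, beta_likelihood = 6, 4

# The posterior parameters are the sums of the corresponding parameters
alpha_post = alpha_likelihood + alpha_prior
beta_post = beta_likelihood + beta_prior

posterior = beta(alpha_post, beta_post)
print(posterior.mean())  # expected P(heads) under Beta(8, 6): ~0.571
```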