Bayesian Probability

Description:

Bayesian Probability is a framework for quantifying and updating uncertainty by applying Bayes’ theorem, which is central to Bayesian inference. This approach treats probability as a degree of belief: prior knowledge or beliefs are combined with new evidence or data to update the probability of a hypothesis.

Core Concepts:

  1. Prior Probability (\(P(H)\)): This is the initial degree of belief in the hypothesis \(H\) before observing any data. The prior probability reflects any existing knowledge or assumptions about \(H\).

  2. Likelihood (\(P(E|H)\)): This term represents the probability of the evidence \(E\) given that the hypothesis \(H\) is true. The likelihood quantifies how well the hypothesis explains the observed data.

  3. Marginal Probability (\(P(E)\)): The marginal probability of the evidence is the total probability of observing \(E\) under all possible hypotheses. It is calculated as:
    \[
    P(E) = \sum_{i} P(E|H_i) P(H_i),
    \]
    where the \(H_i\) form a mutually exclusive and exhaustive set of hypotheses.

  4. Posterior Probability (\(P(H|E)\)): This is the updated probability of the hypothesis \(H\) after considering the evidence \(E\). It is given by Bayes’ theorem:
    \[
    P(H|E) = \frac{P(E|H)P(H)}{P(E)}.
    \]
    The posterior probability combines the prior probability and the likelihood to give a refined probability of the hypothesis in light of the new data (a short numeric sketch follows this list).

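To make these four quantities concrete, here is a minimal Python sketch of a Bayesian update over a discrete set of hypotheses. The priors and likelihoods are illustrative values assumed for the example, not taken from the text:

```python
# Minimal sketch: Bayesian update over a discrete set of hypotheses.
# All numbers below are illustrative assumptions.

priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}        # P(H_i), must sum to 1
likelihoods = {"H1": 0.8, "H2": 0.4, "H3": 0.1}   # P(E|H_i)

# Marginal probability of the evidence: P(E) = sum_i P(E|H_i) P(H_i)
marginal = sum(likelihoods[h] * priors[h] for h in priors)

# Posterior for each hypothesis: P(H_i|E) = P(E|H_i) P(H_i) / P(E)
posteriors = {h: likelihoods[h] * priors[h] / marginal for h in priors}

print(f"P(E) = {marginal:.3f}")        # 0.540 with the numbers above
for h, p in posteriors.items():
    print(f"P({h}|E) = {p:.3f}")       # posteriors sum to 1
```

Note how the hypothesis that best explains the evidence (highest likelihood) gains posterior mass at the expense of the others, while the posteriors still sum to one.
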
Application:

Bayesian Probability is utilized in a wide array of disciplines, including statistics, machine learning, economics, and medicine. It is particularly advantageous when expert knowledge or prior information must be integrated with observed data. In medical diagnosis, for instance, a doctor can combine prior knowledge of a disease's prevalence (the prior probability) with a patient's test result (the likelihood) to determine the probability that the patient has the disease (the posterior probability).

Mathematical Formulation:

Suppose we have a hypothesis \(H\) and evidence \(E\). Bayes’ theorem is formulated as:
\[
P(H|E) = \frac{P(E|H) P(H)}{P(E)},
\]
where:
- \(P(H|E)\) is the posterior probability of the hypothesis given the evidence.
- \(P(E|H)\) is the likelihood of observing the evidence under the hypothesis.
- \(P(H)\) is the prior probability of the hypothesis.
- \(P(E)\) is the marginal probability of the evidence.
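
A direct transcription of this formula into code, as a minimal sketch (the function name, signature, and guard are my own, not from the source), might read:

```python
def bayes_posterior(likelihood: float, prior: float, marginal: float) -> float:
    """Return P(H|E) = P(E|H) * P(H) / P(E) via Bayes' theorem.

    likelihood -- P(E|H), probability of the evidence under the hypothesis
    prior      -- P(H), probability of the hypothesis before the evidence
    marginal   -- P(E), total probability of the evidence (must be positive)
    """
    if marginal <= 0:
        raise ValueError("P(E) must be positive")
    return likelihood * prior / marginal
```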

Example:

Consider the case where we want to determine the probability that a patient has a certain disease (\(D\)) given a positive test result (\(T\)).

  • Prior Probability: \(P(D)\) is the initial probability of the disease.
  • Likelihood: \(P(T|D)\) is the probability that the test result is positive given that the patient has the disease.
  • Marginal Probability: \(P(T) = P(T|D)P(D) + P(T|\neg D)P(\neg D)\), where \(P(T|\neg D)\) is the probability of a positive test result given no disease.

By Bayes’ theorem,
\[
P(D|T) = \frac{P(T|D)P(D)}{P(T)}.
\]
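
Plugging in illustrative numbers (assumed here for the sketch: 1% prevalence, a test with 95% sensitivity and a 5% false-positive rate) shows how these pieces fit together:

```python
# Illustrative numbers, assumed for this sketch (not from the text above):
p_disease = 0.01          # P(D): prevalence of the disease
p_pos_given_d = 0.95      # P(T|D): sensitivity of the test
p_pos_given_not_d = 0.05  # P(T|¬D): false-positive rate

# Marginal probability of a positive test:
# P(T) = P(T|D)P(D) + P(T|¬D)P(¬D)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)

# Posterior by Bayes' theorem: P(D|T) = P(T|D)P(D) / P(T)
p_disease_given_pos = p_pos_given_d * p_disease / p_pos

print(f"P(T)   = {p_pos:.4f}")                # 0.0590
print(f"P(D|T) = {p_disease_given_pos:.4f}")  # 0.1610
```

With these assumed numbers, even a fairly accurate test yields a posterior of only about 16%, because the low prior (prevalence) strongly shapes the conclusion.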

In summary, Bayesian Probability provides a coherent mechanism for updating the probability of a hypothesis in light of new evidence. It is a fundamental tool for decision-making under uncertainty and for enhancing predictions with contextual knowledge.