# Why do we need a prior when computing a Bayes Factor? (R code provided)

by rnorouzian   Last Updated June 21, 2017 01:19 AM

I'm new to Bayesian statistics and have a fundamental question about "Bayes Factors". Specifically, I understand that if we want to compute the posterior probability of a hypothesis (e.g., $p(H_1 \mid Data)$), we need a prior probability $p(H_1)$ for the hypothesis in question, as per Bayes' rule.

BUT I'm wondering: when computing only a "Bayes Factor", which is the factor by which we update our prior belief about the hypothesis, why do we need a prior in the specification of $H_1$? In other words, when we do NOT want to compute any posterior probability, why do we talk about a prior at all when computing a Bayes Factor?
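For reference, the updating relationship I have in mind is the standard one (writing $D$ for the data):

$$\underbrace{\frac{p(H_1 \mid D)}{p(H_0 \mid D)}}_{\text{posterior odds}} \;=\; \underbrace{\frac{p(D \mid H_1)}{p(D \mid H_0)}}_{BF_{10}} \;\times\; \underbrace{\frac{p(H_1)}{p(H_0)}}_{\text{prior odds}}$$

So the prior odds $p(H_1)/p(H_0)$ sit outside the Bayes Factor itself, which is what makes me wonder where a prior enters at all.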

In fact, I would also like to know what the integration for the following two versions of $H_1$ (one with a prior and one without) in the R code below actually computes:

```r
## With a Cauchy(0, sqrt(2)/2) prior on delta:
H1 = integrate(function(delta) dcauchy(delta, 0, sqrt(2)/2) * dt(2.6, 98, delta*sqrt(20)), -Inf, Inf)[[1]]
# > 0.06127036

## Without any kind of prior:
H11 = integrate(function(delta) dt(2.6, 98, delta*sqrt(20)), -Inf, Inf)[[1]]
# > 0.2230371

## H0 (point null, delta = 0):
H0 = dt(2.6, 98)

## BF10, first using H1, then using H11 (the second assignment overwrites the first):
BF10 = H1 / H0

BF10 = H11 / H0
```
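To spell out what each quantity in my code is meant to represent, here is the same computation rewritten with labels, plus the odds-updating step at the end (the 1:1 prior odds there are just a hypothetical choice for illustration, not part of the Bayes Factor):

```r
# p(D | H1): marginal likelihood of the data (t = 2.6, df = 98) under H1,
# i.e., the likelihood averaged over the Cauchy(0, sqrt(2)/2) prior on delta
m1 <- integrate(function(delta)
        dcauchy(delta, 0, sqrt(2)/2) * dt(2.6, 98, ncp = delta * sqrt(20)),
        -Inf, Inf)$value

# p(D | H0): likelihood under the point null, delta = 0
m0 <- dt(2.6, 98)

# Bayes Factor: ratio of the two marginal likelihoods
BF10 <- m1 / m0

# The BF converts prior odds about the hypotheses into posterior odds;
# the 1:1 prior odds below are a hypothetical choice, separate from the BF
prior_odds     <- 1
posterior_odds <- BF10 * prior_odds
```

So, if I understand correctly, the integral averages the likelihood over the prior on the effect size `delta`, which is different from the prior probability of the hypothesis itself.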