- Probability of events A and B both being true:
P(A and B) = P(A) * P(B), assuming events A and B are independent of each other.
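A quick sketch of the independence rule, verified by enumerating two fair dice (a standard textbook example, not from these notes):

```python
from fractions import Fraction

# Event A: first die shows 6; event B: second die shows 6.
# The dice are independent, so P(A and B) should equal P(A) * P(B).
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

p_a = Fraction(sum(1 for i, j in outcomes if i == 6), len(outcomes))
p_b = Fraction(sum(1 for i, j in outcomes if j == 6), len(outcomes))
p_a_and_b = Fraction(sum(1 for i, j in outcomes if i == 6 and j == 6), len(outcomes))

assert p_a_and_b == p_a * p_b  # 1/36 == 1/6 * 1/6
```

Note the caveat in the text: this multiplication is only valid when the events are independent.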
- Bayes:
(1) P(A and B) = P(A) * P(B | A),
where P(B | A) is the probability of B conditioned on A.
(2) P(A and B) = P(B) * P(A | B)
(1) = (2) ==> P(A) * P(B | A) = P(B) * P(A | B)
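A numeric check of this identity, using hypothetical probabilities (the values are illustrative, not from the notes):

```python
# Hypothetical numbers: P(A), P(B | A), and P(B) chosen for illustration.
p_a = 0.3
p_b_given_a = 0.5
p_b = 0.4

# Factorization (1): P(A and B) = P(A) * P(B | A)
p_a_and_b = p_a * p_b_given_a

# Rearranging (2) gives Bayes' rule: P(A | B) = P(A) * P(B | A) / P(B)
p_a_given_b = p_a_and_b / p_b

# Both factorizations agree on the joint probability.
assert abs(p_a * p_b_given_a - p_b * p_a_given_b) < 1e-12
print(p_a_given_b)  # 0.375
```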
Bayes' Theorem:
$P(H \mid E) = \frac{P(H) \, P(E \mid H)}{P(E)}$
Diachronic interpretation: the probability of my hypothesis $H$ given new evidence $E$.
$P(H \mid E)$ is called the posterior; $P(E)$ is the normalizing constant.
If you see new evidence, you can update your hypothesis based on that new evidence:
- $P(E \mid H)$: likelihood of seeing that evidence if your hypothesis were correct;
- $P(E)$: likelihood of that evidence under any circumstance.
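The diachronic update can be sketched numerically. This is a minimal two-hypothesis example with made-up prior and likelihood values:

```python
# Hypothetical priors P(H) and likelihoods P(E | H) for two hypotheses.
priors = {"H1": 0.5, "H2": 0.5}
likelihoods = {"H1": 0.75, "H2": 0.5}

# Normalizing constant P(E): total probability of the evidence.
p_e = sum(priors[h] * likelihoods[h] for h in priors)

# Posterior P(H | E) = P(H) * P(E | H) / P(E) for each hypothesis.
posteriors = {h: priors[h] * likelihoods[h] / p_e for h in priors}
print(posteriors)  # H1: 0.6, H2: 0.4
```

The hypothesis under which the evidence was more likely gains posterior mass; the posteriors still sum to 1 thanks to the normalizing constant.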
The Naive Bayes algorithm uses a statistical approach: the Bayesian approach is robust and less likely to find false patterns in noisy data (i.e., to overfit the data).
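A minimal Naive Bayes sketch on toy data (the labels, tokens, and Laplace smoothing choice are all illustrative assumptions, not from the notes):

```python
import math
from collections import Counter

# Toy training set: (label, text) pairs.
train = [("spam", "buy cheap now"), ("spam", "cheap pills now"),
         ("ham", "meeting at noon"), ("ham", "lunch at noon")]

class_counts = Counter(label for label, _ in train)
word_counts = {c: Counter() for c in class_counts}
for label, text in train:
    word_counts[label].update(text.split())
vocab = {w for cnt in word_counts.values() for w in cnt}

def predict(text):
    scores = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # log prior + sum of log likelihoods, naively assuming word independence;
        # add-one (Laplace) smoothing avoids log(0) for unseen words.
        score = math.log(class_counts[c] / len(train))
        for w in text.split():
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

print(predict("cheap now"))  # spam
```

The "naive" part is exactly the independence assumption from the top of these notes: the per-word likelihoods are multiplied as if the words were independent given the class.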
Bayes' theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event,
where $P(A)$ is the probability of event A occurring, $P(A \mid B)$ is the probability of A conditioned on B occurring, and $A$ is the desired outcome.
Bayesian inference allows us to make justified decisions at a granular level by modeling the variation in the observed data.
$\text{ROAS} = \frac{\text{revenue}}{\text{cost}} = \frac{\text{revenue}}{\text{conversions}} \cdot \frac{\text{conversions}}{\text{cost}} = \frac{\text{revenue}}{\text{conversion}} \cdot \frac{1}{\text{CPA}}$
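A numeric check of the ROAS decomposition, with hypothetical campaign numbers:

```python
# Hypothetical campaign figures for illustration.
revenue = 500.0
cost = 200.0
conversions = 40

roas = revenue / cost                            # 2.5
cpa = cost / conversions                         # cost per acquisition: 5.0
revenue_per_conversion = revenue / conversions   # 12.5

# ROAS = (revenue per conversion) * (1 / CPA), since conversions/cost = 1/CPA.
assert abs(roas - revenue_per_conversion * (1 / cpa)) < 1e-12
```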
We can assume that the ad sets are similar to each other.
Log-normal probability density function:
$f(x; \mu, \sigma) = \frac{1}{x \sigma \sqrt{2\pi}} \exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \quad x > 0$
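The standard log-normal density can be evaluated directly; a small sketch (parameter defaults $\mu = 0$, $\sigma = 1$ are illustrative):

```python
import math

def lognormal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the log-normal distribution: ln(X) ~ Normal(mu, sigma^2)."""
    if x <= 0:
        return 0.0  # support is x > 0
    coeff = 1.0 / (x * sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma ** 2))

# At x = 1 with mu = 0: ln(1) = 0, so the density is 1 / sqrt(2*pi).
print(round(lognormal_pdf(1.0), 4))  # 0.3989
```

The log-normal is a common choice for modeling positive, right-skewed quantities such as per-conversion revenue.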