Occam's razor (attributed to William of Ockham) holds that, all else being equal, the simpler explanation is more likely to be the correct one. In other words, simpler models are favored until the data can support more complex ones with their additional assumptions. This philosophical notion is applied in several scientific disciplines.
Given Occam's principle, how does one go about reaching a simple, parsimonious model to predict disease risk? What is the difference between adopting a frequentist approach and a subjective (Bayesian) one? In other words, should there be a Bayesian viewpoint in epidemiological research?
Frequentist and subjective probability
Frequentist methods, the Fisherian P-values (R.A. Fisher) and confidence intervals, remain the norm in biostatistics and epidemiological data analysis (they are what we see in published clinical and epidemiological studies). They are based on notions of objectivity and on likelihood functions that support drawing conclusions. Frequentist techniques are highly effective in randomized trials. In observational studies, however, a frequentist model can become more questionable (potentially misleading), because we are more likely to be confronted with confounding, selection bias, and measurement error. That is when Bayesian methods may be worth looking into. Even though Bayesian methods have been criticized for their imprecision, their reliance on prior parameter distributions, and their dependence on subjective and arbitrary elements, it has been suggested that they can be useful when prior estimates are generated by applying the same formulas that frequentists use. An article by Sander Greenland in 2006 provides a clear explanation of this topic with clear examples.
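To make that frequentist workhorse concrete, here is a minimal sketch in Python that computes a P-value and a 95% confidence interval for an odds ratio from a 2x2 table. The counts are entirely hypothetical, and the Woolf interval is just one common choice:

```python
import numpy as np
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (exposure x disease)
table = np.array([[20, 80],    # exposed:   cases, non-cases
                  [10, 90]])   # unexposed: cases, non-cases

odds_ratio, p_value = fisher_exact(table)   # sample OR and exact P-value

# Woolf (log-based) 95% confidence interval for the odds ratio
log_or = np.log(odds_ratio)
se = np.sqrt((1.0 / table).sum())
ci_low, ci_high = np.exp(log_or + np.array([-1.96, 1.96]) * se)

print(f"OR = {odds_ratio:.2f}, P = {p_value:.3f}, "
      f"95% CI {ci_low:.2f} to {ci_high:.2f}")
```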
What are Bayesian methods (subjective probability)?
Subjective probability can be defined as the degree of belief that x is true. Probability in this context does not represent the external world but rather features of personal, subjective interpretation. In subjective probability, we are not interested in any kind of long-run frequency behavior.
For example, what is the probability that your flight to Hawaii on January 28, 2014 will be cancelled?
In this case, you are not interested in frequency behavior; you are interested in predicting whether your flight will be cancelled on this specific day (a single occasion). What you hold is a degree of belief that the event will occur, a subjective attitude toward the proposition that the flight will be cancelled. Knowing that flights are more likely to be cancelled in the middle of "storm season", you may determine that the probability is high. Another example: what is the probability that you will have a heart attack on your 80th birthday? Betting games are well known to be built on subjective probability. In larger contexts and data sets, Bayesian methods are applied by subjectively specifying prior distributions and combining them with current models.
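To show what "subjectively specifying a prior distribution" might look like in practice, here is a minimal Python sketch. The Beta(4, 16) prior and the flight counts are hypothetical choices, used only to illustrate the mechanics of turning a degree of belief into a distribution and updating it:

```python
from scipy import stats

# A subjective prior as a distribution: my degree of belief about p,
# the chance a late-January flight to Hawaii is cancelled.
# Beta(4, 16) is a hypothetical choice encoding a belief centred near 0.20.
prior = stats.beta(4, 16)
print(f"prior mean: {prior.mean():.2f}")
print(f"95% prior interval: {prior.ppf(0.025):.2f} to {prior.ppf(0.975):.2f}")

# Suppose we then observe 3 cancellations in 30 comparable flights.
# Beta is conjugate to the binomial, so updating is just addition:
posterior = stats.beta(4 + 3, 16 + 27)
print(f"posterior mean: {posterior.mean():.2f}")
```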
The parallel between Bayesian and frequentist methods is the conditional model.
Conditional probability (Bayes' rule) is what links the likelihood, P(data | parameters), to the posterior, P(parameters | data):
P(A | B) = P(A) x P(B | A) / P(B)
The probability that A is true given that B is true, P(A | B), is known as the posterior probability of A.
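A short worked example, applying Bayes' rule to the flight scenario above; all the probabilities plugged in here are hypothetical:

```python
# Bayes' rule on the flight example: A = "flight is cancelled",
# B = "a storm is forecast". All probabilities are hypothetical.
p_a = 0.05          # prior: P(cancelled) for a typical flight
p_b_given_a = 0.80  # P(storm forecast | cancelled)
p_b = 0.10          # P(storm forecast) overall

# P(A | B) = P(A) * P(B | A) / P(B)
p_a_given_b = p_a * p_b_given_a / p_b
print(f"P(cancelled | storm forecast) = {p_a_given_b:.2f}")  # 0.40
```

Learning that a storm is forecast raises the belief that the flight will be cancelled from 5% to 40%.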
If you had observational data and wanted to model an outcome Y, say breast cancer, given several covariates x, you would need to build a logistic model using either automated (mechanical) selection methods or assumptions about confounding and interaction. In certain cases, the model will rest on statistical cut-off points that potentially conflict with contextual information. Many such models are criticized as biased, resting on too many assumptions. Can one, then, consider a model built from Bayesian priors to be no more arbitrary than a frequentist data model? Can arbitrary variable selection be replaced by prior distributions? The articles by Greenland suggest 'yes'. He proposes pooling studies (a hypothetical prior study with the current study), adding the results from the hypothetical study of priors as a new stratum...
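As a rough illustration of that pooling idea, here is a Python sketch of information-weighted averaging: the prior is treated as if it came from a hypothetical earlier study and is combined with the current study using the same inverse-variance formulas a frequentist would use to pool two real studies. The 2x2 counts (the same hypothetical table as the earlier example) and the prior limits are made up for illustration:

```python
import numpy as np

def log_or_and_var(a, b, c, d):
    """Log odds ratio and its Woolf variance from a 2x2 table."""
    return np.log(a * d / (b * c)), 1/a + 1/b + 1/c + 1/d

# Current (hypothetical) study: same 2x2 counts as the earlier example.
b_data, v_data = log_or_and_var(20, 80, 10, 90)

# Prior belief: OR near 2, with 95% prior limits of (0.5, 8),
# treated as the result of a hypothetical earlier study.
b_prior = np.log(2.0)
v_prior = (np.log(8) - np.log(0.5)) ** 2 / (2 * 1.96) ** 2

# Pool the two "studies" by inverse-variance weighting -- the same
# formula a frequentist would use to combine two real studies.
w_data, w_prior = 1 / v_data, 1 / v_prior
b_post = (w_data * b_data + w_prior * b_prior) / (w_data + w_prior)
v_post = 1 / (w_data + w_prior)

lo, hi = np.exp(b_post + np.array([-1.96, 1.96]) * np.sqrt(v_post))
print(f"data OR      = {np.exp(b_data):.2f}")
print(f"prior OR     = {np.exp(b_prior):.2f}")
print(f"posterior OR = {np.exp(b_post):.2f} (95% interval {lo:.2f} to {hi:.2f})")
```

The pull of the estimate toward the prior OR of 2 is the shrinkage this approach produces; with a vaguer prior (wider limits), the posterior simply tracks the data.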
In conclusion, would the recipe for reaching a parsimonious model (making Ockham happy) from observational data include:
1) Common sense and ingenuity
2) A frequentist model with few assumptions
3) A prior-based model (Bayesian perspective)
Should this become common practice?
_____________________________________________________________________
References and good reads:
Savage LJ. Subjective probability and statistical practice. In: The Foundations of Statistical Inference. 1962.
Greenland S. Bayesian perspectives for epidemiological research: I. Foundations and basic methods. Int J Epidemiol. 2006 Jun;35(3):765-75.
Greenland S. Bayesian perspectives for epidemiological research: II. Regression analysis. Int J Epidemiol. 2007 Feb;36(1):195-202.