For many years, statisticians (e.g., Berkson, 1942; Wasserstein & Lazar, 2016) have warned applied scientists about the dangers of mechanistically using p-values as a license for claiming a scientific finding. The erroneous belief that a significant p-value by itself justifies a scientific claim has led to many implausible and bizarre findings in (social) psychology that fail to replicate (Open Science Collaboration, 2015). Despite this ongoing criticism, empirical disciplines such as psychology, biology, and medicine continue to rely on p-values as the standard method for drawing conclusions from data. An important reason for the p-value's lingering popularity is arguably the perceived lack of an accessible alternative.

An alternative to the p-value that has recently gained popularity in the applied sciences is the Bayes factor. To construct a Bayes factor, one has to (1) select a pair of priors and (2) calculate two integrals. In this talk I will elaborate on how priors are selected for normal linear model comparisons, based on the ideas of Harold Jeffreys (1939) and the subsequent work of Liang et al. (2008) and Bayarri et al. (2012). [Joint work with E.-J. Wagenmakers and the JASP team]


Alexander Ly (University of Amsterdam)

Statistics Seminar
Wed, 26/02/2020, 4:00pm
RC-4082, The Red Centre, UNSW