Multiple hypothesis testing is concerned with controlling the rate of
false positives when testing several hypotheses simultaneously. One multiple
hypothesis testing error measure is the false discovery rate (FDR), which is
loosely defined to be the expected proportion of false positives among all
significant hypotheses. The FDR is especially appropriate for exploratory
analyses in which one is interested in finding several significant results among
many tests. In this work, we introduce a modified version of the FDR called
the “positive false discovery rate” (pFDR). We discuss the advantages and
disadvantages of the pFDR and investigate its statistical properties. Under the
assumption that the test statistics follow a mixture distribution, we show that the
pFDR can be written as a Bayesian posterior probability and can be connected
to classification theory. These properties remain asymptotically true under
fairly general conditions, even under certain forms of dependence. We also
introduce and investigate a new quantity called the “q-value,” which is a
natural “Bayesian posterior p-value,” or rather the pFDR analogue of the
p-value.
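
As a brief sketch of the quantities summarized above (the notation here is introduced only for illustration and is not taken from the abstract): writing V for the number of false positives and R for the total number of hypotheses called significant, the two error rates are commonly formalized as
\[
\mathrm{FDR} \;=\; \operatorname{E}\!\left[\left.\frac{V}{R}\,\right|\,R>0\right]\Pr(R>0),
\qquad
\mathrm{pFDR} \;=\; \operatorname{E}\!\left[\left.\frac{V}{R}\,\right|\,R>0\right],
\]
so that the pFDR conditions on at least one hypothesis being called significant. In the same spirit, for an observed statistic $t$ the q-value can be sketched as the smallest pFDR attainable over rejection regions $\Gamma$ containing $t$,
\[
q(t) \;=\; \inf_{\{\Gamma \,:\, t \in \Gamma\}} \mathrm{pFDR}(\Gamma),
\]
mirroring the way the p-value is the smallest type I error rate over such regions.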