Significance vs Prediction

Hypothesis testing is relatively straightforward. You have two statistical models of the world, and there is an implicit bias towards one of them (the alternative) making a more substantive claim about reality than the other (the null). You then operate under the assumption that the null is the reality, and check how likely your observed data are under said reality. Suppose the data are incredibly unlikely – then you've still only made a statement about how unlikely the data are under the null. Perhaps you can run a power analysis to be more confident that the alternative you chose is the right alternative.
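To make this concrete, here is a minimal sketch of the procedure using hypothetical numbers (a fair-coin null and an observation of 60 heads in 100 flips, neither of which comes from the text): we simulate the world where the null is true, and ask how often data at least as extreme as ours shows up.

```python
import random

random.seed(0)

# Hypothetical setup: the null model says the coin is fair (p = 0.5).
# We observed 60 heads in 100 flips.
observed_heads = 60
n_flips = 100
n_sims = 20_000

# Simulate the null world many times and count outcomes at least as
# extreme (two-sided: at least as far from the expected 50) as ours.
extreme = 0
for _ in range(n_sims):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if abs(heads - 50) >= abs(observed_heads - 50):
        extreme += 1

p_value = extreme / n_sims
print(f"Monte Carlo p-value: {p_value:.4f}")
```

Note what the number means: it is the frequency of data this extreme *assuming the null is the reality* – a statement about the data under the null, not the probability that the null itself is true.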

Essentially, a \(p\)-value measures how unlikely the observed data are under the null model – we reject the null when the \(p\)-value is significant. But now suppose you make a slight shift in perspective. Suppose you have two models (make them the same one)