The administration looks to be getting ready to unveil some new job stimulus (HT Mark Thoma), and meanwhile the disagreement among economists over the last stimulus seems to be as large as ever. Professional modelers, and the economists who trust them, argue that the models show the stimulus worked. Critics say, or seem to be saying, that the assumptions of these models drove the pre-stimulus estimates that the stimulus would work, that those assumptions have not changed, and so no empirical outcome could have shown that the stimulus didn't work. These critics suggest that by the nature of macroeconomics, the effects of the stimulus are unknowable. Either way, there is obvious disagreement about the impacts of the $787 billion.
My modest proposal is that if there is to be a second round of stimulus, we should structure it so that its efficacy will be knowable. One way to do that would be to randomize some small portion of the stimulus, say 1%. For instance, if we are going to cut payroll taxes across the board by 1%, cut them by 2% for a randomly selected subset of counties. Then we can look at the counties that randomly received the 2% cut and see how much, if at all, employment improved relative to the control group. The randomization will prevent any endogeneity problems that would make causality difficult to tease out econometrically. Obviously it would be much trickier to randomize infrastructure spending, but surely some clever person can devise some way to induce enough randomization into the process to estimate impacts reliably, right?
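To make the idea concrete, here is a minimal sketch in Python of what the randomization and the resulting estimate might look like. Everything in it is illustrative: the county counts, the 10% treatment share, and the "true" effect are made-up numbers standing in for data we would only have after running the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: ~3,000 counties, each with some baseline
# employment growth. All parameters here are invented for the sketch.
n_counties = 3000
baseline_growth = rng.normal(loc=0.0, scale=1.0, size=n_counties)

# Randomly assign a subset of counties (say 10%) to get the extra
# payroll tax cut; the rest serve as the control group.
treated = rng.random(n_counties) < 0.10

# Suppose the extra cut raises employment growth by 0.3 points on
# average -- the unknown effect the experiment would try to recover.
true_effect = 0.3
observed_growth = baseline_growth + true_effect * treated

# Because assignment was random, treatment is uncorrelated with any
# county characteristic, so the simple difference in mean outcomes is
# an unbiased estimate of the effect -- no endogeneity to untangle.
effect_hat = observed_growth[treated].mean() - observed_growth[~treated].mean()

# Conventional standard error for a difference in means.
se = np.sqrt(observed_growth[treated].var(ddof=1) / treated.sum()
             + observed_growth[~treated].var(ddof=1) / (~treated).sum())

print(f"estimated effect: {effect_hat:.3f} (SE {se:.3f})")
```

The point of the sketch is the last few lines: with random assignment, estimation reduces to comparing means, which is exactly the transparency that the dueling macro models lack.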
There will be those who argue that this would be a waste of precious stimulating potential. But it must be worth 1% of the stimulus to know, next time, a) which programs work best, or work at all, and b) how to avoid either spending money that has no impact or forgoing the opportunity to reduce the suffering and costs of unemployment. Had we done this the first time, we might not be having the debate we're having now.