The future of scientific computing will be driven by highly distributed parallel machines with millions of compute nodes. To take advantage of this already-arriving wave of computing capability, we must identify and remove the remaining barriers to parallel scaling in the Diffusion Monte Carlo (DMC) algorithm. To address these scaling issues in a simple way, we propose introducing a time delay into the population control feedback. To assess this algorithm, we investigate the behavior of the population fluctuations and the population control bias, which becomes increasingly relevant for larger physical systems and higher accuracy requirements, in a model system for both the standard and time-delayed DMC algorithms. We then condense our findings into a simple set of recommendations for improving the scaling of DMC while managing the population control bias.
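A minimal sketch of the time-delayed feedback idea, using a toy one-dimensional harmonic oscillator and hypothetical parameter choices (branching weights exp(-tau*(E_L - E_T)), an assumed feedback gain of 0.1); this is an illustration, not the paper's implementation. The trial energy E_T is updated from the population measured `delay` steps in the past rather than the current step, so the feedback loop need not wait on an up-to-date global population count:

```python
import math
import random

def dmc_population_control(n_steps=200, n_target=100, delay=5, tau=0.01, seed=1):
    """Toy DMC-style branching loop with time-delayed population control.

    Walkers sample a 1-D harmonic oscillator (V(x) = x^2 / 2, exact
    ground-state energy 0.5) without importance sampling.  The trial
    energy E_T is adjusted using the population from `delay` steps ago.
    All parameter values here are illustrative assumptions.
    """
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
    e_t = 0.5                               # initial trial energy
    history = [len(walkers)] * (delay + 1)  # record of past population sizes
    for _ in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(tau))      # diffusion move
            e_local = 0.5 * x * x                    # local energy (potential only)
            w = math.exp(-tau * (e_local - e_t))     # branching weight
            n_copies = int(w + rng.random())         # stochastic rounding
            new.extend([x] * n_copies)
        walkers = new or [rng.gauss(0.0, 1.0)]       # guard against extinction
        history.append(len(walkers))
        # Feedback on the *delayed* population: if it exceeds the target,
        # lower E_T so the population shrinks, and vice versa.
        n_delayed = history[-1 - delay]
        e_t = 0.5 - (0.1 / tau) * math.log(n_delayed / n_target)
    return len(walkers), e_t
```

With a small enough product of feedback gain and delay, the delayed loop still holds the population near its target; a large delay or gain would instead drive oscillations, which is the trade-off the population-fluctuation analysis examines.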