Efforts to replicate portions of the scientific literature have led to widely varying and often low rates of replicability. This has raised concerns about a ``replication crisis'' in which many of the statistically significant claims in the published literature are thought to be false positives, arising from some combination of publication bias and widespread use of questionable research practices. Here, we re-analyze data from large-scale replication efforts and show that few, if any, replication failures can be attributed to false positives. We then present a minimal alternative model of how replication failures can occur even in the absence of false positives. Using our model, we show that variation in estimates of replicability across the social sciences appears largely to be an artifact of replication sample size. Our results further suggest that file drawers are likely much smaller, and questionable research practices less abundant, than commonly assumed. We anticipate that our findings will serve as a starting point for a more formal and nuanced discussion of the health of the scientific literature and areas for improvement.