Or, at least, finds itself directly contradicted by the basic scientific method:
The problem now is that we’re rapidly expanding our ability to do tests. Various speakers pointed to data sources as diverse as gene expression chips and the Sloan Digital Sky Survey, which provide tens of thousands of individual data points to analyze. At the same time, the growth of computing power has meant that we can ask many questions of these large data sets at once, and each of these tests increases the prospects that an error will occur in a study; as Shaffer put it, “every decision increases your error prospects.” She pointed out that dividing data into subgroups, which can often identify susceptible subpopulations, is also a decision, and increases the chances of a spurious result….
It’s pretty obvious that these factors create a host of potential problems, but Young provided the best measure of where the field stands. In a survey of the recent literature, he found that 95 percent of the results of observational studies on human health had failed replication when tested using a rigorous, double-blind trial. So, how do we fix this?
The consensus seems to be that we simply can’t rely on the researchers to do it. As Shaffer noted, experimentalists who produce the raw data want it to generate results, and the statisticians do what they can to help them find those results. The problems are well recognized within the statistics community, but its members are loath to engage in the sort of self-criticism that could make a difference. (The attitude, as Young described it, is “We’re both living in glass houses, we both have bricks.”)
To me, the central problem appears to be that few scientists understand statistics and probability well enough to use them credibly. The widening gap between econometric models and the performance of the real economy, combined with situations like the recent revelation that many, if not most, genetic studies purporting to show natural selection were based entirely on false positives, highlights the importance of performing actual science according to the scientific method rather than substituting a statistical derivative and passing it off as science.
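The false-positive failure mode in those genetic studies is easy to reproduce in miniature. A hedged sketch (a toy simulation of my own, not a reconstruction of any particular study): under the null hypothesis a p-value is uniformly distributed, so scanning thousands of genes with no real signal still yields hundreds of “significant” hits at p < 0.05.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

n_genes = 10_000
# Under the null hypothesis, each test's p-value is uniform on [0, 1].
p_values = [random.random() for _ in range(n_genes)]
hits = sum(p < 0.05 for p in p_values)
print(f"{hits} of {n_genes} null genes look 'significant' at p < 0.05")
```

Roughly 500 of the 10,000 genes clear the threshold despite there being nothing to find; without a multiple-testing correction, every one of them could be written up as evidence of selection.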
Like logic and philosophy, statistical analysis is informative and useful, but it is not intrinsically science.