I have to agree that statistics can be used to prove or disprove lots of things.
One example that comes to mind is the statistic quoted for soldiers when they first started using steel helmets.
The stats showed that after the introduction of steel helmets the number of head injuries rose, which could mislead you into believing the helmets were a bad thing.
In reality, the soldiers who received head injuries whilst wearing their new helmets would previously have been killed without them, but the death rates were a different set of statistics.
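To make the point concrete, here is a minimal sketch in Python with entirely made-up numbers (the hit counts and probabilities are illustrative assumptions, not historical data): soldiers who would have died without a helmet survive with one and are therefore recorded as head injuries, so the injury count rises even though deaths fall.

import random

random.seed(1)

def simulate(n_hits, p_fatal_unprotected, p_helmet_saves, helmets_issued):
    # Count deaths and recorded head injuries among n_hits soldiers struck on the head.
    deaths, injuries = 0, 0
    for _ in range(n_hits):
        fatal = random.random() < p_fatal_unprotected
        if fatal and helmets_issued and random.random() < p_helmet_saves:
            fatal = False  # the helmet turns a would-be fatality into a survivable injury
        if fatal:
            deaths += 1
        else:
            injuries += 1  # survivors are the ones who show up in the head-injury statistics
    return deaths, injuries

# Hypothetical figures, chosen only to illustrate the effect.
print("without helmets:", simulate(10000, 0.40, 0.75, helmets_issued=False))
print("with helmets   :", simulate(10000, 0.40, 0.75, helmets_issued=True))

The recorded head injuries go up in the second run precisely because fewer of those hits are fatal.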
That is not a fault of the statistics; it is a fault of the research methodology. The "statistics" in this instance are simply a recording of what was observed; that the observations were inadequate or misinterpreted is not the fault of statistics, and may also reflect the understanding and type of research undertaken at the time [1].
Research is not just about the "results", the statistics; it is also about the prior literature, the theoretical model, the research design and the interpretation. If you read "exciting" research papers
you should see those elements coming through all the time. I suspect part of the problem is that we often "report" the outcomes, the conclusions, when what is really needed is to take those conclusions in the context of the full paper. Of course, with blog postings, forum postings, newspaper articles etc., publishing the full paper is not practical (or possible, e.g., copyright laws), so a reference to the source is important to allow those interested to go back and get the complete research picture.
Replicated research, meta-analysis and competent peer review are all key aspects of growing our knowledge, and that is why care is always, or should always be, taken with single studies. It is much smarter to look at the broader research for patterns and developments (you will often hear comments of this nature made in the context of medical research, for example, and for good reason).
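As a small illustration of what pooling across studies looks like, here is a minimal sketch of fixed-effect (inverse-variance) meta-analysis in Python; the five effect estimates and standard errors are hypothetical, not taken from any real studies.

import math

# Hypothetical (effect estimate, standard error) pairs from five made-up studies.
studies = [(0.42, 0.20), (0.10, 0.15), (0.35, 0.25), (0.28, 0.12), (0.05, 0.30)]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print("pooled estimate: %.3f, 95%% CI: %.3f to %.3f"
      % (pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se))

A single noisy study can point in almost any direction; the weighted pool is what reveals the pattern.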
Of course, proper and careful analysis of the published research papers should form part of that, rather than relying on Internet postings or newspaper articles and the like. That is one of the reasons I am always keen to go back to the actual research ... the references or source data.
[1] I am assuming this was research related to the First or Second World War. It is interesting, therefore, that Cohen, in his extensively referenced and, dare I suggest, seminal work on statistical power analysis, wrote in the preface to the original 1969 text:
"... I became increasingly impressed with the importance of statistical power analysis, an importance which was increased an order of magnitude by its neglect in our textbooks and curricula. The case for its importance is easily made: What behavioral scientist would view with equanimity the question of probability that his investigation would lead to statistically significant results, i,e., its power? And it was clear to me that most behavioral scientists not only could not answer this and related questions, but were even unaware that such questions were answerable. Causal observation suggested this deficit in training, and a review of a volume of the Journal of Abnormal and Social Psychology (JASP) (Cohen, 1962), supported by a small grant from the National Institute of Mental Health (M-517A), demonstrated the neglect of power issues and suggested seriousness." (Cohen, 1988, p. xix).
Cohen's observations suggest that research to the standards we are familiar with now did not extensively exist prior to 1969, so, to be fair to the researchers, their findings may at least in part reflect the research techniques of the time.
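For readers unfamiliar with the term, power is the probability that a study will detect an effect of a given size if that effect really exists. Here is a minimal sketch of the kind of calculation Cohen is referring to, for a two-group comparison; it uses a normal approximation (the exact calculation uses the noncentral t distribution, so these numbers are slightly optimistic for small samples), and the effect size (Cohen's d = 0.5, his "medium" benchmark) and sample sizes are purely illustrative.

from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    # Approximate power of a two-sided, two-sample comparison for a
    # standardized effect size d (Cohen's d), normal approximation.
    z_crit = norm.ppf(1 - alpha / 2)       # critical value for the two-sided test
    ncp = d * (n_per_group / 2) ** 0.5     # expected value of the test statistic
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

for n in (20, 64, 200):
    print("n = %3d per group, d = 0.5 -> power ~ %.2f" % (n, approx_power(0.5, n)))

With 20 per group the chance of a statistically significant result is only about one in three even though the effect is genuinely there, which is exactly the kind of situation Cohen was warning about.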
The Cohen (1988) reference is:
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Regards
Andrew