On Friday, Andrew Gelman of Columbia University (his blog: andrewgelman.com) was on an ethics panel at the University of Wisconsin-Madison. He spoke specifically about the ethics of publishing statistics research and of open data. He addressed a few topics you can read about in his first Chance column here [Link]. Responses to that article are here [Link]. He has another article about statisticians not practicing what they teach [Link].
As a statistician, I think the key point is to recognize that different analyses can give different perspectives on a data set. I am not suggesting that researchers be regularly subjected to forensic analyses of all their decisions in data collection and analysis, explaining every email exchange or every new version of a data set that had a transformation or data exclusion. But openness should be the norm.
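To make that point concrete, here is a minimal sketch with made-up numbers (the data, the exclusion rule, and the threshold are all hypothetical) showing how a seemingly defensible analysis choice, excluding an outlier, can meaningfully change a summary of the same data set:

```python
# Hypothetical illustration: the same data set can support different
# conclusions depending on analysis choices (here, outlier exclusion).
data = [2.1, 2.4, 1.9, 2.2, 2.0, 9.5]  # made-up measurements

# Analysis 1: use all the data.
mean_all = sum(data) / len(data)

# Analysis 2: apply a plausible-sounding exclusion rule,
# dropping values more than 3x the (lower) median.
median = sorted(data)[len(data) // 2 - 1]
trimmed = [x for x in data if x <= 3 * median]
mean_trimmed = sum(trimmed) / len(trimmed)

print(round(mean_all, 2))      # 3.35
print(round(mean_trimmed, 2))  # 2.12
```

Neither number is "the" answer; the point is that readers can only judge the choice if the exclusion rule is disclosed.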
Gelman mentioned that classroom case studies too often feature an extreme scenario with a clear Evildoer. These case studies are not good for helping students deal with the ethics of what he called “tough cases,” where there isn’t an obvious conclusion. The tough cases have nuances and shades of gray; extreme case studies, by contrast, can trivialize ethical issues, implying that anything is ethical if you align it a certain way (i.e., if you are a weasel).
Another interesting point was about doing bad statistics. Being incompetent isn’t unethical in itself, but if someone tells you that your conclusions cannot be supported and that you should redo your analysis, and you don’t, then your refusal to do better science can be unethical. Case in point: these ridiculous US Department of Transportation “forecasts.” Clearly bad science. The US DOT continued to use the forecasts years after its own data suggested they were way off.
Gelman pointed out that operations research often addresses a different set of ethical concerns: how to allocate scarce resources when there is not enough to go around. I’ll write more about that another time.
Please share ethics issues in the comments.