Tuesday, November 08, 2005

Scientific ethics begins with leadership

After reading a few of the posts at Dr. FreeRide's site, I've been thinking a bit about the topic of ethics in science. In one of his posts, Dr. FreeRide asks his readers what they would put in a course on ethics... I posted a short comment in response, but the more I've thought about it, the more I've felt that the question deserves a more detailed answer.

Ethics courses on scientific misconduct have never impressed me much. After a certain point in life, we either have a well-developed idea of what is right or wrong, or we don't. By graduate school it's a bit too late to change that... sure, we can instill the fear of God and the Dean by outlining the long list of punishments that will be dealt out to cheaters, but as any school teacher or police officer will tell you, the people most affected by that sort of speech are the ones least likely to break the rules anyway.

Fortunately, I think that most of those who enter graduate school have a determination to seek the truth... that is the job of a scientist, after all. Unlike most other fields, science is constantly cross-checking itself, publicising both its results and its methods for all to see. This creates an environment where cheating is difficult to sustain... eventually someone will try to repeat what you did, and if they can't replicate the experiment, questions will be raised.

There are a few high flyers who thumb their noses at the system... people like Luke Van Parijs, John Darsee, and Hendrik Schön. These 'scientists' blatantly fake data, draw graphs from scratch, change labels at a whim, and do so for no other apparent reason than self-promotion. They can do quite a bit of damage over their careers, as people waste time using or trying to repeat their results, but on the good side, they are often so obvious that it doesn't take long before they are discovered.

For some reason, people like these tend to get a lot of attention from scientific ethicists... I guess they hold the same sort of attraction for ethicists that serial killers hold for criminal psychologists. Who would you rather profile... Charlie Manson, mass murderer, or Joe Schmoe, robber of liquor stores? Unfortunately, it's the Joe Schmoes knocking over corner stores, rather than the Mansons of the world, who do more damage in the long run. Similarly, I would bet that it's the small-scale examples of scientific misconduct, rather than the high flyers, that waste the most time and resources. But are we really dealing with low-level scientific misconduct, or are we acting like 'Officer Friendly', telling the kids at school exactly what happens at a PMITA prison and hoping they will 'just say no'?

Creating data out of whole cloth is difficult, as van Parijs, Darsee, and Schön found out, much to their chagrin... on the other hand, massaging data is disturbingly easy. Outlying data points can be dropped. Unsatisfactory experiments can be 'replicated' until the desired results are obtained. Graphs can be 'cleaned up' to produce smoother curves. Judgement calls can be made that bias results one way or another. Experiments in molecular biology are rarely 'double blind' (how many researchers ask a lab mate to load their gel for them, just to be safe?), again allowing observer bias. Areas in photographs can be digitally enhanced to turn background into signal (or to remove unwanted signals).

The real question for scientific ethics isn't what makes a van Parijs or a Schön do what they do, but why an ordinary researcher working on an ordinary project feels it is acceptable to alter the data in order to produce better results. This is where we step away from the black-and-white examples taught in ethics class into the muddy world of real lab-bench science. There are going to be some researchers who alter data... even if only slightly, or in the grey area of judgement calls... for no better reason than career advancement. But there are also researchers, particularly in the bottom ranks (grad students, post-docs), who are going to see data massaging as a matter of self-preservation in the 'will I still have a job tomorrow' sense.

The field of science creates a uniquely perverse working environment in which low-level researchers, particularly grad students, are completely dependent upon their immediate supervisors and often have no recourse should unusual or extreme demands be placed on them. This is particularly true for foreign researchers, who face the additional demands of restrictive immigration laws, who are unable to claim unemployment benefits, and who often have to support family members who are not allowed to work. At one university I attended, it was said (only half-jokingly) that you could tell the personality and quality of a supervisor by the nationality of his graduate students... native citizens, after all, had the freedom to leave an undesirable supervisor.

More than most fields, science should be tolerant of failure... failure eliminates fruitless lines of enquiry, allowing us to focus our research, while the freedom to fail encourages the sort of risk-taking and experimentation that leads to great breakthroughs. Realistically, however, failure means that grants are not renewed, contracts are ended, and degrees go ungranted. Forty years ago, a freshly graduated PhD could look forward to his own laboratory and research program at a university. Now, you generally need several low-paying post-docs before you have even a chance at a permanent position... and if those degrees and post-docs aren't at the right universities, and don't produce enough papers, you may as well quit while you're still young enough to retrain.

It doesn't take a series of post-docs at Harvard to see that this creates an unhealthy environment that promotes low-level cheating... the sort that doesn't try to stand out, but which may mean the difference between an unproductive year and a mid-level publication. How much falsification actually takes place, I don't know, but I've seen some numbers floating around for biomedical science suggesting the amount is low, but not insignificant. (I suspect that more cheating occurs in biomedical science than in other fields, simply because the results are often fuzzier... small differences in blood serum components, or qualitative patient assessments of well-being. It's hard for an astronomer to fake the existence of a new star, or for a taxonomist to fake a new species, although in the latter case I have seen some really bad judgement calls that created new species where none really existed.)

So what does this have to do with teaching ethics? Whether a person crosses the line from ethical scientist to fraudulent scientist depends on whether they feel the benefits of cheating outweigh their own internal moral compass. Undue pressure to produce positive results, confirm a supervisor's hypothesis, or support a funding agency's preferred conclusion acts like a massive thumb on the ethical scale, pushing the researcher across that line. The ethics courses I have run across deal with only one side of the scale... the moral compass. They rarely deal with the larger picture of the work environment, or with how to cope with the pressures to alter results that a researcher may face. Instead, the ethicist comes off as 'Officer Friendly': strictly crime and punishment.

In addition to the basics of right and wrong, young researchers need to know how to deal with the pressure to produce, in order to avoid the temptation to cheat. This includes learning about university expectations for supervisors, grievance procedures, university standards for handling corporate funding of research, and even transferring academic skills from the university environment into the general workforce ('leaving academia'). It also means that universities have to set expectations for supervisor mentorship, provide effective ombudsmen to aid grad students and post-docs, publish clearly stated guidelines for accepting outside funding, and hold a view of the world that goes beyond the ivory tower. It also means that ethics classes have to be for professors, not just students. Good ethical behavior is learned not from lectures but from observing those in leadership positions. It's one thing for a supervisor to tell a student that pruning data is wrong. It's quite another for a supervisor to suggest that an unfavorable experiment be rerun until the desired result is obtained.

I hope that the level of fabrication is low (after all, I depend on other people's results for my own research), but I'm not naive, and I've seen and heard some things that have made me wonder. It would be good to see some solid data on why researchers fabricate their results, although that may be hard to obtain. I suspect most small-scale cheating that is discovered is quietly buried... after all, more than one reputation is on the line. Ultimately, a response that deals with only one side of the problem will be ineffective. We have seen how well the law-and-order approach has worked in the War against Drugs, the War against Poverty, and the War against Terror... without a two-pronged approach that deals not only with the symptoms but also with the underlying problems, all of these 'wars' have stagnated. The last thing we need is a similarly effective War against Fabrication. It's not like Nancy Reagan needs the work.