I am a fan of failure, and I encourage students to try new things and make mistakes when doing OR. I never seem to learn much from playing it safe (and neither does anyone else, for that matter). However, as someone who does computational work, I am a little uncomfortable with errors that could be a matter of life or death. A blog entry by Dr. Wes at Evanston Northwest Healthcare in Illinois makes an argument in favor of medical errors that doesn't entirely sit well with me, even though I applaud the basic premise. He disputes the claims that most medical errors can be prevented and that medical errors are a leading cause of death (see the WebMD article here).
However, I ultimately agree with Dr. Wes’s assessment. He writes:
[M]edical errors serve as an invaluable resource and irreplaceable learning tool for our housestaff, physician attendings and nurses. For instance, most medical school and hospital medical and surgical programs are required to have “Morbidity and Mortality” conferences as part of their ongoing training curricula. Here, surgical mistakes and deaths are reviewed critically by scores of those involved in a patient’s care.
So, if the system works right, each medical error could prevent a hundred others from making the same mistake.
Requiring mistakes to be discussed at medical conferences is a nifty idea. We don't do enough of this in OR. When our modeling goes awry, it usually means our code runs slower, not that a life is in jeopardy. And the peer review system for journal publication is mainly an outlet for publicizing our successes, not for discussing our failures. Maybe I should embrace this opportunity and discuss OR mistakes in the classroom a little more often.
December 6th, 2007 at 1:06 pm
This is an excellent topic and idea. I remember reading one article in Interfaces about a failed or “partially successful” OR implementation. It was one of the more interesting articles I’ve seen.
I am just completing my first semester in graduate school and I would welcome the opportunity to learn from others’ mistakes.