I am at the INFORMS Healthcare Conference in Rotterdam. Dimitris Bertsimas of MIT delivered the opening plenary, entitled “Personalized Medicine: A Vision for Research and Education.” He discussed research in operations research and healthcare analytics, as well as opportunities for analytics education. It was a great talk: Bertsimas argued that analytical methods can and should be used to make personalized medical decisions. He was frank and honest about some of the mistakes he made along the way, and those confessions were the best parts of the talk. I captured the talk outline in a picture.
Bertsimas claimed that data is often an afterthought in many models. I agree. His main takeaway, which generated a lot of questions, dealt with model transparency. Bertsimas stressed the need to make models transparent so that they can be adopted by physicians and healthcare service providers. He warned that models will be “dead on arrival” if they are not transparent. However, transparency can be a challenge with many machine learning methodologies, such as neural networks. He confessed that he learned the “hard way” that transparency is far more important than accuracy.
Side note: transparency is not just a sticking point with physicians. The New York City Police Department’s Domain Awareness System was a 2016 Edelman finalist, and its police officers also demanded model transparency. This limited the kinds of analytics that could be used within the tool, but the 30,000 police officers bought into the transparency, used the tool, and kept New York City safer.
Have you ever been required to sacrifice accuracy for transparency?
July 31st, 2017 at 11:05 am
Researchers at the University of Washington recently published some interesting work on explanatory models for arbitrary classifiers, including neural networks. It is a clever way to provide transparency for models that don’t readily provide insight into how they arrived at their predictions.
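The general idea behind this kind of model-agnostic explanation can be sketched in a few lines: perturb the inputs near the point you want explained, query the black-box model, and fit a simple linear surrogate to the responses. The surrogate's coefficients then tell you how sensitive the prediction is to each feature locally. Everything below is a hypothetical toy, not the researchers' actual method: `black_box` stands in for an opaque classifier, and the sampling radius and counts are arbitrary choices.

```python
import random

# Hypothetical stand-in for an opaque model (e.g., a neural network):
# the explainer only gets to call it, never to inspect it.
def black_box(x1, x2):
    return 1.0 if x1 * x1 + 0.5 * x2 > 1.0 else 0.0

def explain_locally(x1, x2, n=2000, radius=0.2, seed=0):
    """Fit a local linear surrogate around (x1, x2) by sampling
    perturbations and regressing the black-box outputs on them."""
    rng = random.Random(seed)
    d1, d2, ys = [], [], []
    for _ in range(n):
        a = rng.uniform(-radius, radius)
        b = rng.uniform(-radius, radius)
        d1.append(a)
        d2.append(b)
        ys.append(black_box(x1 + a, x2 + b))
    ybar = sum(ys) / n
    def slope(ds):
        # With independent perturbations, each regression coefficient
        # reduces to cov(perturbation, output) / var(perturbation).
        dbar = sum(ds) / n
        cov = sum((d - dbar) * (y - ybar) for d, y in zip(ds, ys)) / n
        var = sum((d - dbar) ** 2 for d in ds) / n
        return cov / var
    return slope(d1), slope(d2)

s1, s2 = explain_locally(1.0, 0.0)
```

Near the point (1.0, 0.0) the decision surface is much steeper in the first feature than the second, and the surrogate's coefficients reflect that: `s1` comes out several times larger than `s2`. That pair of numbers is the kind of transparent, feature-level explanation a physician could sanity-check, even though the underlying model stays a black box.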