I recently read an article on Forbes about a forthcoming JAMA article about screening test uncertainty and invasive medical procedures [Link]. In the study [Study Link], the researchers gave 727 men hypothetical PSA results for prostate cancer screening. The control group was not given a PSA result. Those who were not in the control group were given one of three outcomes, leading to four groups:
- Normal PSA test
- No PSA test (control group) – no uncertainty
- Inconclusive PSA test – high uncertainty
- Elevated PSA test
Men in each group were asked if they would pursue a biopsy, an invasive and expensive medical procedure compared to the PSA test. Groups 2 and 3 are similar in that neither provides conclusive evidence of cancer. However, group 3 has more uncertainty.
Proportion of patients who opted for a biopsy:
- Normal PSA test – 13%
- No PSA test (control group) – 25%
- Inconclusive PSA test – 40%
- Elevated PSA test – 62%
The issue here is that an inconclusive test gives the same information as doing no test at all, yet those with an inconclusive test want to get a biopsy at a higher rate. In the study, the biopsies are requested by the patients, but in real life, doctors often turn inconclusive tests into expensive and invasive medical procedures.
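To make the "same information" point concrete, here is a minimal Bayes' rule sketch in Python. The prevalence and the likelihoods of an inconclusive result are made-up numbers for illustration only: when an inconclusive result is equally likely for men with and without cancer, the posterior probability of cancer equals the prior, so the result adds nothing. (Whether that equal-likelihood assumption actually holds for PSA tests is a fair question – see the comments below.)

```python
# Minimal Bayes' rule sketch (illustrative numbers, not clinical estimates).
def posterior(prior, p_result_given_cancer, p_result_given_no_cancer):
    """P(cancer | test result) via Bayes' rule."""
    numerator = p_result_given_cancer * prior
    return numerator / (numerator + p_result_given_no_cancer * (1 - prior))

prior = 0.10  # assumed prevalence in the screened population

# If an inconclusive PSA result is equally likely with or without cancer,
# the posterior equals the prior -- the test added no information.
print(posterior(prior, 0.30, 0.30))  # 0.10, same as the prior

# If an inconclusive result were more likely among men with cancer,
# it would carry some information after all.
print(posterior(prior, 0.40, 0.25))  # about 0.15, above the prior
```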
I had a few other reactions to the issues associated with screening for disease.
With limited resources, inexpensive screening in theory helps to keep health care costs down. A cheap test is supposed to weed out the part of the population that does not need a more invasive test. If disease cannot be ruled out, it may make sense to retest. Of course, the issue here is that biopsies are chosen at different rates by patients who have effectively the same information. The results of this study suggest that screening for disease starts a process that is hard to turn off – screening could result in more men being biopsied instead of fewer. This isn’t just an issue with PSA tests.
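As a back-of-the-envelope illustration of that last point, the sketch below combines the per-group biopsy rates from the study with an assumed (made-up) split of PSA outcomes across normal, inconclusive, and elevated results. Under that assumption, the overall biopsy rate with screening comes out higher than the 25% observed with no test at all.

```python
# Back-of-the-envelope sketch: overall biopsy rate with screening vs. no test.
# Biopsy-choice rates per group are taken from the study above; the split of
# PSA outcomes is an assumed, illustrative distribution, not data.
biopsy_rate = {"normal": 0.13, "inconclusive": 0.40, "elevated": 0.62}
outcome_share = {"normal": 0.55, "inconclusive": 0.30, "elevated": 0.15}  # assumption

with_screening = sum(outcome_share[k] * biopsy_rate[k] for k in biopsy_rate)
print(f"biopsy rate with screening: {with_screening:.1%}")  # roughly 28%
print("biopsy rate with no test:   25.0%")                  # control group
```

Different assumed splits give different answers, of course, but it does not take an extreme outcome distribution for screening to produce more biopsies rather than fewer.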
I also found it interesting that 13% of the population apparently refused to be weeded out.
Humans do not do a good job of figuring out how to effectively use resources to screen for and manage disease. And as we see in this study, humans do not always make better decisions with better information. This is why we need good models of disease and its treatment. With those models in place, we can explore how to effectively target limited healthcare resources at the patients who most need them.
This is a growing area for operations research – identifying how to make good decisions at the patient level: when to do more testing and when to take a wait-and-see approach. For slow-growing cancers such as prostate cancer, a wait-and-see approach may be a good one most of the time (disclaimer: for what it’s worth, I’m not a medical doctor). Waiting and retesting was not an option in the study; if it had been, maybe the results would have been different. A wait-and-see approach is often used for other types of cancers, such as for pre-cancerous lesions that could lead to cervical cancer.
For more reading about screening policies, inaccurate screening tests, and unnecessary treatment, read this NY Times article about breast cancer: [Link] Breast cancer can be quite aggressive, especially when it affects younger people, and it is common. I’ll admit that delaying or avoiding mammography is hard to fathom. The article highlights how aggressive mammography policies have led to the discovery of more cancerous and pre-cancerous lesions (and thus more cancer survivors) but have not led to higher cancer survival rates. Other issues such as self-exams are also discussed.
I’m looking forward to learning about the latest research in this area at the INFORMS Healthcare Conference in Chicago this summer [Link]. I hope to see many of you there.
April 29th, 2013 at 5:36 pm
“The issue here is that an inconclusive test gives the same information as doing no test at all, …” Does it? Is the probability of an inconclusive PSA result equal for positive and negative patients?
May 1st, 2013 at 1:09 am
The situation is pretty discouraging. As measurement technologies improve, the rate of detected “anomalies” in the human body goes way up, especially in the first decade when there is little understanding of “normal” test results. The fad for whole body scanning was an extreme case of misunderstanding the implications of Bayesian statistics.
Second, I wonder about the role of OR here. Aren’t the issues mostly psychological, much more than factual? The article on breast cancer screening was very good on that point – fear of cancer has been raised by incessant “public service” advertising, and marketing by Komen and others. The usual OR toolbox has little to say about this. (To put this another way, the technical analysis of decision trees for different test results is pretty straightforward. As technology and historical data change, the trees need to be updated, but does this require much OR?)
If we had an OR toolbox for helping people truly internalize the implications of prior and posterior probabilities, we could be more useful! I’ll state the usual disclaimer: researching such methods would be very hard to publish in an OR journal.
May 1st, 2013 at 11:46 am
@prubin73, that was my first reaction, too. It sounds like they explained the situation well to the study participants, but it’s hard to imagine that the participants didn’t use outside knowledge to bias their reactions.
@Roger. I think this is why we really need OR. Medical math modeling can be used to construct good policies that help justify why doctors and patients should deviate from their instincts. Politicians seem to derail this at every turn – think about the debate about mammography a few years back, when Congress overruled new policies to cut back on too many mammograms. (The problem here is that good OR cannot always help someone get reelected!)