Many data science companies aim to leverage machine learning to personalize healthcare. The problem is that probability distributions of the potential causes of the patients' ailments do not create value for healthcare providers. Optimization does. But only when it can handle uncertainty. Can you?
Note: This is a puzzle and not a real case. The situation below is made up and is not based on real data.
AI Health is a startup in DS City. They have created a machine learning model that provides diagnoses based on a natural language description of the symptoms, electronic health records (EHR), and genetic markers. Unfortunately, no healthcare provider is buying their product. The issue is that the diagnoses provided by AI Health carry more uncertainty than human doctors' assessments after conducting additional tests.
AI Health now wants to overhaul its product. Instead of providing a personalized diagnosis for the patient, they want to provide a personalized test and treatment plan. As this is an actionable insight, it would create tangible value for healthcare providers.
For the optimization, the following data is available:
- The list of the potential causes of the patient's ailments, with a personalized probability of each cause based on EHR, genetics, and symptoms.
- A list of tests that can be conducted, each with a cost.
- A list of treatments that can be prescribed.
- A matrix of costs for each pair of causes and initially prescribed treatments.
- A matrix of probabilities for positive test results for each pair of tests and causes.
Based on this data, AI Health wants to provide a personalized test and treatment plan in the form of a tree. Each inner node is associated with a test to be conducted. Depending on the test outcome, we follow the left (negative test) or the right (positive test) branch in the tree. Leaf nodes are then associated with a prescription for the initial treatment.
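Such a tree can be stored compactly as an implicit binary heap: node 1 is the root, and the children of node i are 2i (negative result) and 2i+1 (positive result). The sketch below walks one patient down a hypothetical plan; the node layout matches the article's tree, but the specific tests and treatments at each node are illustrative, not the optimized plan.

```python
# Sketch: a test/treatment tree stored as an implicit binary heap.
# Node 1 is the root; the children of node i are 2*i (negative result)
# and 2*i + 1 (positive result). The example plan is illustrative only.

def walk(tests, treatments, results, layers=2):
    """Follow test outcomes down the tree and return the leaf treatment.

    tests: dict inner node -> test name
    treatments: dict leaf node -> treatment name
    results: dict test name -> True (positive) / False (negative)
    """
    node = 1
    for _ in range(layers):
        node = 2 * node + (1 if results[tests[node]] else 0)
    return treatments[node]

# Hypothetical plan: root tests Theanos; its children test Radiol and DiaFate.
tests = {1: "Theanos", 2: "Radiol", 3: "DiaFate"}
treatments = {4: "meds", 5: "surgery", 6: "rehab", 7: "surgery"}
print(walk(tests, treatments, {"Theanos": False, "Radiol": True}))  # surgery
```

With this indexing, the leaf reached after `layers` tests lies in the range `[2**layers, 2**(layers+1))`, which is exactly how the Seeker model at the end of this article addresses its nodes.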
They show you the example of a patient with lower back pain. The system they already built lists four potential causes and their probabilities for this patient: arthritis (75%), a ligament strain (10%), a genetic skeletal deformation (10%), or a ruptured spinal disc (5%). There are five tests that can be conducted: Arthro ($500), Theanos ($10), DiaFate ($270), URFLT ($280), and Radiol ($300). For each potential cause and test, the matrix below gives the probability (in percent) that the test will come back positive.
You note that no test is perfect and that the false positive and false negative rates vary with the true medical issue. For example, when the Theanos test comes back positive, you can say with certainty that the patient does not have arthritis, but there is only a 50/50 chance that the patient has a strained ligament.
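That 50/50 figure is a Bayes update: multiply each prior by the probability that the cause produces a positive Theanos result, then renormalize. The priors below are from the article, as are P(positive | arthritis) = 0 and P(positive | ruptured disc) = 20%; the remaining two positive-test probabilities are hypothetical values chosen so the numbers work out to the stated 50/50.

```python
# Bayes update after a positive Theanos result. Priors are from the
# article; P(pos | arthritis) = 0 and P(pos | ruptured disc) = 0.20 are
# stated in the text. The other two entries are HYPOTHETICAL.
priors = {"arthritis": 0.75, "ligament strain": 0.10,
          "skeletal deformation": 0.10, "ruptured disc": 0.05}
p_pos = {"arthritis": 0.00, "ruptured disc": 0.20,
         "ligament strain": 0.40, "skeletal deformation": 0.30}  # last two hypothetical

evidence = sum(priors[c] * p_pos[c] for c in priors)
posterior = {c: priors[c] * p_pos[c] / evidence for c in priors}
print(posterior)  # arthritis drops to 0; ligament strain rises to 0.5
```

Note how a single cheap test can eliminate the most likely cause entirely, which is why test *ordering* matters so much in the plan.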
As for the costs of treatment, the following matrix gives the total treatment costs (in dollars) if a (real) cause is initially treated with one of three available treatments: medication, rehabilitation, or surgery.
These costs reflect the total cost of treatment, including the recovery from initial misdiagnoses and mistreatments. You note that the costs are not symmetric. A strained ligament treated with meds simply incurs the cost of rehab ($1,500) plus the cost of initially prescribed (useless) meds ($1,000). However, arthritis treated with rehab incurs more than the cost of the pharmaceutical treatment plus the initially prescribed rehab, possibly because a delayed medical treatment requires higher dosages or more expensive medicine. You note that the cost of initial mistreatment can be very high. In the case of a ruptured disc that is initially treated with rehab, the costs skyrocket to $14,500, or $10,000 more than the cost of the operation plus the cost of the erroneously prescribed rehab.
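The arithmetic in that paragraph pins down a few matrix entries. Taking the text literally: meds cost $1,000 and rehab costs $1,500, so a strained ligament first treated with meds totals $2,500, and the $14,500 ruptured-disc figure implies a surgery cost of $3,000 for that cause. The check below derives these numbers; it is a reading of the prose, not the full cost matrix.

```python
# Checking the cost anatomy described above. Treatment prices are taken
# from the text: meds $1,000, rehab $1,500; the surgery cost is derived.
meds, rehab = 1_000, 1_500

# Strained ligament first treated with (useless) meds:
strain_with_meds = meds + rehab
print(strain_with_meds)  # 2500

# A ruptured disc first treated with rehab costs $14,500, which the text
# says is $10,000 more than the operation plus the wasted rehab. A literal
# reading therefore puts the surgery cost for a ruptured disc at:
surgery = 14_500 - 10_000 - rehab
print(surgery)  # 3000
```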
AI Health asks you to provide a test and treatment plan for this patient, in the form of a tree, that minimizes the total expected costs: the sum of the expected test costs and the expected treatment costs.
For each patient, the healthcare provider does not want to conduct more than three tests, and all tests on the path from the root to each leaf must be different (in the tree as a whole, you may have duplicate tests and also more than three different ones). This is because the false positive and false negative rates of the tests are systemic and not caused by a faulty application of the test, which is why repeating a test does not increase our confidence in the result.
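This path constraint is easy to check mechanically for any candidate plan. The sketch below walks up from each leaf to the root and flags any repeated test on the way; node indexing follows the heap layout (children of i are 2i and 2i+1), and the two example plans are illustrative.

```python
# Sketch: verify that a candidate plan never repeats a test on any
# root-to-leaf path. Node names and the example plans are illustrative.

def paths_are_valid(tests, layers):
    """tests maps inner node -> test name; children of i are 2*i and 2*i+1."""
    leaves = range(2 ** layers, 2 ** (layers + 1))
    for leaf in leaves:
        node, seen = leaf // 2, set()
        while node >= 1:            # climb from the leaf's parent to the root
            if tests[node] in seen:
                return False        # same test twice on one path
            seen.add(tests[node])
            node //= 2
    return True

ok = {1: "Theanos", 2: "Radiol", 3: "Radiol"}    # duplicates across branches are fine
bad = {1: "Theanos", 2: "Theanos", 3: "Radiol"}  # repeats Theanos on a path
print(paths_are_valid(ok, 2), paths_are_valid(bad, 2))  # True False
```

The Seeker model below enforces the same rule with `env.enforce` over each node and its ancestors.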
Can you provide such a cost-efficient test and treatment plan?
Are you confident that your plan is cost-optimal? If you are not, maybe think some more? You can also visit https://colab.research.google.com/ and build a Seeker model to help you find a good plan.
If you are confident that your plan is optimal, can you assess the risk of complications (any treatment cost exceeding $3,500 is considered a complication)? How would you change your plan if it is okay to have expected total costs of up to $2,000 but the risk of complications must be minimized?
One plan with low expected costs is this:
The expected costs of this test and treatment plan are $1,746, with expected test costs of $308 and expected treatment costs of $1,437. Did you find a better one?
Real-world optimization problems require us to handle uncertainty in data estimates, forecasts, or, as in this case, action outcomes. And yet, most optimization solvers treat uncertainty as an afterthought, thereby severely limiting the ability of practitioners to provide robust decision support. This puzzle perfectly illustrates why looking at a few dozen or even hundreds of 'scenarios' is not enough.
The density of the treatment costs is plotted below (the Seeker model we used to create this analysis is provided at the very end of this article).
The density of complications is so low that we need to zoom in to see the curve.
Note the tiny hump at $14,500. These patients must have a ruptured disc (5%) but receive rehab as the initial treatment, which is only possible if the Radiol test came back negative (1%) and the Theanos test came back positive (20%). The chance of seeing these patients is therefore 5% x 1% x 20% = 1/10,000. And yet Seeker sees these cases and takes them into account in its optimization!
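This is precisely why a few hundred scenarios are not enough: a 1-in-10,000 event needs on the order of 100,000 samples before it reliably shows up at all. The quick Monte-Carlo sketch below uses the same sample count as the Seeker model.

```python
# The text multiplies out the rare branch: 5% x 1% x 20% = 1/10,000.
# With 100,000 samples (the count the Seeker model below uses), we expect
# to see roughly 10 such patients; with a few hundred samples, usually none.
import random

random.seed(0)
p = 0.05 * 0.01 * 0.20
hits = sum(random.random() < p for _ in range(100_000))
print(p, hits)  # p = 1e-04; around 10 hits expected in 100,000 draws
```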
For this test and treatment plan, Seeker assesses the risk of complications at 833/100,000. This implies that roughly 0.8% of patients would encounter complications, potentially very severe complications, as in the example above. We asked Seeker to reduce this risk while keeping the expected costs within $100 of the cost-optimized plan. The result is this revised test and treatment plan:
Its expected total cost is only $65 more than the original plan's, but the risk of complications drops to 194/100,000. $65 for 75% less risk: that is the power of Seeker.
def ttplan(pID, uID):
    # skr refers to the Seeker library (imported elsewhere)
    env = skr.Env("license.sio", pID, uID, stochastic=True)
    env.set_stochastic_parameters(int(1e5), 0)
    (numberOfTests, nodes, priorTests, numberOfDiseases,
     diseasePrior, numberOfTreatments, layers, testProbabilities,
     testCosts, treatmentCosts) = createInstance()
    # decision variables: one test and one treatment per tree node
    prescribedTests = [env.categorical(0, numberOfTests - 1)
                       for _ in range(nodes)]
    treatments = [env.categorical(0, numberOfTreatments - 1)
                  for _ in range(nodes)]
    # value numberOfTests-1 marks "no test"; children of a no-test node
    # are forced to no-test as well
    phaenTests = [env.convert(numberOfTests - 1), prescribedTests[1]]
    for i in range(2, nodes):
        phaenTests.append(env.if_(phaenTests[i // 2] == numberOfTests - 1,
                                  numberOfTests - 1,
                                  prescribedTests[i]))
    # enforce no double-testing on any root-to-leaf path
    for i in range(2, nodes):
        noTest = env.eq(phaenTests[i], numberOfTests - 1)
        for pr in priorTests[i]:
            neq = env.neq(phaenTests[i], phaenTests[pr])
            env.enforce(env.or_([noTest, neq]))
    # stochastic data: sample the true disease and the test outcomes
    diseaseProb = env.discrete_uniform(0, 99)
    disease = env.index(diseaseProb, diseasePrior)
    dice = [env.discrete_uniform(0, 99) for _ in range(layers)]
    # derived terms: follow the sampled outcomes down the tree
    indexOnLayer = [env.convert(1)]
    testOnLayer = [phaenTests[1]]
    treatment = 0
    for k in range(1, layers + 1):
        indexOnLayer.append(2 * indexOnLayer[k - 1]
                            + (dice[k - 1] < env.index(disease,
                                                       testOnLayer[k - 1],
                                                       testProbabilities)))
        if k < layers:
            testOnLayer.append(env.index(indexOnLayer[k], phaenTests))
        else:
            treatment = env.index(indexOnLayer[k] - nodes, treatments)
    diagnosticCostsOnLayer = [env.index(testOnLayer[k], testCosts)
                              for k in range(layers)]
    costOfTreatment = env.index(treatment, disease, treatmentCosts)
    diagnosticCosts = env.sum(diagnosticCostsOnLayer)
    # objective: expected total costs
    totalCosts = costOfTreatment + diagnosticCosts
    expTotalCosts = env.aggregate_mean(totalCosts)
    # cost optimization
    env.set_restart_likelihood(0.05)
    env.minimize(expTotalCosts, 120)
    print("costs", expTotalCosts.get_value())
    print("Tests", [t.get_value() for t in phaenTests])
    print("Treatments", [t.get_value() for t in treatments])
    print(env.get_number_evaluations())
    # constrain expected costs to within $100 of the optimum
    costBound = 100 + expTotalCosts.get_value()
    env.enforce_leq(expTotalCosts, costBound)
    print("Cost bound set to", costBound)
    # assess risk: relative frequency of treatment costs of $3,500 or more
    risk = env.aggregate_relative_frequency_geq(costOfTreatment, 3500)
    env.evaluate()
    print("Risk ingoing", risk.get_value())
    # minimize the risk of complications under the cost bound
    env.minimize(risk, 120)
    print("Tests", [t.get_value() for t in phaenTests])
    print("Treatments", [t.get_value() for t in treatments])
    print(env.get_number_evaluations())
    print("costs", expTotalCosts.get_value())
    print("risk", risk.get_value())
    return
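For readers without a Seeker license, the same evaluation (not the optimization) can be sketched in plain Python: fix a plan, sample the disease and test outcomes, and estimate expected cost and complication risk. The structure below mirrors the model above (heap-indexed tree, layered simulation), but all matrices in the usage example are hypothetical stand-ins, not the article's data.

```python
# Plain-Python Monte-Carlo analogue of what the Seeker model computes for
# one FIXED plan: expected total costs and complication risk. The data in
# the usage example is HYPOTHETICAL; only the structure matches the model.
import random

def evaluate_plan(tests, treatments, priors, p_pos, test_costs,
                  treat_costs, layers, samples=100_000, seed=1):
    exp_cost, complications = 0.0, 0
    rng = random.Random(seed)
    causes = list(priors)
    weights = [priors[c] for c in causes]
    for _ in range(samples):
        cause = rng.choices(causes, weights)[0]   # sample the true disease
        node, cost = 1, 0.0
        for _ in range(layers):                   # walk down the tree
            t = tests[node]
            cost += test_costs[t]
            positive = rng.random() < p_pos[t][cause]
            node = 2 * node + (1 if positive else 0)
        c_treat = treat_costs[cause][treatments[node]]
        cost += c_treat
        exp_cost += cost
        complications += c_treat > 3_500          # complication threshold
    return exp_cost / samples, complications / samples

# Degenerate sanity check (single certain cause, tests always negative):
tests = {1: "A", 2: "B", 3: "C"}
treatments = {4: "meds", 5: "rehab", 6: "rehab", 7: "surgery"}
priors = {"arthritis": 1.0}
p_pos = {t: {"arthritis": 0.0} for t in "ABC"}
mean, risk = evaluate_plan(tests, treatments, priors, p_pos,
                           {"A": 500, "B": 10, "C": 270},
                           {"arthritis": {"meds": 1_000}}, layers=2)
print(mean, risk)  # 1510.0 0.0
```

With the article's real matrices plugged in, this evaluator could be wrapped in any black-box search to reproduce the analysis, though without Seeker's handling of the rare scenarios discussed above.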