Apparently, when the University of Illinois at Urbana-Champaign made its plans to return to in-person classes this fall, it based some of that planning on epidemiological models constructed especially for the purpose. This included a test, trace, and isolate plan which the models indicated would be effective at preventing community spread of Covid-19 on campus.
UIUC had predicted a rise in COVID-19 cases when students began moving into dorms last month, but thought it would be able to identify, isolate and snuff out the virus with the school’s frequent testing and quarantine requirements. UIUC has performed about 182,060 saliva tests since the technology was unveiled in early July.
The UIUC testing program was so massive that when the full student population was present it accounted for more than 1.5% of all testing in the United States.
But now UIUC is reporting over 700 people testing positive for Covid-19, and the school is getting some grief because the models were not created by epidemiologists, but by a pair of physicists who had supposedly disparaged epidemiology as “not intellectually challenging” (or maybe “thrilling”).
But the UIUC decision to use physicists as modelers is getting some support from a surprising source: epidemiological modelers.
One modeler’s thoughts on the UIUC thing. Because while there are some valid criticisms to be had here, I think there’s also some unfair aspects running around #epitwitter. So here we go – a long thread.
— Eric Lofgren (@GermsAndNumbers) September 4, 2020
Lofgren goes on to argue, basically, that
- Plenty of epidemiologists don’t know how to build models.
- Epidemiological modeling has a lot in common with other kinds of modeling.
- The physicists building the models consulted with epidemiologists.
- It’s not surprising that people who chose to work on one science (physics) weren’t excited about working on another (epidemiology), but that doesn’t mean they didn’t do a good job.
So what went wrong? The short answer is that while the models may be simple, reality is really complex and weird.
Long ago, I used to write software for a company that built train simulators that we sold to railroads for training their engineers. I remember that one of our customers wanted us to prove that the models used in our simulator were accurate. Our modeling teams tried explaining how the models were based on scientific studies of train behavior and engineering data about specific components, but the customer wanted us to test our simulator against a real train on real track.
Having dealt with this request before, our modelers knew how to respond. They patiently listed out all the data the railroad would have to provide from their real train for us to set up the simulation model to match:
- The number of locomotives and cars in the train.
- The exact model of every locomotive and the options installed.
- The length and weight of every car.
- The amount of slack in the coupling between the cars.
- The exact model of brake valve used on every car.
- Were all those brake cylinders working? (It’s almost certain that in any long train, some of the brakes on some of the wheels aren’t actually doing anything useful.)
- What’s the coefficient of dynamic friction for the brakes on each car? (i.e., how worn out are they?)
- Locomotives use a diesel engine to generate electricity to run electric traction motors on each axle. Were all those motors working at full capacity or were some of them old and in need of service? Were you even sure they were all actually working? (More likely to be working than all the brakes, but it’s not unheard of for traction motors to be out.)
- The grade (percentage of incline) of every piece of track over which the train would be running.
- The degree of curvature and superelevation (tilt of the track from one side to the other) of every piece of track.
- Some measure of the quality of the track (I can’t remember how we did that).
- Were any parts of the track wet?
- Video of the engineer operating the control console, so we could get timing data for each control input.
And so on. The request for whole-train validation was unceremoniously dropped.
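Just to give a flavor of how those inputs pile up, here’s a toy force balance for a single car. To be clear, this is nothing like our actual simulator code; the structure, names, and numbers below are all invented for illustration, and it ignores most of what the list above asks for (slack action, brake-pipe delays, curve resistance):

```python
# A toy force balance for one car. Not our simulator's code; every
# name and number here is invented for illustration.
from dataclasses import dataclass

G = 9.81  # gravitational acceleration, m/s^2

@dataclass
class Car:
    mass_kg: float         # loaded weight of this particular car, kg
    brake_force_n: float   # nominal brake force at full application, newtons
    brakes_working: bool   # some cylinders aren't doing anything useful
    brake_friction: float  # dynamic friction coefficient (i.e., wear)
    grade_pct: float       # grade of the track under this car right now

def net_force(car: Car, traction_n: float, brake_application: float) -> float:
    """Net longitudinal force on one car, in newtons.

    brake_application runs from 0.0 (released) to 1.0 (full).
    Slack action, curve resistance, and brake-pipe propagation
    delays are all ignored here; the real models were not so kind.
    """
    brake = 0.0
    if car.brakes_working:
        brake = car.brake_force_n * brake_application * car.brake_friction
    # Grade resistance: the component of the car's weight along the track
    # (the small-angle approximation is fine for railroad grades).
    grade_resistance = car.mass_kg * G * (car.grade_pct / 100.0)
    return traction_n - brake - grade_resistance

# Even this cartoon needs several honest numbers per car, and a real
# freight train can have more than a hundred cars.
car = Car(mass_kg=120_000, brake_force_n=85_000, brakes_working=True,
          brake_friction=0.35, grade_pct=1.2)
print(net_force(car, traction_n=0.0, brake_application=0.5))  # braking downhill
```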
Accurate modeling of train control systems and in-train physics can be surprisingly complicated. I don’t know a lot about epidemiological models, but I do know that they involve humans, so I’m guessing they are modeling a real world which is even more complicated than a train. That certainly seems to be the problem at UIUC:
But Nigel Goldenfeld, a physics professor who helped the school with modeling, said UIUC’s predictions did not take into account the level of noncompliance seen among students in recent weeks. The models did assume that some students would party, go to bars and fail to wear masks.
“What is not in the models is that students would actually fail to isolate,” he said. “That they would not respond to methods to reach them by (the public health department). That they would go to a party even if they knew they were COVID positive, or that they would host a party when they were COVID positive.”
It’s tempting to laugh at them for not realizing that college students would be irresponsible, but they did account for irresponsibility and include it in the model. They just underestimated how irresponsible the students would be. Building the model is the easy part. It’s coming up with the inputs to the model that is so difficult.
This follows a pattern we’ve seen throughout attempts to model the Covid-19 pandemic: The spread of the disease itself follows some simple rules of physics and biology which can to some extent be captured in a dynamics model. The behavior of human beings…that’s a lot more complicated and unpredictable. It’s the people, not the disease, that make epidemiological modeling of this pandemic so hard to get right.
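To make that concrete, here’s the simplest textbook epidemic model, an SIR model. This is emphatically not the UIUC model (theirs was far more detailed); every parameter below is invented, including the mapping from “compliance” to contact rate. The point is only how hard the output swings when you nudge an assumption about behavior:

```python
# A textbook SIR model. This is NOT the UIUC model; the parameters are
# invented to illustrate sensitivity, not fitted to Covid-19 or UIUC.
def sir_peak(beta: float, gamma: float = 0.1, n: int = 50_000,
             i0: int = 10, days: int = 120) -> float:
    """Run a discrete-time SIR epidemic, return peak concurrent infections.

    beta  = infectious contacts per person per day (driven by behavior)
    gamma = recovery rate per day (driven by biology)
    """
    s, i, r = n - i0, float(i0), 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# The dynamics above are the easy part. The hard part is beta, which is
# really a guess about how people behave. Here's a purely hypothetical
# mapping from "fraction of risky contact students avoid" to beta:
for compliance in (0.9, 0.7, 0.5):
    beta = 0.05 + 0.5 * (1 - compliance)
    print(f"compliance={compliance:.0%}  peak infections={sir_peak(beta):,.0f}")
```

Three guesses about student compliance that all sound plausible going in, three wildly different epidemics coming out. The dynamics were never the hard part.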