Tag Archives: disasters

COVID-19 is a pandemic that requires systems thinking and solutions

I was on the INFORMS Resoundingly Human podcast to talk about COVID-19 and first responders. You can listen here:


In the podcast, I discuss supply chains, rationing resources, and disaster planning, and I note how everything old becomes new again. For example, the US is not experiencing its first N95 mask shortage. Systems concepts are important for understanding how to prepare for and respond to a pandemic.

In this post, I want to dig deeper into systems concepts. Below is a quick primer on systems thinking and an explanation of why systems concepts are important for understanding the COVID-19 pandemic.

What is a system?

A system is a set of things—people, vehicles, basketball teams, hospital beds, or whatever—interconnected in such a way that they produce their own pattern of behavior over time.

Here are three examples:

(1) A car is just a vehicle. A collection of cars can be a traffic jam.

(2) A single ventilator can be used to treat a patient. A hospital’s collection of beds and ventilators is available for treating patients. When a surge of patients requires these resources, patients may have to wait in a queue for these limited resources.

(3) An N95 mask protects first responders from infectious disease when they treat patients. A supply chain of personal protective equipment (PPE) can have delays and shortages, leading to first responders not having the N95 masks they need at any given moment.

How is systems engineering relevant to COVID-19?

COVID-19 is absolutely a medical challenge. It is also a systems challenge that requires systems thinking and systems solutions. In systems, decisions are not made in isolation; rather, decisions are interrelated.

My discipline is operations research: the science of making decisions using advanced analytical methods. Systems require a series of decisions to operate effectively with or without patient surges in a pandemic. Operations research provides the analytical tools required to design and operate systems more effectively and efficiently.

In systems there are many trade-offs and complicated interactions. Here are examples of how systems engineering is important now:

(1) If a first responder does not have adequate personal protective equipment (PPE) such as latex gloves and N95 masks, they are at higher risk of acquiring COVID-19. If they become infected, they will not be able to treat patients in the coming months, thereby reducing the number of first responders (a critical resource) in the future. This informs how responders should treat patients and ration resources now.

(2) Surges in COVID-19 cases may lead to more patients requiring ventilators than are available in hospitals. This could lead to rationing and painful choices that would not be considered without a patient surge.

Systems concepts will continue to be important in the future. Here is a third example:

(3) One person who gets a vaccine has immunity. If enough people receive vaccines or have immunity from previously having had the disease, we can achieve herd immunity and eliminate person-to-person transmission of the disease even among those who do not have immunity. With herd immunity, the benefits are greater than the sum of its parts.
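The arithmetic behind herd immunity can be sketched with the standard threshold formula 1 − 1/R0, a textbook epidemiology result (the R0 value below is illustrative, not a COVID-19 estimate):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a population that must be immune so that, on average,
    each infection causes fewer than one new infection."""
    return 1.0 - 1.0 / r0

# With an illustrative R0 of 2.5, roughly 60% of the population must be
# immune before person-to-person transmission dies out.
print(round(herd_immunity_threshold(2.5), 2))  # -> 0.6
```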

What can systems thinking tell us about the fatality rate for COVID-19?

It depends. We know that it depends on age, gender, and co-morbidities. The fatality rate is not an exogenously given number; rather, it is a function of the resources available for treating patients, which are endogenous to the system. The fatality rate for COVID-19 is a systems concept. If the number of infected individuals is low enough that hospitals can handle the surge and give every patient the treatment they require, the fatality rate will be lower (relatively speaking; in absolute terms it will still be too high). The fatality rate will be a lot higher if hospitals are over capacity and have to ration beds and ventilators.
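A toy calculation makes this endogeneity concrete. The fatality rates below are invented for illustration; the point is only that the overall rate depends on hospital capacity:

```python
def overall_fatality_rate(patients, capacity,
                          treated_cfr=0.01, untreated_cfr=0.05):
    """Blend of (made-up) fatality rates for treated vs. untreated
    patients. The overall rate rises once demand exceeds capacity."""
    treated = min(patients, capacity)
    untreated = max(0, patients - capacity)
    return (treated * treated_cfr + untreated * untreated_cfr) / patients

print(round(overall_fatality_rate(100, 200), 4))  # below capacity -> 0.01
print(round(overall_fatality_rate(400, 200), 4))  # surge: rationing -> 0.03
```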

How are my personal decisions related to healthcare systems in the COVID-19 pandemic?

The resources in our healthcare system are being stretched to the limit. The resources include personnel (physicians, nurses, first responders), hospital beds, ventilators, and personal protective equipment. When there are not enough resources to give every COVID-19 patient the best treatment they require, physicians will have to ration resources and make tough choices. Our efforts to delay the second wave as long as possible and to reduce the number of people who require medical treatment will save lives. Flattening the curve is a systems concept aimed at reducing painful tradeoffs and complicated interactions.

How can we prevent the next wave?

Preventing the next wave of any infectious disease is a numbers game. I do not know how to practice medicine but I know how to crunch numbers. The key is to lower the overall transmission rate. The best way to lower the transmission rate varies according to the disease, but there are some basic principles for preventing a disease outbreak from becoming another wave of a pandemic. Best practices include better hygiene practices such as washing your hands and your mobile phones with soap and water, and covering your cough. Limiting the number of people you come in contact with reduces the opportunities for transmission. All those trips to the store to buy extra toilet paper increase one’s chance of contracting COVID-19.
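The "numbers game" can be sketched with simple geometric growth over transmission generations. The numbers here are illustrative, not COVID-19 estimates:

```python
def expected_cases(initial_cases, r_eff, generations):
    """Expected new infections after a number of transmission
    generations, assuming each case infects r_eff others on average."""
    return initial_cases * r_eff ** generations

# Cutting the effective transmission rate in half changes growth dramatically.
print(expected_cases(100, 2.0, 5))  # -> 3200.0
print(expected_cases(100, 1.0, 5))  # -> 100.0
```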

What can we do to prepare for a second wave?

A second wave in a prolonged pandemic is not going to be easy for many of us. I use mathematical models and analytics in my research, and I find them to be useful in my everyday life. My research tells me that I make better decisions with better information and that I should use limited resources wisely. When I think about what it means to apply these principles to my decisions in a pandemic, I realize I can achieve both of these goals by gathering up-to-date information and following instructions from official, trusted sources such as local and state governments, local police and emergency medical service departments, and the Centers for Disease Control and Prevention. I plan to use the official sources to limit what I think about, worry about, and do in any upcoming waves of the pandemic. We are all inundated with conflicting information and advice from many sources, and it is taking its toll and potentially leading us to make unsafe choices such as making repeated trips to grocery stores to stockpile items we do not need.


Related posts:

emergency response during mass casualty incidents

Today’s blog post is about my research on mass casualty events and emergency response, given that COVID-19 has been declared a pandemic by the World Health Organization (WHO). I have four papers that are relevant to emergency medical services (EMS) during mass casualty incidents.

A mass casualty incident (MCI) is an event in which the demand for service overwhelms local resources. Since fire and EMS departments operate at the local level, they can be overwhelmed quite easily. Anything from a multiple vehicle accident to a weather disaster to a hospital evacuation can be considered an MCI. Fire and EMS departments have “mutual aid” agreements with neighboring departments to address the more routine of these incidents and have “Standard Operating Procedures” for a range of more severe incidents. However, switching between such policies in practice is not simple. Moreover, not all mass casualty incidents are the same. Responding to calls for service during a hurricane is different than during a pandemic. In the latter, paramedics and emergency medical technicians can become sick and should stop treating patients, leading to fewer resources for responding to patients that require service. Additionally, we would expect less road congestion and wind during pandemics than in a hurricane evacuation. However, both cases may see a surge of low-acuity patients who request service.

My research on emergency response during MCIs lifts limiting assumptions made by papers in the literature, which often assume that there are enough resources available all the time (not a reasonable assumption during MCIs). Here is a summary of four of my papers that have addressed MCIs.

Dubois, E., Albert, L.A., 2020. Dispatching Policies During Prolonged Mass Casualty Incidents. Technical report, University of Wisconsin-Madison.

The newest paper is available as a technical report and is the most relevant to COVID-19. It focuses on a large surge of patients that overwhelms EMS resources. Here, we lift the assumption that a patient’s priority is a fixed input. Instead, we consider patients whose conditions deteriorate over time as they wait for service. We consider how to assign two types of ambulances, advanced and basic life support, to patients. We study how to dispatch ambulances during MCIs while allowing ambulances to idle while less emergent patients are queued. This is similar to keeping a reserve stock of advanced life support ambulances (see the last paper listed in this post). The inherent trade-off is that when low-priority patients are asked to wait for service, they can become high-priority patients. When high-priority patients are asked to wait for service, they can become critical or die. Our solution method finds dynamic response policies to match the two types of ambulances with these three types of patients. We observe that, under the optimal policies, advanced life support ambulances often remain idle when less emergent patients are queued, in order to provide quicker service to future, more emergent patients. It is counterintuitive not to use all resources all the time during an MCI. However, keeping some resources in reserve ensures that resources are available at the time the most critical patients need them.
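The reserve idea can be sketched as a simple rule of thumb. This is a hypothetical policy for illustration only, not the paper's optimal dynamic policy:

```python
def dispatch(priority, als_idle, bls_idle, als_reserve=1):
    """Hypothetical reserve-stock dispatch rule: high-priority patients
    get advanced life support (ALS) if any unit is idle; low-priority
    patients get basic life support (BLS), or wait rather than dip into
    the ALS reserve."""
    if priority == "high":
        return "ALS" if als_idle > 0 else "queue"
    if bls_idle > 0:
        return "BLS"
    if als_idle > als_reserve:  # only use ALS units above the reserve level
        return "ALS"
    return "queue"  # hold remaining ALS units for future emergent patients

print(dispatch("low", als_idle=1, bls_idle=0))  # -> queue (ALS held back)
print(dispatch("low", als_idle=2, bls_idle=0))  # -> ALS
```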

McLay, L.A., Brooks, J.P., Boone, E.L., 2012. Analyzing the Volume and Nature of Emergency Medical Calls during Severe Weather Events using Regression Methodologies. Socio-Economic Planning Sciences 46, 55 – 66.

The second paper seeks to characterize the volume and characteristics of EMS and fire calls for service. It was motivated by the need to deliver routine emergency service during weather emergencies and disasters. What typically happens during emergencies is that there are more calls for service, most of which are low priority calls. Triage becomes more important in these situations, because the most severe calls for service can be drowned out by so many low-priority requests. However, call surges are not the only stress on fire and EMS departments. Road congestion and slow travel times mean that each call takes more time to serve, which can further stress limited resources. As a result, it becomes important to triage calls and assign appropriate resources.

Kunkel, A., McLay, L.A. 2013. Determining minimum staffing levels during snowstorms using an integrated simulation, regression, and reliability model. Health Care Management Science 16(1), 14 – 26.

The third paper studies staffing levels during a blizzard, when a surge of calls can temporarily overwhelm the resources that are available. Additional staff are usually scheduled during emergencies when call volumes increase. We specifically focus on snow events, but the results offer insight into other situations. To determine staffing levels that depend on the weather, we propose a data-driven model that uses a discrete event simulation of a reliability model to identify minimum staffing levels that provide timely patient care, with regression used to provide the input parameters. We consider different response options, including asking low-priority patients to wait for service, and we take into account that service providers often work faster when systems are congested. The latter issue of allowing adaptive service rates is important, since it makes the model more realistic by relaxing the assumption that service rates are constant. A key observation is that when it is snowing, intrinsic system adaptation with respect to service rates has an effect on system reliability similar to having one additional ambulance.
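The paper couples simulation, regression, and a reliability model; as a much simpler stand-in for the minimum-staffing question, an Erlang-loss calculation shows how a smallest-sufficient fleet size can be backed out of a demand estimate (the offered load and blocking target below are invented for illustration):

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability in an M/M/c/c loss system, computed with the
    standard Erlang B recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def min_ambulances(offered_load: float, max_block_prob: float) -> int:
    """Smallest fleet size whose blocking probability meets the target."""
    s = 1
    while erlang_b(s, offered_load) > max_block_prob:
        s += 1
    return s

# Illustrative: 1 Erlang of offered load, at most 10% of calls blocked.
print(min_ambulances(1.0, 0.10))  # -> 3
```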

Yoon, S., Albert, L. 2018. An Expected Coverage Model with a Cutoff Priority Queue. Health Care Management Science 21(4), 517 – 533.

The final paper examines how to locate and dispatch ambulances when resources can be temporarily overwhelmed. In this paper, there are prioritized calls for service in a congested system, but the system is not completely overwhelmed by an MCI such as a hospital evacuation. Typically, models in the literature implicitly assume that there are always enough resources to respond immediately to all calls for service that are received. This is not a good assumption when there is an MCI. As a result, we need new models and analyses to provide insights into how to allocate resources when there is congestion and many service providers are busy treating patients.

We formulate new models to characterize policies when ambulances are held in reserve for high-priority calls. When the system is so congested that it hits the “reserve” stock of ambulances, low-priority patients are either diverted to neighboring EMS systems through mutual aid or added to a queue and responded to when the congestion has subsided. Interestingly, we find that adopting such an approach for sending (and not sending) ambulances to patients affects where we might want to locate ambulances at stations.




An integrated network design and scheduling problem for network recovery and emergency response

I recently published a paper entitled “An Integrated Network Design and Scheduling Problem for Network Recovery and Emergency Response” in Operations Research Perspectives (volume 5, p. 218 – 231) with my PhD student Suzan Afacan Iloglu.

This paper studies a problem in post-disaster response and restoration. Disasters cause damage to important infrastructure systems such as power, water, and road infrastructure. We were particularly motivated by hurricanes, where the debris left on roads after the storm makes many roads impassable, which can greatly reduce the connectivity of the road network and substantially increase travel times. Often, teams of repair crews clear the roads before utility crews can come in and restore essential services. Given my experience studying emergency medical services, I was particularly interested in how emergency responders can efficiently deliver aid while the debris is being cleared.

In this paper, we seek to coordinate the activities of two types of service providers: (1) emergency responders who provide essential services and (2) repair crews who install arcs over a finite time horizon. We formulate the problem as a location and scheduling model, where emergency responders are located on a network. These facilities can be re-located over a time horizon while network components (arcs) are installed (the scheduling component). The results shed light on how to prioritize the restoration of damaged road infrastructure to assist in providing emergency aid after a disaster. The solutions indicate how to coordinate the recovery of road infrastructure by taking into account emergency service efforts, which is crucial to save more lives and improve resilience.


Resilience Analytics at the University of Oklahoma

I was invited to give a guest lecture and public research seminar at the University of Oklahoma for Dr. Kash Barker’s Presidential Dream Course entitled “Analytics of Resilient Cyber-Physical-Social Networks.” Kash and I are collaborating on a project entitled “Resilience Analytics: A Data-Driven Approach for Enhanced Interdependent Network Resilience” funded by the National Science Foundation as part of the Critical Resilient Interdependent Infrastructure Systems and Processes (CRISP) initiative. My lecture and research talk were motivated by our collaborative research project.

My lecture was about modeling service networks and focused on location problems using network optimization for public safety. I introduced public safety operations research and discussed several location models for modeling service networks.

My research seminar was entitled “Designing emergency medical service systems to enhance community resilience.” My slides are below.

I enjoyed exploring the OU campus and the gorgeous gothic architecture everywhere. I especially liked seeing gargoyles on the campus library.

Public sector operations research: the course!

Course introduction

I taught a PhD seminar on public sector operations research this semester. You can read more about the course here. I had students blog in lieu of problem sets and exams. They did a terrific job [Find the blog here!]. This post contains a summary of what we covered in the course, including the readings and papers presented in class.


Public Safety Overview

  • Green, L.V. and Kolesar, P.J., 2004. Anniversary article: Improving emergency responsiveness with management science. Management Science, 50(8), pp.1001-1014.
  • Larson, R.C., 2002. Public sector operations research: A personal journey. Operations Research, 50(1), pp.135-145.
  • Rittel, H.W. and Webber, M.M., 1973. Dilemmas in a general theory of planning. Policy sciences, 4(2), pp.155-169.
  • Johnson, M.P., 2012. Community-Based Operations Research: Introduction, Theory, and Applications. In Community-Based Operations Research (pp. 3-36). Springer New York. (Originally an INFORMS TutORial)
  • Goldberg, J.B., 2004. Operations research models for the deployment of emergency services vehicles. EMS Management Journal, 1(1), pp.20-39.
  • Swersey, A.J., 1994. The deployment of police, fire, and emergency medical units. Handbooks in operations research and management science, 6, pp.151-200.
  • McLay, L.A., 2010. Emergency medical service systems that improve patient survivability. Wiley Encyclopedia of Operations Research and Management Science.

Facility location

  • Daskin, M.S., 2008. What you should know about location modeling. Naval Research Logistics, 55(4), pp.283-294.
  • Brotcorne, L., Laporte, G. and Semet, F., 2003. Ambulance location and relocation models. European journal of operational research, 147(3), pp.451-463.

Probability models for public safety

  • Larson, R.C. and Odoni, A.R., 1981. Urban operations research. This was the textbook we used to cover probability models, queueing, priority queueing, and spatial queues (the hypercube model).

Disasters, Homeland Security, and Emergency Management

Deterministic Network Interdiction

  • Smith, J.C., 2010. Basic interdiction models. Wiley Encyclopedia of Operations Research and Management Science.
  • Morton, D.P., 2011. Stochastic network interdiction. Wiley Encyclopedia of Operations Research and Management Science.

Papers presented by students in class

Papers selected for the first set of student presentations (background papers)

  • Blumstein, A., 2002. Crime Modeling. Operations Research, 50(1), pp.16-24.
  • Kaplan, E.H., 2008. Adventures in policy modeling! Operations research in the community and beyond. Omega, 36(1), pp.1-9.
  • Wright, P.D., Liberatore, M.J. and Nydick, R.L., 2006. A survey of operations research models and applications in homeland security. Interfaces, 36(6), pp.514-529.
  • Altay, N. and Green, W.G., 2006. OR/MS research in disaster operations management. European journal of operational research, 175(1), pp.475-493.
  • Simpson, N.C. and Hancock, P.G., 2009. Fifty years of operational research and emergency response. Journal of the Operational Research Society, pp.S126-S139.
  • Larson, R.C., 1987. Social justice and the psychology of queueing. Operations research, 35(6), pp.895-905.

Papers selected for the second set of student presentations (methods)

  • Ashlagi, I. and Shi, P., 2014. Improving community cohesion in school choice via correlated-lottery implementation. Operations Research, 62(6), pp.1247-1264.
  • Mandell, M.B., 1991. Modelling effectiveness-equity trade-offs in public service delivery systems. Management Science, 37(4), pp.467-482.
  • Cormican, K.J., Morton, D.P. and Wood, R.K., 1998. Stochastic network interdiction. Operations Research, 46(2), pp.184-197.
  • Brown, G.G., Carlyle, W.M., Harney, R.C., Skroch, E.M. and Wood, R.K., 2009. Interdicting a nuclear-weapons project. Operations Research, 57(4), pp.866-877.
  • Lim, C. and Smith, J.C., 2007. Algorithms for discrete and continuous multicommodity flow network interdiction problems. IIE Transactions, 39(1), pp.15-26.
  • Rath, S. and Gutjahr, W.J., 2014. A math-heuristic for the warehouse location–routing problem in disaster relief. Computers & Operations Research, 42, pp.25-39.
  • Argon, N.T. and Ziya, S., 2009. Priority assignment under imperfect information on customer type identities. Manufacturing & Service Operations Management, 11(4), pp.674-693.
  • Pita, J., Jain, M., Marecki, J., Ordóñez, F., Portway, C., Tambe, M., Western, C., Paruchuri, P. and Kraus, S., 2008, May. Deployed ARMOR protection: the application of a game theoretic model for security at the Los Angeles International Airport. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems: industrial track (pp. 125-132). International Foundation for Autonomous Agents and Multiagent Systems.
  • Mills, A.F., Argon, N.T. and Ziya, S., 2013. Resource-based patient prioritization in mass-casualty incidents. Manufacturing & Service Operations Management, 15(3), pp.361-377.
  • Mehrotra, A., Johnson, E.L. and Nemhauser, G.L., 1998. An optimization based heuristic for political districting. Management Science, 44(8), pp.1100-1114.
  • Koç, A. and Morton, D.P., 2014. Prioritization via stochastic optimization. Management Science, 61(3), pp.586-603.

I missed a class to attend the INFORMS Analytics meeting. I assigned two videos about public sector OR in lieu of class:

Jon Caulkins’ Omega Rho talk on crime modeling and policy

Eoin O’Malley’s talk about bike sharing and optimization (start at 3:51:53)

Blog posts I used in teaching:

We played Pandemic on the last day of class!


my national academies committee experience & risk-based flood insurance

I had the pleasure of serving on a National Academies committee the past two years. Our report entitled “Tying flood insurance to flood risk for low-lying structures in the floodplain” was just released [Link].

If you don’t know much about the National Academies, it is a private, independent, nonprofit institution that provides technical expertise on important societal problems (engineering, in my case). National Academies committees like the one I participated in address a specific challenge and have a very specific charge. The committee is composed of a bunch of really smart people who work together to answer the questions posed in the charge. FEMA provided the charge for my committee.

The specific charge is below, but a bit of background is necessary to know why the problem is so important and why it had to be addressed now. Recently, I blogged about floods and their huge impact on society [Link]. After a series of hurricanes that caused extensive flood damage to properties, the National Flood Insurance Program (NFIP) was created in 1968 to reduce the risk of flood and mitigate flood damage by encouraging better flood management and building practices. The idea was that homeowners in flood-prone areas (“Special Flood Hazard Areas” – areas with an annual chance of flooding of 1% or more) would have to pay for flood insurance to help pay for the cost of disasters. Today, most homeowners in Special Flood Hazard Areas pay the going rate based on an elaborate formula set by FEMA. There are currently about 5.5 million flood insurance policies.

Those houses that already existed in a Special Flood Hazard Area in 1968 could be grandfathered into the program and receive subsidized rates. Over time, the hope was that these existing houses in Special Flood Hazard Areas would be replaced, thus reducing exposure in flood-prone areas. But they were not. They continue to exist and are expensive for FEMA when disasters strike. This is a huge problem. FEMA’s insurance premium formula as well as risk-based actuarial rates are incredibly sensitive to the elevation of the home relative to base flood elevation. These homeowners may pay $200 per year for a flood insurance premium when a risk-based actuarial rate may be thousands of dollars. These houses are negatively elevated, meaning that they are below base flood elevation and flood frequently. There are a lot of these structures out there, and they are costly to FEMA.

The Biggert-Waters (BW) Flood Insurance Reform Act of 2012 required these subsidized policies to disappear overnight, turning this important problem into an immediate problem. Subsequent legislation changed some of this, but the bottom line was that subsidized rates would rise, substantially for some. FEMA wanted a review of how they set their rates to be credible, fair, and transparent. That is where the committee came in.

Here is our study charge set by FEMA. In our conversations with FEMA actuaries, FEMA asked for shorter-term (within 5 years) and longer-term recommendations for improving their methods. FEMA asked us to look at how premiums are set and how the process could be improved. We focused on the math; another committee addressed the societal impact of the changes.

Study Charge
An ad hoc committee will conduct a study of pricing negatively elevated structures in the National
Flood Insurance Program. Specifically, the committee will:

  1. Review current NFIP methods for calculating risk-based premiums for negatively elevated structures, including risk analysis, flood maps, and engineering data.
  2. Evaluate alternative approaches for calculating “full risk-based premiums” for negatively elevated structures, considering current actuarial principles and standards.
  3. Discuss engineering, hydrologic, and property assessment data and analytical needs associated with fully implementing full risk-based premiums for negatively elevated structures.
  4. Discuss approaches for keeping these engineering, hydrologic, or property assessment data updated to maintain full risk-based rates for negatively elevated structures.
  5. Discuss feasibility, implementation, and cost of underwriting risk-based premiums for negatively elevated structures, including a comparison of factors used to set risk-based premiums.

We constructed ten conclusions:

  1. Careful representation of frequent floods in the NFIP water surface elevation–probability functions (PELV curves) is important for assessing losses for negatively elevated structures.
  2. Averaging the average annual loss over a large set of PELV curves leads to rate classes that encompass high variability in flood hazard for negatively elevated structures, and thus the premiums charged are too high for some policyholders and too low for others.
  3. NFIP claims data for a given depth of flooding are highly variable, suggesting that inundation depth is not the only driver of damage to structures or that the quality of the economic damage and inundation depth reports that support the insurance claims is poor.
  4. When the sample of claims data is small, the NFIP credibility weighting scheme assumes that U.S. Army Corps of Engineers damage estimates are better than NFIP claims data, which has not been proven.
  5. Levees may reduce the flood risk for negatively elevated structures, even if they do not meet NFIP standards for protection against the 1 percent annual chance exceedance flood.
  6. When risk-based rates for negatively elevated structures are implemented, premiums are likely to be higher than they are today, creating perverse incentives for policyholders to purchase too little or no insurance. As a result, the concept of recovering loss through pooling premiums breaks down, and the NFIP may not collect enough premiums to cover losses and underinsured policyholders may have inadequate financial protection.
  7. Adjustments in deductible discounts could help reduce the high risk-based premiums expected for negatively elevated structures.
  8. Modern technologies, including analysis tools and improved data collection and management capabilities, enable the development and use of comprehensive risk assessment methods, which could improve NFIP estimates of flood loss.
  9. Risk-based rating for negatively elevated structures requires, at a minimum, structure elevation data, water surface elevations for frequent flood events, and new information on structure characteristics to support the assessment of structure damage and flood risk.
  10. The lack of uniformity and control over the methods used to determine structure replacement cost values and the insufficient quality control of NFIP claims data undermine the accuracy of NFIP flood loss estimates and premium adjustments.

You can read more about our report and its conclusions in the press release.

The committee was composed of 12 members and included civil engineers, risk analysts, actuaries, and one retired FEMA employee. Our fearless chair David Ford did a lot of the heavy lifting in terms of crafting our core conclusions. National Academies staff member Anne Linn was incredibly helpful in terms of getting us focused, staying on track, and writing the report. National Academies staff member Anita Hall did the logistics and was incredibly responsive to our travel needs. The committee met in person four times and wrote parts of the report. The report was sent out to reviewers, and we changed parts of the report in response to reviewer comments, much like in a peer-reviewed journal. We couldn’t have done this without David and Anne (many thanks!).

Serving on the committee helped me understand the importance of flooding from many possible perspectives. I bought a new house during my time on the committee. My new house is on top of a ridge with virtually no chance of flooding.

Serving on the committee also helped me to learn about state-of-the-art techniques in civil engineering and risk-based insurance. Our colleagues in other fields do some pretty cool things, and we can all work together to make the world a better place. I’m proud of our final report – I hope it leads to more credible, fair, and transparent NFIP flood insurance premiums.


flood risks and management science

This week’s flooding in Texas highlights how vulnerable we are to flood risks. Texas is extremely prone to flooding yet is among the worst states when it comes to flood-control spending. Texas is exposed to significant riverine flooding in addition to storm surge flooding caused by hurricanes and tropical storms. Texas has the second most flood insurance premiums in the US (second only to Florida).

In the past year, I have been serving on a National Academies committee on flood insurance for negatively elevated structures. I have learned a lot about flood insurance, incentives, and risk. I can’t say anything about the report yet except that it will be released soon, but I can tell you a little about the problem and how management science has helped us understand how to mitigate flood risks.

Floods occur frequently (although not frequently enough to motivate people to mitigate against the risks), and when floods occur, they do a lot of damage. The damage is usually measured in terms of structure and property damage, but flooding also leads to loss of life and injuries. Flooding is not just a coastal issue – floods occur along rivers, in areas with high water tables, and in urban areas where infrastructure channels water in such a way that it creates flood risks. Cook County, Illinois, has chronic urban flooding issues that are expensive. Floods lead to enormous socioeconomic costs. Two-thirds of Presidential disaster declarations involve floods.

The basic approach to managing flood risks is to (1) encourage people not to build in floodplains and (2) build less flood-prone structures and communities to reduce the damage when floods occur. Getting this to happen is tough. Improving infrastructure requires an enormous investment cost either to society (e.g., flood walls), communities (e.g., FEMA’s Community Rating System), or individuals (e.g., elevating a house). A Citylab article criticizes Texas for not investing in infrastructure that could reduce the impact of floods.

On an individual level, flood insurance is required for homeowners with federally backed mortgages who live in “Special Flood Hazard Areas” (floodplains; FEMA defines a “Special Flood Hazard Area” as an area with a >1% annual chance of flooding). Flood insurance can be really expensive, which can push individual homeowners either to forgo insurance or to mitigate against flooding. Elevating a house is expensive, but it may be a good financial choice if it reduces flood insurance premiums by thousands of dollars per year. The reality is that most homeowners do not carry a flood insurance policy even when it is required, because insurance is expensive and is perceived as unnecessary. Many homeowners in floodplains go decades between floods, and despite the warnings and requirements, they do not see the need to pay so much for flood insurance when they have not experienced a flood.
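As a rough illustration of that financial choice, here is a minimal break-even sketch comparing a one-time elevation cost to the present value of annual premium savings. All numbers (elevation cost, savings, horizon, discount rate) are hypothetical assumptions for illustration, not actual NFIP figures:

```python
# Hypothetical break-even sketch: does elevating a house pay for itself
# through flood insurance premium savings? All inputs are assumed values.

def npv_of_savings(annual_savings, years, discount_rate):
    """Present value of a stream of annual premium savings."""
    return sum(annual_savings / (1 + discount_rate) ** t
               for t in range(1, years + 1))

elevation_cost = 30_000   # assumed one-time cost to elevate the house
annual_savings = 3_000    # assumed annual premium reduction
horizon = 30              # years the homeowner expects to stay
discount_rate = 0.03

savings = npv_of_savings(annual_savings, horizon, discount_rate)
print(f"PV of premium savings over {horizon} years: ${savings:,.0f}")
print("Elevation pays for itself" if savings > elevation_cost
      else "Elevation does not pay for itself")
```

Under these assumed numbers the savings comfortably exceed the elevation cost, but the conclusion flips quickly if the homeowner expects to move in a decade or the premium reduction is smaller, which is part of why so few homeowners elevate.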

I recommend reading Catherine Tinsley and Robin Dillon-Merrill’s paper on “near miss” events in Management Science and their follow-up paper with Matthew Cronin. Their papers demonstrate that when someone survives an event like a natural disaster that was not fatal or too traumatic (e.g., a flood that didn’t do too much/any damage), they are likely to make riskier decisions in the future (e.g., they cancel their flood insurance).

A Wharton report by Jeffrey Czajkowski, Howard Kunreuther, and Erwann Michel-Kerjan entitled “A Methodological Approach for Pricing Flood Insurance and Evaluating Loss Reduction Measures: Application to Texas” specifically analyzed flood loss reduction methods in Texas. They recommend supplementing the National Flood Insurance Program (through which FEMA currently provides homeowners with flood insurance) with private flood insurance to encourage more homeowners to purchase coverage. They also evaluate the impact of elevating structures in the most flood-prone areas, and they find that mitigation through elevation can be instrumental in reducing much of the damage.

What have you experienced with flooding?

staying safe from tornadoes

The devastating tornadoes in Oklahoma and Arkansas this weekend were a sad way to start tornado season. Sensing equipment and forecasting models have been used to improve advanced warning systems and help people take shelter; however, major tornadoes are still pretty deadly. The 2011 tornado in Joplin, MO was one of the deadliest ever. The “Weather Forecasting Improvement Act of 2014,″ which passed the House in April 2014, seeks to address this need.

While there is room for improvement when it comes to advanced tornado warnings, the cost associated with the warning systems is somewhat controversial. Federal funding for disasters is essentially a zero-sum game, so high-profile disasters like tornadoes can use up a disproportionately large share of the budget allocated to disaster research, leaving us vulnerable to other, less newsworthy disasters. Since there are so few tornado-related fatalities every year compared to other weather disasters (e.g., heat waves), there isn’t much potential to save additional lives, regardless of how good the advanced warning system becomes.

Meteorologist Eric Holthaus wrote a nice article on Slate about this issue [Link]:

While it’s necessary to continue making progress on hurricane and tornado forecasts, it should definitely not be at the expense of funding to improve forecasts of lower-profile weather and climate disasters that, in aggregate, kill dozens of times more people per year, and are increasing. Essentially, the bill invests scarce funds in high-profile weather events at the expense of those that cause many more deaths. Boosted by human-caused climate change, heat waves now kill more people in the United States each year than hurricanes, tornadoes, lightning and floods combined, according to the CDC. And weather-related traffic accidents kill 10 times more than heat waves—more than 6,000 people per year. [see the first image below to visualize these magnitudes]

Track the bill here. The bill focuses on forecasting, but I am going to take this a step further and examine the entire warning system to see if there are other ways we could save lives. The ultimate goal is to keep people safe, and there are many ways to do so. I grew up with tornado sirens, which were great so long as you were awake. I was surprised to learn that this is generally the case (see the bottom figure below). TV warnings are another old-fashioned way to warn people, which seemed to work back in the era when people watched network TV. Virginia does not have many tornado sirens (although it has tornadoes!), and since I do not watch much TV, my family missed a couple of warnings. Later, Virginia Commonwealth University used the campus siren, installed to warn us of active shooters, as a makeshift tornado alarm. In my opinion, that was terrific: they have had more tornadoes than shootings.

I later learned that I could get the best warnings by following meteorologists and weather nerds on Twitter. I highly recommend following nearby National Weather Service offices on Twitter to get high-quality information in real time (get started here). Weather apps can deliver warnings even if you aren’t actively using the app. I think these are all great options, but I am still sometimes blissfully unaware of serious weather despite being “prepared.” And some people cannot be reached by even the most far-reaching alerts in our shifting technology landscape (e.g., the deaf cannot hear sirens). The point is that there are serious challenges in ensuring that warnings reach people quickly. Doing so isn’t so much a forecasting problem, unless the forecasts can give much, much earlier warnings (think: hours instead of minutes).

Another problem is that I can always choose to ignore the warnings even if I get them. This will likely happen if there are too many weather warnings/false alarms (The Weather App That Cried Wolf). I blogged about this issue here.

The OR tie in for this post may be somewhat dubious, but the connection here is that these issues cut to systems thinking and tradeoffs. Plus, I’m a daughter of the Midwest and will always be somewhat obsessed with weather and tornadoes. Your thoughts on optimal ways to keep people safe (broadly speaking) from tornadoes are welcome.


From “Investing More Money in Tornado Research Would Be a Disaster” by meteorologist Eric Holthaus. Click through for more information.



Related factoid: most of the world is not at an increased risk of tornadoes, so tornado prediction and preparedness are mostly a United States problem.

Parts of the world at an increased risk of tornadoes are in red. Courtesy of NOAA. Click through for more info.

Another factoid: tornadoes are far more likely to occur in the late afternoon and early evening than overnight. I was shocked to learn this.

Tornado likelihood across the United States. Click through for more information.


Related posts:

operations research, disasters, and science communication

I had the pleasure of speaking at the AAAS Meeting on February 17 in a session entitled Dynamics of Disasters: Harnessing the Science of Networks to Save Lives. I talked about my research on how to use scarce public resources in fire and emergency medical services to serve communities during severe but not catastrophic weather events. My research applies to weather events such as blizzards, flash flooding, and derechos that are not so catastrophic that the National Guard would come. Here, a community must meet demands for fire and health emergencies using the resources it has on “regular” days – e.g., ambulances and fire engines – while the transportation network is impaired by snow, flooding, etc. Everything is temporarily altered, including the types of 911 calls that are made as well as travel and service times, which are affected by the impaired transportation network. Plus, it’s always a lot of fun to mention “Snowmaggedon” during a talk.

Anna Nagurney organized the session, and the other speakers included David McLaughlin, Panos Pardalos, Jose Holguin-Veras, and Tina Wakolbinger. They talked about a number of issues, including:

  • how to detect tornadoes temporally and spatially by deploying new types of sensors
  • how to evacuate people and even livestock during hurricanes and floods
  • what the difference between a disaster and a catastrophe is
  • what types of emergency logistics problems require our expertise: national vs. international, public vs. non-profit, mitigation vs. preparedness vs. response, short-term disaster vs. long-term disaster

I applaud Anna Nagurney for organizing a terrific session. It was fascinating to talk to people in my field about disasters without focusing too much on the modeling details. We all mentioned which types of methodologies we used in the talk, but we focused on the takeaways, actionable results, and policy implications. And it’s clear that the opportunities in this area are almost endless.

The AAAS Meeting is all about science communication to a large audience. The talks focus on broader impacts, not specific model details. It’s not always easy for me to take a step back from my research and explain it at a higher level, but I get a lot of practice through blogging and talking about my research in my classes. Still, I was nervous. I am a mere blogger – the conference is heavily attended by real science journalists. In fact, I had to submit speaker information and a picture ahead of time so that journalists could prepare for my talk. I truly felt like an OR ambassador – it was quite an experience.

I attended another session on disasters, where the topics often revolved around forecasting power, false alarms, and risk communications. I have blogged about these issues before in posts such as what is the optimal false alarm rate for tornado warnings? and scientists convicted for manslaughter for making a type II error. This appears to be an ongoing issue. According to the scientists on the panel, part of the problem stems from journalists who want to make a good story even juicier by not portraying risk accurately, thus leading to false alarm fatigue.

Other sessions at the AAAS Meeting addressed several fascinating topics. One session was on writing about science, and it featured a writer from the Big Bang Theory. Another session was about communicating science to Congress. Many of the speakers were from science publications and PBS shows.

I have at least one other blog post on science communication in the works, so stay tuned.

My slides are below:

the logistics of the post-Sandy New York Marathon

I’m pleased to hear that the NYC marathon will be held on Sunday as planned.

The logistics will be challenging. The race organizers were expecting 50,000 runners before Hurricane Sandy hit. While many runners may sit out, I expect that most will try to run. After all, the hurricane hit well into the tapering phase of training, meaning that runners should be ready to run, even if they’ve been dealing with hurricane-related challenges. And most of the out-of-town runners will be relatively unaffected by the hurricane and should similarly be ready to run.

The main challenges as I see it will be to:

  1. Get runners into the city and into hotel rooms.
  2. Get runners and volunteers to the race.
  3. Distribute race supplies such as water and Powerade, and locate portable bathrooms.

#1 Get runners into the city

In huge marathons like this one, many of the runners will not be nearby. Last year, 20,000 of the runners came from overseas. The main ways to get to NYC are by plane and train. As of now, Amtrak still has not resumed NYC travel. They plan to partially restore travel on Friday. There have been a large number of flight cancellations, but flights are being restored and it appears that runners are making it to New York.

Runners from out of town also need hotels. Surprisingly, the lack of hotel rooms seems to be a larger problem for runners than transportation to NYC. The hotels are packed:

The city’s hotels are coping with a list of issues. Among them: Unprecedented cancellations and requests to extend stays; a high number of walk-in room requests from powerless local residents; unpredictable staffing levels; non-working land lines, and in some cases no steam heat.

The Pittsburgh Steelers likewise had problems finding a hotel to accommodate the team on Saturday night before their road game against the New York Giants. The Steelers are flying to Newark for their game Sunday morning.

#2 Get runners and volunteers to the race

Once in or near the city, all 50,000 runners and a few thousand volunteers need to get to the start of the race at more or less the same time. Driving to the start of a big race like this is generally not the best way to get there. The NYC marathon normally starts on Staten Island, which is harder to get to than most race starts. In the past, half of the runners took the subway in combination with the Staten Island ferry to get to the race. Not so this year. The Staten Island ferry has been cancelled, and buses will instead transport the runners from a meeting point to the race in four waves at 4:30am, 5:30am, 6:30am, and 7:30am. There shouldn’t be a lot of traffic at 4:30am on Sunday morning, so I anticipate that the runners will be OK as long as they can take other forms of public transportation to the meeting point for the race buses.
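The scale of that shuttle operation is worth a back-of-the-envelope sketch. The wave times come from the race plan; the even split across waves and the bus capacity are my assumptions for illustration:

```python
import math

# Back-of-the-envelope sketch of the bus shuttle plan.
# Assumptions (not official race figures): runners split evenly across
# the four waves, and each coach bus seats about 55 people.

runners = 50_000
waves = ["4:30am", "5:30am", "6:30am", "7:30am"]
bus_capacity = 55

runners_per_wave = runners // len(waves)              # assumed even split
buses_per_wave = math.ceil(runners_per_wave / bus_capacity)
print(f"{runners_per_wave} runners per wave -> "
      f"~{buses_per_wave} bus departures per wave")
```

Even under these generous assumptions, each wave needs on the order of a couple hundred bus departures, which is why staging the meeting point and keeping the buses cycling matters as much as the road conditions.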

#3 Distribute race supplies such as water, Powerade, and portable bathrooms

Normally, setting up portable bathrooms and water/Powerade stations is not a complicated matter. With the number of road closures, etc., it will be more difficult to obtain the necessary marathon resources and get them where they need to be. Races need a huge number of bathrooms because all runners need to go to the bathroom at the same time (right before the race). I wasn’t sure that many portable bathrooms would be available, but it sounds like 1,750 bathrooms will be at the start of the race. I wrote about bathrooms before [Link]. That sounds like a lot of bathrooms per runner, but I can assure you, there will still be long lines.
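A quick capacity check shows why the lines form even with that many bathrooms. The service time and the pre-race window below are assumptions for illustration, not race data:

```python
# Rough capacity check on the 1,750 start-line bathrooms.
# Assumptions: ~2 minutes per use, and everyone tries to go within the
# same ~60-minute window before the start.

bathrooms = 1_750
runners = 50_000
service_min = 2.0      # assumed minutes per use
window_min = 60.0      # assumed pre-race window

uses_per_bathroom = window_min / service_min     # uses per bathroom in window
capacity = bathrooms * uses_per_bathroom         # total uses in the window
utilization = runners / capacity
print(f"Capacity: {capacity:,.0f} uses; utilization {utilization:.0%}")
```

Under these assumptions the nominal capacity barely exceeds demand, and queueing theory tells us that at utilizations this close to 100%, any bunching in arrivals (everyone going right before the gun) produces long lines.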

In sum, I am amazed that the marathon will continue more or less as planned. I am surprised, however, that hotels may be the biggest challenge. I am also concerned about snafus with public transportation, since runners will rely on public transportation in new ways this time. I hope everything goes smoothly.

What other issues, bottlenecks, and shortages do you foresee?