Category Archives: Uncategorized

flood markers

I visited Germany over the winter break. I have to admit that most of my time was spent admiring Germany’s infrastructure and clean streets. In fact, here is a picture of me on the Neckar river in Heidelberg, where I stopped during my run to admire the dam on the river.

I went on a walking tour of Frankfurt, Germany and was enthralled by the flood markers on a bridge on the river Main. Here is a picture of the flood markers. The plaques on the left show the height of the flood waters and the dates when they occurred. I zoomed in below so you can see them better.



Places where floods occur regularly, such as coastal areas, often have markers that indicate the height of the flood waters. These markers can be formal, as in Frankfurt, or informal. They are intended to provide a record of historic flooding and to communicate this information to future generations.

Flood markers are important in flood-prone areas, because serious floods may occur fairly rarely. It’s easy to forget how bad the last flood really was and to make poor flood-related decisions, like building in low-lying areas that could flood or not buying mandatory flood insurance. I once blogged about this issue in my post “flood risks and management science”.

The Floodstone project seeks to create a database of flood markers. This website on flood markers contains a collection of images illustrating the height of the flood waters. It includes an image from Frankfurt as well as many others. Another site contains images only from the 1974 flooding in Brisbane to illustrate the variety of ways a flood was recorded.


Other pictures from my trip:

In #heidelberg #germany at the #castle


Hanging out with Robert Wilhelm Bunsen of Bunsen burner fame in #Heidelberg #Germany



2018 INFORMS Government & Analytics Summit: a recap

I chaired the 2018 INFORMS Government & Analytics Summit, an outreach event to government policymakers and Congressional staffers about how operations research can save lives, save money, and solve problems. It was a blast. Here is a recap of the event. Please visit the website for more information and to find recordings of the talks that will be posted soon. INFORMS Executive Director Melissa Moore kicked off the Summit with the following video:

I gave a few opening remarks and a quick, non-technical overview of operations research and analytics:

Secretary Anthony Foxx and General Michael Hayden gave the two keynotes that were the center of the Summit. Both speakers were experienced, understood the value proposition that OR and analytics offer to government officials and policymakers, and are dynamic and engaging speakers.

Former Transportation Secretary Foxx focused on transportation, and he emphasized the importance of integrating transportation solutions. In the United States, transportation is decentralized, with decisions, operations, and maintenance handled by many players, including the Federal government, local governments, and the private sector. A challenge lies in developing a cohesive transportation plan with so many players. It is further compounded by transportation data that are collected and owned by these many players and stored at various sites. Yet Foxx was optimistic about our ability to bring these transportation issues together and solve problems. Foxx noted, “Waze knows more about transportation activity than I ever knew as Transportation Secretary.”

Foxx noted that transportation is not just a transportation problem. Transportation plays a key role in building communities, should be people-centric, and impacts community health. Transportation solutions should strive to build better communities, not just expand transportation infrastructure. He discussed the smart city initiative as an avenue to incentivize cities to develop plans that integrate transportation plans with other objectives.

General Michael Hayden’s talk focused on guiding policy decisions in a post-truth world. Intelligence is centered on making fact-based decisions: collecting facts and forming expert judgments that are consistent with those facts. We increasingly live in a post-truth world, where decisions are made on feeling, emotion, loyalty, tribe, and identity. These factors increasingly inform our truth, not the facts.

General Hayden’s talk was fascinating and philosophical at times. He mentioned Oxford Dictionary’s word of the year (post-truth) and discussed how the Enlightenment philosophy based on truth, data, hypotheses, and validation inspired our founding fathers. He discussed the flow of information and ideas as a system with reinforcement, cycles, and feedback loops. He views information flow as a structured system. He noted that intelligence is pessimistic and policy is optimistic. I wholeheartedly agree with the latter; I even wrote a blog post about it.

Hayden ended his talk with advice on how to work with decision makers. As NSA director, he worked with many decision-makers who were not in his field and not always enthusiastic about the facts and analysis he brought to the table. He found it helpful to use intelligence as a way to bound the possible policy decisions. By putting a box around the set of feasible policy decisions, he could help rule out bad and disastrous decisions from consideration. This also helped the decision-maker (often, a President) feel like the one in charge with input from an intelligence expert, which was helpful in facilitating productive conversations.

The three panels focused on transportation, national security, and healthcare. The INFORMS member experts and moderators were outstanding!


Healthcare

Jim Bagian, University of Michigan

Sommer Gentry, U.S. Naval Academy

Eva Lee, Georgia Tech

Julie Swann, N.C. State

Moderator: Don Kleinmuntz



Transportation

Saif Benjaafar, University of Minnesota

Pooja Dewan, BNSF

Peter Frazier, Cornell University & Uber

Steve Sashihara, Princeton Consultants

Moderator: José Holguín-Veras, Rensselaer Polytechnic Institute


National Security

David Alderson, Naval Postgraduate School

Natalie Scala, Towson University

Harrison Schramm, CAP, Center for Strategic and Budgetary Assessments

Moderator: Col. Greg Parlier (Ret.)


As chair, I would like to mention that we were fortunate to receive many nominations, and many members would have liked more opportunities to participate in the Summit. Moving forward, there will be other opportunities to support INFORMS’ advocacy activities. We look forward to the chance to involve even more members as we work to help policymakers in Washington better understand and appreciate how they can leverage O.R. and analytics to help save lives, save money, and solve problems.

I want to thank the INFORMS Staff and especially Jeff Cohen for making the INFORMS Government & Analytics Summit a reality.



Resilience Analytics at the University of Oklahoma

I was invited to give a guest lecture and public research seminar at the University of Oklahoma for Dr. Kash Barker’s Presidential Dream Course entitled “Analytics of Resilient Cyber-Physical-Social Networks.” Kash and I are collaborating on a project entitled “Resilience Analytics: A Data-Driven Approach for Enhanced Interdependent Network Resilience” funded by the National Science Foundation as part of the Critical Resilient Interdependent Infrastructure Systems and Processes (CRISP) initiative. My lecture and research talk were motivated by our collaborative research project.

My lecture was about modeling service networks and focused on location problems using network optimization for public safety. I introduced public safety operations research and discussed several location models for modeling service networks.

My research seminar was entitled “Designing emergency medical service systems to enhance community resilience.” My slides are below.

I enjoyed exploring the OU campus and the gorgeous gothic architecture everywhere. I especially liked seeing gargoyles on the campus library.

The IKEA Effect

I included an aside about the IKEA effect in my last post. The IKEA effect is one of many cognitive biases and is described as:

The tendency for people to place a disproportionately high value on objects that they partially assembled themselves, such as furniture from IKEA, regardless of the quality of the end result.

The IKEA effect was introduced in the paper “The ‘IKEA Effect’: When Labor Leads to Love” by Michael I. Norton, Daniel Mochon, and Dan Ariely. In their research, they asked participants to build various products (both utilitarian and non-utilitarian) in a series of experiments. The results indicate that participants attached greater value to the products they successfully made themselves. This happens because the work boosted the participants’ feelings of competence. However, the IKEA effect only occurred when participants were successful.

I enjoy some DIY hobbies including knitting, sewing, and cooking and succumb to the IKEA effect all the time.

There are implications in the workplace and in academia. For example, I warn student groups about the IKEA effect when they work on class projects and advise them to be critical of their work before handing it in. I tell them about other cognitive biases, such as the bandwagon effect and the planning fallacy, and the IKEA effect is always their favorite.

Listen to or read the NPR story about the research.

When have you seen the IKEA effect in action?


Pareto efficient nut butters that balance taste and affordability

I am a huge nut butter fan. I have a nut butter shelf in one of my kitchen cabinets, and I have even ranked my favorites:


Once upon a time, I blogged about nut butters and created a chart comparing taste and cost. I wanted to update the taste-affordability chart in my previous blog post to account for my tastes. While peanut butter is third on my list of favorite nut butters above, it’s on the taste-affordability efficient frontier. And I think that’s worth celebrating today on National Peanut Butter Day.

I consider four types of peanut butter as well as soy, almond, cashew, sunflower seed, and golden pea butter. I realize I could consider more subdivisions, but I wanted to keep things simple and be consistent with how I actually categorize nut butters. Peanuts and peas are technically legumes, but legume butter options seem close enough to warrant a direct comparison to nut butters. I’ll refer to all of these options as “nut butters” in this post.

The peanut butter types are:

  1. Regular peanut butter or store brand peanut butter (Skippy, Peter Pan, store brand, etc.)
  2. Homemade peanut butter (see my recipe; it’s basically just a can of nuts placed in a food processor).
  3. Natural peanut butter (there are various kinds; imagine the kinds where the oil separates)
  4. Trader Joe’s peanut butter (it tastes different than the others to me)

The criteria I consider are:

  1. Affordability: the cheaper the better
  2. Taste: subjective according to my tastes

The homemade peanut butter has a hidden cost because I have to make it; however, I only include the cost of the ingredients in my chart below.

You might argue that my homemade peanut butter and natural peanut butter are the same thing except that I make the former kind. While technically that is true, I would argue that the homemade peanut butter tastes a lot better because I made it. The “IKEA effect,” a cognitive bias in which consumers place a disproportionately high value on products they partially created, explains why I prefer the nut butters I make.

The results indicate that there are four nut butters on the Pareto frontier:

  • Cashew butter
  • Soy butter
  • Trader Joe’s peanut butter
  • Homemade peanut butter
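The frontier above can be computed with a simple two-criteria dominance filter: a nut butter is Pareto efficient if no other option is at least as cheap and at least as tasty, with a strict improvement in one criterion. Here is a minimal sketch; the cost and taste numbers are illustrative placeholders I made up to reproduce the frontier in this post, not my actual data.

```python
# Two-criteria Pareto filter: minimize cost, maximize taste.
# The (cost, taste) values are hypothetical placeholders, not real data.
butters = {
    "regular peanut butter": (2.50, 6.0),
    "natural peanut butter": (3.50, 7.0),
    "Trader Joe's peanut butter": (2.00, 8.0),
    "homemade peanut butter": (3.00, 9.0),
    "almond butter": (8.00, 8.0),
    "cashew butter": (9.00, 10.0),
    "soy butter": (2.50, 8.5),
    "sunflower seed butter": (6.00, 7.0),
    "golden pea butter": (7.00, 7.0),
}

def pareto_frontier(options):
    """Return options not dominated by any other (cheaper AND tastier)."""
    frontier = []
    for name, (cost, taste) in options.items():
        dominated = any(
            c <= cost and t >= taste and (c < cost or t > taste)
            for other, (c, t) in options.items()
            if other != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(sorted(pareto_frontier(butters)))
```

With these placeholder scores, the filter returns the same four efficient nut butters listed above; everything else is dominated by a cheaper, tastier alternative.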



There are all kinds of nut butters with other things mixed in: chocolate, cookie dough, bourbon pecan (it’s to die for!). All of these nut butters are excellent, although some are better than others. It’s hard to compete with chocolate so I left those off. I also left off cookie butter because it’s not a nut butter and not nearly as good (at least to me).

My cabinet at home currently has: natural peanut butter, Trader Joe’s peanut butter, golden pea butter, soy butter, chocolate peanut butter, and Nutella (for my daughters; it has too much lactose for me).

What is your favorite nut butter?




Punk Rock OR’s New Year’s resolutions

I casually constructed a list of New Year’s resolutions for 2018:

  1. Unsubscribe, don’t just delete academic spam emails.
  2. Blog more frequently, especially about my research.
  3. Say no more often to carve out more time for research that is free from email, twitter, and other distractions.
  4. Write and edit my writing every day, even if only for a few minutes.
  5. Read popular science/tech books for fun.
  6. Run a marathon and qualify for Boston.

I must confess that I’ve already failed miserably on #3a and said yes to something, but I am making time for research today (#3b) so I’ll count that as progress. I’ve also completed every other item on the list today except for #6.  The challenge will be to sustain my effort toward these goals.

What are your resolutions?


the paradox of automation: on automation and self-driving cars

“Crash: how computers are setting us up for disaster,” an article by Tim Harford in the Guardian, is about how automation diminishes our skills. The article is an excerpt from Harford’s new book, Messy: The Power of Disorder to Transform Our Lives. You can listen to an audio version of the article here.

I knew that this issue of technology eroding skills exists and is not new. Before there were writing systems, history was oral and information was passed down verbally. As a result, being able to memorize large amounts of information was a profoundly useful skill. Once we could write information down and look it up later, the skill of memorization became less useful. Technology makes certain skills obsolete. That frees us up to develop new and more complex skills, but paradoxically, this makes us vulnerable.

From the article, I learned that these issues lead to the paradox of automation:

It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. The psychologist James Reason, author of Human Error, wrote: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills … when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.”

I have seen the paradox of automation in my research. One of my areas of research is emergency medical services, where paramedics and emergency medical technicians implement a variety of medical techniques and procedures. One way to improve system performance, as measured by response times, is to increase the number of service providers. More service providers means a greater likelihood of having someone readily available and nearby for the next call, which in turn increases the likelihood of short response times. This is normally a good thing. The downside is that each service provider treats fewer patients, and their skills erode because they rarely have to perform some of the procedures, so when they do, they perform them less effectively. The medical literature confirms this (it’s true that if you don’t use it, you lose it). This issue is one reason why medical personnel undergo regular training, but ensuring regular practice in the field seems to be the best way to go.
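This coverage-versus-practice trade-off can be sketched with a standard Erlang loss (M/M/c/c) model: adding ambulances lowers the chance that every unit is busy when a call arrives, but it also lowers the number of calls each crew runs. The call rate and service time below are hypothetical numbers for illustration, not from my research.

```python
# Toy illustration of the coverage-vs-practice trade-off using the
# classic Erlang B recursion. Demand and service time are hypothetical.

def erlang_b(servers, offered_load):
    """Probability an arriving call finds all servers busy (Erlang B)."""
    b = 1.0  # blocking probability with zero servers
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

calls_per_hour = 4.0            # hypothetical call arrival rate
hours_per_call = 0.75           # hypothetical time to serve one call
load = calls_per_hour * hours_per_call  # offered load in Erlangs

for ambulances in (4, 6, 8, 10):
    blocked = erlang_b(ambulances, load)
    carried = calls_per_hour * (1 - blocked)   # calls actually served
    per_unit = carried / ambulances            # practice each crew gets
    print(f"{ambulances} units: P(all busy) = {blocked:.3f}, "
          f"calls per unit per hour = {per_unit:.2f}")
```

As the fleet grows, the blocking probability falls (better response times) while the calls handled per unit also fall (less field practice per crew), which is exactly the tension described above.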

The paradox of automation will be an issue with self-driving cars.

The US Department of Transportation adopted SAE International’s six levels of automation for autonomous cars, which provide a useful framework for discussing the future of self-driving cars:

  • Level 0 – No Automation: The full-time performance by the human driver of all aspects of the dynamic driving task, even when enhanced by warning or intervention systems
  • Level 1 – Driver Assistance: The driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task
  • Level 2 – Partial Automation: The driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task
  • Level 3 – Conditional Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene
  • Level 4 – High Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene
  • Level 5 – Full Automation: The full-time performance by an Automated Driving System of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver

Levels 0-2 require a driver to perform major driving functions. Drivers are necessary until Level 5 is reached. In the future, cars will be partially or mostly automated and will require someone to drive and/or regularly intervene; therefore, drivers will have to learn how to manage a partially-autonomous car instead of driving it.

The idea of automation and design is discussed in the New York Times Magazine article “Rev-up: imagining a 20% driving world.” John Lee, a professor in my department at UW-Madison, discusses the paradox of automation when it comes to driving a car that is partially autonomous. “Driving and managing the automation that is helping you drive are two quite different skill sets. Automation-management skills need to be learned as much as driving skills,” he says.

The paradox of automation is discussed in the 99% Invisible podcast.

One of my favorite articles from The Onion