Translating engineering and operations analyses into effective policy

I am presenting at the AAAS Annual Meeting in a session entitled “Translating Engineering and Operations Analyses into Effective Homeland Security Policy” with Sheldon Jacobson and Gerald Brown:

In my talk, I will discuss three research questions I have advanced:

  1. How can we more effectively perform risk-based security?
  2. What is the optimal way to allocate vehicles to emergency calls for service?
  3. What is the optimal way to protect critical information technology infrastructure?

My slides are below.

Related posts and further reading:

If you have any questions, please contact me!


The IKEA Effect

I included an aside about the IKEA Effect in my last post. The IKEA effect is one of many cognitive biases; it is described as:

The tendency for people to place a disproportionately high value on objects that they partially assembled themselves, such as furniture from IKEA, regardless of the quality of the end result.

The IKEA effect was introduced in the paper “The ‘IKEA Effect’: When Labor Leads to Love” by Michael I. Norton, Daniel Mochon, and Dan Ariely. In a series of experiments, they asked participants to build various products (both utilitarian and non-utilitarian). The results indicate that participants attached disproportionately high value to the products they successfully made themselves, because the work boosted their feelings of competence. However, the IKEA effect only appeared when participants were successful.

I enjoy some DIY hobbies including knitting, sewing, and cooking and succumb to the IKEA effect all the time.

There are implications in the workplace or in academia. For example, I warn student groups about the IKEA Effect when working on class projects and advise them to be critical of their work before handing it in. I tell them about other cognitive biases, such as the bandwagon effect and the planning fallacy, and the IKEA effect is always their favorite.

Listen to or read the NPR story about the research.

When have you seen the IKEA effect in action?


Pareto efficient nut butters that balance taste and affordability

I am a huge nut butter fan. I have a nut butter shelf in one of my kitchen cabinets, and I have even ranked my favorites:


Once upon a time, I blogged about nut butters and created a chart comparing taste and cost. I wanted to update the taste-affordability chart in my previous blog post to account for my tastes. While peanut butter is third on my list of favorite nut butters above, it’s on the taste-affordability efficient frontier. And I think that’s worth celebrating today on National Peanut Butter Day.

I consider four types of peanut butter as well as soy, almond, cashew, sunflower seed, and golden pea butter. I realize I could consider more subdivisions, but I wanted to keep things simple and be consistent with how I actually categorize nut butters. Peanuts and peas are technically legumes, but legume butter options seem close enough to warrant a direct comparison to nut butters. I’ll refer to all of these options as “nut butters” in this post.

The peanut butter types are:

  1. Regular peanut butter or store brand peanut butter (Skippy, Peter Pan, store brand, etc.)
  2. Homemade peanut butter (see my recipe; it’s basically just a can of nuts placed in a food processor).
  3. Natural peanut butter (there are various kinds; imagine the kinds where the oil separates)
  4. Trader Joe’s peanut butter (it tastes different than the others to me)

The criteria I consider are:

  1. Affordability: the cheaper the better
  2. Taste: subjective according to my tastes

The homemade peanut butter has a hidden cost because I have to make it; however, I only include the cost of the ingredients in my chart below.

You might argue that my homemade peanut butter and natural peanut butter are the same thing except that I make the former kind. While technically that is true, I would argue that the homemade peanut butter tastes a lot better because I made it. The “IKEA effect,” a cognitive bias in which consumers place a disproportionately high value on products they partially created, explains why I prefer the nut butters I make.

The results indicate that there are four nut butters on the Pareto frontier:

  • Cashew butter
  • Soy butter
  • Trader Joe’s peanut butter
  • Homemade peanut butter
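For readers who want to see how the frontier is computed, here is a minimal sketch in Python. The taste and affordability scores below are hypothetical placeholders, not my actual ratings; an option is Pareto efficient if no other option is at least as good on both criteria and strictly better on one.

```python
def pareto_frontier(options):
    """Return names of options not dominated on (taste, affordability); higher is better."""
    frontier = []
    for name, taste, afford in options:
        dominated = any(
            t2 >= taste and a2 >= afford and (t2 > taste or a2 > afford)
            for _, t2, a2 in options
        )
        if not dominated:
            frontier.append(name)
    return frontier

# (name, taste score, affordability score) -- illustrative placeholder values
butters = [
    ("cashew butter", 10, 2),
    ("soy butter", 6, 9),
    ("Trader Joe's peanut butter", 8, 8),
    ("homemade peanut butter", 9, 7),
    ("regular peanut butter", 5, 9),
    ("natural peanut butter", 7, 7),
]
print(pareto_frontier(butters))
```

With these placeholder scores, the same four options land on the frontier: cashew, soy, Trader Joe’s, and homemade.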



There are all kinds of nut butters with other things mixed in: chocolate, cookie dough, bourbon pecan (it’s to die for!). All of these nut butters are excellent, although some are better than others. It’s hard to compete with chocolate so I left those off. I also left off cookie butter because it’s not a nut butter and not nearly as good (at least to me).

My cabinet at home currently has: natural peanut butter, Trader Joe’s peanut butter, golden pea butter, soy butter, chocolate peanut butter, and Nutella (for my daughters; it has too much lactose for me).

What is your favorite nut butter?




Punk Rock OR’s New Year’s resolutions

I casually constructed a list of New Year’s resolutions for 2018:

  1. Unsubscribe, don’t just delete academic spam emails.
  2. Blog more frequently, especially about my research.
  3. Say no more often to carve out more time for research that is free from email, twitter, and other distractions.
  4. Write and edit my writing every day, even if only for a few minutes.
  5. Read popular science/tech books for fun.
  6. Run a marathon and qualify for Boston.

I must confess that I’ve already failed miserably on the first half of #3 and said yes to something, but I am making time for distraction-free research today, so I’ll count that as progress. I’ve also completed every other item on the list today except for #6. The challenge will be to sustain my effort toward these goals.

What are your resolutions?


The paradox of automation: on automation and self-driving cars

“Crash: how computers are setting us up for disaster,” an article by Tim Harford in the Guardian, is about how automation diminishes our skills. The article is an excerpt from Harford’s new book, Messy: The Power of Disorder to Transform Our Lives. You can listen to an audio version of the article here.

This issue of technology eroding skills is not new. Before there were writing systems, history was oral and information was passed down verbally. As a result, being able to memorize large amounts of information was a profoundly useful skill. Once we could write information down and look it up later, the skill of memorization became less useful. Technology makes certain skills obsolete. That frees us up to develop new and more complex skills, but paradoxically, it also makes us vulnerable.

From the article, I learned that these issues lead to the paradox of automation:

It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. The psychologist James Reason, author of Human Error, wrote: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills … when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.”

I have seen the paradox of automation in my own research. One of my research areas is emergency medical services, where paramedics and emergency medical technicians perform a variety of medical techniques and procedures. One way to improve system performance — which reflects response times — is to increase the number of service providers. More service providers means a greater likelihood that someone is readily available and near the next call, which in turn increases the likelihood of short response times. This is normally a good thing. The downside is that each provider treats fewer patients, so their skills erode: they rarely perform some of the procedures, and when they do, they do not perform them effectively. The medical literature confirms that if you don’t use it, you lose it. This issue is one reason why medical personnel undergo regular training, although ensuring regular practice in the field seems to be the best approach.
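A toy queueing calculation illustrates the trade-off. The call volume and offered load below are hypothetical, and the textbook Erlang-B formula is a simple stand-in for a real EMS coverage model: adding providers sharply reduces the chance that all units are busy, while the number of calls each provider handles (and thus their hands-on practice) drops.

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability: the chance that all servers are busy."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

calls_per_year = 10_000   # hypothetical annual call volume
offered_load = 4.0        # hypothetical average number of busy units (Erlangs)

for n in (5, 8, 12):
    p_all_busy = erlang_b(n, offered_load)
    print(f"{n:2d} providers: P(all busy) = {p_all_busy:.3f}, "
          f"calls per provider = {calls_per_year / n:.0f}")
```

Going from 5 to 12 providers drives the all-busy probability toward zero, but each provider sees far fewer calls per year.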

The paradox of automation will be an issue with self-driving cars.

The US Department of Transportation adopted SAE International’s six levels of automation for autonomous cars, which provide a useful framework for discussing the future of self-driving cars:

  • Level 0 – No Automation: The full-time performance by the human driver of all aspects of the dynamic driving task, even when enhanced by warning or intervention systems
  • Level 1 – Driver Assistance: The driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task
  • Level 2 – Partial Automation: The driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task
  • Level 3 – Conditional Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene
  • Level 4 – High Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene
  • Level 5 – Full Automation: The full-time performance by an Automated Driving System of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver

Levels 0-2 require the driver to perform major driving functions. Drivers remain necessary until Level 5 is reached. In the meantime, cars will be partially or mostly automated and will require someone to drive and/or regularly intervene; drivers will therefore have to learn how to manage a partially autonomous car instead of driving it.

The idea of automation and design is discussed in the New York Times Magazine article “Rev-up: imagining a 20% driving world.” John Lee, a professor in my department at UW-Madison, discusses the paradox of automation when it comes to driving a car that is partially autonomous. “Driving and managing the automation that is helping you drive are two quite different skill sets. Automation-management skills need to be learned as much as driving skills,” he says.

The paradox of automation is discussed in the 99% Invisible podcast.

One of my favorite articles from The Onion

INFORMS Annual Meeting 2017

I enjoyed the 2017 INFORMS Annual Meeting in Houston, TX. I am on the INFORMS Board as the VP of Marketing, Communication, and Outreach and was on the conference organizing committee as speakers chair (with Andrew Schaefer). The list of plenaries and keynotes is here. The speakers that Andrew and I invited were excellent, and I’m grateful they all agreed to speak.

I was relieved the conference could remain in Houston, and I enjoyed exploring the area. Here are a few memories from the conference.

A view of Houston from my hotel

On the roof of the Hilton in Houston

Cycling outside of Minute Maid Park two days before the World Series began

Sports scheduling meets business analytics: why scheduling Major League Baseball is really hard

Mike Trick of Carnegie Mellon University came to the Industrial and Systems Engineering department at UW-Madison to give a colloquium entitled “Sports scheduling meets business analytics.”

How hard is it to schedule 162-game seasons for the 30 MLB teams? It’s really, really hard.

Mike Trick stepped through what makes a “good” schedule. Schedules obey many constraints, some of which include:

  • Half of each team’s games are home, half are away.
  • Teams cannot have more than three series away or home.
  • Teams cannot have three home weekends in a row.
  • Teams in the same division play six series: two early on, two in the middle of the season, and two late, with one home and one away each time.
  • Teams play all other teams in at least two series.
  • Schedules should have a good flow, with about one week home followed by one week away.
  • Teams that fly from the west coast to the east coast have a day off in between series.

Teams can make additional scheduling requests. Every team, for example, asks for a home game on Father’s Day, and this can only be achieved for half of the teams in any given year. Mike addresses this by ensuring that no team has more than two away games in a row on Father’s Day.

Mike illustrated how hard it is to create a feasible schedule from scratch. You cannot complete a feasible schedule if you try something intuitive, like scheduling the weekends first and filling in the rest of the schedule later; this leads to infeasible schedules 99% of the time. One of the challenges is that integer programming algorithms do not quickly identify when a model is infeasible and instead branch and bound for a long while.
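To get a feel for the baseline structure, here is a small sketch (not Trick’s model) of the classic circle method, which generates a single round robin for a small even-sized league. It guarantees that every pair of teams meets exactly once; layering on the home/away, series, and travel constraints above is what makes the real problem so hard and motivates integer programming.

```python
from itertools import combinations

def round_robin(teams):
    """Single round robin via the circle method; len(teams) must be even."""
    order = list(teams)
    n = len(order)
    rounds = []
    for r in range(n - 1):
        pairs = []
        for i in range(n // 2):
            a, b = order[i], order[n - 1 - i]
            # alternate (home, away) orientation so no team always hosts
            pairs.append((a, b) if (r + i) % 2 == 0 else (b, a))
        rounds.append(pairs)
        # fix the first team, rotate everyone else one position
        order = [order[0]] + [order[-1]] + order[1:-1]
    return rounds

schedule = round_robin(["A", "B", "C", "D", "E", "F"])
print(len(schedule), "rounds; first round:", schedule[0])
```

Six teams yield five rounds of three games each, with each pair of teams meeting exactly once.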

Additionally, it is equally hard to change a small piece of a feasible schedule based on a new requirement and get another feasible schedule. For example, suppose the pope decides to visit the United States and wants to use a baseball stadium on a day scheduled for a game. You cannot simply swap that game with another. Freeing up the stadium on that one day ripples across the entire schedule, because changing one game affects the visiting team’s schedule and leads to violations of the constraints above (e.g., half of each team’s games are at home). This led Mike to develop a large neighborhood search algorithm that efficiently reschedules large parts of the schedule (say, a month) during the schedule generation process.
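Here is a generic large neighborhood search skeleton on a toy travel-distance problem (not Trick’s algorithm): fix most of the current solution, “destroy” a small window, and accept the best reordering of just that window. Trick’s version re-solves a month of the schedule with an IP solver rather than brute-forcing a tiny window, but the destroy-and-repair loop is the same idea. The instance below uses random points as hypothetical stand-ins for stadium locations.

```python
import math
import random
from itertools import permutations

def tour_length(tour, dist):
    """Total distance of visiting locations in the given order."""
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

def lns(tour, dist, window=4, iters=300, seed=1):
    """Large neighborhood search: repeatedly re-optimize a small window."""
    rng = random.Random(seed)
    best = list(tour)
    for _ in range(iters):
        start = rng.randrange(0, len(best) - window)  # choose a window to "destroy"
        for perm in permutations(best[start:start + window]):
            cand = best[:start] + list(perm) + best[start + window:]
            if tour_length(cand, dist) < tour_length(best, dist):
                best = cand                           # "repair" with a better reordering
    return best

# toy instance: random points standing in for stadium locations
rng = random.Random(7)
pts = [(rng.random(), rng.random()) for _ in range(12)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
initial = list(range(12))
improved = lns(initial, dist)
print(round(tour_length(initial, dist), 2), "->", round(tour_length(improved, dist), 2))
```

Because each repair step keeps only strictly better candidates, the result is never worse than the starting tour.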

Mike found that how he structured his integer programming models made a big difference. Rather than the standard variable definitions, he used an idea from branch and price and embedded more structure in the variables (which introduced many more of them) so commercial integer programming solvers could handle the problem more efficiently. The resulting model had about 6 million variables, and the richer variables allowed him to embed objectives such as travel costs.

Mike noted that, as in most real-world problems, there is no natural objective function. MLB schedules are evaluated on travel distance and “flow,” where flow reflects the goal of alternating home and away weeks. He cannot require each team to travel the same amount: Seattle travels a minimum of 48,000 miles per season no matter the schedule because it is far from most other cities. Requiring other teams to also travel 48,000 miles leads to schedules where teams fly coast to coast on adjacent series just to match Seattle’s mileage. That is bad.

Mike ultimately included revenue in his objective, where revenue reflects attendance. He used linear regression to model attendance. He acknowledged that this is a weakness, because attendance does not equal profit. For example, teams can sell out afternoon games when they discount ticket prices. Children come and do not purchase beer at the stadiums, which ultimately fills the stands but does not generate the most revenue.

Mike summarized the keys to his success, which included:

  1. Computing power improved over time
  2. Commercial solvers improved
  3. He solved the right problem
  4. He structured the problem in an effective way
  5. He identified a way to get quick solutions for part of the schedule (useful for when something came up and a game had to change).
  6. He developed a large neighborhood search algorithm that efficiently retools large parts of the schedule.

Three years ago I wrote a blog post about Mike Trick’s keynote talk on Major League Baseball (MLB) scheduling at the German Operations Research Conference (blog post here). That post contains some background information.