# Tag Archives: teaching

## finding optimal marriage pairings using the assignment problem

Today’s blog post is about optimally finding a spouse using optimization models (HT Anna Nagurney). This post is based on a paper published in EJOR entitled “Optimizing the Marriage Market: An Application of the Linear Assignment Model,” in which researchers apply the linear assignment problem to optimally match potential (heterosexual) couples and find a new social optimum. While matching the couples is a textbook exercise, the researchers used a longitudinal dataset in Switzerland to identify meaningful weights to assign to each potential pairing. They find that the actual marriages are far from optimal.

The weights come from logistic regression models that predict the likelihood of divorce from a longitudinal data set, using four types of socioeconomic variables for each person:

1. Age
2. Previous divorce (or not)
3. Education (high or low)
4. Nationality (Swiss, Western, or non-Western)

The weights for each pairing are not symmetric. For example, a wife is much more likely to divorce a husband five years her junior than one five years her senior.

The assignment problem is an integer programming model that produces the lowest-cost one-to-one matching between two sets of items, such as individuals and jobs. Here, the two sets are men and women. The constraint matrix of the assignment problem is totally unimodular, so the linear programming relaxation has integer optimal solutions, and the problem can be solved efficiently via the Hungarian algorithm.

Let:

• W = set of women
• M = set of men (with |W| = |M|)
• x_ij = 1 if woman i is matched to man j, for i in W and j in M
• c_ij = the “cost” of matching woman i to man j

The optimization problem is:
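Reconstructed from the definitions above, this is the standard linear assignment formulation:

```latex
\begin{aligned}
\min_{x} \quad & \sum_{i \in W} \sum_{j \in M} c_{ij} x_{ij} \\
\text{s.t.} \quad & \sum_{j \in M} x_{ij} = 1 \quad \forall i \in W \\
& \sum_{i \in W} x_{ij} = 1 \quad \forall j \in M \\
& x_{ij} \in \{0, 1\} \quad \forall i \in W,\ j \in M
\end{aligned}
```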

A solution to the assignment problem has exactly m = |M| = |W| variables with value 1 (the rest of the variables are zero). The underlying structure is a bipartite graph: one set of nodes represents the women and the other represents the men. Every woman is connected to all of the men (and to none of the women), and vice versa. There are m! possible matchings (each corresponding to a permutation of possible pairings), and the assignment polytope has m! extreme points.

The Hungarian algorithm begins by computing a reduced cost matrix: subtract the smallest value in each row from that row, leaving at least one zero in each row, and then repeat the same procedure over the columns. The resulting reduced matrix has a zero in every row and every column, and all of its entries are nonnegative. The optimal solution is then identified by covering the zeros with row-wise and column-wise lines in a multi-step procedure.
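To see this in action without working the reductions by hand, here is a small sketch using SciPy’s `linear_sum_assignment` solver on a hypothetical 4×4 cost matrix (the numbers are made up for illustration):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 4x4 cost matrix: entry [i, j] is the "divorce risk" of
# pairing woman i with man j (made-up numbers for illustration).
cost = np.array([
    [4, 1, 3, 2],
    [2, 0, 5, 3],
    [3, 2, 2, 1],
    [4, 3, 0, 2],
])

# SciPy solves the linear sum assignment problem directly,
# returning matched row and column indices.
rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()
print(list(zip(rows, cols)), total)  # optimal pairing and its total cost
```

For this matrix the optimal matching pairs woman 0 with man 1, woman 1 with man 0, woman 2 with man 3, and woman 3 with man 2, for a total cost of 4.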

I put together a small Excel spreadsheet with 9 men and 9 women [Link to my Excel file and to the instructions], where I solve the assignment problem. Please feel free to download and use it in an introductory LP class.

The authors of the paper say that their method is an “innovative method of optimizing romantic partner allocation.” Of course, this is no way to find a partner for life.  However, the authors point out that they could substantially improve marriage survival by reallocating 68% of the pairings. They conclude that “current marriage markets are suboptimally organized.” My Valentine’s Day wish to my readers is that you optimally organize your love life with or without the use of optimization models.

## a multiobjective decision analysis model to find the best restaurant in Richmond

I taught multiobjective decision analysis (MODA) this semester. It is a lot of fun to teach, and I always learn a lot when I teach it. One of the most enjoyable parts of the class (for me at least!) is a class project that we chip away at during class over the course of the semester. Our project is to find the best restaurant for us to celebrate at the end of the semester. “Best” here is relative to the people in the class.

The project is a great way to teach about the MODA process. The process not only includes the modeling, but also the craft of working with decision makers and iteratively improving the model. It’s useful for students to be exposed to the entire analysis process. I don’t do this in my other classes.

On the first day of class, we came up with our objectives hierarchy. I did this by passing out about five Post-it notes to each student. They each wrote one criterion for selecting a restaurant on each note and stuck their notes to the wall. Together, we regrouped and organized our criteria into an objectives hierarchy. Some of the objectives became “weed out criteria,” such as making sure that the restaurant could accommodate all of us and comply with dietary restrictions.

Our initial criteria were:

1. Distance
2. Quality of food
3. Variety of food
4. Service: Fast service
5. Service: Waiting time for a table
6. Service: Friendly service
7. Atmosphere: Noise level
8. Atmosphere: Cleanliness
9. Cost

Our final criteria were as follows (from most to least important):

1. Quality of food
2. Cost (tie with #3)
3. Distance
4. Fast service (tie with #5)
5. Noise level
6. Cleanliness

We removed variety of food, waiting time, and friendly service because classroom discussions indicated that they weren’t important compared to the other criteria. Variety, for example, was less important if we were eating delicious food at an ethnic restaurant that had less “variety” (in quotes here, because it depends on how you measure it).

In the next few weeks, we worked on identifying how we would actually measure our criteria. Then, we came up with a list of our favorite restaurants. During this process, we removed objectives that no longer made sense.

We collaboratively scored each of the restaurants in each of the six categories by using a google docs spreadsheet.

1. Quality of food = average score (1-5 scale)
2. Cost (tie with #3) = cost of an entree, drink, tax, and tip
3. Distance = distance from the class (in minutes walk/drive)
4. Fast service (tie with #5) = three point scale based on fast service, OK service, or very slow service
5. Noise level = four point scale based on yelp.com ratings
6. Cleanliness: based on the last inspection. Score = # minor violations + 4*# major violations.

A real challenge was to come up with:

• the single dimensional value functions that translated each restaurant score for an objective into a value between 0 and 1.
• the weights that balanced our preferences across objectives using swing weight thinking. FYI, we used an additive model.
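To make the additive model concrete, here is a small sketch with hypothetical weights and single-dimensional values (the names appear in the post, but every number below is made up, not the class’s actual data):

```python
# Illustrative additive value model. Each criterion's raw score has already
# been mapped through a single-dimensional value function to [0, 1],
# where 1 is best. Weights sum to 1. All numbers are hypothetical.
weights = {"quality": 0.36, "cost": 0.20, "distance": 0.18,
           "service": 0.12, "noise": 0.08, "clean": 0.06}
restaurants = {
    "Ipanema":  {"quality": 0.9, "cost": 0.6, "distance": 0.8,
                 "service": 0.5, "noise": 0.7, "clean": 0.9},
    "Chipotle": {"quality": 0.5, "cost": 0.9, "distance": 0.9,
                 "service": 1.0, "noise": 0.4, "clean": 0.8},
}

def overall_value(values, weights):
    """Additive model: weighted sum of single-dimensional values."""
    return sum(weights[k] * values[k] for k in weights)

best = max(restaurants, key=lambda r: overall_value(restaurants[r], weights))
print(best)
```

The real model did exactly this across all six criteria and every restaurant we scored; the hard part, as noted above, was building the value functions and eliciting the swing weights, not computing the weighted sum.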

I won’t elaborate on these parts of the process further. Ask me about these if you are interested.

When we finished our model, the “best” decision was to forgo a restaurant and do a potluck instead. No one was happy with this. We examined why this happened, and it was great: ending up with a bad solution was a great opportunity for learning. We concluded that we hadn’t accounted for the hidden costs associated with a potluck. Namely, it would entail either a trip to the grocery store or cooking, approximately a 30-minute penalty. We decided that this was equivalent to driving to a distant restaurant, a 26-minute drive in our model. It was also hard to evaluate cleanliness, since the state does not inspect classrooms the way it inspects restaurants. But since cleanliness didn’t account for much of our decision, we decided not to make adjustments there.

We performed a sensitivity analysis on all of the weights. Regardless of their values, most of the restaurants were dominated, meaning that they could not be optimal no matter what the weights were. The sensitivity analysis was not done in Google Docs; we downloaded the spreadsheet and performed it on our own. I show the sensitivity with respect to the weight for quality below. The base weight for quality is 0.3617. When the weight is zero and quality is not important, Chipotle would have been our most preferred restaurant. The Local would be preferred only across a tiny range.
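A one-way sensitivity analysis like this can be sketched as follows. The 0.3617 base weight for quality comes from our model, but all of the scores and the other weights below are hypothetical; as one weight varies, the others are rescaled proportionally so they still sum to 1:

```python
# One-way sensitivity on the quality weight. Only the 0.3617 base weight is
# from the class model; all other numbers are hypothetical. The remaining
# weights are rescaled proportionally so all weights sum to 1.
base = {"quality": 0.3617, "cost": 0.20, "distance": 0.18,
        "service": 0.14, "noise": 0.1183}
alts = {
    "Ipanema":  {"quality": 0.9, "cost": 0.6, "distance": 0.8,
                 "service": 0.5, "noise": 0.7},
    "Chipotle": {"quality": 0.5, "cost": 0.9, "distance": 0.9,
                 "service": 1.0, "noise": 0.4},
}

def best_at(w_quality):
    rest = 1.0 - base["quality"]   # total base weight on the other criteria
    w = {k: (w_quality if k == "quality" else v * (1.0 - w_quality) / rest)
         for k, v in base.items()}
    return max(alts, key=lambda a: sum(w[k] * alts[a][k] for k in w))

for wq in (0.0, 0.3617, 1.0):
    print(wq, best_at(wq))
```

With these invented scores, the sketch reproduces the qualitative finding above: at a quality weight of zero, Chipotle comes out on top.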

We celebrated at Ipanema, a semi-vegetarian restaurant in Richmond. I think our model came up with a great restaurant. We all enjoyed a nice meal together. Interestingly, Mamma Zu scored almost identically to Ipanema (see the figure below).

I cannot claim credit for this fun class project. I shamelessly stole the idea from Dr. Don Buckshaw, who uses it in MODA short courses. We used Craig Kirkwood’s Strategic Decision Making as the textbook for the course. I also recommend Ralph Keeney’s Value-Focused Thinking and John Hammond’s Smart Choices.

How do you choose a restaurant?

Sensitivity with respect to the weight for quality (0.3617 in the base case).

## university offers zombie apocalypse course to teach students survival skills

Michigan State University plans to offer a zombie apocalypse course to teach students survival skills. The course will be offered by the School of Social Work. (Hat tip to Paul Rubin.) The course won’t really teach students how to survive a zombie attack; rather, it uses a zombie apocalypse as a vehicle for teaching students how to model catastrophic events and infectious diseases like pandemic flu.

This has me convinced that I should develop a course on OR models for a zombie apocalypse.

I am planning to develop a similar course that teaches introductory OR modeling to undergraduates by way of applications in emergency preparedness and emergency response. I had envisioned covering more traditional disasters, such as hurricanes and earthquakes. Maybe I should think outside the box.

What topics would you offer in an OR course on the zombie apocalypse? I would start with population models using birth-death models and/or differential equations (see one of my previous posts on this topic) and then look at how to staff deputies or federal marshals to combat the zombie hordes.
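As a starting point for such a course, a minimal zombie population model in the spirit of a birth-death / differential equation model might look like this (my own sketch; all parameter values are invented):

```python
# Minimal SZR (susceptible-zombie-removed) model integrated with forward
# Euler. All parameter values are made up for illustration:
#   beta  = transmission rate (humans bitten and turned per encounter)
#   alpha = destruction rate (zombies permanently removed by humans)
def szr(s0=500.0, z0=1.0, r0=0.0, beta=0.001, alpha=0.0005,
        steps=1000, dt=0.1):
    s, z, r = s0, z0, r0
    for _ in range(steps):
        bites = beta * s * z     # humans turned into zombies this step
        kills = alpha * s * z    # zombies destroyed this step
        s, z, r = s - bites * dt, z + (bites - kills) * dt, r + kills * dt
    return s, z, r

s, z, r = szr()
print(s, z, r)   # total population s + z + r is conserved
```

With bites outpacing kills (beta > alpha), the susceptible population drains away and the zombies win, which is exactly the kind of qualitative behavior students can then try to reverse by staffing more marshals (raising alpha).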

I plan to talk about zombies, werewolves, and vampires in the stochastic processes course I am teaching this semester. Here is a previous exam question.

## how to find a football team’s best mix of pass and run plays using game theory

This is my third and final post in my series of football analytics slidecasts. After this one, just enjoy the Super Bowl. My first two posts are here and here.

This slidecast illustrates how to find

• the offensive team’s best mix of run and pass plays, and
• the defensive team’s best mix of run and pass defenses.
The best mix is, of course, a mixed strategy. We use game theory to identify the best mix (a Nash equilibrium) for a simultaneous-move, zero-sum game with complete information.
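For a 2×2 version of this game, the equilibrium mix can be computed in closed form from the indifference conditions (each side mixes so the other side is indifferent between its two options). The yards-per-play payoffs below are made up for illustration:

```python
# 2x2 zero-sum play-calling game with made-up yards-per-play payoffs.
# Rows = offense's play (run, pass); columns = defense's call
# (run defense, pass defense). Offense maximizes expected yards.
a, b = 3.0, 8.0   # run vs. run defense, run vs. pass defense
c, d = 9.0, 4.0   # pass vs. run defense, pass vs. pass defense

# Assuming no saddle point, the indifference conditions give the mix.
D = a - b - c + d
p_run = (d - c) / D            # offense: probability of calling a run
q_run_d = (d - b) / D          # defense: probability of a run defense
value = (a * d - b * c) / D    # expected yards per play at equilibrium
print(p_run, q_run_d, value)
```

With these payoffs the offense runs half the time, the defense plays run defense 40% of the time, and the game's value is 6 yards per play; any deviation by either side can be exploited by the other.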

## When is a two point conversion better than an extra point? A dynamic programming approach.

This post continues my series of slidecasts about football. My first slidecast is here.

Today’s topic addresses when a two-point conversion is better than an extra point after a touchdown. As you may guess, it is best for a team to go for two when they are down by eight. You can see other scenarios in which it is best to go for two, based on the point differential and the number of possessions remaining in the game.
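A stripped-down dynamic program (my own sketch, not Winston’s exact model) captures the down-by-eight logic. Assume the trailing team will score all of its remaining touchdowns while the opponent doesn’t score, and the success probabilities are assumed league-average-ish figures rather than numbers from the book:

```python
# DP sketch: a team trails by `deficit` and will score `tds` more
# touchdowns (6 points each). After each TD it picks the better of an
# extra-point kick (1 point, success prob P1) or a two-point try
# (2 points, success prob P2). Ties go to overtime, won with prob 0.5.
# P1 and P2 are assumed values, not from Winston's book.
P1, P2 = 0.94, 0.48

def win_prob(deficit, tds):
    if tds == 0:
        return 1.0 if deficit < 0 else (0.5 if deficit == 0 else 0.0)
    d = deficit - 6  # deficit after the touchdown itself
    kick = P1 * win_prob(d - 1, tds - 1) + (1 - P1) * win_prob(d, tds - 1)
    two = P2 * win_prob(d - 2, tds - 1) + (1 - P2) * win_prob(d, tds - 1)
    return max(kick, two)

# Down 14 with two touchdowns coming.
print(win_prob(14, 2))
```

With these assumed probabilities, the recursion prefers the two-point try in the down-by-eight state (after the first touchdown): going for two early reveals whether the team will need two scores’ worth of points or can settle for a game-tying kick later, and it beats the kick-first plan.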

This presentation is based on Wayne Winston’s book Mathletics, which is a fantastic introduction to sports analytics.


## should a football team go for it on fourth down?

With the Super Bowl coming up, I created three sports analytics slidecasts for analyzing football strategies. I will post one per day here on the blog.

The first slidecast deals with the decision of whether a football team should go for it on fourth down (or punt instead). The presentation is adapted from the book Scorecasting by Tobias Moskowitz and Jon Wertheim. Wayne Winston blogged about this, and his blog post went viral. Here is another look at this issue.

Last month, ABC aired a science and technology program in prime time. It was called FIRST — Science is Rock and Roll, and it was about robots built by K-12 kids. I don’t remember seeing another science special on a major network during prime time. It was a terrific coup for science education in the US. However, it aired only because of will.i.am’s gumption: he bought the airtime from ABC to make the robotics special happen.

Contrast that with the events of last night. The Miss Universe beauty competition appeared on NBC in prime time. A rock star’s resourcefulness wasn’t even required for the beauty competition to air. Reliable sources tell me that there was a swimsuit competition (is it not the year 2011?)

One step forward and one step back. It reminds me of something I saw on twitter last year:

“40 years ago we risked it big to put a man on the moon. 10 years ago we added ‘Hot coffee’ disclaimers on coffee cups.”

## financial fragility

Quick question: If you were to face a \$2,000 unexpected expense in the next month, could you get the funds you need?

(a) certainly able

(b) probably able

(c) probably not able

(d) certainly not able

If you answered (a), kudos! Answering (b) isn’t too bad, but (c) or (d) have me worried.

Nationally, 24.9% of respondents answered (a), 25.1% answered (b), 22.2% answered (c), and 27.9% answered (d). So almost exactly half of all respondents would have trouble meeting an unexpected financial expense (see the WSJ article that summarizes just how fragile US households are).  These findings are summarized in a report by the National Bureau of Economic Research based on a survey from 2009. I’ve heard that the size of the unexpected expense that leads someone to file for bankruptcy is often small (less than \$2,000!).  This is depressing.

Why am I bringing this up on an OR blog? Well, I am also an OR educator who firmly believes that part of my job as an educator is to teach students how to be better citizens.  Good citizens, in my opinion, are savers (among other things).  I’m not alone: my PhD advisor frequently shared advice on saving for retirement and on reducing travel expenses.

I’m not sure OR students would necessarily avoid making financial mistakes that could lead to bankruptcy, even with high-paying jobs.  The researchers who wrote the aforementioned report find that:

• 9.8% of respondents making more than \$150K per year would certainly not be able to cope with an unexpected \$2K expense, with an additional 4.7% probably not able to cope.
• 10.8% of respondents making \$100-150K per year would certainly not be able to cope, with an additional 12.9% probably not able to cope.
• 11.3% of respondents with graduate education would certainly not be able to cope, with an additional 11.6% probably not able to cope.

It could be worse. Here are the percentages of respondents in various countries who would certainly not be able to cope:

1. UK = 35.5%
2. Portugal = 32.1%
3. Germany = 28.9%
4. USA = 27.9%
5. Netherlands = 18.9%
6. France = 18.8%
8. Italy = 9%

At the end of each semester, I tell students the seven things I want them to really learn from my class (see the blog post here), one of which is to “Do not live outside of your means, even on a graduate student stipend.” Maybe I should expand that bit of advice but not to the point of preaching.

How do you give financial advice to students?


## The discrete optimization blog has reached the end of the tour

As you may recall, I required my PhD students to maintain a discrete optimization blog this semester. Since the semester is over, the blog will “end” as well (although it will live on in the blogosphere).  I miss my students’ new posts already!  I wrote about the lessons I learned this semester while undertaking this teaching experiment and included some blog stats (you may be surprised by the top blog posts of the semester).  I’ll refer you to the class blog for more.

## a course blog on discrete optimization

I decided to try an experiment this semester that involves possibly torturing a few grad students: I proposed that the students in my grad-level course on discrete optimization would maintain a blog over the course of the semester. Amazingly, the students in my class agreed to my little experiment!  I am happy to say that our blog, the not-so-cleverly titled Discrete Optimization Blog, has been live for two weeks now.

If you’re interested, please stop by and visit us.  Blog posts are due on Mondays at noon (from now until the first week in May).  You can read more about the blog assignment here. As far as I know, this is the first operations research course blog, and I think the students are doing a great job.