Tag Archives: stochastic processes

is anything really exponentially distributed?

I am teaching stochastic processes again this semester. After enjoying a humorous exchange on Twitter about the exponential distribution, I started wondering how practical it really is. Most of the examples from my notes seem a little idealized. For example, I'm almost positive that light bulb lifetimes are not exponentially distributed. Is anything really exponentially distributed?

Things that are almost certainly exponentially distributed:

  • The amount of ink and graffiti on dollar bills that have been in circulation more than one year (a student who works for the Federal Reserve Bank provided this useful tidbit!).
  • The useful life of things made from polyurethane foam, such as Nerf balls, car seats, mattress pads, and carpet pads (the useful life occurs after the break-in period and prior to the break-down period).
  • The time between 911 calls (see below).
  • The time between celebrity deaths.

I looked at some emergency medical 911 data. The call volume changes over the course of the day, but it is roughly constant for a large part of the day. Looking just at that time period, I examined the interarrival times of the 911 calls. The exponential distribution is more or less a perfect fit! I don't have any data for my celebrity deaths hypothesis, but since celebrity deaths, like most 911 calls, occur independently and at a roughly constant rate (the ingredients of a Poisson process), I would expect the times between them to be exponentially distributed as well.

The time between emergency medical 911 calls is exponentially distributed
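For anyone who wants to run this check on their own data, here is a minimal sketch of the idea, assuming you have the call timestamps (in minutes) for the steady part of the day. The data below are randomly generated stand-ins, not the 911 data.

    import numpy as np

    # Stand-in data: 200 call times scattered over an 8-hour window (minutes).
    call_times = np.sort(np.random.default_rng(1).uniform(0, 480, size=200))
    gaps = np.diff(call_times)            # interarrival times
    rate = 1.0 / gaps.mean()              # maximum-likelihood exponential rate

    # Crude Q-Q check: compare sorted interarrival times to exponential quantiles.
    probs = (np.arange(1, len(gaps) + 1) - 0.5) / len(gaps)
    empirical_q = np.sort(gaps)
    fitted_q = -np.log(1 - probs) / rate  # exponential quantile function
    print(np.corrcoef(empirical_q, fitted_q)[0, 1])  # near 1 suggests a good fit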

What else is exponentially distributed?  Do you know how light bulb lifetimes are distributed?


robust Christmas shopping

I usually finish most of my Christmas shopping before Black Friday, so I do not usually worry about optimal shopping strategies. I enjoyed reading about Aurelie Thiele's latest paper on the robust timing of markdowns. Her blog post (which links to the paper) summarizes the work, in which Aurelie and her collaborators propose a method for dynamically timing markdowns over a finite horizon using robust optimization models.

This got me thinking about how to reverse-engineer the process to get the best deals. If you're not willing to do Black Friday shopping (for me, the extra savings do not make up for getting up at 3 am and waiting in long lines), then shopping for a few key items is like a stopping problem (e.g., the Secretary Problem), where a series of deals is offered (some at the same time, such as in the Sunday ads) and the consumer eventually decides to purchase exactly one item before a deadline of December 24. When should I purchase the item? I suppose it depends on how retailers adjust the prices from one time period to the next.
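To make the analogy concrete, here is a minimal sketch of the classic secretary-style rule: observe roughly the first n/e deals without buying, then take the first deal that beats everything seen so far, settling for the last deal at the deadline. The discount distribution and the numbers are invented for illustration, not a model of how retailers actually set prices.

    import math
    import random

    def best_deal_chosen(discounts):
        """Skip the first ~n/e deals, then buy the first deal that beats them all.
        Returns True if the purchased deal turns out to be the season's best."""
        n = len(discounts)
        k = max(1, round(n / math.e))           # length of the observation phase
        benchmark = max(discounts[:k])
        for d in discounts[k:]:
            if d > benchmark:
                return d == max(discounts)
        return discounts[-1] == max(discounts)  # deadline reached: take the last offer

    random.seed(0)
    wins = sum(best_deal_chosen([random.uniform(5, 60) for _ in range(30)])
               for _ in range(10_000))
    print(wins / 10_000)  # roughly 0.37, the classic secretary-problem guarantee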

These are a few rules of thumb that I've learned (beware: no modeling or algorithms were used here). In a bad economy, I see no drawback to a wait-and-see approach, since sales continually improve and store coupons become available as shoppers stay home. I have found that in the last three years or so, the pre-Christmas sales (not just Black Friday) are often better than the post-Christmas clearance sales, so I stock up and do some personal shopping before the holidays. However, the clearance sales become lucrative in February and insanely good in March (for clothes, at least).

How do you do your Christmas shopping?


plantains and coupon collecting

I must have stochastic processes on the mind after wrapping up another semester. I will admit that this is a rather silly post. I was sautéing a couple of plantains last night. I cut the plantains into thin slices and then sauté them in a little olive oil in a cast iron skillet until they are browned. It's hard to flip plantain slices over, and only a few slices can be in contact with the skillet at a time. As a result, it's hard to cook the slices evenly.

As I was cooking, I started to mentally model how long it would take to brown all of the plantain slices. Since I flip the plantains at semi-regular intervals (about once every minute or two), the flips give the problem a discrete-time structure. The more flips that have occurred, the lower the probability that any particular unbrowned slice comes into contact with the skillet (since it becomes more and more likely that already-browned slices occupy the contact spots). This is not unlike the coupon collector's problem, where there is a set of n coupons from which coupons are drawn one at a time with replacement until every coupon has been collected. The set of n coupons is my set of plantain slices; a slice is "collected" when it comes into contact with the skillet, and a slice can come into contact with the skillet more than once. I do not collect coupons one by one, but rather a subset of coupons at each flip (the slices that happen to land against the skillet). The bottom line is that it takes an increasingly long time to brown each additional plantain slice.
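A quick simulation captures the effect. This is just a sketch with made-up numbers: n = 20 slices, k = 6 of them chosen uniformly at random to touch the skillet on each flip.

    import random
    import statistics

    rng = random.Random(0)

    def flips_until_all_browned(n=20, k=6):
        """Count flips until every slice has touched the skillet at least once."""
        browned = set()
        flips = 0
        while len(browned) < n:
            browned.update(rng.sample(range(n), k))  # k slices land against the skillet
            flips += 1
        return flips

    runs = [flips_until_all_browned() for _ in range(5_000)]
    print(statistics.mean(runs))  # most of the wait comes from the last few slices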

Of course, I could always game the system and manually exchange browned plantain slices with unbrowned ones to speed along the cooking process and encourage uniformity. But (a) I don't have the time to devote to manual flipping with two kids to feed, and (b) that's a little obsessive.

So that leaves me with the classic coupon collector conclusion: the expected number of draws to collect all n coupons grows like n ln n, so the last few slices take disproportionately long, and the plantains would burn before they were all browned. I suppose I'll have to work on my plantain-turning technique and accept a few undercooked plantains.

What algorithm do you use for skillet cooking?  How do you cook plantains? (I love ’em, but I’m a plantain novice. I’d love some new recipes).



OR and H1N1

This is the second of three posts about the INFORMS Annual Meeting.

I enjoyed a talk by Dr. Richard Larson of MIT about the timely topic of H1N1 and operations research.  I tuned out much of the alarmist news prior to the conference (to keep my sanity) and instead adopted a rigorous handwashing regimen.  Larson’s talk highlighted the many opportunities for addressing H1N1 issues using operations research, including:

  • Queuing for vaccinations.
  • Reneging on vaccinations (some health care workers are refusing required vaccinations).
  • Timing the vaccinations (before prevalence peaks) is important for reducing risk, since youths are particularly susceptible to dying from H1N1.
  • Locating facilities to manage surge capacity when the epidemic hits.
  • Correctly diagnosing and isolating cases of H1N1.
  • Supply chains for vaccinations.

Larson and his collaborator Dr. Stan Finkelstein take a different focus, looking at personal choices such as hand washing, coughing into sleeves, avoiding handshakes, and avoiding crowds. They examine the issue through the lens of non-pharmaceutical interventions. Someone infected with H1N1 infects about 1.5 people in the next 24 hours, on average. This value is the mean of a random variable that depends on personal choices (like handwashing). Larson examines the conditions under which the average number of new infections per case drops below 1.0, at which point the virus essentially dies out (similar to my reasoning on vampire populations below).
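This is not Larson and Finkelstein's model, but a tiny branching-process sketch illustrates why that threshold of 1.0 matters: seed one case, let each case infect a Poisson-distributed number of new people, and watch how often the outbreak fizzles as better personal choices push the mean down.

    import numpy as np

    rng = np.random.default_rng(1)

    def dies_out(mean_r, generations=50, cap=100_000):
        """Follow one seeded case; each case infects Poisson(mean_r) new people."""
        cases = 1
        for _ in range(generations):
            if cases == 0:
                return True
            if cases > cap:                    # treat a runaway outbreak as non-extinction
                return False
            cases = rng.poisson(mean_r, size=cases).sum()
        return cases == 0

    for r in (1.5, 1.1, 0.9):                  # better hygiene pushes the mean down
        prob = np.mean([dies_out(r) for _ in range(2_000)])
        print(r, prob)                         # extinction becomes certain once the mean is below 1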

Finkelstein, a medical doctor, discussed some of the policy results. Initial reports suggested that H1N1 had a fatality rate of about 50% (the Spanish flu had a fatality rate of about 3%). After an initial panic, flu fatigue set in, and the first wave of H1N1 resembled seasonal rather than pandemic flu. Now that the panic has passed, many of us simply have not been motivated to improve our personal choices to reduce H1N1 transmission. Case in point: elbow bumping (pictured below, instead of hand shaking) did not catch on at the conference as I had hoped. And the anti-bacterial hand gel was not located in useful places at the conference, so I used my own personal stash of anti-bacterial lotion after shaking hands.

I hope some of this research is used to lessen the impact of H1N1 this year before I am transformed into a germ-a-phobe.

Link:  Flu101@MIT

Karima Nigmatulina, after successfully defending the first PhD thesis on our flu research project, bumps congratulatory elbows with advisor Richard Larson as Anna Teytelman looks on.


on vampires and stochastic processes

The movie Twilight came out on DVD earlier in the week. This movie about teenage vampires made a lot of money at the box office, and I have to admit that I'm a little curious to see what all the fuss is about. But I can't get into the whole vampire thing. I have a great deal of skepticism about vampires.

Here’s my problem with vampires. I have a hard time believing that there would be just a few vampires out there and that the existence of vampires would be such a well-kept secret. After all, they reproduce rather easily (a single vampire could create thousands of offspring, whereas there are limits to human reproduction) and vampires don’t die easily. If there were vampires, they would almost certainly outnumber humans (but then vampires would run out of food).

This argument becomes even more overwhelming if you model a vampire population as a branching process or birth-death process and assume that each vampire independently produces j offspring with probability P_j (j = 0, 1, 2, ...). The vampire population would either explode or die out, depending on the expected number of offspring per vampire. Once you take into account the fact that vampires live many, many generations (they're virtually immortal) and may create thousands of offspring, the population explodes (as long as each vampire creates at least one vampire, on average, before it dies). With those numbers, vampires would not be living under the radar: they would be everywhere!
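For the record, the standard branching-process calculation makes the dichotomy concrete: the probability of eventual extinction is the smallest fixed point of the offspring probability generating function, and it is strictly less than 1 exactly when the mean number of offspring exceeds 1. Here is a small sketch with a made-up offspring distribution P_j.

    # Extinction probability as the smallest fixed point of the offspring pgf
    # g(s) = sum_j P_j * s**j. The probabilities below are invented for illustration.
    P = {0: 0.2, 1: 0.3, 2: 0.5}              # P_j: chance a vampire sires j offspring
    mean_offspring = sum(j * p for j, p in P.items())  # 1.3 > 1, so supercritical

    def g(s):
        return sum(p * s**j for j, p in P.items())

    q = 0.0
    for _ in range(200):                       # iterate q <- g(q) starting from 0
        q = g(q)

    print(mean_offspring, q)  # extinction probability 0.4; explosion with probability 0.6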

I have yet to see a vampire movie whose story is even implicitly consistent with a reasonable model of vampire population dynamics (using a stochastic process framework or anything else). And frankly, I'm pretty disappointed. Until I am offered a reasonable explanation for why there aren't more vampires, I won't be able to jump on the vampire bandwagon. If I had free time, maybe I would write a mathematically consistent vampire novel.

See the response posted on March 31.