Tag Archives: computing

NP-complete problems are hard-ass problems

There is an interesting part of Lance Fortnow’s book The Golden Ticket: P, NP, and the Search for the Impossible about how we got the name “NP-complete”:

What should people call those hardest problems in NP? Cook called them by a technical name “deg({DNF tautologies}),” and Karp used the term “(polynomial) complete.” But those names didn’t feel right.

Donald Knuth took up this cause. In 1974 Knuth received the Turing Award for his research and his monumental three-volume series, The Art of Computer Programming. For the fourth volume, Knuth, realizing the incredible importance of the P versus NP problem, wanted to settle the naming issue for the hardest sets in NP. In 1973, Knuth ran a poll via postal mail. He famously doesn’t use email today, but back in 1973 neither did anyone else.

Knuth suggested “Herculean,” “formidable,” and “arduous,” none of which fared well in the poll. Knuth got many write-in suggestions, some straightforward, like “intractable” and “obstinate,” and others a bit more tongue-in-cheek, like “hard-boiled” (in honor of Cook) and “hard-ass” (“hard as satisfiability”). [Laura: Read Knuth’s post in ACM SIGACT News]

The winning write-in vote was for “NP-complete,” which was proposed by several people at Bell Labs in New Jersey after considerable discussion among the researchers there. The word “complete” comes from mathematical logic, where a set of facts is complete if it can explain all the true statements in some logical system. Analogously, “NP-complete” means these problems in NP are powerful enough that they can be used to solve any other problem in NP.
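A quick way to see the “N” and “P” halves of the name (my own illustrative sketch, not from the book): a problem is in NP if a proposed solution, the certificate, can be checked in polynomial time, even when finding one seems to require search. For satisfiability, the prototypical NP-complete problem, the check is a few lines:

```python
# Hedged sketch: why SAT is in NP. Checking a proposed truth assignment
# (the "certificate") takes time linear in the formula's size, even
# though *finding* a satisfying assignment may require exponential search.

def verify_sat(clauses, assignment):
    """Check a CNF formula given as a list of clauses.

    Each clause is a list of nonzero ints: literal 3 means variable 3 is
    true, -3 means variable 3 is false. `assignment` maps variable -> bool.
    A formula is satisfied if every clause has at least one true literal.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))    # True
print(verify_sat(formula, {1: False, 2: False, 3: False}))  # False
```

NP-completeness is the stronger claim quoted above: every other problem in NP can be translated into SAT in polynomial time, so a fast SAT solver would give a fast solver for all of NP.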

Knuth was not entirely happy with this choice but was willing to live with it. He truly wanted a single English word that captured the intuitive meaning of hard search problems, a term for the masses. In a 1974 wrap-up of his survey, Knuth wrote, “NP-complete actually smacks of being a little too technical for a mass audience, but it’s not so bad as to be unusable.”

“NP-complete” quickly became the standard terminology.

The emphasis/underlining is mine.  Sorry for the PG-13 title. On a related note, if you haven’t read Richard Karp’s seminal paper “Reducibility Among Combinatorial Problems” (from the volume Complexity of Computer Computations), I recommend it!

R.M. Karp, Reducibility Among Combinatorial Problems, Complexity of Computer Computations (R.E. Miller and J.W. Thatcher, eds.), Plenum Press, 1972, pp. 85–103.

Grace Hopper on David Letterman and other tidbits about programming

Today is the beginning of Computer Science Education Week. President Obama is even on board! I saw a number of tweets this morning about Grace Hopper, a programming pioneer. Among other things, the first compiler is credited to her. Computer Science Education Week coincides with Grace Hopper’s birthday (not a coincidence!).

The Anita Borg Institute hosts the Grace Hopper Celebration of Women in Computing conference (see: http://gracehopper.org/). The conference is for women in computing (computer science, IT, etc.). I was on a panel on research and opportunities in security at the 2012 conference. It was a terrific experience. I talked to a large audience full of women (a first for me). I also had a nice lunch with other academics, and met several incredibly supportive male colleagues. The best part of all was attending the career fair. I was completely overwhelmed with how every major software and internet company was there courting the women. I should have scanned in my program from the conference–it was full of ads from these companies specifically tailored to multi-cultural women (the ads weren’t pink and frilly, they were geeky and great!) I have never felt that women were so welcome and wanted in computing as I did that day. I will never forget it.

Back to Grace Hopper. She was once on David Letterman! Here is a 10-minute segment from the show (which I think aired back in the 1980s).

John Cook blogs about Grace Hopper in his blog post on three rules of thumb (“Light travels one foot in a nanosecond”).
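That rule of thumb was one of Hopper’s favorite teaching props: she handed out foot-long pieces of wire as “nanoseconds.” The arithmetic checks out almost exactly:

```python
# Checking Grace Hopper's rule of thumb: light travels about one foot
# in a nanosecond.

SPEED_OF_LIGHT_M_S = 299_792_458   # meters per second (exact, by definition)
METERS_PER_FOOT = 0.3048

distance_m = SPEED_OF_LIGHT_M_S * 1e-9   # meters traveled in one nanosecond
distance_ft = distance_m / METERS_PER_FOOT

print(f"{distance_ft:.3f} feet per nanosecond")  # about 0.984 feet
```

Off by less than two percent, which is why the foot-long wire makes such a good visual aid.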

Grace Hopper is the Google Doodle today.

Grace Hopper will always have a special place in my heart. My cat is named Hopper, and not because of her jumping ability.

Here are a few things about programming and Grace Hopper from twitter:

the first supercomputer was powered by women

I stumbled across an article on ENIAC, the world’s first supercomputer, which was built by the Army and unveiled in 1946.  It summarizes twelve factoids about ENIAC. I found these two the most interesting:

5. The original programmers of the ENIAC were women. The most famous of the group was Jean Jennings Bartik (originally Betty Jennings). The other five women were Kay McNulty, Betty Snyder, Marlyn Wescoff, Fran Bilas, and Ruth Lichterman. All six have been inducted into the Women in Technology International Hall of Fame. When the U.S. Army introduced the ENIAC to the public, it introduced the inventors (Dr. John Mauchly and J. Presper Eckert), but it never introduced the female programmers.

6. Jean Bartik went on to become an editor for Auerbach Publishers, and eventually worked for Data Decisions, which was funded by Ziff-Davis Publishing. She has a museum in her name at Northwest Missouri State University in Maryville, Missouri.

Kudos to the women of ENIAC and other women of supercomputing fame!

The women of ENIAC

are we failing to educate the next generation of operations researchers?

I had a chance to meet Bill Hart at the INFORMS Computing Society Conference to talk about blogging.  Bill Hart is an OR person with a computer science (CS) background.  Part of his interest in blogging stems from his CS background.  CS tends to rely on a more oral tradition, since the specific tools change even more quickly than they do in OR.  There are also certain types of publications that focus on specific implementation issues rather than high-level modeling and algorithmic issues.  However, many such publications are no longer in print, so that knowledge is not being passed on to the next generation of computer scientists. One example that Bill Hart mentioned is Dr. Dobb’s Journal (a list of now-defunct CS journals is maintained here).  To get a sense of what these journals offered, you can read about the history of Dr. Dobb’s here, and you can read a programmer’s lament about its demise, where he writes:

A conventional magazine or newspaper instead “pushes” information into a reader’s hands.  I flip through every page, or at least look hard at the table of contents, of every magazine. Serendipity reigns; facts and ideas I wasn’t looking for come leaping off the page. I rip out articles of interest to read during down times, on the plane or waiting in a lobby somewhere.

Hopefully, Bill will blog more about this soon.  I am doing my best here: instead of writing more about CS, I will write about OR.

The publications that Bill Hart mentioned are aimed at using specific tools and software for solving problems.  We tend to teach the methods and theory in class, but the software comes and goes.  Students and practitioners often need to solve problems using software that isn’t documented nearly as well as their simulation or integer programming textbooks.  Google is somewhat helpful for finding documentation and help, but as I’ve learned lately with Gurobi, Google is not enough.  Those of you who follow my twitter feed know that I installed Gurobi on my work computer and laptop with some success, but I had trouble finding the exact commands that I needed.  Twitter users and the Gurobi discussion group on Google Groups were necessary.

What information needs to be passed on between the generations of OR analysts?  More specifically, what are we not doing a good job of passing on? To be honest, I am not sure if I am old enough to answer those questions.

As I write these thoughts on a blog post, I am not suggesting that blogs should necessarily play a central role in educating the next generation (indeed, they are all too easy to ignore).  While blogs are good at creating content, they are not really appropriate for all types of content.  Twitter isn’t a better alternative.  I have used twitter to occasionally find solutions to my software problems, but I didn’t then use twitter to educate the masses with my newly found answers. With the wealth of tools available online, I wonder whether, if we are creative, there are new opportunities to educate the next generation of operations researchers.  What do you think?


The status of P=NP? Still open.

Mike Trick’s post on the status of P=NP is required reading.  He writes about a new review paper on the status of P=NP (by Lance Fortnow at Northwestern) and its corresponding New York Times article.  I had not heard about either.  There have been a few recent developments on the P=NP front (check out the blog Gödel’s Lost Letter and P=NP for updates).  I have trouble keeping track of all of the news and had always felt like a review article was needed.  I am thrilled that one now exists, along with the newspaper article to explain the concept more clearly to non-experts.  I will be using both in some of my classes.  For more info, please check out Mike Trick’s blog post.

my first silent movie

I recently saw my first silent movie: The Passion of Joan of Arc. Much to my surprise, I loved it. I can’t wait to see another (I’ve got my eye on Nosferatu). I loved just about everything about the Passion of Joan of Arc and I didn’t miss the lack of dialogue. The movie translated just a few of the lines that were “spoken” during the movie. Much of the dialogue was left for the viewer to “hear” by reading lips and body language. The acting and camera work conveyed the emotion of Joan of Arc’s short life and trial, and it packed an incredible punch (Roger Ebert explains why this movie is a masterpiece much better than I can).

After seeing the film, I reflected on movies and technology. Movies are not just a product of technology but are also shaped by technology. Each time that movie technology changes, the movie experience fundamentally changes. Because special effects and dialogue are so easy to put in movies these days, there is often too much CGI and too much talking for an emotionally moving moviegoing experience. Most movies fail to convey what they really should be conveying, and as a result are flat and uninspiring. (There are plenty of exceptions, and a quick glance at my Netflix history includes Tully, Gone Baby Gone, Lars and the Real Girl, Bella, and Juno.)

The same can be said about operations research. We work in a technology-driven field, where we rely on software to do much of our work for us. Sometimes we rely on software too much and on good modeling too little. An IIE blog entry writes about blindly using software as a quick fix. When computing power was limited, building a tight, efficient formulation was necessary for finding optimal solutions. I hope we haven’t lost some of the art of OR.

This past week, I have been working on a research problem that is formulated as an integer program. I was curious to see what the optimal solutions look like, so I popped it into CPLEX. After letting CPLEX churn away on it for more than 72 hours, I felt a little guilty that I didn’t spend more time making an elegant and efficient IP formulation. There may be a great theoretical research problem to work on, but alas, I know that it was just laziness. Just because computing is cheap and easy doesn’t mean that I shouldn’t pay attention to theoretical aspects. Although, to be fair, I thought there was a good chance that CPLEX would very quickly find an optimal solution, which offered little incentive to consider various formulations. We don’t want to lose sight of important theoretical contributions just because it is easier to focus on computational challenges in OR. With journals like Mathematics of Operations Research that publish theoretical contributions to OR and journals like INFORMS Journal on Computing that publish truly innovative computing research, I am not worried for our field. But I will sit down and try to make my formulation a masterpiece rather than let CPLEX churn on endlessly. (Eventually, my building had a power outage, so I never found the optimal solution with CPLEX. I wrote some code that takes just a couple of hours to run. It’s not the most elegant algorithm, but it’s an improvement.)
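The kind of fallback code I mean can be surprisingly compact. My actual research problem isn’t the point here, so as a stand-in sketch (problem, bound, and all helper names are my own illustration, not from any of the posts above), here is a hand-rolled branch-and-bound for the 0-1 knapsack problem. Special-purpose code like this, which exploits an easy LP-relaxation bound, is exactly what can sometimes beat a general IP solver on a structured instance:

```python
# Hedged illustration: a small branch-and-bound for 0-1 knapsack, the
# kind of special-purpose code one might write when a general IP solver
# churns without finishing. Not the actual research problem from the post.

def knapsack_bb(values, weights, capacity):
    """Maximize total value subject to a weight budget, by branch and bound."""
    n = len(values)
    # Sort items by value density so the LP-relaxation bound is greedy.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(i, value, room):
        # Upper bound: greedily take items, allowing one fractional item
        # at the end (this is exactly the LP relaxation's optimum).
        b = value
        while i < n and w[i] <= room:
            room -= w[i]
            b += v[i]
            i += 1
        if i < n:
            b += v[i] * room / w[i]
        return b

    best = 0

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value
        if i == n or bound(i, value, room) <= best:
            return  # prune: this subtree cannot beat the incumbent
        if w[i] <= room:
            branch(i + 1, value + v[i], room - w[i])  # take item i
        branch(i + 1, value, room)                    # skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

The design point is the one from the post above: the tighter the bound, the more of the search tree gets pruned, which is the same reason a tight IP formulation helps CPLEX.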