Tag Archives: politics

Resilient voting systems during the COVID-19 pandemic: A discrete event simulation approach

Holding a Presidential election during a pandemic is not simple, and election officials are considering new procedures to support elections and minimize COVID-19 transmission risks. I became aware of these issues earlier this summer, when I had a fascinating conversation with Professor Barry Burden about queueing, location analysis, and Presidential elections. Professor Burden is a professor of Political Science at the University of Wisconsin-Madison, a founding director of the Elections Research Center, and an election expert.

I was intrigued by the relevance of location analysis and queueing theory in this important and timely problem in public sector critical infrastructure (elections are critical infrastructure). I looked into the issue further with Adam Schmidt, a PhD student in my lab. We created a discrete event simulation model of in-person voting and analyzed it in a detailed case study.

We present an executive summary of our paper below. Read the full paper here: https://doi.org/10.6084/m9.figshare.12985436.v1

 

Resilient voting systems during the COVID-19 pandemic:
A discrete event simulation approach

Adam Schmidt and Laura A. Albert
University of Wisconsin-Madison
Industrial and Systems Engineering
1513 University Avenue
Madison, Wisconsin 53706
laura@engr.wisc.edu
September 21, 2020

Executive Summary

The 2020 General Election will occur during a global outbreak of the COVID-19 virus. Planning for an election requires months of preparation to ensure that voting is effective, equitable, and accessible, and that the risk from the COVID-19 virus to voters and poll workers is minimal. Preparing for the 2020 General Election is challenging given these multiple objectives and the time required to implement mitigating strategies.

The Spring 2020 Election and Presidential Preference Primary on April 7, 2020 in Wisconsin occurred during the statewide “Stay-at-home” order associated with the COVID-19 pandemic. This election was extraordinarily challenging for election officials, poll workers, and voters. The 2020 Wisconsin Spring Primary experienced a record-setting number of ballots cast by mail, and some polling locations experienced long waiting times caused by consolidated polling locations and longer-than-typical check-in and voting times due to increased social distancing and protective measures. A number of lawsuits followed the 2020 Wisconsin Spring Primary, highlighting the need for more robust planning for the 2020 General Election on November 3, 2020.

This paper studies how to design and operate in-person voting for the 2020 General Election. We consider and evaluate different design alternatives using discrete event simulation, since this methodology captures the key facets of how voters cast their votes and has been widely used in the scientific literature to model voting systems. Through a discrete event simulation analysis, we identify election designs that are likely to yield short wait times, pose a low risk of COVID-19 transmission for voters and poll workers, and accommodate sanitation procedures and personal protective equipment (PPE).
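For readers who want a feel for the mechanics, here is a minimal sketch of this style of simulation in Python (assuming the SimPy library). The arrival rate, service times, and booth counts below are illustrative placeholders, not the parameters from our study; voters queue for a check-in booth and then for a voting booth.

    # Minimal polling-place discrete event simulation sketch (SimPy).
    # All parameters are illustrative placeholders, not values from the paper.
    import random
    import simpy

    CHECK_IN_BOOTHS = 4       # poll workers staffing check-in
    VOTING_BOOTHS = 12        # booths for marking a ballot
    MEAN_INTERARRIVAL = 0.5   # minutes between voter arrivals
    MEAN_CHECK_IN = 1.5       # minutes to check in (longer with PPE/sanitation)
    MEAN_VOTING = 5.0         # minutes to mark and cast a ballot

    wait_times = []

    def voter(env, check_in, booths):
        arrive = env.now
        with check_in.request() as req:          # wait for a check-in booth
            yield req
            wait_times.append(env.now - arrive)  # time spent in the check-in line
            yield env.timeout(random.expovariate(1.0 / MEAN_CHECK_IN))
        with booths.request() as req:            # then wait for a voting booth
            yield req
            yield env.timeout(random.expovariate(1.0 / MEAN_VOTING))

    def arrivals(env, check_in, booths):
        while True:
            yield env.timeout(random.expovariate(1.0 / MEAN_INTERARRIVAL))
            env.process(voter(env, check_in, booths))

    random.seed(42)
    env = simpy.Environment()
    check_in = simpy.Resource(env, capacity=CHECK_IN_BOOTHS)
    booths = simpy.Resource(env, capacity=VOTING_BOOTHS)
    env.process(arrivals(env, check_in, booths))
    env.run(until=13 * 60)  # a 13-hour Election Day, in minutes

    print(f"average check-in wait: {sum(wait_times) / len(wait_times):.1f} minutes")

Varying the arrival rate, service times, and booth counts is what lets a model of this kind compare the election designs discussed above.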

We analyze a case study based on Milwaukee, Wisconsin data. The analysis considers different election conditions, including different levels of voter turnout, early voting participation, the number of check-in booths, and the polling location capacity, to consider a range of operating conditions. Additionally, we evaluate the impact of COVID-19 protective measures on check-in and voting times. We consider several design choices for mitigating the risks of long wait times and the risks of the COVID-19 virus, including consolidating polling locations to a small number of locations, using a National Basketball Association (NBA) arena as an alternative polling location, and implementing a priority queue for voters who are at high risk for severe illness from COVID-19.

As we look toward the General Election on November 3, 2020, we make the following observations based on the discrete event simulation results that consider a variety of voting conditions using the Milwaukee case study.

  1. Many polling locations may experience unprecedented waiting times, which can be caused by at least one of three main factors: 1) a high turnout for in-person voting on Election Day, 2) not having enough poll workers to staff an adequate number of check-in booths, and 3) an increased time spent checking in, marking a ballot, and submitting a ballot due to personal protective equipment (PPE) usage and other protective measures taken to reduce COVID-19 transmission. Any one of these factors is enough to result in long wait times, and as a result, election officials must implement strategies to mitigate all three of these factors.
  2. The amount of time spent inside may be long enough for voters to acquire the COVID-19 virus. The risk to voters and poll workers from COVID-19 can be mitigated by adopting strategies to reduce voter wait times, especially for those who are at increased risk of severe illness from COVID-19, and encourage physical distancing through the placement and spacing of voting booths.
  3. Consolidating polling locations into a few large polling locations offers the potential to use fewer poll workers and decrease average voter wait times. However, the consolidated polling locations likely cannot support the large number of check-in booths required to maintain low voter wait times without creating confusion for voters and interfering with the socially distant placement of check-in and voting booths. As a result, consolidated polling locations require high levels of staffing and could result in long voter wait times.
  4. The NBA has offered the use of its basketball arenas as an alternative polling location for voters to use on Election Day as a resource to mitigate long voter wait times. An NBA arena introduces complexity into the voting process, since all voters have a choice between their standard polling location and the arena. This could create a mismatch between where voters choose to vote and where resources are allocated. As a result, some voters may face long wait times at both locations.

We recommend that entities overseeing elections make the following preparations for the 2020 General Election. Our recommendations have five main elements:

  1. More poll workers are required for the 2020 General Election than for previous presidential elections. Protective measures such as sanitation of voting booths and PPE usage to reduce COVID-19 transmission will lead to slightly longer times for voters to check in and to fill out ballots, possibly causing unprecedented waiting times at many polling locations if in-person voter turnout on Election Day is high. We recommend having enough poll workers to staff one additional check-in booth per polling location (beyond the number based on prior presidential elections or on what election management toolkits recommend), to sanitize voting areas, and to manage lines outside of polling locations.
  2. To reduce the transmission of COVID-19 to vulnerable populations during the voting process, election officials should consider the use of a priority queue, where voters who self-identify as being at high risk for severe illness from COVID-19 (e.g., voters with compromised immune systems) can enter the front of the check-in queue. A small simulation sketch of this idea appears after this list.
  3. In-person voting on Election Day should occur at the standard polling locations instead of at consolidated polling locations. Consolidated polling locations require many check-in booths to ensure short voting queues, and doing so requires high staffing levels. Election officials should ensure that an adequate number of voting booths (based on prior presidential elections or based on what election management toolkits recommend) can be safely located within the voting area at the standard polling locations, placing booths outside if necessary.
  4. We do not recommend using sports arenas as supplementary polling locations for in-person voting on Election Day. Alternative polling locations introduce complexity and could create a mismatch between where voters choose to go and where resources are allocated, potentially leading to longer waiting times for many voters. This drawback can be avoided by instead allocating the would-be resources at the sports arena to the standard polling locations.
  5. The results emphasize the importance of high levels of early voting for preventing long voter queues (i.e., one half to three quarters of all votes being cast early). This can be achieved by expanding in-person early voting, in terms of both the timeframe and the locations offered, adding new drop box locations for voters to deposit absentee ballots on or before Election Day, and educating voters on properly completing and submitting a mail-in absentee ballot.
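As a small illustration of the priority-queue idea in recommendation 2 above, the sketch below shows one way a simulation could model it (assuming SimPy): voters who self-identify as high risk request the check-in resource at a higher priority. The parameters are assumed values for illustration, not the settings from our study.

    # Sketch of recommendation 2: high-risk voters go to the front of the
    # check-in line via a priority queue. All parameters are assumed values.
    import random
    import simpy

    MEAN_CHECK_IN = 1.5        # minutes; assumed value
    HIGH_RISK_FRACTION = 0.2   # assumed share of self-identified high-risk voters
    waits = {0: [], 1: []}     # observed waits by priority class

    def voter(env, check_in):
        # Lower priority numbers are served first by a SimPy PriorityResource.
        priority = 0 if random.random() < HIGH_RISK_FRACTION else 1
        arrive = env.now
        with check_in.request(priority=priority) as req:
            yield req
            waits[priority].append(env.now - arrive)
            yield env.timeout(random.expovariate(1.0 / MEAN_CHECK_IN))

    def arrivals(env, check_in):
        for _ in range(300):
            yield env.timeout(random.expovariate(1.0))  # about one arrival per minute
            env.process(voter(env, check_in))

    random.seed(1)
    env = simpy.Environment()
    check_in = simpy.PriorityResource(env, capacity=2)
    env.process(arrivals(env, check_in))
    env.run()

    for priority, label in [(0, "high-risk voters"), (1, "other voters")]:
        avg = sum(waits[priority]) / len(waits[priority])
        print(f"{label}: average check-in wait {avg:.1f} minutes")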

The results are based on a detailed case study using data from Milwaukee, Wisconsin. It is worth noting that the discrete event simulation model reflects standard voting procedures used throughout the country and can be applied to other settings. Since the data from the Milwaukee case study are reflective of many other settings, the results, observations, and recommendations can be applied to voting precincts throughout Wisconsin and in other states that hold in-person voting on Election Day.


 


destroying drug cartels with mathematical modeling

The New Scientist has an article on using network analysis to destroy drug cartels. It’s worth reading [link]

They describe the structure of the network and why taking out the “hubs” can increase crime:

Complexity analysis depicts drugs cartels as a complex network with each member as a node and their interactions as lines between them. Algorithms compute the strength and importance of the connections. At first glance, taking out a central “hub” seems like a good idea. When Colombian drug lord Pablo Escobar was killed in 1993, for example, the Medellin cartel he was in charge of fell apart. But like a hydra, chopping off the head only caused the cartel to splinter into smaller networks. By 1996, 300 “baby cartels” had sprung up in Colombia, says Michael Lawrence of the Waterloo Institute for Complexity and Innovation in Canada, and they are still powerful today. Mexican officials are currently copying the top-down approach, says Lawrence, but he doubts it will work. “Network theory tells us how tenuous the current policy is,” he says.

The Vortex Foundation in Bogotá, Colombia offers another approach for targeting anti-drug efforts:

Vortex uses network-analysis algorithms to construct diagrams for court cases that show the interactions between cartel members, governors and law enforcers. These reveal links that are not otherwise visible, what Salcedo-Albaran calls “betweeners” – people who are not well-connected, but serve as a bridge linking two groups. In Mexico and Colombia, these are often police or governors who are paid by the cartels.

“The betweener is the guy who connects the illegal with the legal,” says Salcedo-Albaran. Because many cartels depend on their close ties with the law to operate successfully, removing the betweeners could devastate their operations.
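The “betweener” idea corresponds closely to betweenness centrality in network analysis. Here is a toy illustration with Python and NetworkX; the nodes and edges are invented for illustration and are not actual cartel data.

    # Toy illustration of finding "betweeners": nodes with high betweenness
    # centrality that bridge two clusters. The graph is entirely made up.
    import networkx as nx

    G = nx.Graph()
    # a tightly connected "cartel" cluster
    G.add_edges_from([("c1", "c2"), ("c1", "c3"), ("c2", "c3"), ("c3", "c4")])
    # a cluster of officials
    G.add_edges_from([("g1", "g2"), ("g1", "g3"), ("g2", "g3")])
    # a single corrupt official bridging the two clusters
    G.add_edges_from([("c4", "bridge"), ("bridge", "g1")])

    centrality = nx.betweenness_centrality(G)
    for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(f"{node}: {score:.2f}")
    # The "bridge" node scores highest: removing it disconnects the two groups,
    # which is why targeting betweeners can be more disruptive than removing hubs.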

There is a rich history of applying OR to crime problems. Jon Caulkins has applied OR to drug policy. I like his paper “What Price Data Tell Us About Drug Markets” with Peter Reuter, where he touches on the drug network and hierarchy. The price of illicit drugs varies substantially in time and space. For example, illicit drug prices are lower in supplier/hub cities than in small cities, so prices are not necessarily a function of the shortest path from supplier to market.

We have already alluded to the fact that there is systematic variation in wholesale prices between cities, implying that there are poor information flows and/or significant transaction costs associated with lateral market transactions. Examining spatial variation in retail prices also yields insights about these markets. Caulkins (1995) found that illicit drug prices within the United States increase as one moves away from the drug sources and that prices are lower in larger markets. For cocaine in particular, the data support the notion that cocaine is distributed through an “urban hierarchy,” in which large cities tend to be “leaders,” with drugs diffusing down through layers of successively smaller surrounding communities. Points of import, such as New York City, are at the top of the hierarchy. Large, cosmopolitan cities such as Philadelphia occupy the first tier below points of import; more regionally oriented cities such as Pittsburgh the second; and smaller cities the third. Of course drug distribution networks do not always follow such a regimented pattern; some cocaine is shipped directly to smaller cities from more distant points of import such as Miami and Houston. Nevertheless, prices show the general pattern of an urban hierarchy. This is consistent with anecdotal observations but stands in marked contrast to common depictions of trafficking paths which suggest that drugs more or less follow the shortest path from place of import to point of retail sale.

There even seems to be systematic variation in prices between different neighborhoods within one city. As Kleiman (1992) observed, heroin prices are consistently lower in Harlem than in the Lower East Side, just half an hour away by subway. For example, in data from the 1993 domestic monitor program (DEA, 1994), the mean price per pure gram in East Harlem was $0.358/mg vs. a mean price of $0.471/mg on the Lower East Side, a difference that is statistically significant at the 0.05 level.

In his paper “Domestic Geographic Variation in Illicit Drug Prices” in the Journal of Urban Economics, he attributes some of the price variation to incomplete information and economies of scale (areas that produce or process large amounts of drugs can sell them more cheaply).

Related post:


forecasting the Presidential election using regression, simulation, or dynamic programming

Almost a year ago, I wrote a post entitled “13 reasons why Obama will be reelected in one year.” That post used Lichtman’s model for predicting the Presidential election way ahead of time using 13 equally weighted “keys” – macro-level predictors. Now that we are closer to the election, Lichtman’s method offers less insight, since it ignores the specific candidates (well, except for their charisma), the polls, and the specific outcomes from each state. At this point in the election cycle, knowing which way Florida, for example, will fall is important for understanding who will win. Thus, we need to look at specific state outcomes, since the next President needs to be the one who gets at least 270 electoral votes, not the one who wins the popular vote.

With less than two months until the election, it’s worth discussing two models for forecasting the election:

  1. Nate Silver’s model on fivethirtyeight
  2. Sheldon Jacobson’s model (Election analytics)

In this post, I am going to compare the models and their insights.

Nate Silver [website link]:

Nate Silver’s model develops predictions for each state based on polling data. He adjusts the different state polls by applying a “regression analysis that compares the results of different polling firms’ surveys in the same state.” The model then adjusts for “universal factors” such as the economy and state-specific issues, although Silver’s discussion was a bit sketchy here – it appears to be a constructed scale that is used in a regression model. It appears that Silver is using logistic regression, based on some of his other models. Here is a brief description of what goes into his models:

The model creates an economic index by combining seven frequently updated economic indicators. These factors include the four major economic components that economists often use to date recessions: job growth (as measured by nonfarm payrolls), personal income, industrial production, and consumption. The fifth factor is inflation, as measured by changes in the Consumer Price Index. The sixth and seventh factors are more forward looking: the change in the S&P 500 stock market index, and the consensus forecast of gross domestic product growth over the next two economic quarters, as taken from the median of The Wall Street Journal’s monthly forecasting panel.

Nate Silver’s methodology is here and here. It is worth noting that Silver’s forecasts are for election day.
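Silver’s exact model is not public, but the basic poll-aggregation step can be illustrated with a simple weighted average of state polls converted to a win probability via a normal approximation. This is a generic sketch, not Silver’s methodology, and the poll numbers are invented.

    # Generic poll-averaging sketch (not Silver's actual model): weight polls by
    # sample size (proportional to inverse variance), then convert the averaged
    # margin to a win probability with a normal approximation. Polls are invented.
    from math import erf, sqrt

    # (Democratic margin in points, sample size) for hypothetical state polls
    polls = [(4.0, 600), (2.0, 900), (5.5, 450)]

    weights = [n for _, n in polls]
    margin = sum(m * w for (m, _), w in zip(polls, weights)) / sum(weights)
    sampling_se = 100.0 / sqrt(sum(weights))   # rough margin SE in points near a 50/50 race
    total_sd = sqrt(sampling_se**2 + 3.0**2)   # extra term for polling/forecast error (assumed)

    win_prob = 0.5 * (1 + erf(margin / (total_sd * sqrt(2))))  # normal CDF
    print(f"average margin: {margin:.1f} points, P(win state) = {win_prob:.2f}")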

Sheldon Jacobson and co-authors [website link]

This model also develops predictions for each state based on polling data. Here, Jacobson and his collaborators use Bayesian estimators to estimate the outcomes for each state.  A state’s voting history is used for its prior. State polling data (from Real Clear Politics) are used to estimate the posterior. In each poll, there are undecided voters. Five scenarios are used to allocate the undecided voters, ranging from a neutral outcome to strong Republican or Democratic showings. Dynamic programming is used to compute the probability that each candidate would win under the five scenarios for allocating undecided votes. It is worth noting that Jacobson’s method indicates the outcome of the Presidential election if it were held now; it doesn’t make adjustments for forecasting into the future.

The Jacobson et al. methodology is outlined here and the longer paper is here.
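To give a rough sense of the two ingredients described above, Bayesian state estimates and dynamic programming over electoral votes, here is a toy sketch. It is not the Jacobson et al. model: the priors, polls, and electoral-vote counts are invented, states are treated as independent, and a normal approximation stands in for the full Bayesian calculation.

    # Toy sketch: Beta-Binomial posterior for each state's two-party vote, then a
    # dynamic program over electoral votes. NOT the Jacobson et al. model; all
    # numbers are invented and states are treated as independent.
    from math import erf, sqrt

    # state: (electoral votes, prior alpha/beta from voting history, poll D/R counts)
    states = {
        "A": (10, (55, 45), (210, 190)),
        "B": (20, (48, 52), (300, 310)),
        "C": (6,  (60, 40), (120, 95)),
    }

    def p_dem_wins(prior, poll):
        """P(Democratic share > 0.5) under a Beta posterior (normal approximation)."""
        a, b = prior[0] + poll[0], prior[1] + poll[1]
        mean = a / (a + b)
        sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
        return 0.5 * (1 + erf((mean - 0.5) / (sd * sqrt(2))))

    total_ev = sum(ev for ev, _, _ in states.values())
    dp = [1.0] + [0.0] * total_ev   # dp[k] = P(Democrat holds exactly k electoral votes so far)
    for ev, prior, poll in states.values():
        p = p_dem_wins(prior, poll)
        new = [0.0] * (total_ev + 1)
        for k, prob in enumerate(dp):
            if prob > 0.0:
                new[k + ev] += prob * p        # Democrat carries the state
                new[k] += prob * (1 - p)       # Republican carries the state
        dp = new

    needed = total_ev // 2 + 1
    print(f"P(Democrat wins a majority) = {sum(dp[needed:]):.3f}")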

Comparison and contrast:

One of the main differences is that Silver relies on regression whereas Jacobson uses Bayesian estimators. Silver uses polling data as well as external variables (see above) within his model, whereas Jacobson relies on polling data and the allocation of undecided voters.

Once models exist for state results, they have to be combined to predict the election outcome. Here, Silver relies on simulation whereas Jacobson relies on dynamic programming. Silver’s simulations appear to sample from his regression models and possibly from exogenous factors. Both the simulation and the dynamic programming approaches must aggregate state-level outcomes, which do not appear to be independent of one another.
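To contrast the two aggregation approaches, a simulation-based version of the same calculation might look like the sketch below: a shared national swing makes the state outcomes correlated, and each replication tallies electoral votes. Again, this is a generic illustration with made-up numbers, not Silver’s simulation.

    # Generic Monte Carlo aggregation (not Silver's model): a common national
    # swing correlates the states; each draw tallies electoral votes.
    import random

    # state: (electoral votes, expected Democratic two-party share); invented
    states = {"A": (10, 0.53), "B": (20, 0.49), "C": (6, 0.56)}
    NATIONAL_SD = 0.02   # shared national swing (assumed)
    STATE_SD = 0.03      # state-specific noise (assumed)

    random.seed(0)
    N = 100_000
    dem_wins = 0
    for _ in range(N):
        swing = random.gauss(0.0, NATIONAL_SD)
        ev = sum(votes for votes, share in states.values()
                 if random.gauss(share + swing, STATE_SD) > 0.5)
        if ev >= 19:     # majority of the 36 electoral votes in this toy example
            dem_wins += 1

    print("P(Democrat wins) ≈", dem_wins / N)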

Another difference is that Silver forecasts the vote on Election Day whereas Jacobson predicts the outcome if the race were held today (although Silver also provides a “now”-cast). To do so, Silver adjusts for post-convention bounces and for the conservative sway that occurs right before the election:

The model is designed such that this economic gravitational pull becomes less as the election draws nearer — until on Election Day itself, the forecast is based solely on the polls and no longer looks at economic factors.

This is interesting, because it implies that Silver double counts the economy (the economy influences voters, whose preferences are already captured by the polls). I’m not suggesting that this is a bad idea, since I blogged about how all forecasting models stress the importance of the economy in Presidential elections. It is worth noting that Silver’s “now”-cast is close to Jacobson’s prediction (98% vs. 100% as of 10/1).

Silver makes several adjustments to his model, not relying solely on poll data. The economic index mentioned earlier is one of these adjustments. Others are the post-convention bounces (both of which have faded by now). While Silver appears to do this well, the underlying assumption is that what worked in the past is relevant for the election today. This is probably a good assumption as long as we don’t go too far into the past. This election seems to have a few “firsts,” which suggests that the distant past may not be the best guide. For example, the economy has been terrible: this is the first time that the incumbent appears to be heading toward reelection under this condition.

Both models rely on good polls for predicting voter turnout. The polls in recent months have been conducted on a “likely voter” basis. From what I’ve read, this is the hardest part of making a prediction. The intuition is that it’s easy to conduct a poll, but it’s harder to predict how the responses will translate into votes. Silver explains why this issue is important in response to a CNN poll:

Among registered voters, [Mr. Obama] led Mitt Romney by nine percentage points, with 52 percent of the vote to Mr. Romney’s 43 percent. However, Mr. Obama led by just two percentage points, 49 to 47, when CNN applied its likely voter screen to the survey.

Thus, the race is a lot closer when looking at likely voters. Polling is a complex science, but those who are experts suggest that the race is closer than polls indicate.

Jacobson’s model overwhelmingly predicts that Obama will be reelected, which is in stark contrast to other models that gave Romney a 20-30% chance of winning as of 9/16 and a ~15% chance of winning today (10/1). Jacobson’s model predicted an Obama landslide in 2008, which occurred. The landslide this time around seems to be due to a larger number of “safe” votes for Obama in “blue” states (see the image below). Romney has to win many battleground states to win the election. The odds of Romney winning nearly all of the battleground states necessary to win are ~0% (according to Jacobson as of 9/30). This is quite a bold prediction, but it appears to rely on state polls that are accurately calibrated for voter turnout. To address this, Jacobson uses his five scenarios, which suggest that even with a strong conservative showing, Romney has little chance of winning. Silver and InTrade predict a somewhat closer race, but Obama is still the clear favorite (e.g., InTrade shows that Romney has a 24.1% chance of winning as of 10/1).

Additional reading:

Special thanks to the two political junkies who gave me feedback on a draft of this blog post: Matt Saltzman and my husband Court.

Sheldon Jacobson’s election analytics predictions as of 9/16


Who will be the Republican nominee?

The race for the Republican Presidential nomination has changed so much in the past week that it is hard to keep up. I enjoy reading Nate Silver’s NY Times blog when I have a chance. A week ago (Jan 16) he wrote a post entitled “National Polls Suggest Romney is Overwhelming Favorite for GOP Nomination,” where he noted that Romney had a 19-point lead in the polls. He wrote:

Just how safe is a 19-point lead at this point in the campaign? Based on historical precedent, it is enough to all but assure that Mr. Romney will be the Republican nominee.

Silver compared the average size of the lead following the New Hampshire primary across the past 20+ years of Presidential campaigns. He sorted the results according to decreasing “Size of Lead” that the top candidate had in the polls. The image below, from Silver’s blog, suggests that Romney has the race all but wrapped up.

It looked almost impossible for Romney to blow it. I stopped following the election news until Gingrich surged ahead and the recount in Iowa led to Santorum winning the caucus.

A mere week later, it looks like Romney’s campaign is in serious trouble. Today (Jan 23), Silver wrote a post entitled “Some Signs GOP Establishment Backing of Romney is Tenuous.” His forecasting model for the Florida primary on January 31 now predicts that Newt Gingrich has an 81% chance of winning. This is largely because Silver weighs “momentum” in his model, which Gingrich has in spades.

Two months ago, I blogged about how Obama would win the election next year. I was only half-serious about my prediction. Although the model seems to work, it is based on historical trends that may not sway voters today. Plus, I had no idea who the Republican nominee would be. Despite my prediction, I certainly envisioned a tight race that Obama could lose. Not so much these days.

A lot has changed in the past week (and certainly in the past two months!)

My question is, what models are useful for making predictions in the Republican race? Will the issue of “electability” ever become important to primary voters?

 


community based operations research

Michael Johnson, PhD

I had the pleasure of interviewing Michael Johnson about his upcoming book Community Based Operations Research in a Punk Rock OR Podcast (21 minutes). It’s a fantastic book: I recommend that you ask your university library to add it to their collection.  If you are heading to the INFORMS Annual Meeting and are interested in CBOR, you might want to check out the two panels that Michael Johnson is chairing.

If you cannot wait for your copy of CBOR to arrive in the mail, I recommend reading Michael Johnson and Karen Smilowitz’s INFORMS Tutorial on CBOR. It’s a must read!

Other Podcasts can be found here.


optimizing school bus routes

This is my second post on politics this month (this month’s INFORMS blog challenge – my first post is about snow removal).  There are few political topics that invoke an emotional response as strongly as K-12 public education does.  My daughter started attending the public school system this year, and I have been surprised at how, well, political school is.  But given the budget cuts over the past few years, some of that is understandable.

I learned the bus route that my daughter would take before the school year began.  My first reaction was to wonder if the bus routes were optimized (what else would my reaction be?).  Designing bus schedules isn’t rocket science, but it can be haphazard, leading to kids spending extra time on the bus, wasted gas, and late bus drivers.

A quick search on the web indicates that bus route scheduling is quite the political issue.

In my neck of the woods, I could probably identify near-optimal solutions without having to build a model (there are a number of small, isolated subdivisions with low road connectivity, which makes routing and bus stop selection a breeze).  But other bus routing scenarios are more complex, either due to the sheer size of the school system, the density and layout of neighborhoods, or not-so-simple school boundaries.

One feature that makes bus routing tough is “fair” school-assignment policies that use lotteries to let parents select their schools: any child could attend any school, so buses for every school could pass through a single neighborhood.  Good bus routes become much harder to identify (unless OR is used!), and regardless, students would have to spend more time on the bus as the distance to school increases.
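To make the routing idea concrete, here is a toy sketch in Python: assign each student to the nearest candidate stop, then order the stops with a nearest-neighbor heuristic starting from the school. The coordinates are made up, and real school districts would use set-covering and vehicle routing models with capacity and ride-time limits rather than this crude heuristic.

    # Toy bus-routing sketch: assign students to their nearest stop, then order
    # stops by a nearest-neighbor tour from the school. Coordinates are made up.
    from math import dist

    school = (0.0, 0.0)
    stops = [(1.0, 2.0), (3.0, 1.0), (2.5, 4.0), (5.0, 3.0)]
    students = [(1.2, 2.1), (2.9, 0.8), (2.4, 3.7), (4.8, 3.2), (1.1, 1.9)]

    # 1. Assign each student to the nearest stop.
    assignment = {s: min(stops, key=lambda stop: dist(s, stop)) for s in students}

    # 2. Order the stops with a nearest-neighbor tour starting at the school.
    route, remaining, current = [], set(stops), school
    while remaining:
        nearest = min(remaining, key=lambda stop: dist(current, stop))
        route.append(nearest)
        remaining.remove(nearest)
        current = nearest

    print("stop order:", route)
    print("students per stop:",
          {stop: sum(1 for s in students if assignment[s] == stop) for stop in stops})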

Michael Johnson and Karen Smilowitz’s excellent TutORial on Community Based Operations Research contains a brief overview about allocating public education resources.  In addition to school bus routing, operations research has been used to

  • design recommendations for school closures in a region that reflect socio-economic characteristics of the students in different areas of the region.
  • develop forecasting models for school attendance as input to optimization models for locating public-school buildings and setting attendance boundaries.
  • apply data envelopment analysis (DEA) to school performance data to identify ways for secondary schools to improve their performance.

Have you seen OR used for public education?

Related posts:


snow removal using a shovel, a plow, and operations research

This month’s INFORMS blog challenge has the topic of politics.  I’ve written about politics frequently before, mainly as it relates to elections.  I have also written about voting systems as they relate to politics, the Oscars, and the Olympics.  Politics is a broad topic, but I’ll write about something I haven’t before: snow removal.  After all, I am at home during yet another snow day (I live in Virginia:  1″ of snow is enough to paralyze the entire city.  My Midwestern sensibilities do not understand this).

Mike Trick wrote a wonderful post about snow removal last year.  I won’t duplicate his efforts, but I will note that James Campbell from the University of Missouri-St. Louis has written several articles about OR and snow removal, as has Gene Woolsey.  Snow removal can be formulated as an optimization model using generalized assignment and partitioning problems.
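As a sketch of what a generalized assignment formulation might look like, here is a small integer program in Python with PuLP that assigns street segments to plows subject to shift capacities. The segments, times, costs, and capacities are invented for illustration; real plans also need routing and partitioning constraints.

    # Sketch of a generalized assignment model for snow removal (PuLP): assign
    # street segments to plows subject to shift capacities. Data are invented.
    import pulp

    segments = ["Main St", "Oak Ave", "Elm Dr", "Hill Rd"]
    plows = ["plow1", "plow2"]
    hours_to_clear = {"Main St": 3, "Oak Ave": 2, "Elm Dr": 4, "Hill Rd": 1}
    cost = {(s, p): hours_to_clear[s] * (1.0 if p == "plow1" else 1.2)
            for s in segments for p in plows}
    capacity = {"plow1": 6, "plow2": 5}   # hours available per shift

    model = pulp.LpProblem("snow_removal_gap", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("assign", (segments, plows), cat="Binary")

    model += pulp.lpSum(cost[s, p] * x[s][p] for s in segments for p in plows)
    for s in segments:      # every segment gets plowed exactly once
        model += pulp.lpSum(x[s][p] for p in plows) == 1
    for p in plows:         # respect each plow's shift capacity
        model += pulp.lpSum(hours_to_clear[s] * x[s][p] for s in segments) <= capacity[p]

    model.solve(pulp.PULP_CBC_CMD(msg=False))
    for s in segments:
        for p in plows:
            if x[s][p].value() > 0.5:
                print(f"{s} -> {p}")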

Another way to optimally remove snow is by using OR to influence urban planning.  In many states, including my state of Virginia, new laws limit the number of cul-de-sacs that can be built in new neighborhoods.  Cul-de-sacs and neighborhoods with few entrances and exits cause many problems: they create traffic bottlenecks and accidents, increase ambulance response times, and increase the time for snow removal.  Models that can relate neighborhood design to the cost of providing public services are valuable for removing snow more efficiently for decades to come.

The politics of snow removal are interesting to me.  As a Chicagoland native, I grew up used to seeing buckets of salt dumped on the road before every storm and plows quickly responding to blizzards.  When I was older, I was surprised to learn that most of the country does not have such great public services.  My parents told me why: the blizzard of 1979 – when more than 88″ of snow was dumped on the city – was so mismanaged that the mayor of Chicago lost his reelection bid.  A good snow removal plan that used OR would have reduced or eliminated the public backlash.  We still see political fallout after blizzards, such as the controversy surrounding the EMS response in New York during the December blizzard.  On the other hand, Newark’s mayor Cory Booker was praised by just about every news outlet for personally digging citizens out of the snow.  Of course, this is not a very efficient method for removing snow, but sometimes appearances matter more than efficiency.

Related posts:

  • *vote* a post about election-oriented politics

* vote! *

Seeing as how tomorrow is election day, I rounded up a few election-related OR posts.  Don’t forget to vote!

Punk Rock OR posts:

Elsewhere on the blogosphere:

Elsewhere on the web:


a second-hand account of the inauguration

One of my students attended the inauguration and indicated that there were at least three logistical problems:

  1. The Metro was indeed a mess. Much of the problem, however, was due to the fact that there were not enough personnel there to help the out-of-towners navigate the Metro, so the confusion was worse than the crowds. I can believe this. All tickets are purchased by automated machines. The fare cost depends on where you are going (not a flat fare), and there is a discount 9am–3pm. Parking is paid by different automated machines and I still haven’t figured those out. Compounding the problem was the rumor that someone was hit by a train, which stressed an already stressed system. This rumor turned out to be true – a 68-year-old woman was hit by a train and luckily survived. More than 1.5M rides were taken using the Metro on Monday.
  2. The second logistical nightmare was trying to get onto the lawn at the Mall. About 240K tickets were issued, and there were not enough personnel taking tickets and letting people onto the Mall. The line was very long and people became impatient after waiting for hours. Watch the video here.  The crowds got angry and surged forward, breaking through the barricade of armed security guards. More than 4000 people with tickets missed the inauguration.  (Side note: I have noticed that there is almost always a bottleneck at admission for special events. Understaffing admission seems to be a common oversight; a back-of-the-envelope throughput calculation follows this list.)
  3. Going to the bathroom was indeed a challenge. The wait to use a McDonald’s bathroom took an hour, but at least there was toilet paper.
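To see why understaffed admission becomes a bottleneck (the back-of-the-envelope calculation promised in item 2), a few lines of arithmetic are enough. The number of ticket-takers and the seconds per ticket are assumptions for illustration; the 240K tickets figure is from the account above.

    # Back-of-the-envelope admission throughput. Ticket-taker count and seconds
    # per ticket are assumptions; 240K tickets is from the account above.
    ticket_holders = 240_000
    ticket_takers = 40          # assumed
    seconds_per_ticket = 10     # assumed

    people_per_hour = ticket_takers * 3600 / seconds_per_ticket
    hours_needed = ticket_holders / people_per_hour
    print(f"throughput: {people_per_hour:,.0f} people/hour")
    print(f"time to admit all ticket holders: {hours_needed:.1f} hours")
    # ~14,400 people/hour means nearly 17 hours to admit everyone; even doubling
    # the staff leaves more than 8 hours, so long lines before a late-morning
    # ceremony are unavoidable without far more admission capacity.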

I skipped the inauguration

I did not attend the inauguration mainly due to unpredictable travel times and child care issues. But I followed it via web stream and news sites and wanted to write an update about the crowd estimates and logistical issues that I wrote about last week.

The crowd estimates were, predictably, lower than the forecast of about 2 million people. The Washington Post reported that the total crowd size was approximately 1.8 million (including people attending the parade), with 1 million people on the Mall. Expert Clark McPhail estimated the crowd size to be 1 million on the Mall. He said, “It was sparser than I thought… There were lots of open spaces.” The Numbers Guy reports the estimates that many networks were using and how they got them.

It seems like there were enough bathrooms. There were some reports of long bathroom lines, but in the end, the wait seemed doable. I was unable to find any articles about a major bathroom problem or a lack of toilet paper. Hooray!

Miraculously, the traffic going to and from DC was reported to not be too bad.  All bridges connecting Virginia to DC were shut down before the Inauguration, so I expected the worst.  It sounds like heavy congestion was limited to Metro stations.  Originally, traffic on I-95 was predicted to back up all the way to Richmond (about 100 miles south of the bridges to DC), but there was absolutely no traffic in Richmond.

If you attended any of the inaugural events, please comment about your experience and the logistics.