
The Broken Algorithm That Poisoned American Transportation


This article appears in VICE Magazine’s Algorithms issue, which investigates the rules that govern our society, and what happens when they’re broken.

In November 2011, the Louisville-Southern Indiana Ohio River Bridges Project published a 595-page document that was supposed to finally end a decades-long battle over a highway. The project was a controversial one, to say the least.


At a time when many cities around the country were re-evaluating whether urban highways had a place in their downtowns, Louisville was doubling down. It not only wanted to keep the infamous “Spaghetti Junction,” where Interstates 64, 65, and 71 meet in a tangled interchange; it wanted to build more on top of it. In addition, the political alliance behind the project aimed to expand the I-64 crossing to double the lane capacity, as well as build a whole new bridge just down the river—doubling the number of lanes that crossed the river from six to 12—all for a tidy $2.5 billion.

But in order to get approval to use federal funds for this expensive proposition, the project backers had to provide evidence that Louisville actually needed the expansion. Using a legally mandated industry practice called Travel Demand Modeling (TDM), they hired an engineering firm to predict what traffic would look like 20 years in the future, in this case by 2030. The firm concluded that the number of cross-river trips would increase by 29 percent. The implication was obvious: if they did nothing, traffic would get worse. As a result, the project got federal approval and moved ahead.

Two subsequent studies, however, also funded by the Louisville-Southern Indiana Ohio River Bridges Project, came to a very different conclusion.

Two years later, the engineering firm CDM Smith examined what traffic conditions had actually been while the project was seeking approval. It found that from 2010 to 2013, cross-river traffic had actually fallen by 0.9 percent.

The other study, this one for potential bond-holders, was far more puzzling. It concluded that by 2030, combined cross-river traffic would be just 132,000 trips, some 15 percent lower than the SDEIS (the project’s supplemental draft environmental impact statement) had predicted. Even worse, according to this new study, the combined 12 lanes of river crossings would carry some 4,000 fewer daily trips than the I-65 bridge alone did in 2007, completely undermining the argument that Louisville needed these new bridges.

Aaron Renn, an urban policy researcher and frequent critic of the Ohio River Bridges project, extensively documented these shenanigans. “No matter how crazy this project is,” he wrote back in 2013 when that bond-holder study came out, “it always manages to find ways to show that it’s even more wacky than I thought.”

The project is now finished, and everyone in Louisville can see for themselves which prediction was the better one. In 2018, a post-construction traffic study showed that cross-river trips decreased by 2 percent from 2013 to 2018. As a result, the project has been called by Vox, among others, a “boondoggle” of epic proportions.

The Louisville highway project is hardly the first time travel demand models have missed the mark. Though the models are a legally required part of any transportation infrastructure project that receives federal dollars, it is one of urban planning’s worst-kept secrets that they are error-prone at best and fundamentally flawed at worst.

Recently, I asked Renn how important those initial, rosy traffic forecasts of double-digit growth were to the boondoggle actually getting built.

“I think it was very important,” Renn said. “Because I don’t believe they could have gotten approval to build the project if they had not had traffic forecasts that said traffic across the river is going to increase substantially. If there isn’t going to be an increase in traffic, how do you justify building two bridges?”

Travel demand models come in different shapes and sizes. They can cover entire metro regions spanning state lines or tackle a small stretch of suburban roadway. And they have gotten more complicated over time. But they are rooted in what’s called the Four Step process, a rough approximation of how humans make decisions about getting from A to B. At the end, the model spits out numbers estimating how many trips there will be along certain routes.

As befits its name, the model goes through four steps to arrive at that number. First, it generates a kind of algorithmic map based on expected land use patterns (businesses will generate more trips than homes) and socio-economic factors (for example, high rates of employment will generate more trips than low ones). Then it estimates where people will generally be coming from and going to. The third step is to guess how they will get there, and the fourth is to plot their actual routes, based mostly on travel time. The end result is an estimate of how many trips there will be in the project area and how long it will take to get around. Engineers and planners will then add a new highway, transit line, bridge, or other travel infrastructure to the model and see how things change. Or they will change the numbers in the first step to account for expected population or employment growth. Often, these numbers are then used by policymakers to justify a given project, whether it’s a highway expansion or a light rail line.
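To make the four steps concrete, here is a minimal sketch in Python. Everything in it (the two zones, the trip rates, the gravity and logit coefficients) is invented for illustration; real models calibrate such parameters against household surveys and traffic counts.

```python
# A toy illustration of the Four Step process. Every zone, trip rate,
# travel time, and coefficient here is invented for demonstration.
import math

# Step 1: trip generation -- land use determines trips produced and attracted.
zones = {"downtown": {"households": 200, "jobs": 900},
         "suburb":   {"households": 800, "jobs": 100}}
productions = {z: d["households"] * 2.5 for z, d in zones.items()}  # trips out
attractions = {z: d["jobs"] * 1.8 for z, d in zones.items()}        # trips in

# Step 2: trip distribution -- a simple gravity model: destinations pull
# trips in proportion to attractions and inversely with travel time.
travel_time = {("downtown", "downtown"): 8,  ("downtown", "suburb"): 25,
               ("suburb", "downtown"): 25,   ("suburb", "suburb"): 10}

def distribute(origin):
    weights = {d: attractions[d] / travel_time[(origin, d)] ** 2 for d in zones}
    total = sum(weights.values())
    return {d: productions[origin] * w / total for d, w in weights.items()}

od_matrix = {o: distribute(o) for o in zones}

# Step 3: mode choice -- a logit split between driving and transit.
def drive_share(drive_min, transit_min, beta=0.1):
    drive_u = math.exp(-beta * drive_min)
    transit_u = math.exp(-beta * transit_min)
    return drive_u / (drive_u + transit_u)

# Step 4: assignment -- load the driving trips onto routes. With one road
# per O-D pair this is trivial; real models balance flow across many paths.
car_trips = {}
for o, dests in od_matrix.items():
    for d, trips in dests.items():
        t = travel_time[(o, d)]
        car_trips[(o, d)] = trips * drive_share(t, t * 1.6)  # transit assumed slower

print(sum(car_trips.values()))  # total modeled daily car trips
```

A planner would then re-run something like this with a new road added (changed travel times) or different land use numbers and compare the outputs.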

Although there are many reasons the Ohio River Bridges Project was a total urban planning debacle, one that has not gotten much attention is the role travel demand models played in putting lipstick on the $2.5 billion pig. One potential reason for that is because those who work in the field have come to expect nothing less.

To be sure, not everyone who works in the field feels this way. Civil engineers in particular are more likely to defend the models as a useful tool that gets misapplied from time to time. University of Kentucky civil engineering professor Greg Erhardt, who has spent the better part of two decades working on these models, said at their best they are “a check on wishful thinking.” But other experts I spoke to, especially urban planners, tend to view the models as aiding and abetting the wishful thinking that more highways and wider roads will reduce traffic.

Either way, nearly everyone agreed the biggest question is not whether the models can yield better results, but why we rely on them so much in the first place. At the heart of the matter is not a debate about TDMs or modeling in general, but the process for how we decide what our cities should look like.

TDMs, their critics say, are emblematic of an antiquated planning process that optimizes for traffic flow and promotes highway construction. It’s well past time, they argue, to think differently about what we’re building for.

“This is the fundamental problem with transportation modeling and the way it’s used,” said Beth Osborne, director of the non-profit Transportation for America. “We think the model is giving us the answer. That’s irresponsible. Nothing gives us the answer. We give us the answer.”

In 1953, Detroit-area highway agencies launched the first TDM study to create a long-range plan for highway development. The idea, as recounted in an academic history of TDM, was deceptively simple. In order to execute a massive public works project like a highway system, planners had to have some idea of where people would travel in the future. There’s no point, they figured, in spending a few decades building highways only to find they’re too big, too small, or going to the wrong places.

The Detroit Metropolitan Area Traffic Study, as it was called, conducted 39,000 home interviews and 7,200 interviews with truck and taxi drivers (characteristically for the Motor City in mid-century, public transit was not considered). Using an IBM 407 punch card computer to partially automate some steps, the researchers extrapolated from recent trends to predict future travel patterns in order to build an expressway network that would work for Detroit not just in 1955, when the study was published, but in 1980, too.


This was a novel approach to transportation planning and, given the technology and thinking of the time, right on the cutting edge. Other cities, including Chicago, San Juan, and Washington D.C., adopted it shortly thereafter. And it wouldn’t take long for this approach to be exported to other countries as well and become a common transportation planning tool all over the world.

In retrospect, the concept had some obvious flaws. For starters, the model’s basic approach was to presume what had happened recently would continue to happen. If Detroit’s population was rising, it would continue to rise. If fuel prices were falling, they would continue to fall. But that’s not how the world works. A lot can change in a few decades.

Take, for example, population and land-use patterns, inputs from the first step of the four-step model. They are two of the most important variables in any TDM: the more people who live in a given area, the more trips there will be, and where in that area they live and work will largely determine travel patterns. Both of these factors would radically shift within the Detroit area. In the 1950s, Detroit was in the middle of an unprecedented urban growth spurt, peaking around 1950 at more than 1.8 million people, according to historian Thomas Sugrue’s The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit. By 1970, almost one in five residents had left, thanks in large part to middle-class “white flight” to the suburbs. Many businesses moved their headquarters or factories outside the city as well, drastically altering travel patterns. A planner in 1955 would have been hard-pressed to forecast any of that.

More subtly, critics of the typical modeling approach say it doesn’t align with how humans actually behave. For example, say you live in Pasadena and your friend in Culver City invites you over for dinner at six on a weekday. Would you go? Or would you tell them they must be nuts if they think you’re going to drive across Los Angeles during rush hour? Odds are, you will opt for the latter—or, out of basic human decency, the invitation would never have been proffered to begin with—and the trip is never made.

Traffic forecasting doesn’t work like this. In the models, any trip made today will be made perpetually into the future no matter how much worse traffic gets.

Experts refer to this as “fixed travel demand,” which is essentially an oxymoron, because travel demand is almost by definition not fixed. We are always deciding whether a trip is worth taking before we take it. One of the major factors in that decision-making process is how long the trip will take. TDMs work the exact opposite way by assuming that if people want to go somewhere they will. Only then will they calculate how long it will take.
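The gap between the two assumptions is easy to see numerically. The sketch below uses an invented congestion curve (loosely shaped like the standard BPR volume-delay function) and an invented demand curve; the point is only that when demand responds to travel time, fewer trips get made and the road never reaches the congestion a fixed-demand model predicts.

```python
# A sketch of "fixed" versus elastic travel demand. The congestion curve
# loosely follows the common BPR volume-delay shape; all numbers are
# invented for illustration.

def travel_time(volume, free_flow=20.0, capacity=4000.0):
    # Travel time in minutes; it climbs steeply as volume nears capacity.
    return free_flow * (1 + 0.15 * (volume / capacity) ** 4)

# Fixed demand, the classic TDM assumption: all 6,000 would-be trips
# are made no matter how slow the road gets.
fixed_trips = 6000
fixed_time = travel_time(fixed_trips)

# Elastic demand: trips fall off once travel time exceeds what people
# will tolerate. Iterate (with damping) toward a crude equilibrium.
def demand(minutes, base=6000.0, tolerated=20.0):
    return base * min(1.0, tolerated / minutes)

trips = float(fixed_trips)
for _ in range(200):
    trips = 0.5 * trips + 0.5 * demand(travel_time(trips))

print(round(fixed_time), round(travel_time(trips)), round(trips))
```

Under these made-up numbers the elastic road settles at fewer trips and a shorter travel time than the fixed-demand version: some dinners in Culver City simply never happen.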

For this reason, some urban planners derisively refer to this approach as “the lemming theory of demand,” said Joe Cortright, an urban economist for the consulting firm Impresa and contributor to the website City Observatory, because it assumes people will keep plowing onto highways no matter how bad congestion gets.

“It’s not so much about the measurement being wrong, it’s that the whole underlying thesis is wrong,” said University of Connecticut professor Norman Garrick. “You’re not thinking about how people behave and how they’re using the system. You’re just saying this is how it happened in the past [and] this is how it will happen in the future, even though you’re injecting this big change into the system.”

The flip side of the fixed travel demand problem is equally pernicious. Let’s say LA somehow doubled the number of lanes on the 110 and 10 freeways, which connect Pasadena to Culver City. Now, going to dinner at your friend’s place might not seem like such a bad idea. Except tens of thousands of other people are thinking the same thing. They, too, will make trips they previously did not make. Over the long run, they may move further away where houses are cheaper because the commute is faster, meaning they’ll drive more and be on the road longer. Eventually, those new lanes fill up and traffic is as bad as ever.

This phenomenon is called induced demand, and it is not merely a thought exercise. It is precisely what has happened in nearly every case where cities build new highways or expand old ones.

“Recent experience on expressways in large U.S. cities suggests that traffic congestion is here forever,” wrote economist Anthony Downs in his 1962 paper The Law of Peak-Hour Expressway Congestion. “Apparently, no matter how many new superroads are built connecting outlying areas with the downtown business district, auto-driving commuters still move to a crawl during the morning and evening rush hours.”

Experts have known about induced demand for generations, yet we keep adding more highways in the Sisyphean quest to build our way out of rush hour traffic. To fully appreciate the absurdity, look no further than the $2.8 billion Katy Freeway project in Houston, Texas, which was supposed to reduce commute times along the expanded 23-lane freeway, the widest in the world. All too predictably, congestion only increased, and commute times are longer still.

A 2011 paper called “The Fundamental Law of Road Congestion” concluded that “increased provision of roads or public transit is unlikely to relieve congestion,” because every time new lane-miles are added, trip miles driven increase proportionately. The more highways and roads we build, the more we drive. (The flip side is also true: in the rare cases when highways are temporarily out of commission, as with the Alaskan Way Viaduct in Seattle, traffic doesn’t get much worse.) And TDMs have been blind to it.
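That proportionality can be worked through with illustrative numbers. Assuming an elasticity of one, as the 2011 paper estimated, a 50 percent expansion induces 50 percent more driving, and traffic per lane-mile ends up exactly where it began.

```python
# The "fundamental law" in miniature: if driving rises roughly one-for-one
# with lane-miles (elasticity of about 1), a widening leaves traffic per
# lane-mile where it started. All quantities are illustrative.

def vmt_after_expansion(vmt, lane_miles_before, lane_miles_after, elasticity=1.0):
    # Vehicle-miles traveled scale with lane-miles raised to the elasticity.
    return vmt * (lane_miles_after / lane_miles_before) ** elasticity

lane_miles, vmt = 1000.0, 8_000_000.0
congestion_before = vmt / lane_miles        # daily traffic per lane-mile

new_lane_miles = lane_miles * 1.5           # a 50 percent expansion
new_vmt = vmt_after_expansion(vmt, lane_miles, new_lane_miles)
congestion_after = new_vmt / new_lane_miles

print(congestion_before, congestion_after)  # identical when elasticity is 1
```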

“It is well-recognized that the 4-step modeling paradigm developed 50-60 years ago is only a computational convenience that is not behavioral,” wrote transportation planner and consultant David T. Hartgen in 2013, “and does not reflect how traveler decisions are actually made.”

The proof is on the roadways. In his landmark 2007 study of traffic forecasts across 14 nations and five continents, Oxford University professor Bent Flyvbjerg found half of traffic forecasts are wrong by more than 20 percent, a finding subsequently replicated elsewhere. A 2006 study by the National Cooperative Highway Research Program found that out of 15 toll road projects, the actual traffic was 35 percent below the predicted traffic on average. Another study found the error was more like 42 percent on average.

“I think there’s this general consensus that there’s accuracy issues,” said Fred Jones, a senior project manager with the planning firm Michael Baker International. “Sometimes in the order of magnitude anywhere from 30 to 50 percent off.”

Even worse, no one is learning from their mistakes. “Inaccuracy is constant for the 30-year period covered by this study,” Flyvbjerg wrote. “Forecasts have not improved over time.”

It’s not even clear civil engineers or the firms that run these models believe inaccuracy is a bad thing. They’re being asked to do the impossible and predict the future; of course there will be inaccuracies, they argue. It’s like routing a trip on Google Maps. If it’s a 20-minute drive across town, Google Maps will do a pretty good job predicting how long it will take. If it’s supposed to be an eight-hour trip, that’s basically a guess, because even Google can’t see into the future to know whether there will be a crash on I-95 outside of D.C. by the time you get there in five hours. The legally mandated 20-year forecast, University of South Florida professor Chanyoung Lee said, is a lot like that.

As a result, civil engineers doing the modeling tend to downplay the relevance of the precise numbers and speak more broadly about trends over time. Ideally, they argue, policymakers would run the model with varying population forecasts, land use patterns, and employment scenarios to get a range of expectations. Then, they would consider what range of those expectations the project actually works for.

The problem is, when the results are presented to the public, they lose all that nuance and are seized upon by policymakers as fact. As Cortright put it, “the models are essentially a sales tool for what highway departments want to do.”

As problematic as they have been, the models have gotten smarter. Especially in the last decade or so, more states are working from dynamic travel models that more closely reflect how humans actually behave. They are better at taking into consideration alternate modes of transportation like biking, walking, and public transportation. And, unlike previous versions, they’re able to model how widening one section of road might create bottlenecks in a different section.

Still, experts warn that unless we change the entire decision-making process behind these projects, a better model won’t accomplish anything. The models are typically not even run—and the results presented to the public—until after a state department of transportation has all but settled on a preferred project.

After talking to 10 experts in the field for this story, one thing was clear: the hurdles are not technological, but social and political. After all, the Louisville bridge project did accurately model travel demand for the bond-holders. It can be done. The question is not why the models are wrong, but why the right ones don’t seem to make any difference.

When I asked Renn, who had watched the Louisville project closely, what would be a better way to evaluate how to build a big transportation project, he said he wasn’t sure. “There’s this idea we need to depoliticize questions, that we can reduce political choices to objective decision criteria, when in fact I think many of our debates are driven essentially by rival value systems in our visions of the public good.”

Here, the Louisville case is once again illustrative. In the SDEIS, the engineers estimated 15 percent population growth in the metro region by 2030. That prediction seems sound; from 2007 to 2020, Renn said, the population in those counties increased by 7.85 percent. But the SDEIS predicted virtually all of that increase would occur in the surrounding counties and city outskirts. Thanks to that assumption—as well as a forecasted 42 percent increase in employment—the SDEIS came up with a 52 percent increase in travel times and a whopping 161 percent increase in hours lost to traffic delays under the existing infrastructure. These estimates were critical to bolstering the case for the two-bridges plan.

But these trends are not immutable laws of human existence. “This is a classic self-fulfilling prophecy dressed up as technocratic objectivity,” said Cortright. “The population forecasts assume the indefinite decentralization of households and businesses.”

For this reason, TDM critics say the forecasts’ accuracy—or lack thereof—is almost beside the point, because any project that changes the built environment will alter the way people behave. The question is not whether the predictions of how they will behave are accurate, but what kind of behavior we want more of.

“I don’t really care whether the highway model was ‘accurate’ or not,” said Kevin DeGood, director of infrastructure policy at the Center for American Progress and a frequent critic of these types of models in highway plans, “because even if the model is accurate the project can be a failure.”

To that end, DeGood added that we need to refocus our goals at the planning stage, away from projected vehicle speeds, traffic flow, and congestion, to different questions, ones that could steer us towards quality-of-life issues. For example, what percent of households live within a quarter mile of high-quality public transit? What percent can commute without using a private vehicle or live near a public park?

Transportation projects cut to the core of what we value in society. Do we want city neighborhoods divided by tangled highway junctions so people can get downtown easily from the suburbs? Or do we want walkable urban districts with cleaner air, quieter streets, and a proximity to jobs and businesses that means people don’t need to own cars if they don’t want to?

The answers to all these questions would lead states to spend their dollars very differently. One set of answers would result in a lot more projects like Louisville’s. The other would shift focus from road building to public transportation, as well as change laws to promote density.

To Renn’s point, most American cities are divided on these issues. Perhaps the most useful thing the model does is obscure that debate behind a veil of scientific certainty. Behind hard, solid numbers. “From the standpoint of a citizen, these numbers essentially come out of a black box,” he said. “You don’t have any idea how they generated these numbers, so you can’t begin to critique them.”

In other words, the model shuts people up. It may not be honest, but in the world of transportation politics, there’s nothing more valuable than that.

Follow Aaron Gordon on Twitter.