As we mark the one-year anniversary of America’s right-wing temper tantrum that almost cost us the Republic, many Americans are probably wondering just how we can prevent such a terrible, violent event from ever happening again on U.S. soil. Well, according to the Washington Post, those in the data science community believe they may have a solution.
Many data researchers are currently hard at work on something called “unrest prediction”—an effort to use algorithms to understand when and where violence may break out in a given nation or community. Key to this effort are projects like CoupCast, run out of the University of Central Florida, which uses a combination of historical data and machine learning to analyze the likelihood that a violent transition of power will take place in one country or another in any given month. According to Clayton Besaw, who helps run CoupCast, these forecasting models have traditionally been aimed at foreign countries but, unfortunately, America is looking more and more like a reasonable candidate for just such an event.
“It’s pretty clear from the model we’re heading into a period where we’re more at risk for sustained political violence—the building blocks are there,” said Besaw, speaking with the Post.
While this may all sound very novel, efforts to use data to predict unrest aren’t particularly new. They generally involve gathering immense amounts of data about specific populations and then feeding that data into projection models. The real question isn’t how it all works but rather: “Does it actually work?” and also “Do we really want it to?”
As far back as 2007, the Defense Advanced Research Projects Agency (DARPA) was working on an Integrated Crisis Early Warning System (ICEWS)—a data-driven program meant to predict social unrest in countries around the world. Produced with the help of researchers from Harvard and professional bomb-maker Lockheed Martin, the program claimed to have created forecasting models for a majority of the world’s nations and could supposedly produce “highly accurate forecasts” as to whether a country would, say, witness a deadly riot or not. The program worked by feeding huge troves of open-source data—such as regional news stories—into its system, which would then use the data to calculate the likelihood of some sort of regional unrest incident.
“The secret sauce in all of this is the fact that we use what’s called a mixed model approach,” said Mark Hoffman, senior manager at the Lockheed Martin Advanced Technology Laboratories, during a 2015 interview with Signal Magazine. “For any one event, say, a rebellion in Indonesia, we will turn around and have five models that are forecasting whether that’s going to happen.” According to Hoffman, the program eventually saw adoption by “various parts of the government” (read: the intelligence community) and also saw interest by “the insurance, real estate and transportation industries.”
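The “mixed model approach” Hoffman describes is, in essence, an ensemble: several independent models each estimate the probability of the same event, and their forecasts are combined into a single score. Here’s a minimal sketch of that idea; the function name, the numbers, and the 0.5 threshold are all made up for illustration and aren’t drawn from ICEWS itself.

```python
def ensemble_forecast(probabilities, threshold=0.5):
    """Combine several models' event probabilities into one forecast.

    Averages the per-model probabilities and flags the event as
    'likely' when the mean crosses the threshold. A real system would
    weight models by past accuracy; this toy version treats them equally.
    """
    if not probabilities:
        raise ValueError("need at least one model forecast")
    mean_p = sum(probabilities) / len(probabilities)
    return mean_p, mean_p >= threshold

# Five hypothetical models forecasting the same event (invented values):
model_outputs = [0.62, 0.55, 0.48, 0.71, 0.59]
score, likely = ensemble_forecast(model_outputs)
print(round(score, 2), likely)  # → 0.59 True
```

The appeal of running five models instead of one is that a single model’s blind spot is less likely to sink the whole forecast; the ensemble’s disagreement is itself a useful signal of uncertainty.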
Around the time ICEWS was in development, there was also work being done on the EMBERS Project, a large data program launched in 2012 (once again with federal tax dollars) that uses gargantuan caches of open-source data from social media to enable threat forecasting. According to a Newsweek article from 2015, “an average of 80 to 90 percent of the forecasts” EMBERS generates have “turned out to be accurate.” This algorithm was allegedly so good at its job that it predicted events like the 2012 impeachment of Paraguay’s president, an outbreak of violent student protests in Venezuela in 2014, and 2013 protests in Brazil over the cost of the World Cup.
If you believe these claims, it’s truly stunning stuff, but it also inspires a pretty basic question: Uh, what the hell happened last year, guys? If this kind of algorithmic prediction exists—and is readily available (indeed, there’s currently an entire market devoted to it)—why didn’t anybody in the U.S. intelligence community foresee a riot that was blatantly advertised all over Facebook and Twitter? If it’s so accurate, why wasn’t anyone using it on that fateful day in January? We have a word for that kind of technical fumble and it’s, uh… not “intelligence.”
According to the Post article, one thing that could explain the historic fumble is that most of these programs and products have been aimed at forecasting events in other countries—the ones that might pose a strategic threat to U.S. interests overseas. They haven’t, for the most part, been turned inward on Americans.
On one hand, it feels like a good thing that these sorts of predictive powers aren’t being broadly aimed at us because there’s a lot we still don’t know about how they do or do not work. Beyond the potential slippery slope of civil liberty violations this kind of algorithmic surveillance could spark, the most obvious concern with this sort of forecasting technology is that the algorithms might be wrong—and that they could send governments off to respond to things that were never going to happen in the first place. As the Post points out, this could lead to things like governments cracking down on people who would’ve otherwise just been peaceful protesters.
However, an even more concerning issue might be: What if the algorithms are right? Isn’t it just as creepy to imagine governments using immense amounts of data to accurately calculate how populations will behave two weeks in advance? That puts us firmly in Minority Report territory. Either way, we probably need to think a little more about this kind of technology before we let it out of the barn.