Minority Report-like tech could predict mass shootings — but should we use it?
Data mining and artificial intelligence could predict mass shootings
SAN FRANCISCO -- We have the technology to predict who's likely to become an active shooter and to intervene before mass shootings happen, a security researcher said this week at the RSA Conference here.
The question is: Should we implement such a Minority Report-style "precrime" system?
"What if it saved hundreds of lives per year?" asked Jeffrey J. Blatt of X Ventures. "It would result in serious loss of privacy in all aspects of life, loss of anonymity, loss of legal rights. Would it be worth it?"
Blatt explained that he's not advocating that such a system be built, and he has serious reservations about whether it should be. He was simply pointing out that it's possible, because all the data and technology needed are already available. (British police may already be testing such a system.)
Known active shooters, defined as people who try to kill random victims in a confined or populated area, have very often exhibited similar behavior before acting, Blatt said.
The shooters-to-be "claim a close association with weapons, law enforcement or the military," Blatt said. "They identify themselves as agents of change, and may identify with previous known mass shooters," even though they can be of any age, race, ethnicity or gender.
They also frequently warn or threaten targets before acting, "often on social media," Blatt said. "There's also 'leakage' in the form of jokes or offhand comments about committing mass shootings."
"These behavioral threat indicators manifest as data inputs: social media interactions, school behavior records, friends and family, financial transactions, web search histories, etc.," said Blatt.
Using machines to find patterns in public records
Thanks to warning signs like these, authorities are often tipped off: a high-school student talks about killing people at school, for example, and a classmate reports it. But there's far more data containing potential active-shooter warning signs than police, school officials or employers could ever sift through.
This is where data mining, machine learning and artificial intelligence could become useful, Blatt said.
"Instead of a human team looking for threat indicators, could a data processing system identify a potential active shooter before the crime actually takes place?" he wondered. "If we know everything about everyone, we might be able to predict events or outcomes."
We already have the technology
This sort of thing isn't just possible, Blatt said -- it's already been done. The National Security Agency's PRISM program, revealed in 2013 by Edward Snowden's leaked PowerPoint presentations, harvested and analyzed huge amounts of internet data to look for dangerous patterns.
A decade earlier, the Pentagon's Total Information Awareness program, developed in the wake of 9/11, sought to spot and predict terrorist attacks, but Congress cut off its funds after a public outcry. An active-shooter prediction and detection system would likely be narrower in scope and method than either of these huge global systems.
"What if we created a predictive policing system based on active shooter behavior threat indicators?" Blatt said. "I asked a well-known Israeli artificial-intelligence expert to see if this might be built, and he said, yes it could."
"We could scan everyone between the ages of 10 and 90," he added. "We could throw in all DMV records, criminal records, financial, medical and employer records -- I know some of that's illegal, but let's consider it for a thought experiment."
The machine-learning system could rapidly sift through all the information on everyone, connect the dots to establish who might be likely to become an active shooter, and report its findings to human administrators for possible further action.
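That human-review step is the crux, so here's a second minimal sketch of what it might look like, again entirely hypothetical: the names, threshold and data structures are invented, and the only design point is that the system queues cases for people to examine rather than acting on its own.

```python
# Illustrative "human in the loop" triage step; all values are hypothetical.
from dataclasses import dataclass

@dataclass
class Flag:
    person_id: str
    risk_score: float
    indicators: list[str]  # which signals drove the score

REVIEW_THRESHOLD = 0.95  # deliberately high, to limit false positives

def triage(scored_cases: list[Flag]) -> list[Flag]:
    """Pass only the highest-risk cases to human administrators."""
    return sorted(
        (case for case in scored_cases if case.risk_score >= REVIEW_THRESHOLD),
        key=lambda case: case.risk_score,
        reverse=True,
    )

queue = triage([
    Flag("A-1042", 0.97, ["threat posts", "weapons fixation"]),
    Flag("B-2210", 0.41, ["offhand joke"]),
])
for case in queue:
    print(f"Review {case.person_id}: score {case.risk_score}, signals {case.indicators}")
```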
If the system worked well enough in predicting and preventing active shooters, Blatt said, it could even be applied to preventing robbery, rape or assault.
What if this creates even bigger problems?
But Blatt foresees many problems. First, you'd need to guard against bias on the part of the human administrators: how can we be sure they're fair when deciding whether to act against a potential shooter?
You could make sure the administrators don't see how the AI reaches its conclusions, Blatt said, but that "black box" approach would leave you wondering whether the algorithm itself was biased, "and you're approaching Skynet territory."
Blatt said he's bringing this all up because someone may try to develop such a system soon, and we need to anticipate the disruption it would bring to our notions of fairness, privacy, due process and public safety.
"As a society," he said, "we need to consider that the intrusion may be worse than preventing the criminal acts."
Paul Wagenseil is a senior editor at Tom's Guide focused on security and privacy. He has also been a dishwasher, fry cook, long-haul driver, code monkey and video editor. He's been rooting around in the information-security space for more than 15 years at FoxNews.com, SecurityNewsDaily, TechNewsDaily and Tom's Guide, has presented talks at the ShmooCon, DerbyCon and BSides Las Vegas hacker conferences, shown up in random TV news spots and even moderated a panel discussion at the CEDIA home-technology conference. You can follow his rants on Twitter at @snd_wagenseil.