New in School: AI-Driven Gun Detection Systems
LAST FEBRUARY, Tim Button got some of the worst news possible: His 15-year-old nephew Luke Hoyer was killed in the school shooting at Marjory Stoneman Douglas High School in Parkland, Florida. Since that terrible day, many of Luke’s surviving classmates have become prominent voices in the movement for tougher gun laws. But as the face-off over gun control intensified once more in the tragedy’s aftermath, Button received a sympathetic call from his friend Rick Crane, who suggested developing a perimeter security system in the style of an invisible fence. After some long discussions, they decided to create a company to harness the latest in high-tech security systems that, they hope, could detect potential shooters before they open fire.
The instinct to reach for an immediate technological fix reflects both men’s professional backgrounds: Button owns a telecom company, and Crane is director of sales, network security, and cloud management for a cloud-computing subsidiary of Dell Technologies. So far, their startup, Shielded Students, has enlisted three security companies: an emergency response coordination service and two that make gun detection systems. One of these systems, developed by Patriot One Technologies in Canada, integrates a microwave radar scanner with a popular artificial intelligence (AI) technique trained to identify guns and other hidden weapons. Shielded Students hopes to combine these and other solutions into a package that can help prevent another mass shooting like the one that killed Luke and 16 other people.
“I can tell you with a lot of confidence that this technology, incorporated into Marjory Stoneman Douglas, would have probably saved all 17,” Button said, “including my nephew, who was one of the first people shot.”
While legislators and advocates wrestle over gun laws, a growing number of companies are joining Shielded Students in trying to fill gaps in school security. Like Patriot One, several say they will use AI to automatically detect guns, either with high-tech screening or by scanning surveillance footage. It all sounds promising, but some experts worry about turning school grounds into surveillance-heavy zones where AI helps private companies collect and analyze scads of student data. More important, they say, there is little to no public data available to assess whether, and how well, such AI-driven gun-detection systems work in a busy school environment, even as the hunt for solutions grows more urgent.
While the number of school shootings has modestly declined since the 1990s, a spate of recent incidents has galvanized a national debate on school safety. And the Parkland shooting in particular has renewed discussions about strengthening gun regulations in the United States, which has experienced 57 times more school shootings than all other industrialized countries combined. Since the Columbine High School massacre of 1999, more than 187,000 students have experienced gun violence at American schools.
Given the public concerns, security companies will likely find at least some willing customers. Indeed, Shielded Students is already in talks with schools to test the system on campuses, and while Button says the technology won’t catch every shooter, he remains convinced the impact will be real. “[It] will certainly deter a large percentage of things from happening in and around schools,” he said.
IN SEATTLE, more than 3,000 miles away from Florida, Leo Liu says he absorbed news of the Parkland shooting like “a big punch.” Two weeks later, his concern grew when he saw his 7-year-old’s school launch active shooter drills alongside the earthquake drills that are routine where they live in the Pacific Northwest. Like Button, Liu, a co-founder of a Seattle-based startup called Virtual eForce, formed a high-tech plan to spot and track future school shooters.
Liu’s vision relies on AI to automatically detect guns in video surveillance images. Once the system flags a potential gun, it can alert security staff, who can then confirm or dismiss the possible threat before triggering a school lockdown and notifying police. The system, Liu says, could also help track the gunman and send location alerts via text or app to the school and the police. According to Virtual eForce, the system is already being tested at a health care office building, and the company hopes to run similar trials in schools. At least two other companies are also pitching AI-based gun detection, including Israel-based AnyVision and Canada-based SN Technologies, according to The Washington Post.
The AI technology behind these efforts — known as deep learning — represents the latest developments in computer vision. By training deep-learning algorithms on thousands or millions of images, researchers can create computer software that recognizes and labels human faces, cats, dogs, cars and other objects in photos or videos. The same idea applies to guns.
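In practice, that training process can be sketched in a few lines of code. The following Python example, using the PyTorch and torchvision libraries, shows roughly what fine-tuning an off-the-shelf image classifier to flag guns might look like; the folder layout, class names, and parameters are hypothetical and do not describe any vendor’s actual system.

```python
# Hypothetical sketch: fine-tuning a stock image classifier to flag frames
# that contain a visible gun. The folder layout ("frames/gun", "frames/background"),
# class count, and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: frames/gun/*.jpg and frames/background/*.jpg
train_data = datasets.ImageFolder("frames", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")  # start from general-purpose image features
model.fc = nn.Linear(model.fc.in_features, 2)     # two labels: gun / background

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```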
But deep-learning systems are only as good as their training. For example, an algorithm trained to recognize guns in well-lit scenes from Hollywood films and TV shows may not perform well on security footage. “From a visual standpoint,” a weapon may appear as “nothing more than a dark blob on the camera screen,” says Tim Hwang, director of the Ethics and Governance of AI Initiative at the Harvard Berkman Klein Center and the MIT Media Lab.
To boost accuracy, Virtual eForce trained its algorithms to recognize different types of guns, such as long guns (including AR-15s and AK-47s) and handguns. The startup also filmed its own videos of people holding different guns from different angles, and lowered the resolution to mimic grainy surveillance footage.
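That resolution trick can be approximated with a simple augmentation step during training. The sketch below, again hypothetical and using torchvision, degrades clean training frames so they look more like low-resolution camera footage; in the training sketch above, a pipeline like this could stand in for the plain `transform`.

```python
# Hypothetical augmentation sketch: degrade clean training frames so they
# resemble grainy, poorly lit surveillance footage. The resolutions and
# jitter values are made up for illustration.
from torchvision import transforms

degrade = transforms.Compose([
    transforms.Resize((72, 128)),                          # discard fine detail, like low-res CCTV
    transforms.Resize((224, 224)),                         # scale back up to the network's input size
    transforms.ColorJitter(brightness=0.4, contrast=0.4),  # simulate poor or uneven lighting
    transforms.ToTensor(),
])
```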
Still, Liu acknowledged that no deep learning algorithm will be flawless in the real world. The system’s most common errors are false positives, when it mistakenly identifies a relatively innocuous object as a gun. As a safeguard, people will have the final say in assessing any threat the system flags, Liu says. Such human checks could even improve a deep learning algorithm’s performance by confirming or correcting the system’s initial classification.
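A minimal sketch of that human-in-the-loop step might look like the following; the field names and file format are invented for illustration, and no vendor has published its actual review workflow.

```python
# Hypothetical sketch of a human review step: every automated detection is
# logged with the reviewer's verdict, so confirmed threats can be escalated
# and false positives can become labeled examples for retraining.
import csv
from dataclasses import dataclass

@dataclass
class Detection:
    frame_path: str   # saved video frame that triggered the alert
    score: float      # model's confidence that a gun is present

def review(detection: Detection, reviewer_says_gun: bool, log_path: str = "reviews.csv") -> str:
    """Record the human verdict and return the next action."""
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([detection.frame_path, detection.score, reviewer_says_gun])
    return "escalate: lockdown and notify police" if reviewer_says_gun else "dismiss"

# Example: a guard dismisses a detection that turned out to be an umbrella.
print(review(Detection("cam3_frame_1042.jpg", 0.81), reviewer_says_gun=False))
```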
Even with these safeguards, however, people have their own biases that can influence how they interpret a possible threat, says Douglas Yeung, a social psychologist at the RAND Corporation who studies the societal impacts of technology. And the training and expertise of the people supervising the AI system will also be important, whether they are guards, security specialists, or imagery analysts.
Then there is the matter of privacy. Both AnyVision and SN Technologies, for example, combine gun detection with facial recognition to identify potential shooters. Similarly, Virtual eForce says it could incorporate facial recognition if clients want the additional security layer. But using this tech on students comes with many more privacy and accuracy concerns, which could put off some schools.
“There could be a chilling effect from the surveillance and the amount of data you need to pull this off,” Hwang says.
THE COMPANIES that rely on video surveillance can generally only detect a drawn weapon. That is why Patriot One, the Canadian company, plans to offer schools a different technology that could identify weapons hidden under clothing or in bags. “We are focused on concealed threats,” said chief executive Martin Cronin, “which computer vision is not suited for.”
Patriot One’s approach relies upon a specialized radar technology — developed in partnership with McMaster University — that can be hidden behind security desks or in the walls near a building’s main access points. The radar waves bounce off a concealed object and return a signal that reveals its shape and metal composition. From there, a deep learning tool recognizes the radar patterns that match weapons including handguns, long guns, knives, and explosives. So far, one of the biggest challenges is training the tool to ignore the usual clutter in a student’s backpack, such as wadded up gym clothes, a textbook, or a pencil case.
The company was working with the Westgate Las Vegas Resort and Casino even before the deadliest mass shooting in modern U.S. history took place nearby on the Strip on October 1, 2017. The gunman used suitcases to smuggle an arsenal of weapons to hotel rooms on the 32nd floor of Mandalay Bay. In the future, if Patriot One can prove its technology works, the system could help flag a suitcase full of guns.
To handle false positives, Patriot One will allow customers to set a threshold for system alerts about potential threats. A hotel, for example, could choose to get alerts only when a threat is 70 percent likely to be real. But the company also seems conscious of the need to deliver a reliable product before selling its system to school districts, and it has been actively cooperating with Shielded Students in the wake of the Parkland school shooting.
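The thresholding idea itself is simple to express. The sketch below uses the 70 percent figure from the hotel example, though a real deployment would tune that number against the false-alarm rate it can tolerate.

```python
# Hypothetical sketch of a customer-configurable alert threshold. The 0.70
# figure echoes the hotel example above; a school might choose differently.
ALERT_THRESHOLD = 0.70

def should_alert(threat_probability: float, threshold: float = ALERT_THRESHOLD) -> bool:
    """Alert only when the estimated probability of a real threat meets the threshold."""
    return threat_probability >= threshold

# A lower threshold catches more threats but raises more false alarms;
# a higher one quiets the system at the risk of missing a real weapon.
print(should_alert(0.82))  # True  -> notify security staff
print(should_alert(0.40))  # False -> stay silent
```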
“We’re not going to release for wide commercial deployment until we’re satisfied for high accuracy,” Cronin says. “And so that’s why we’re doing this real-world testing and optimization, because it would be unacceptable to have a high level of false positives, because people would lose faith in the system.”
Given the lack of public data, it’s difficult to independently judge the accuracy of any of the new security systems. But beyond the challenge of real-world performance, the systems could be vulnerable to people actively looking to fool them. A tech-savvy individual or group, for example, could try to hack into an AI-based computer vision system and submit thousands or even millions of modified images to discover the best way to confuse it — a process known in AI research as an adversarial attack.
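As a rough illustration of the kind of probing described above, the sketch below repeatedly submits slightly perturbed copies of an image to a model and keeps whichever version the model finds least gun-like. It is a toy example of the concept, not a working attack on any real product, and the class index and parameters are hypothetical.

```python
# Toy illustration of black-box probing: repeatedly submit slightly perturbed
# copies of an image and keep whichever one the model scores lowest for the
# "gun" class. A crude sketch of the idea only.
import torch

GUN_CLASS = 1  # assumed index of the "gun" label in the model's output

def probe(model, image, steps=1000, epsilon=0.03):
    """Search for a small pixel perturbation that lowers the gun score."""
    model.eval()
    with torch.no_grad():
        best_image = image
        best_score = model(image.unsqueeze(0)).softmax(-1)[0, GUN_CLASS]
        for _ in range(steps):
            noise = torch.empty_like(image).uniform_(-epsilon, epsilon)
            candidate = (image + noise).clamp(0.0, 1.0)
            score = model(candidate.unsqueeze(0)).softmax(-1)[0, GUN_CLASS]
            if score < best_score:  # found a more confusing variant
                best_image, best_score = candidate, score
    return best_image, best_score
```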
“I think the number one priority is to be aware that these adversarial attacks exist, and recognize that if an attacker is incentivized enough to break a machine-learning based system, chances are they can find a way to break it,” says Andrew Ilyas, an incoming Ph.D. candidate in computer science at MIT. “Based on the work in the field, it doesn’t look like we’re ready to have mission-critical decisions be made based on AI alone.”
Ilyas and his colleagues at LabSix, an AI research group run by MIT students, have already shown that it’s possible to trick deep learning tools. In 2017, the team demonstrated how to break Google Cloud Vision, a commercial service that labels the contents of images, such as faces and landmarks. They also tricked a Google computer vision algorithm, one of the best available, into classifying a 3D-printed turtle as a rifle.
It’s difficult to say how easily such adversarial attacks could trick real AI-driven surveillance systems, says Anish Athalye, a Ph.D. candidate in computer science at MIT and a member of LabSix. As far as the MIT team knows, nobody has publicly demonstrated a successful adversarial attack on such surveillance systems.
Still, it’s not hard to imagine the security risks that could arise in coming years. A sophisticated attacker might disguise a handgun to show up in a security system as a pencil case or a pair of gym socks.
DESPITE BUTTON’S eagerness for Shielded Students to help protect schools immediately, he recognizes that the security company partners involved need time to collect more data and integrate their technologies. For now, Shielded Students plans to test its system at several schools, while keeping an eye out for new high-tech security solutions as they become available. “We fully expect to be able to interchange these technologies as needed, as well as add new technologies, as we learn what else is out there or new technologies are introduced,” Button said.
Of course, no school security is foolproof. Today, even without high-tech AI, schools post armed police officers or guards, limit access points, and install metal detectors, and all of these measures have failed to stop school shooters at some point, says Cheryl Lero Jonson, an assistant professor of criminal justice at Xavier University in Cincinnati, Ohio. Last year, Jonson published a literature review in the journal Victims & Offenders on the effectiveness of school security measures. She found them all lacking.
“Prevention measures, including technology-driven measures, can be breached or fail,” Jonson says. Active shooter drills will be necessary even as technology improves, she adds, because “people need to be equipped with the skills and mental preparation needed to potentially survive an active shooting event.”
It remains to be seen whether high-tech surveillance can make a difference in future school shootings. Hwang at MIT, who is also a former global public policy lead on AI and machine learning for Google, doesn’t necessarily oppose looking for solutions to gun violence beyond gun regulation reform. But he doesn’t think AI-based surveillance tools are ready, in part because he isn’t convinced that most companies have enough training data to build an accurate system.
Even if the systems do work, blanketing schools with cameras and surveillance equipment could create a slippery slope in how that data is used. Private companies may be tempted to sell schools on additional uses of all the data being collected each day.
“I’m generally concerned about the impact of deep surveillance on the educational experience,” Hwang said.