Cops are falling in love with AI, and it’s much deeper than facial recognition

Hello and welcome to Eye on AI.

If you’ve spent any time on social media in the past month or so, chances are you’ve seen videos of New Yorkers trolling the new NYPD “robocop” that’s now patrolling the Times Square subway station. Standing 5-foot-3 and roaming around like a goofy sidekick in Star Wars, the Nvidia AI-powered K5 robot from Knightscope is certainly an easy target for mockery.

But while it’s arguably the most in-your-face example of how law enforcement agencies are tapping AI for policing, the K5 is just the tip of a much more discreet iceberg. Police across the country—and the world—are increasingly using AI systems that, lacking the K5’s meme appeal, are likely to fly under the radar yet may be far more consequential. This past week, reports in The Markup and 404 Media revealed more about these tools and how police are using them. Police interest in AI is nothing new, but it’s clearly ramping up with the recent AI boom.

Ese Olumhense’s dispatch from “Cop Con” in The Markup was particularly enlightening. She attended the San Diego conference, officially the International Association of Chiefs of Police (IACP) conference, where nearly 700 vendors demonstrated their new policing tech to more than 16,000 attendees. The attendees were largely law enforcement officials, not only from the U.S. but also from countries including Saudi Arabia, Japan, Brazil, Canada, Jamaica, Indonesia, Ireland, the Dominican Republic, and the U.K.

Olumhense reports that the technology she saw largely fell into three buckets. There were robotic tools like drones and police robots, including a drone that can be equipped with a window-breaking tool. There were also emerging and enhanced surveillance technologies, like automatic license plate readers. And lastly, a whole lot of AI.

“Artificial intelligence and algorithmic products were, predictably, among the tools I saw the most,” she wrote in The Markup. For example, she saw companies advertising voice analysis algorithms that claim to detect fraud risk based on speech, as well as various tools that purport to aggregate and analyze data to generate insights, drawing on sources such as social media, jail systems, video feeds, and court records.

Facial recognition technology, however, was noticeably missing from the conference.

“I didn’t really see it brought up that often in panels. People just weren’t touching it,” Dave Maass, director of investigations at the nonprofit Electronic Frontier Foundation, who has attended three IACP conferences, told The Markup. “And I’m not sure that’s because the technology has become pretty toxic in public discourse, if it’s just not as useful and hasn’t lived up to promises—who knows, but people were not into it.”

Indeed, facial recognition technology has been shown to be both racially biased and frequently wrong. Several Black Americans have already been wrongly arrested, and even jailed, based solely on inaccurate facial recognition matches, according to the Associated Press, the New York Times, and many other recent reports.

Beyond the emerging tech police are eyeing and buying, a new report from 404 Media uncovered an “AI-powered system that is rapidly springing up across small town America and major cities alike.” Essentially, the system allows law enforcement to link together all of a town’s security cameras—including those that are government-owned as well as privately owned cameras at businesses and homes—into one central hub.

“Fusus’ product not only funnels live feeds from usually siloed cameras into one central location, but also adds the ability to scan for people wearing certain clothes, carrying a particular bag, or look for a certain vehicle,” reads the article, which also discusses the absence of clear policies, auditable access logs, and community transparency about the capabilities and costs of the system.

Especially given AI’s proven racial biases, the use and possible misuse of the technology for policing have been a driving part of the AI conversation for years. As the tech community, and now governments and the public at large, debate whether to focus on the near-term or long-term risks of AI, surveillance concerns and the potential for AI to exacerbate the racial biases already rampant in policing are not just near-term issues; they’re playing out right now.

In his AI executive order last week, President Joe Biden included a direction to “ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.” While this is just a directive and doesn’t carry the force of legislation, its inclusion in the order is significant. It’s also worth noting that of the “surge” in state-level AI laws passed, proposed, and going into effect this year, none explicitly addresses policing or law enforcement. But clearly, governments are just getting started.


Programming note: Gain vital insights on how the most powerful and far-reaching technology of our time is changing businesses, transforming society, and impacting our future. Join us in San Francisco on Dec. 11–12 for Fortune’s third annual Brainstorm A.I. conference. Confirmed speakers include such A.I. luminaries as Google Assistant and Google Bard GM Sissie Hsiao, IBM Chief Privacy and Trust Officer Christina Montgomery, Walmart International SVP and CTO Sravana Karnati, Pfizer Chief Digital and Technology Officer Lidia Fonseca, and many more. Apply to attend today!

And with that, here’s the rest of this week’s AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
