Researchers at a national laboratory are forecasting a future where police and border agents are assisted by artificial intelligence, not as a software tool but as an autonomous partner capable of taking the steering wheel during pursuits and scouring social media to target people for closer investigation. The “Digital Police Officer” or “D-PO” is presented as a visionary concept, but the proposal reads like a pitch for the most dystopian buddy cop movie ever.
The research team is based at Pacific Northwest National Laboratory (PNNL), a facility managed by the corporation Battelle on behalf of the U.S. Department of Energy. Through a review of materials, including records obtained through a Freedom of Information Act request, EFF has learned that the team has commissioned concept art and published articles in magazines aimed at law enforcement leaders.
“To leverage the full power of artificial intelligence, we need to know how people can best interact with it,” they write in a slide deck that starts with a robot hand and a human hand drawing each other in the style of the famous M.C. Escher artwork. “We need to design computing systems that are not simply tools we use, but teammates that we work alongside.”
For years, civil liberties groups have warned about the threats emerging from increased reliance by law enforcement on automated technologies, such as face recognition and “predictive policing” systems. In recent years, we’ve also called attention to the problems inherent in autonomous police robots, such as the pickle-shaped Knightscope security patrol robots and the quadrupedal “dog” robots that the U.S. Department of Homeland Security wants to deploy along the U.S.-Mexico border.
The PNNL team’s vision for “human-machine teaming” goes so much further.
“AI plays an active role in the mission by learning from the human and its environment,” the researchers write in a slide defining the term. “It uses this knowledge to help guide the team without requiring specific direction from the human.”
The Digital Police Officer
In articles published in Police Chief, the official magazine of the International Association of Chiefs of Police, and Domestic Preparedness Journal, the researchers introduce a fictional duo named Officer Miller and her electronic sidekick, D-PO (an apparent play on C-3PO), who’ve been patrolling the streets together for five years.
Here’s what they would look like, according to concept art commissioned by PNNL:
(Miller is technically a paramedic in this image, but this was used to illustrate the police officer narrative in both publications.)
And here’s another piece of PNNL art from a presentation EFF received in response to a FOIA request:
PNNL’s fictional narrative begins with D-PO keeping tabs on the various neighborhoods on their beat and feeding summaries of activities to Officer Miller, as they do every day. Then they get an alert of a robbery in progress. The PNNL researchers imagine a kitchen-sink technological response, tapping drones, face recognition, self-driving vehicle technology, and algorithmic prediction:
While Officer Miller drives to the site of the robbery, D-PO monitors camera footage from an autonomous police drone circling the scene of the crime. Next, D-PO uses its deep learning image recognition to detect an individual matching the suspect’s description. D-PO reports to Officer Miller that it has a high-confidence match and requests to take over driving so the officer can study the video footage. The officer accepts the request, and D-PO shares the video footage of the possible suspect on the patrol car’s display. D-PO has highlighted the features on the video and explains the features that led to its high-confidence rating.
“Do you want to attempt to apprehend this person?” D-PO asks.
Obviously Officer Miller does.
As they drive to the scene, the officer talks to D-PO the way she would with a human partner: “What are my best options for apprehending this guy?” Officer Miller asks.
D-PO processes the question along with the context of the situation. It knows that by “this guy” the officer is referring to the possible suspect. D-PO quickly tells Officer Miller about three options for apprehending the suspect including a risk assessment for each one…
D-PO’s brief auditory description is not enough for the officer to make a decision. Despite Officer Miller’s usual preference to drive, she needs her digital partner to take the wheel while she studies the various options.
“Take over,” she tells D-PO.
All this action sequence is missing is Officer Miller telling D-PO to blast Mötley Crüe’s “Kickstart My Heart.”
The authors leave the reader to conclude what happens next. If you buy into the fantasy, you might imagine this narrative ending in a perfect apprehension, where no one is hurt and everyone receives a medal–even the digital teammate. But for those who examine the intersection of policing and technology, there is a wide range of tragic endings, from a mistaken identity that pulls an innocent person into the criminal justice system to a preventable police shooting–one that ends in zero accountability, because Officer Miller is able to blame an un-punishable algorithm for making a faulty recommendation.
EFF filed a Freedom of Information Act request this spring with PNNL to learn more about this program, how far along it is, and whether any local law enforcement have expressed interest in it.
The good news is that in the internal emails we obtained, one of the authors acknowledges that a capability like D-PO taking over driving is a “long way off” and that monitoring live drone feeds is “not a near-term capability.” Only one agency wrote to the PNNL email address included at the end of the Police Chief magazine article: the Alliance Police Department in Nebraska (pop. 8,150).
“We are implementing an artificial intelligence program to include cameras around the city, ALPR’s and drones,” Chief Philip Lunkens wrote. “Any way we can work together or try things I am very open to [sic]. Please let me know your thoughts and how I can help. Thanks for what you are doing.”
The bad news is that the FOIA documents also include a concept for how this technology could be combined with augmented reality for policing U.S. borders–and that might be a lot closer to realization.
The Border Inspections Teammate System
The PNNL researchers’ slides include a section designed specifically to entice U.S. Customs and Border Protection (CBP) to integrate similar technologies into its process for screening vehicles at ports of entry.
CBP is infamous for investing in experimental technologies in the name of border security, from surveillance blimps to autonomous surveillance towers. In the PNNL scenario, the Border Inspections Teammate System (BITS) would be a self-directed artificial intelligence that communicates with checkpoint inspectors via an augmented reality (AR) headset.
(PNNL did not respond to our request for a more legible scan of the slide.)
This concept is also presented as a tech thriller narrative. A couple of CBP officers have stopped a truck at the border. While the officers inspect the vehicle and grill the driver, BITS busily combs through an array of databases “maintained by several agencies involved in interstate commerce, homeland security, federal and state commercial truck enforcement and others.” BITS also scans through video recorded at weigh stations and analyzes traffic and weather data along the truck’s route. BITS concludes that the driver may be lying about his route and recommends a deeper level of scrutiny.
Of course, the border agents accept the recommendation. They break out advanced, hand-held scanners to probe the vehicle, while BITS compares the scans in real-time against data collected from thousands of other previous scans. BITS tells the officers that the driver is carrying crates that look similar to other crates containing blister packs of narcotics.
Finally, BITS scans the driver’s online presence and determines, “the driver’s social media activity shows a link to other suspects of similar activity.”
The scenario concludes with the CBP officers detaining the driver. The researchers again leave the conclusion open-ended. Maybe they find illegal narcotics in the back of the truck–or maybe it was all computer error and the driver loses his job, because his perishable freight didn’t arrive on time.
The records EFF received do not indicate any official interest from CBP or the Department of Homeland Security. However, BITS may not be as far off in the future as the D-PO. CBP has been experimenting with AR since at least 2018 by using HoloLens headsets to inspect goods for intellectual property violations.
Meanwhile, an “artificial intelligence expert” at San Diego State University is developing a technology that sounds similar to (if not more alarming than) BITS. The project contemplates “helping DHS ‘see’ terrorists at the border” with HoloLens headsets that “would add custom-built algorithms to place everything a border agent needs to know in a line of sight for faster, more thorough operations.”
This system builds off the SDSU expert’s earlier DHS-funded project: a kiosk-based system called the “Automated Virtual Agent for Truth Assessments in Real Time” (AVATAR) that was tested around 2011 at border crossings. According to an SDSU promotional article, AVATAR was “designed initially for border and airport security” and researchers claimed it could “tell if the person being interviewed might be providing deceptive answers based on information transmitted via behavioral sensors in the kiosk,” such as facial expressions and voice inflections. All of these technologies have the potential for grave error as well as racial bias.
As dazzling as this technology might be to officials working in the highly politicized realm of border security, it sets off a flashing red alarm for civil liberties advocates who have long been tracking the abuse and violations of the rights of travelers at ports of entry.
No More Tech Fantasies
One of the problems with modern policing is the adoption of unproven technologies, often based on miraculous but implausible narratives promoted by tech developers and marketers, without contemplating the damage they might cause.
The PNNL researchers present D-PO as a solution that they proudly acknowledge sounds like it was pulled from “a science fiction novel.” But what they fail to remember is that science fiction is a cautionary genre, one designed to help readers–and the world–imagine the worst case scenarios. HAL murdered the crew in 2001: A Space Odyssey. The precogs of Minority Report made mistakes. The Terminator’s Skynet nearly wiped out the human race.
Society would be better served if the PNNL team used their collective imagination to explore the dangers of new policing technologies so we can avoid the pitfalls, not jetpack right into them.