Following the murders of five Dallas Police Department officers in July 2016, Chief David Brown ended the suspect's violent activity and subsequent standoff by authorizing the delivery of a pound of plastic explosive via a bomb disposal robot. The ensuing explosion killed the suspect, marking the first deliberate use of a robot to deliver deadly force by American law enforcement.

The type of robot used by the Dallas Police Department did not operate autonomously, but was under the physical control and decision-making of a human operator. In effect, this machinery is comparable to a vehicle, improvised weapon, or other object used by a law enforcement officer to neutralize a lethal threat, which the courts have already identified as reasonable if the criteria for responding to a deadly threat have been met.

As technology continues to advance, robotic machinery has become an increasingly large part of the public's lives and of military operations. Law enforcement is no exception. Law enforcement operations have included similar machinery for at least the last twenty years in incidents involving standoffs, explosives, natural disasters, collapsed structures, and other hazardous conditions. There has been some public concern with the law enforcement use of aerial drones, automatic license plate readers, and autonomously operated traffic control cameras and recorders.

The public appears to be philosophically conflicted about the lethal use of human-controlled machinery within a law enforcement context. The use of the robot in Dallas likely preserved the lives and health of the officers who would otherwise have been tasked with taking the violent suspect into custody or neutralizing his threat potential. Regardless, some viewed the application of the robotic machinery as "ruthless and excessive, if not downright illegal. To many, the killing seemed like an extra-judicial execution" (Boyd, 2016). It is unknown whether those critical of the robot's use have an experienced understanding of confronting potentially lethal threats or have proposed a realistic alternative, or whether the application of the human-controlled machinery created the perception that human discretion had been eliminated.

Already used in some medical procedures and manufacturing operations, robots have been lauded as a means of eliminating human error (Dillow, 2013), and have been suggested as an infallible option for use in law enforcement operations. It could be argued that autonomous robots would not be susceptible to the mistakes produced by the limitations of human physiology and sympathetic nervous system responses present in some law enforcement uses of force. Presumably, autonomous robots would also not be susceptible to perceived bias, nor required to defend themselves with lethal force if confronted by that level of threat, potentially preserving the suspect's life.

In her examination of robots and advancing technology, Joh opined, "Robots have the potential to make policing safer. That possibility, however, must be balanced against the mistakes, hacks, and malfunctions that will inevitably occur. Who will bear the responsibility for these mistakes, either because the threat was misjudged, or the force disproportionate?" (Joh, 2016). It appears mistakes and malfunctions are anticipated to remain a component of law enforcement operations, even with the advancement of autonomous robots and other machinery.


If a law enforcement officer’s potentially lethal response is deemed reasonable, is there a tool, machinery, technology, or technique that should be excluded? Why, or why not?

Should law enforcement officers be prohibited from using a tool, machinery, technology, or technique to neutralize a lethal threat or stop assaultive behavior if it could protect the officer from death or injury? Why, or why not? What are the criteria?

Should law enforcement officers be prohibited from using a tool, machinery, technology, or technique to neutralize a lethal threat or stop assaultive behavior if it could protect members of the public from death or injury? Why, or why not? Is your answer consistent with your view of officer safety and self-preservation measures? Why, or why not?

How does the potential that autonomous robots, presumably part of future policing, would remain susceptible to mistakes and misjudgments compare to the human mistakes and misjudgments sometimes present in law enforcement uses of force?

Why do you think the public is, at times, uncomfortable with the law enforcement use of machinery, unique applications, and/or improvised objects to interrupt violent behavior or enhance police service efficiency and accuracy? Is this intellectually consistent? Does this have any application to objectively evaluating a law enforcement use of force?


Boyd, E.B. (2016). Is Police Use of Force About to Get Worse—With Robots? Sept. 22, 2016.

Dillow, C. (2013). GE's Hospital Robot Could Reduce Human Errors and Save Lives. Popular Science, Jan. 31, 2013.

Joh, E. (2016). Policing Police Robots. 64 UCLA Law Review Discourse 516.

Kann, D. (2017). Why Your Local Police Force Loves Robots. April 18, 2017.

Mar, G. (2016). Policing in 2025: How Robots Will Change SWAT, Patrol. Dec. 7, 2016.
