Salon: AI is automating injustice in American policing

Salon: "AI is automating injustice in American policing" by Nicholas Liu ("AI has raised deep concerns about police power and the erosion of rights, finding scapegoats instead of solutions."):

Nearly forty years ago, 'RoboCop' imagined a world in which a police officer could be recast as a highly efficient, if inhuman, crime-fighting cyborg. Today, in an age when artificial intelligence is promoted as synonymous with modernity, police departments across the U.S. are embracing all manner of machines that are not only, by definition, inhuman, but also, according to critics, not very efficient at actually maintaining public safety.

Over the last few years, police have arrested scores of people using AI facial-recognition tools, only to find out later that many of them were miles away from where an alleged crime took place, or physically incapable of committing it. Most of those false leads targeted people of color. Those incidents potentially reveal the material function of AI tools increasingly used by law enforcement, as well as the pitfalls of automating not only the management and discipline of specific communities, but also the legitimization of police use of force against them.

Humans often defer to AI because its calculating, unfeeling nature implies objective authority, ignoring the reality that AI systems learn from the past in order to predict the future. In American policing, that past is defined by the systemic over-policing and containment of working-class Black and brown communities. If police forces are already marked by bias, then the AI tools they use will tend to automate existing hierarchies rather than question or correct them.
