ABAJ: Will big-data tools make policing less biased–or violate people’s rights? (podcast) by Lee Rawles:
One day, there is a knock at the door, and you are handed a letter. The letter lists your friends, your activities, and the possible legal consequences you might face for those activities. The letter also says that you may be the next to be shot—or to kill someone else.
This is neither a murder mystery nor dystopian fiction; it is a real-world example of how Chicago police have used predictive technology to identify the people considered most at risk of committing violence or becoming victims of it. They call it the “heat list.” This is big-data policing, and it could be coming to your neighborhood.
With resource-strapped police departments under pressure to avert crime and to end racially discriminatory practices, many are turning to data-driven surveillance technology in the hope that it will be both more objective and more effective. But without transparency into what technology police are using and how the data is gathered, can the public have confidence that these tools will be used responsibly or effectively?