
Is AI Crime Detection Biased?

tags:: #AI #bias projects::

  • CrimeScan is an example of software referred to as predictive policing
    • Developed by computer scientists at Carnegie Mellon University
    • Trained on indicator data such as crime reports, assaults, vandalism, and 911 calls
    • It attempts to predict the geographic clusters where crimes will occur
  • There is fear that bias is baked into this kind of software
    • Because the training data reflects historical police practices, a feedback loop can form where the algorithm effectively decides which neighbourhoods are "bad" and which are "good" (see the sketch after this list)
    • Police focusing on these numbers may forget that it's human beings they're dealing with, and may act more aggressively in higher-risk areas
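
To make the cluster-prediction idea and the feedback-loop worry concrete, here is a minimal simulation sketch. It is not CrimeScan's actual method; the grid size, patrol count, and detection rates are all invented for illustration.

```python
# Toy stand-in for a geographic crime predictor: patrols go to the
# highest-predicted cells, patrols inflate *recorded* crime there, and
# retraining on the new records concentrates predictions further.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 20                                     # city grid cells
true_rate = np.full(n_cells, 5.0)                # identical real crime everywhere
observed = rng.poisson(true_rate).astype(float)  # initial crime reports

for _ in range(10):
    predicted = observed / observed.sum()        # "model": normalized history
    patrolled = np.argsort(predicted)[-5:]       # patrol the 5 highest-risk cells
    detection = np.full(n_cells, 0.3)            # chance a crime gets recorded
    detection[patrolled] = 0.9                   # patrols record far more crime
    new_reports = rng.binomial(rng.poisson(true_rate), detection)
    observed += new_reports                      # "retrain" on the new records

print("share of recorded crime in top 5 cells:",
      round(np.sort(observed)[-5:].sum() / observed.sum(), 2))
```

Even though every cell has the same underlying crime rate, recorded crime piles up in whichever cells the model patrolled first, so the model keeps sending patrols back to them: the neighbourhoods labelled "bad" stay "bad".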

Citation Information

Rieland, R. (n.d.). Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased? Smithsonian Magazine. URL https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/ (accessed 2.10.23).


  • Bias
    • So, in reference to What biases exist in machine learning models: what bias exists here, and is it dangerous?
      • The scary part of this, in my eyes, is the possibility of letting machines profile people, cutting a sense of humanity out of the process of justice
    • Using computers to analyze criminal data could lead to:
      • Police being more aggressive in higher crime-risk areas
      • AI may not account for community context, including poverty, lack of access to resources, and other social factors that contribute to crime
        • This could result in unfairly targeting certain communities and neglecting others, amplifying existing inequalities and even creating new ones (a toy sketch follows this bullet)
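
As a toy illustration of the missing-context point (invented numbers; "poverty" here stands in for any omitted social factor): when the driver of crime is hidden from the model, the per-neighbourhood history it learns becomes a proxy for that driver, so the model blames the place rather than the conditions.

```python
# Hypothetical sketch: crime here is generated purely by a hidden poverty
# index, but the model only sees neighbourhood history, so its risk scores
# point enforcement at high-poverty areas while the cause stays invisible.
import numpy as np

rng = np.random.default_rng(2)
poverty = np.array([0.1, 0.2, 0.6, 0.8])   # hidden driver, one value per area
incidents = rng.poisson(poverty * 100)     # yearly incidents caused by poverty

risk = incidents / incidents.sum()         # context-free "model": raw history
print("risk scores by neighbourhood:", np.round(risk, 2))
```

The scores track poverty almost exactly, so a system acting on them concentrates enforcement where resources are scarcest instead of addressing the factor actually driving the numbers.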
  • How predictive policing could be dangerous
    • Note: this isn't the case in our current system (fortunately), but it is a capability and is being tested
    • Imagine a model that flags individuals who are likely to be criminals
      • It's likely that certain racial or ethnic groups would be overrepresented in crime data
      • The algorithm may then flag members of that group as higher risk even if they haven't committed (or won't ever commit) a crime
      • This could lead to increased surveillance and arrests of individuals based on their race rather than their criminal behaviour (see the sketch below)
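
A hypothetical numerical sketch of this danger (all rates invented; no real system is modelled): two groups offend at the same underlying rate, but one is policed more heavily, so it is overrepresented in the recorded data, and a model scoring individuals by their group's recorded rate flags its innocent members far more often.

```python
# Invented example: equal true offence rates, unequal recording rates.
# The "model" flags anyone whose group's recorded-arrest rate exceeds a
# cutoff, which yields very different false positive rates by group.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
offends = rng.random(n) < 0.05                   # same true rate in both groups

# Group B is policed more heavily, so its offences are recorded 3x as often.
recorded = offends & (rng.random(n) < np.where(group == "B", 0.9, 0.3))

rate = {g: recorded[group == g].mean() for g in ("A", "B")}
flagged = np.array([rate[g] for g in group]) > 0.03

for g in ("A", "B"):
    innocent = (group == g) & ~offends
    print(f"group {g}: recorded rate {rate[g]:.3f}, "
          f"false positive rate {flagged[innocent].mean():.2f}")
```

Every innocent member of group B gets flagged (false positive rate 1.0) while group A's innocent members do not, even though both groups offend at the same underlying rate: exactly the race-rather-than-behaviour outcome described above.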