
Trust Incident: Northpointe (Equivant)
Case Author
GPT-4.5, OpenAI; peer-reviewed by Claude 3.7 Sonnet, Anthropic
Date Of Creation
16.03.2025

Incident Summary
COMPAS, an AI-based criminal risk assessment tool developed by Northpointe (now Equivant), was found by ProPublica to disproportionately label Black defendants as higher-risk compared to white defendants, revealing significant racial bias. This incident raised substantial ethical, legal, and societal concerns about fairness, transparency, and the use of AI in criminal justice.
AI Case Flag
AI
Name Of The Affected Entity
Northpointe (Equivant)
Brand Evaluation
3
Industry
Technology & Social Media
Year Of Incident
2016
Key Trigger
Discovery by ProPublica of significant racial disparities in COMPAS risk assessments.
Detailed Description Of What Happened
The COMPAS tool, developed by Northpointe (now Equivant), was used in multiple U.S. states to predict defendants' likelihood of reoffending. In May 2016, ProPublica analyzed over 7,000 risk scores in Broward County, Florida, and found that Black defendants were falsely labeled "high risk" at nearly twice the rate of white defendants (45% vs. 23%), while white defendants were mislabeled "low risk" more often than Black defendants (48% vs. 28%). The analysis revealed that even when controlling for prior crimes, future recidivism, age, and gender, Black defendants were 77% more likely to be pegged as at higher risk of committing a future violent crime. The investigation sparked widespread debate about algorithmic bias in the criminal justice system, led to court challenges and academic research, and ultimately influenced AI ethics policy development nationally.
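To make the error-rate comparison concrete, the following is a minimal sketch of the kind of confusion-matrix analysis ProPublica performed, computing false positive and false negative rates per group. The records and field names here are purely hypothetical placeholders, not the actual Broward County data or schema.

# Sketch of a per-group error-rate comparison; data is hypothetical.
from collections import defaultdict

# (group, labeled_high_risk, reoffended_within_two_years) — illustrative records
records = [
    ("black", True, False), ("black", True, True), ("black", False, True),
    ("white", False, False), ("white", True, True), ("white", False, True),
]

counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
for group, high_risk, reoffended in records:
    g = counts[group]
    if reoffended:
        g["pos"] += 1
        if not high_risk:
            g["fn"] += 1  # labeled low risk but reoffended (false negative)
    else:
        g["neg"] += 1
        if high_risk:
            g["fp"] += 1  # labeled high risk but did not reoffend (false positive)

for group, g in counts.items():
    fpr = g["fp"] / g["neg"] if g["neg"] else float("nan")
    fnr = g["fn"] / g["pos"] if g["pos"] else float("nan")
    print(f"{group}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")

ProPublica's reported disparities (45% vs. 23% false positives; 48% vs. 28% false negatives) are exactly these two ratios computed per racial group over the Broward County sample.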
Primary Trust Violation Type
Competence-Based
Secondary Trust Violation Type
Integrity-Based
Analytics AI Failure Type
Bias
AI Risk Affected By The Incident
Algorithmic Bias and Discrimination Risk, Transparency and Explainability Risk, Ethical and Regulatory Compliance Risk
Capability Reputation Evaluation
3
Capability Reputation Rationales
Prior to the ProPublica investigation, COMPAS had achieved moderate adoption in several state court systems but had already drawn academic scrutiny of its statistical validity. Courts valued its standardized approach to risk assessment, yet researchers questioned its predictive accuracy and methodology, and validation of its effectiveness was mixed. The tool was perceived as innovative in applying algorithmic approaches to criminal justice, but COMPAS benefited more from being an early market entrant than from exceptional technical capability, with its primary selling point being efficiency rather than proven accuracy.
Character Reputation Evaluation
3
Character Reputation Rationales
Before the ProPublica investigation, Northpointe maintained an average character reputation, positioning itself primarily as a technical solution provider rather than emphasizing ethical frameworks. The company's proprietary algorithm was deliberately opaque, with limited transparency about its methodology or validation processes. Northpointe had not proactively addressed potential bias concerns despite operating in the sensitive domain of criminal justice. While not explicitly criticized for ethical lapses before the incident, the company's emphasis on technical capability over transparency and ethical considerations left it vulnerable to trust violations once potential bias was discovered.
Reputation Financial Damage
The ProPublica investigation severely damaged Northpointe's reputation, particularly among civil rights organizations, academic researchers, and progressive policymakers. The company faced multiple legal challenges, including a notable Wisconsin Supreme Court case (State v. Loomis) that, while ultimately allowing continued use of COMPAS, mandated disclosure of its limitations. Several jurisdictions, including New York and California, implemented algorithmic transparency requirements in direct response to concerns raised by the COMPAS case. The company rebranded as "Equivant" in 2017, likely in part to distance itself from the negative publicity. While the specific financial impact remains undisclosed, the incident established COMPAS as the primary cautionary example in AI ethics discussions, creating ongoing market challenges for the company.
Severity Of Incident
4
Company Immediate Action
Northpointe issued a direct technical rebuttal to ProPublica's analysis in July 2016, publishing a 39-page response titled "COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity." The company challenged ProPublica's methodology, arguing that the two sides were applying different definitions of fairness. Northpointe co-founder Tim Brennan defended COMPAS in media interviews, emphasizing that the tool achieved "predictive parity" across racial groups and maintaining that different base rates of recidivism explained the disparate impact. Notably, Northpointe focused exclusively on statistical defenses rather than addressing broader ethical concerns about using risk assessment algorithms in criminal justice or improving the transparency of its proprietary system.
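The clash of fairness definitions Northpointe pointed to is a real mathematical one: when base rates of recidivism differ between groups, a classifier cannot in general satisfy predictive parity (equal precision across groups) and equal false positive rates at the same time (Chouldechova, 2017). A small illustrative calculation, with entirely made-up numbers, shows the tension:

# Why predictive parity and equal false positive rates conflict when base
# rates differ (cf. Chouldechova, 2017). All numbers below are illustrative.

def implied_fpr(base_rate, ppv, tpr):
    # With per-capita rates, TP = base_rate * tpr and FP = (1 - base_rate) * fpr,
    # so PPV = TP / (TP + FP). Solving that identity for fpr gives:
    return base_rate * tpr * (1 - ppv) / ((1 - base_rate) * ppv)

# Hold PPV (0.6) and TPR (0.7) fixed across two groups whose hypothetical
# base rates of recidivism differ:
for group, base_rate in [("group A", 0.50), ("group B", 0.30)]:
    print(group, round(implied_fpr(base_rate, ppv=0.6, tpr=0.7), 3))
# group A 0.467, group B 0.2 — equal predictive parity forces unequal
# false positive rates whenever base rates differ.

This is why both sides could be statistically correct at once: Northpointe's predictive-parity claim and ProPublica's error-rate disparity findings measure different, mutually incompatible notions of fairness.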
Response Effectiveness
Northpointe’s initial response was largely ineffective in addressing stakeholder concerns, as it focused on statistical defense rather than ethical accountability or transparency. The public, judiciary, and academic communities remained critical, demanding deeper transparency and fairness assurances. The controversy significantly influenced AI policy discourse, sparking regulatory and legislative responses aimed at greater accountability and explainability for AI algorithms in judicial contexts. To date, the response has not fully restored trust, and many stakeholders continue to express skepticism about the fairness and transparency of AI-driven judicial assessments.
Model L1 Elements Affected By Incident
Reciprocity, Brand, Social Adaptor
Reciprocity Model L2 Cues
Transparency & Explainability, Accountability & Liability, Algorithmic Fairness & Non-Discrimination
Brand Model L2 Cues
Brand Ethics & Moral Values, Brand Image & Reputation
Social Adaptor Model L2 Cues
User Control & Agency, Generative AI Disclosures, Auditable Algorithms & Open-Source Frameworks, Algorithmic Recourse & Appeal
Social Protector Model L2 Cues
N/A
Response Strategy Chosen
Justification, Denial
Mitigation Strategy
Northpointe justified its COMPAS algorithm by highlighting its statistical accuracy and validity. The company denied allegations of intrinsic racial bias, attributing the disparities reported by ProPublica to differences in recidivism rates across demographic groups rather than to bias in the algorithm itself: "We have tested all of our risk models for bias and find that they all achieve predictive parity by race." Northpointe further argued that its methodology was scientifically sound and that ProPublica had applied a different definition of fairness. This defense emphasized technical robustness without effectively addressing concerns about fairness, transparency, and ethical implications, resulting in ongoing skepticism among stakeholders and policymakers.
Model L1 Elements Of Choice For Mitigation
Reciprocity, Brand, Social Adaptor
L2 Cues Used For Mitigation
Transparency & Explainability, Accountability & Liability, Algorithmic Fairness & Non-Discrimination, Auditable Algorithms & Open-Source Frameworks, Compliance & Regulatory Features, Generative AI Disclosures
Further References
https://www.wicourts.gov/sc/opinion/DisplayDocument.pdf?content=pdf&seqNo=171690, https://mallika-chawla.medium.com/compas-case-study-investigating-algorithmic-fairness-of-predictive-policing-339fe6e5dd72, https://ctlj.colorado.edu/wp-content/uploads/2021/02/17.1_4-Washington_3.18.19.pdf
Curated
1

The Trust Incident Database is a structured repository designed to document and analyze cases where data analytics or AI failures have led to trust breaches.
© 2025, Copyright Glinz & Company