Trust Incident MidJourney



Case Author


GPT-4.5, OpenAI, peer-reviewed by Claude 3.7 Sonnet, Anthropic



Date Of Creation


16.03.2025



Incident Summary


MidJourney faced a class-action lawsuit from artists alleging the company used millions of copyrighted artworks without permission to train its generative AI system. The suit claimed MidJourney created derivative works that mimicked artists' styles and compositions, raising fundamental questions about intellectual property rights in AI training data.



AI Case Flag


AI



Name Of The Affected Entity


MidJourney



Brand Evaluation


4



Industry


Technology & Social Media



Year Of Incident


2023



Key Trigger


Public exposure and legal challenges regarding MidJourney's practice of training generative AI models on copyrighted artwork without obtaining explicit consent or providing compensation.



Detailed Description Of What Happened


MidJourney, launched in 2022, trained its AI model on millions of images scraped from the internet, including copyrighted artwork used without permission or attribution. In January 2023, artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class-action lawsuit alleging copyright infringement. The lawsuit claimed MidJourney's AI could reproduce artists' distinctive styles and compositions on demand, effectively creating unauthorized derivative works. The case highlighted fundamental tensions between AI innovation and intellectual property protection, with artists arguing their livelihoods were threatened by AI systems trained on their work without compensation or consent. The incident became a pivotal case in emerging discussions about AI regulation and the application of the fair use doctrine to machine learning.



Primary Trust Violation Type


Integrity-Based



Secondary Trust Violation Type


Benevolence-Based



Analytics AI Failure Type


Privacy, Explainability



AI Risk Affected By The Incident


Privacy and Data Protection Risk, Transparency and Explainability Risk, Economic Crime and Intellectual Property Risk, Ethical and Regulatory Compliance Risk



Capability Reputation Evaluation


4



Capability Reputation Rationales


Prior to the incident, MidJourney demonstrated strong technical capabilities that earned industry recognition. Since its public beta release in July 2022, the platform gained over 1 million users within months, showcasing exceptional market traction. Technology reviewers consistently praised MidJourney's image quality, with particular recognition for its artistic aesthetics compared to competitors like DALL-E and Stable Diffusion. The platform's reliable performance, distinctive artistic output, and innovative features established it as a leading generative AI solution, justifying a strong capability rating despite the ethical concerns that would later emerge regarding its data practices.



Character Reputation Evaluation


2



Character Reputation Rationales


Before the incident, MidJourney's character reputation was below average, particularly among creative professionals. Like other generative AI companies, MidJourney provided minimal transparency about training data sources or ethical frameworks for respecting intellectual property. The company's terms of service included broad claims of ownership over generated images without addressing artist compensation. Several prominent artists had already voiced concerns about AI art tools before the lawsuit, creating an environment of suspicion. MidJourney's approach prioritized technological advancement over stakeholder concerns, with no clearly communicated ethical stance on copyright issues despite operating in a space where such concerns were already prevalent.



Reputation Financial Damage


The incident created significant reputational damage within creative communities, with hashtags like #NoToAIArt gaining traction and several prominent artists publicly boycotting MidJourney. Major art communities and platforms, including Newgrounds and Getty Images, banned AI-generated content in response to concerns raised by the lawsuit. While MidJourney continued to grow its user base due to general consumer interest in generative AI, it faced increased scrutiny from regulators and potential financial exposure from legal proceedings. The company was forced to allocate resources to legal defense and public relations efforts. Furthermore, the incident contributed to calls for regulation, with potential long-term financial implications for MidJourney's business model if it were required to compensate artists or obtain explicit permissions for training data.



Severity Of Incident


4



Company Immediate Action


MidJourney's immediate response was primarily defensive, with CEO David Holz arguing in public statements that training AI on copyrighted material constituted fair use. In a November 2022 interview with Forbes, Holz stated, "There isn't really a way to get a hundred million images and know the copyright status of all of them." The company filed motions to dismiss the lawsuit in February 2023, maintaining that AI training represented transformative use. MidJourney made no immediate changes to its data practices or compensation models for artists, though it did begin exploring potential opt-out mechanisms for artists who didn't want their work used in training. The response appeared calculated to protect MidJourney's legal position rather than address the underlying ethical concerns raised by stakeholders.



Response Effectiveness


The response was only partially effective. While MidJourney's justification strategy clarified its perspective on generative AI and copyright, it failed to adequately address stakeholders' primary concerns about consent, transparency, and intellectual property rights. Legal actions persisted, and public debates intensified, suggesting stakeholder dissatisfaction with MidJourney's initial response. Without concrete corrective measures, such as clearer data governance policies, explicit artist consent, or compensation frameworks, the effectiveness of the response remained limited. To fully restore trust, MidJourney would likely need to demonstrate substantial reforms in ethical AI practices, data transparency, and intellectual property protections.



Model L1 Elements Affected By Incident


Reciprocity



Reciprocity Model L2 Cues


Transparency & Explainability, Accountability & Liability, Terms & Conditions (Legal Clarity), Algorithmic Fairness & Non-Discrimination



Brand Model L2 Cues


N/A



Social Adaptor Model L2 Cues


N/A



Social Protector Model L2 Cues


N/A



Response Strategy Chosen


Justification



Mitigation Strategy


MidJourney employed a justification strategy centered on legal arguments about the fair use doctrine. CEO David Holz publicly stated, "We expect this case to clarify how copyright law applies to AI systems." The company maintained that training algorithms on publicly available images constituted transformative use under copyright law, similar to how search engines index content. In court filings, MidJourney argued that its outputs were new creative works distinct from the training data. While the company acknowledged artists' concerns conceptually, it offered no concrete concessions or changes to its practices, focusing instead on defending the legality and technical nature of its approach rather than addressing the ethical dimensions that concerned stakeholders.



Model L1 Elements Of Choice For Mitigation


Reciprocity, Brand, Social Adaptor



L2 Cues Used For Mitigation


Transparency & Explainability, Accountability & Liability, Terms & Conditions (Legal Clarity), Generative AI Disclosures, Algorithmic Fairness & Non-Discrimination, Privacy Management & Consent Mechanisms



Further References


https://www.reuters.com/legal/litigation/judge-pares-down-artists-ai-copyright-lawsuit-against-midjourney-stability-ai-2023-10-30/
https://www.theverge.com/2024/8/13/24219520/stability-midjourney-artist-lawsuit-copyright-trademark-claims-approved
https://www.lawinc.com/court-allows-copyright-claims-ai-art-lawsuit
https://www.technollama.co.uk/artists-file-class-action-lawsuit-against-stability-ai-deviantart-and-midjourney



Curated


1




The Trust Incident Database is a structured repository designed to document and analyze cases where data analytics or AI failures have led to trust breaches.

© 2025, Copyright Glinz & Company


