OpenEvidence Lawsuit: AI Rivalry and Trade Secret Disputes
The burgeoning field of artificial intelligence (AI) is not just about technological advancements; it’s increasingly becoming a battleground for legal disputes, particularly concerning trade secrets. One prominent case highlighting this intersection is the OpenEvidence lawsuit, which brings to the forefront the complexities of AI rivalry and the protection of proprietary information in the age of intelligent machines. In February 2025, OpenEvidence, Inc., a Massachusetts-based AI company specializing in medical information, filed a lawsuit against Pathway Medical, a Canadian competitor, alleging trade secret misappropriation. This case, filed in the U.S. District Court for the District of Massachusetts, has broad implications for how AI-generated content and proprietary algorithms are legally protected.
The Allegations: Cyberattacks and Stolen Blueprints
OpenEvidence claims that Pathway Medical launched cyberattacks to steal its blueprint for an AI medical information platform. The lawsuit alleges that Pathway Medical violated the Defend Trade Secrets Act (DTSA) by impersonating healthcare professionals to gain unauthorized access to OpenEvidence’s platform. This access was then purportedly used to manipulate the AI system into divulging sensitive and proprietary information. The complaint also includes claims for violation of the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA).
Specifically, OpenEvidence contends that Pathway Medical executed “prompt injection” attacks—a type of cyberattack uniquely harmful to AI systems. In these attacks, malicious inputs are disguised as legitimate prompts to trick the AI into revealing confidential data and proprietary algorithms. OpenEvidence argues that Pathway Medical used this extracted information to develop a competing AI system, directly violating federal trade secret laws and contractual agreements.
The Core of the Dispute: AI System Prompts
At the heart of the OpenEvidence lawsuit is the concept of AI system prompts. These prompts are the hidden instructions that guide a large language model (LLM) in how it reasons, responds, and maintains consistency. They dictate the AI’s role, personality, and subject matter expertise. OpenEvidence argues that its system prompts are its “crown jewel,” representing years of investment and development work.
The lawsuit raises a critical question: Can these AI system prompts qualify as trade secrets? The court is being asked to decide whether manipulating an AI’s interface to extract these hidden instructions constitutes digital theft. This is a novel issue, as it challenges the traditional understanding of trade secret misappropriation in the context of AI.
Legal and Technical Challenges
The OpenEvidence case highlights several legal and technical challenges in protecting AI-related trade secrets:
- Defining Trade Secrets in AI: AI software and algorithms are well-suited for trade secret protection due to their intangible nature. However, disputes often arise over whether AI-related code constitutes a true trade secret or merely general programming knowledge.
- “Improper Means” of Acquisition: The definition of “improper means” in acquiring trade secrets is being tested. Can using AI prompts to extract sensitive data be considered an improper means?
- Reverse Engineering vs. Misappropriation: A major question is whether extracting data from a generative AI model constitutes legal “reverse engineering” or a violation of trade secret law.
- Data Security: The case underscores the importance of robust data security measures to protect AI systems from unauthorized access and data breaches.
Broader Implications for the AI Industry
The OpenEvidence lawsuit has significant implications for the AI industry:
- Setting Legal Precedents: The case could set precedents for how AI-generated content and proprietary algorithms are legally protected under U.S. law.
- Raising Cybersecurity Concerns: The lawsuit highlights the growing concerns about cybersecurity, data scraping, and unauthorized access in the AI field.
- Encouraging Proactive Protection: The case underscores that companies must take proactive steps to safeguard their proprietary information, as courts will not enforce protections that businesses themselves fail to uphold.
- Impacting AI Innovation: The outcome of the lawsuit could influence how companies approach AI innovation and intellectual property protection.
Advice
Given the complexities and high stakes involved in AI-related trade secret disputes, companies should take proactive measures to protect their intellectual property:
- Implement Robust Security Measures: Implement strong cybersecurity protocols to prevent unauthorized access to AI systems and data.
- Clearly Define Trade Secrets: Clearly identify and document AI-related trade secrets, including algorithms, data sets, and system prompts.
- Restrict Access to Sensitive Information: Limit access to sensitive AI-related information to authorized personnel only.
- Use Restrictive Covenants: Use employment contracts and non-disclosure agreements to prevent employees from disclosing confidential information.
- Monitor AI Systems for Suspicious Activity: Continuously monitor AI systems for suspicious activity, such as prompt injection attacks or unauthorized data access.
- Develop Incident Response Plans: Develop incident response plans to address potential trade secret misappropriation incidents.
- Seek Legal Counsel: Consult with experienced intellectual property attorneys to develop and implement comprehensive trade secret protection strategies.
Conclusion
The OpenEvidence lawsuit is a landmark case that highlights the growing importance of trade secret protection in the AI industry. As AI systems become more sophisticated and valuable, companies must take proactive steps to safeguard their proprietary information. The outcome of this case could have far-reaching implications for AI innovation, competition, and intellectual property law.