Validating AI Output in Patent Practice: Solve Intelligence at ABA-IPL 2026
The American Bar Association’s Intellectual Property Law Section Spring Conference (ABA-IPL) remains one of the premier annual gatherings for IP professionals, bringing together practitioners, in-house counsel, academics, and policymakers to explore the latest developments shaping the field.
Solve Intelligence was invited not only to attend but also, as a leader in AI, to share its expertise on the concluding panel.
Key insights
- Generative tools produce probabilistic responses; even confident outputs can be wrong.
- Every AI output needs review for hallucination, fabricated citations, and weak reasoning.
- Document what you directed, what the model returned, and what you conceived independently.
Held in April 2026 in Washington, D.C., the conference featured a packed agenda of more than 20 Continuing Legal Education (CLE) sessions alongside networking events, keynote programming, and section business meetings. A central theme of this year’s conference was the rapid evolution of intellectual property law in response to emerging technologies, particularly artificial intelligence and data-driven innovation.
One of these sessions tackled the hotly debated topic of AI at the intersection of IP and ethics.
AI validation and ethics in patent practice
At this year's ABA-IPL "AI & Ethics Roundup," Chris Parsonson, CEO and co-founder of Solve Intelligence, joined Emil Ali (McCabe & Ali), Paul Morico (Baker Botts), and Tammy Pennington Rhodes (Supercharger AI) to discuss how patent practitioners should handle AI tool output across validity, infringement, and freedom-to-operate work.
Chris underscored two non-negotiables when integrating generative tools into any professional workflow:
Validate outputs thoroughly
Generative tools should never be taken at face value. Patent professionals should always confirm accuracy and independently verify any cited sources. Because these models generate responses probabilistically rather than retrieving information from a reliable database, even outputs that sound confident and authoritative can still be incorrect.
Apply critical, informed judgment
Every output should be reviewed for relevance, logical consistency, and overall credibility. This means actively looking for signs of hallucination, such as fabricated citations, claims that aren’t supported by the referenced material, or reasoning that doesn’t hold up under closer scrutiny.

USPTO guidance on AI-assisted inventorship
That same emphasis on human oversight ran through the panel's discussion of the November 2025 USPTO guidance on AI-assisted inventorship. The guidance treats AI systems as tools, analogous to lab equipment or research databases, and keeps conception as the touchstone: a natural person must possess the "definite and permanent idea of the complete and operative invention."
A single human inventing with AI raises no joint inventorship question, since AI cannot be a joint inventor; when multiple humans collaborate using AI, the Pannu factors apply among the humans only.
Relatedly, on the question of human inventorship, participants at the conference discussed the importance of having documentation to show what the human directed, what the model returned, and what the human conceived independently.
IP law in motion
Beyond substantive programming, the Spring Conference emphasized community and collaboration. Attendees had ample opportunities to connect through networking sessions, sponsor showcases, and social events, reinforcing the conference’s role as a hub for relationship-building across the IP ecosystem.
Overall, the ABA-IPL Spring Conference underscored both the dynamism and complexity of modern intellectual property law.
AI for patents.
Be 50%+ more productive. Join thousands of legal professionals around the world using Solve’s Patent Copilot™ for drafting, prosecution, invention harvesting, and more.
