Patent Drafting with AI: An EU AI Act Perspective

Artificial intelligence (AI) is already having a substantial impact on the practice of intellectual property (IP) law, with platforms such as Solve Intelligence's Patent Copilot assisting attorneys in drafting and prosecuting patent applications. These AI platforms can help patent attorneys realise efficiency gains and produce high-quality patents.

Until recently, the use of AI was largely unregulated across the world. The picture has now changed, with different countries adopting different strategies for regulating AI, seeking to promote safety while remaining competitive. Earlier this year, the Artificial Intelligence Act entered into force in the EU, becoming the world's first comprehensive regulation of AI. In this article, we look at the obligations the EU AI Act places on AI technology providers, such as providers of AI patent drafting and prosecution tools.

An Overview of the EU AI Act

The EU AI Act establishes a regulatory framework for AI systems within the European Union, ensuring that AI technology is used safely and responsibly. The Act classifies AI systems based on the potential risks they pose to safety, fundamental rights, and public interests, creating four risk levels: prohibited, high-risk, limited-risk, and minimal-risk. This tiered approach is designed to vary the degree of regulation based on an AI system's potential impact. The Act also sets out separate obligations for providers of general-purpose AI (GPAI) models, discussed below.

1. Prohibited AI Systems

The AI Act deems these systems to pose an unacceptable risk, and their use is therefore prohibited. Examples include AI systems for untargeted scraping of facial images from public spaces or from the web. Other prohibited AI systems include social scoring systems, which rank or assess people based on their behaviour. Similarly, AI systems used for emotion recognition in the workplace and in education are prohibited. Finally, prohibited AI systems include those designed to manipulate human behaviour or exploit vulnerabilities in ways that could cause harm.

2. High-Risk AI Systems

High-risk systems are those used in sensitive areas where errors or misuse could lead to significant harm. These include AI applications in critical infrastructure (e.g., energy or transport systems), law enforcement, healthcare (e.g., medical diagnostics), and employment (e.g., AI used to evaluate job candidates). Other examples of high-risk AI systems include those used in democratic processes and in education, for instance as tools for assessing students.

Providers of high-risk AI systems must comply with a range of strict requirements set out by the AI Act. These include:

  • Technical Documentation: Providers must create detailed technical documentation for the system, essentially a comprehensive "manual" that includes specific, mandatory information about how the AI operates.
  • Transparency: High-risk systems must be accompanied by detailed instructions for use, to ensure users fully understand their functions.
  • Human Oversight: High-risk systems must allow for human oversight, including allowing human operators to intervene and stop the system when necessary.
  • Risk Management: Providers must implement a process throughout the system's lifecycle to identify and mitigate any associated risks.
  • Data Measures: Training and testing of high-risk systems must adhere to strict data governance protocols, ensuring that the data used is of high quality and free from bias.
  • Robustness, Accuracy, and Cybersecurity: High-risk AI systems must be resilient, demonstrate accuracy, and be robust against cyberattacks.
  • Quality Management: Providers must implement a comprehensive quality management process to ensure the consistent reliability of the high-risk AI system.
  • Record-Keeping: The AI system must be designed to automatically log certain events, such as usage periods and input data. Providers are required to retain these logs for the durations defined by the regulatory framework.
  • Monitoring: Providers must have a system in place to collect and analyse performance data throughout the AI system's lifecycle, based on user feedback and real-world performance.

3. General Purpose AI (GPAI)

General Purpose AI (GPAI) refers to AI models that exhibit significant generality and can competently perform a wide array of distinct tasks. These models, often trained on large datasets using self-supervised learning techniques, are versatile and can be integrated into various downstream systems or applications. However, it's important to note that this definition excludes AI models intended for research, development, or prototyping activities prior to their market release.

Given their adaptability, GPAI systems can sometimes be used with high-risk AI systems or be integrated into them, necessitating collaboration between GPAI system providers and those offering high-risk systems to ensure compliance with relevant regulations.

The providers of GPAI have the following obligations under the EU AI Act:

  • Providers must draw up technical documentation, including details of the training and testing process and evaluation results.
  • Providers must draw up information and documentation for downstream providers that intend to integrate the GPAI model into their own AI systems, so that those providers understand the model's capabilities and limitations and can meet their own compliance obligations.
  • Providers must establish a policy to comply with EU copyright law, including the Copyright Directive.
  • Providers must publish a sufficiently detailed summary of the content used to train the GPAI model.

4. Limited-Risk and Minimal-Risk AI Systems

The primary requirement for all other AI systems is transparency. Providers must ensure that AI systems intended to interact with individuals are designed and developed so that users know they are engaging with an AI system. Providers should also ensure that personnel responsible for operating and using AI systems possess adequate AI literacy, appropriate to the specific context in which the systems are employed.

Conclusion

The EU AI Act represents a comprehensive approach to regulating AI technologies, ensuring they are used safely and responsibly while promoting innovation. It is important for providers of AI-related services, tools, and models to adhere to the requirements set out by the EU AI Act, and more generally, the regulatory environment concerning AI around the world.

Here at Solve, security is our number one priority, and it will remain so throughout the development of our platform. If you have any questions regarding our policies in this regard, or with respect to the EU AI Act, please feel free to reach out.


Related articles

Solve Intelligence Ranked #1 IP Platform by the World's Leading Law Firms

Solve Intelligence has been ranked the number one intellectual property platform in the latest Legal AI survey published by SKILLS (the Strategic Knowledge & Innovation Legal Leaders Summit). The study surveyed 130 leaders at the world's top law firms about their legal AI product usage across every major practice area, scoring platforms based on live deployments, active pilots, and tools under consideration. In the Patents/IP category, Solve Intelligence placed first with a weighted score of 67, making it the most widely used platform in the category. See the full report here.

The Hidden Cost of Ignoring AI in Patent Practice

As patent practitioners, the choice to “do nothing” about AI is not a neutral act. 

Law firms or in-house counsel that delay the adoption of AI may believe they are minimizing risk, but often they are taking on a different set of less visible, long-term risks.

These hidden costs can accumulate quickly, from compounding inefficiencies in traditional patent drafting workflows to missed revenue opportunities that remain untapped without leveraging AI-driven capabilities.

So, what can patent practitioners do to stay ahead of the game? Here is what the Solve Intelligence team has seen speaking with thousands of practitioners.

Key takeaways

  • Waiting to adopt AI is itself a strategic decision with compounding costs.
  • Manual patent workflows create time, quality, and knowledge bottlenecks that grow over time.
  • Firms already experimenting with AI gain operational insight that late adopters cannot shortcut.
  • Low-risk entry points let practitioners build confidence without compromising legal judgment.

Why Patent Attorneys Need Purpose-Built AI

Legal AI platforms like Harvey and Legora are valuable productivity tools. Powered by large language models and enriched with legal data sources, firm-specific knowledge, and purpose-built workflows, they perform well on tasks like legal research, document summarisation, and contract or email drafting.

But their workflows are optimised for breadth across practice areas, not for the structural, technical, and jurisdictional depth that patent work requires.

For IP teams that already have access to a generalist platform, or are trying one out, the natural follow-up question is whether a vertical solution adds enough to justify the investment. 

At Solve Intelligence, we build AI specifically for patent practitioners. In our experience scaling the platform to over 500 IP teams, there is no question that patent-specific tooling delivers ROI that generalist platforms alone cannot. This article sets out why.

Key takeaways

  • Generalist legal AI tools weren't trained for the structural depth patent work demands.
  • Solve Intelligence is shaped by in-house patent attorneys who joined Solve from firms like Carpmaels & Ransford and Fish & Richardson.
  • Custom templating lets attorneys match output to house style, client/technology area, or jurisdiction.
  • Generalist and patent-specific AI are complementary investments, not competing ones.

Marbury Law sees 3x-4x efficiency gain from using Solve Intelligence

When we sat down with Bob Hansen for this conversation, we knew it would be grounded in both legal depth and real-world business experience. Bob is a founding partner of The Marbury Law Group and has extensive experience across patent prosecution, litigation, licensing, portfolio strategy, and complex IP transactions. But what makes his perspective particularly compelling is that he also brings 20 years of real-world experience as an engineer, program manager, and business executive in Fortune 50 companies and start-ups. He understands firsthand how innovation moves from idea to product, and how intellectual property law fits into that journey.

That dual lens is exactly why we wanted to have this discussion. Bob evaluates technology not just as a patent attorney, but as someone who has managed engineering teams, navigated acquisitions and divestitures, raised capital, and built businesses. When someone with that background says AI has been transformative and backs it up with measurable 3 to 4x efficiency gains, it’s worth listening.

Key Insights

  • AI adoption requires proof. Bob and his team tested multiple tools before committing, and only moved forward once they saw quantifiable results.
  • 3 to 4x efficiency gains changed the business case. By tracking his own drafting time, Bob demonstrated that AI-enabled workflows made fixed-fee work viable at partner rates.
  • Demonstration drives adoption. Live drafting sessions, client transparency, and side-by-side cost comparisons created full buy-in from both clients and colleagues.
  • Integrated chat removes friction. Keeping research, drafting, and revisions inside one contextual workspace eliminated copy-paste workflows and saved significant time.
  • Context is a force multiplier. AI performs best when it understands the full invention disclosure, file history, and drafting materials in one place.
  • Speed expands strategic value. Faster drafting didn’t just save time; it enabled better coverage, stronger enablement, and real-time responsiveness to client needs.

About Marbury Law

The Marbury Law Group is a premier mid-size, full-service intellectual property and technology law firm in the Washington, D.C. area, with additional strength in commercial law, litigation, and trademark litigation. Recognized by Juristat as a top 35 law firm nationwide and holding Martindale-Hubbell’s AV® Preeminent™ Peer Review Rating, Marbury serves clients ranging from Fortune 500 companies and mid-size technology businesses to high-tech startups and inventors. Its practitioners bring unusually wide-ranging experience, including former technology executives, government R&D managers, startup founders, in-house counsel, “big-law” attorneys, USPTO patent examiners, and judicial clerks. 

Marbury delivers “big-law” service with the flexibility and personal attention of a smaller firm, pairing high-quality work with efficient, budget-aware billing. Based near the USPTO, the firm has drafted and prosecuted thousands of U.S. and foreign patent applications and trademarks, and advises on IP strategy, diligence, and licensing. Formed in 2009 through the merger of two established practices (with roots dating back to 1994), the firm takes its name from Marbury v. Madison (1803), the landmark Supreme Court case that established judicial review.