Patent Drafting with AI: An EU AI Act Perspective

Artificial intelligence (AI) is already having a substantial impact on the practice of intellectual property (IP) law, with platforms such as Solve Intelligence's Patent Copilot assisting attorneys in drafting and prosecuting patent applications. These AI platforms can help patent attorneys realise efficiency gains and produce high-quality patent applications.

Until earlier this year, the use of AI was largely unregulated around the world. The picture has now changed, with different countries adopting different strategies for regulating AI, aiming to promote safety while remaining competitive. Earlier this year, the Artificial Intelligence Act entered into force in the EU, becoming the world's first comprehensive AI regulation. In this article, we look at the obligations the EU AI Act places on AI technology providers, such as providers of AI patent drafting and prosecution tools.

The EU AI Act's Risk-Based Framework

The EU AI Act establishes a regulatory framework for AI systems within the European Union, ensuring that AI technology is used safely and responsibly. The Act classifies AI systems based on the potential risks they pose to safety, fundamental rights, and public interests, creating four risk levels: prohibited, high-risk, limited-risk, and minimal-risk. This tiered approach is designed to vary the degree of regulation based on an AI system's potential impact.
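For providers reasoning about where their system sits, the tiered structure can be sketched as a simple mapping from risk level to broad regulatory consequence. This is purely illustrative: the tier names follow the Act, but the one-line summaries and all identifiers below are our own shorthand, not language from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    PROHIBITED = "prohibited"      # unacceptable risk: banned outright
    HIGH_RISK = "high-risk"        # strict compliance obligations
    LIMITED_RISK = "limited-risk"  # transparency obligations
    MINIMAL_RISK = "minimal-risk"  # largely unregulated

# Illustrative mapping from tier to the broad regulatory consequence.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "may not be placed on the EU market",
    RiskTier.HIGH_RISK: "full conformity requirements (documentation, oversight, ...)",
    RiskTier.LIMITED_RISK: "transparency duties (disclose AI interaction)",
    RiskTier.MINIMAL_RISK: "no specific obligations",
}

def obligation_for(tier: RiskTier) -> str:
    """Return the broad regulatory consequence for a given risk tier."""
    return OBLIGATIONS[tier]
```

In practice, classifying a real system into a tier requires legal analysis of its intended purpose and context of use; the mapping above only captures the shape of the framework.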

1. Prohibited AI Systems

These systems are deemed by the AI Act to pose an unacceptable risk and are therefore banned. They include, for example, AI systems that perform untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases. Other prohibited systems include social scoring systems, which use AI to rank or assess people based on their behaviour, and AI systems used for emotion recognition in the workplace and in education. Prohibited systems also include those designed to manipulate human behaviour or exploit vulnerabilities in ways that could cause harm.

2. High-Risk AI Systems

High-risk systems are those used in sensitive areas where errors or misuse could lead to significant harm. These include AI applications in critical infrastructure (e.g., energy or transport systems), law enforcement, healthcare (e.g., medical diagnostics), and employment (e.g., AI used to evaluate job candidates). Other examples include AI systems used in democratic processes and in education (e.g., as a tool for assessing students).

Providers of High-Risk AI Systems must comply with a range of strict requirements set out by the AI Act. These requirements include:

  • Technical Documentation: Providers must create detailed technical documentation for the system: essentially a comprehensive "manual" containing specific, mandatory information about how the AI operates.
  • Transparency: High-risk systems must be accompanied by detailed instructions for use, so that users fully understand their functions.
  • Human Oversight: High-risk systems must allow for human oversight, including allowing human operators to intervene and stop the system when necessary.
  • Risk Management: Providers must implement a process, maintained throughout the system's lifecycle, to identify and mitigate associated risks.
  • Data Governance: Training and testing of high-risk systems must adhere to strict data governance protocols, ensuring that the data used is of high quality and free from bias.
  • Robustness, Accuracy, and Cybersecurity: High-risk AI systems must perform accurately, be resilient to errors and faults, and be secure against cyberattacks.
  • Quality Management: Providers must implement a comprehensive quality management system to ensure the consistent reliability of the high-risk AI system.
  • Record-Keeping: The AI system must be designed to automatically log certain events, such as usage periods and input data. Providers are required to retain these logs for the durations defined by the regulation.
  • Monitoring: Providers must have a system in place to collect and analyse performance data throughout the AI system's lifecycle, based on user feedback and real-world performance.
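As an illustration of how a provider might implement the record-keeping obligation above, the following is a minimal sketch of automatic event logging. The class name, field names, and JSON-lines format are our own assumptions for illustration; the Act requires that certain events be logged and retained, but does not prescribe any particular format.

```python
import json
import time
from pathlib import Path

class AuditLogger:
    """Minimal sketch of automatic event logging for a high-risk AI system.

    High-risk systems must automatically record events such as usage
    periods and input data, and providers must retain the logs. The
    schema below is illustrative, not prescribed by the EU AI Act.
    """

    def __init__(self, log_path: str) -> None:
        self.log_path = Path(log_path)

    def log_event(self, event_type: str, detail: dict) -> None:
        record = {
            "timestamp": time.time(),   # when the event occurred
            "event": event_type,        # e.g. "session_start", "inference"
            "detail": detail,           # e.g. a reference to the input data
        }
        # Append one JSON object per line so logs are easy to retain and audit.
        with self.log_path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
```

An append-only log of structured records keeps the audit trail tamper-evident in ordering and simple to retain for whatever period the regulation requires.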

3. General Purpose AI (GPAI)

General Purpose AI (GPAI) refers to AI models that display significant generality and can competently perform a wide range of distinct tasks. These models, often trained on large datasets using self-supervised learning techniques, are versatile and can be integrated into various downstream systems or applications. Notably, this definition excludes AI models used solely for research, development, or prototyping activities before they are placed on the market.

Given their adaptability, GPAI systems can sometimes be used with high-risk AI systems or be integrated into them, necessitating collaboration between GPAI system providers and those offering high-risk systems to ensure compliance with relevant regulations.

The providers of GPAI have the following obligations under the EU AI Act:

  • Providers must draw up technical documentation, including details of the training and testing process and evaluation results.
  • Providers must draw up information and documentation for downstream providers that intend to integrate the GPAI model into their own AI systems, so that those providers understand the model's capabilities and limitations and are able to comply with their own obligations.
  • Providers must put in place a policy to comply with EU copyright law, including the Copyright Directive.
  • Providers must publish a sufficiently detailed summary of the content used to train the GPAI model.

4. Limited-Risk and Minimal-Risk AI Systems

The primary requirement for all other AI systems is transparency. Providers must ensure that AI systems intended to interact with individuals are designed and developed so that users are aware they are engaging with an AI system. A further general obligation is to ensure that the personnel responsible for operating and using AI systems possess adequate AI literacy, appropriate to the specific context in which the systems are employed.
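A minimal sketch of this transparency duty for a chat-style tool might look as follows. The notice wording and the once-per-session policy are illustrative assumptions on our part, not text mandated by the Act.

```python
AI_NOTICE = "You are interacting with an AI system."

def wrap_response(model_output: str, notice_shown: bool) -> tuple[str, bool]:
    """Prepend a one-time AI-interaction disclosure to the first reply.

    Sketch of the limited-risk transparency duty: users must be made
    aware they are engaging with an AI system. Returns the (possibly
    annotated) reply and the updated notice_shown flag.
    """
    if notice_shown:
        # The user has already been informed this session; pass through.
        return model_output, True
    return f"{AI_NOTICE}\n\n{model_output}", True
```

On the first reply the notice is prepended; subsequent replies pass through unchanged, so the disclosure is made without cluttering the whole interaction.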

Conclusion

The EU AI Act represents a comprehensive approach to regulating AI technologies, ensuring they are used safely and responsibly while promoting innovation. Providers of AI-related services, tools, and models should adhere to the requirements set out by the EU AI Act and, more generally, keep pace with the regulatory environment concerning AI around the world.

Here at Solve, security is our number one priority, and it will remain so throughout the development of our platform. If you have any questions about our policies in this regard, or about the EU AI Act, please feel free to reach out.
