Patent Drafting with AI: An EU AI Act Perspective

Artificial intelligence (AI) is already having a substantial impact on the practice of Intellectual Property (IP) law, with platforms such as Solve Intelligence's Patent Copilot assisting attorneys in drafting and prosecuting patent applications. These AI platforms can help patent attorneys realise efficiency gains and produce high-quality patents.

Until earlier this year, the use of AI was largely unregulated around the world. That picture has now changed, with different countries adopting different regulatory strategies for AI, seeking to promote safety while remaining competitive. Earlier this year, the Artificial Intelligence Act entered into force in the EU, becoming the world's first comprehensive AI regulation. In this article, we look at the obligations the EU AI Act places on AI technology providers, such as providers of AI patent drafting and prosecution tools.

The EU AI Act: A Risk-Based Framework

The EU AI Act establishes a regulatory framework for AI systems within the European Union, aiming to ensure that AI technology is used safely and responsibly. The Act classifies AI systems according to the potential risks they pose to safety, fundamental rights, and public interests, creating four risk levels: prohibited (unacceptable risk), high-risk, limited-risk, and minimal-risk. This tiered approach varies the degree of regulation with an AI system's potential impact.

1. Prohibited AI Systems

The AI Act deems these systems to pose an unacceptable risk, and their use is therefore prohibited. Examples include AI systems for untargeted scraping of biometric data, such as facial images, from public spaces or the web. Social scoring systems, which use AI to rank or assess people based on their behaviour, are likewise prohibited, as are AI systems used for emotion recognition in the workplace and in education. The prohibition also covers AI systems designed to manipulate human behaviour or exploit vulnerabilities in ways that could cause harm.

2. High-Risk AI Systems

High-risk systems are those used in sensitive areas where errors or misuse could lead to significant harm. These include AI applications in critical infrastructure (e.g., energy or transport systems), law enforcement, healthcare (e.g., medical diagnostics), and employment (e.g., AI used to evaluate job candidates). Other examples of high-risk AI systems include those used in democratic processes and those used in education as tools for assessing students.

Providers of High-Risk AI Systems must comply with a range of strict requirements set out by the AI Act. These requirements include:

  • Technical Documentation: Providers must create detailed technical documentation for the system, essentially a comprehensive "manual" that includes specific, mandatory information about how the AI operates.
  • Transparency: High-risk systems must be accompanied by detailed instructions for use, so that users fully understand their functions.
  • Human Oversight: High-risk systems must allow for human oversight, including enabling human operators to intervene and stop the system when necessary.
  • Risk Management: Providers must implement a process throughout the system's lifecycle that can identify and mitigate any associated risks.
  • Data Measures: Training and testing of such high-risk systems must adhere to strict data governance protocols, ensuring that the data used is of high quality and free from bias.
  • Robustness, Accuracy, and Cybersecurity: High-risk AI systems must be resilient, demonstrate accuracy, and be robust against cyberattacks.
  • Quality Management: Providers must implement a comprehensive quality management process to ensure the consistent reliability of the high-risk AI system.
  • Record-Keeping: The AI system must be designed to automatically log certain events, such as usage periods and input data, and providers must retain these logs for the durations defined by the Act (a minimal sketch of such logging appears after this list).
  • Monitoring: Providers must have a post-market monitoring system in place to collect and analyse performance data throughout the AI system's lifecycle, based on user feedback and real-world performance.
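
To make the record-keeping and human-oversight requirements more concrete, here is a minimal, illustrative sketch in Python of how a provider might automatically log usage events and give a human operator a stop control. Everything here (AuditLog, OverseenSession, the event names) is hypothetical: the Act mandates outcomes such as event logging and operator intervention, not any particular implementation.

```python
import json
from datetime import datetime, timezone


class AuditLog:
    """Appends timestamped usage events to an append-only JSONL file."""

    def __init__(self, path: str):
        self.path = path

    def record(self, event: str, detail: dict) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,  # e.g. "session_start", "human_stop"
            "detail": detail,
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")


class OverseenSession:
    """Wraps an AI call so that a human operator can halt it at any time."""

    def __init__(self, audit: AuditLog):
        self.audit = audit
        self.halted = False
        self.audit.record("session_start", {})

    def stop(self, operator: str) -> None:
        # Human oversight: an operator can intervene and stop the system.
        self.halted = True
        self.audit.record("human_stop", {"operator": operator})

    def run(self, prompt: str) -> str:
        if self.halted:
            raise RuntimeError("Session halted by a human operator")
        self.audit.record("input_received", {"input_chars": len(prompt)})
        output = f"[model output for a {len(prompt)}-char prompt]"  # placeholder
        self.audit.record("output_produced", {"output_chars": len(output)})
        return output


# Example usage: run one logged interaction, then halt the session.
session = OverseenSession(AuditLog("audit.jsonl"))
session.run("Draft an independent claim for ...")
session.stop(operator="attorney@example.com")
```

In a real system, the log store would need to be tamper-evident and retained for the periods the Act prescribes; a local file is used here only to keep the sketch self-contained.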

3. General Purpose AI (GPAI)

General Purpose AI (GPAI) refers to AI models that exhibit significant generality and can competently perform a wide range of distinct tasks. These models, often trained on large datasets using self-supervised learning techniques, are versatile and can be integrated into various downstream systems and applications. Note, however, that this definition excludes AI models used for research, development, or prototyping activities before they are placed on the market.

Given their adaptability, GPAI systems can sometimes be used with high-risk AI systems or be integrated into them, necessitating collaboration between GPAI system providers and those offering high-risk systems to ensure compliance with relevant regulations.

Providers of GPAI models have the following obligations under the EU AI Act:

  • Providers must draw up technical documentation, including details of the training and testing process and the results of the model's evaluation.
  • Providers must draw up information and documentation for downstream providers that intend to integrate the GPAI model into their own AI systems, so that those providers understand the model's capabilities and limitations and can meet their own compliance obligations (a rough illustration of such documentation follows this list).
  • Providers must put in place a policy to comply with the Copyright Directive.
  • Providers must publish a sufficiently detailed summary about the content used for training the GPAI model.
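
As a rough illustration of the kind of information a GPAI provider might supply downstream, the structure below follows common "model card" practice. The field names, values, and URLs are hypothetical; the Act specifies categories of information (capabilities, limitations, training and testing, copyright policy, training-content summary) rather than any particular format.

```python
# Hypothetical, model-card-style record a GPAI provider could supply to
# downstream integrators. Field names and URLs are illustrative, not mandated.
gpai_documentation = {
    "model_name": "example-gpai-1",
    "capabilities": ["text generation", "summarisation", "classification"],
    "known_limitations": ["may produce plausible but incorrect citations"],
    "training_process": "self-supervised pre-training on licensed text corpora",
    "evaluation_results": {"suite": "internal benchmark", "report_version": "1.0"},
    "copyright_policy": "https://example.com/copyright-policy",
    "training_content_summary": "https://example.com/training-data-summary",
}
```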

4. Limited-Risk and Minimal-Risk AI Systems

The primary requirement for all other AI systems is transparency. Providers of these systems must ensure that AI systems intended to interact with individuals are designed and developed so that users are aware they are engaging with an AI system. Another general obligation, whose scope depends on the specific context in which the AI systems are employed, is to ensure that the personnel responsible for operating and using AI systems possess adequate AI literacy.
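
As an illustration of the transparency obligation, a provider might surface a disclosure whenever a user starts a session with the system. The notice text and function below are a hypothetical sketch; the Act requires that users be made aware they are interacting with AI (unless this is obvious from the context), not any particular wording or mechanism.

```python
AI_DISCLOSURE = (
    "You are interacting with an AI system. Outputs are generated "
    "automatically and should be reviewed by a qualified professional."
)


def start_session(render) -> None:
    # Transparency obligation: make the user aware they are engaging with
    # an AI system before any interaction begins.
    render(AI_DISCLOSURE)


# Example usage with a console front end.
start_session(print)
```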

Conclusion

The EU AI Act represents a comprehensive approach to regulating AI technologies, ensuring they are used safely and responsibly while promoting innovation. Providers of AI-related services, tools, and models should adhere to the requirements set out by the EU AI Act and, more generally, keep abreast of the evolving regulatory environment for AI around the world.

Here at Solve, security is our number one priority, and it will remain so throughout the development of our platform. If you have any questions about our policies in this regard, or about the EU AI Act, please feel free to reach out.

