Client Consent for AI: PCC Opinion on epi Guidelines

The Professional Conduct Committee (PCC) of the Institute of Professional Representatives before the European Patent Office (epi) recently published an opinion on the interpretation of the epi Guidelines on AI in the work of patent attorneys. In particular, the PCC responded to an enquiry regarding guideline 4 and what patent attorneys can do to establish client consent for using AI tools in the course of their practice.

What PCC opinions are

The Professional Conduct Committee provides opinions in response to enquiries from epi members under Article 7(c) of the epi Code of Conduct. As the Committee itself states, these opinions do not have regulatory force and are prepared with the intention of providing helpful assistance. No liability attaches to the epi, the Professional Conduct Committee, or any members of that Committee in respect of these opinions, and, in accordance with Article 7(c) of the Code of Conduct, they are not binding on disciplinary bodies.

The enquiry: two questions about client consent

According to the epi Guidelines on Use of Generative AI, ‘members must in all instances establish, in advance of using generative AI in their cases, the wishes of their clients with regard to the use of generative AI’. The enquiry had two questions regarding this requirement:

Question 1: Can this requirement be fulfilled by including a clause in the general terms of engagement? For example: “The member and their staff may use AI tools unless the client explicitly objects in writing for a specific case. The use of AI tools does not diminish the member's responsibility regarding diligence.”

Would this approach place the burden on the client? If this is a desirable approach, how should it be phrased to our clients?

Question 2: Would the situation be improved if a firm adopted an internal AI code of conduct, which would be available to clients upon request? Additionally, do we have examples of AI codes of conduct that we recommend to our epi members?

What the PCC said: Question 1

The PCC stated in the opinion that if permission is given in a generalised way, “the client may not truly know the extent to which such tools are being used by a representative or for which aspects of the representative's work. Any permission given in this circumstance would not meet the requirement to establish the wishes of the client in advance of using an AI tool”.

The PCC does, however, state that “the use of a default agreement is not inherently unacceptable as long as the conditions it creates are clear and explicit. In other words if the terms and conditions forming the core of a default agreement to use AI tools identify the areas in which the tools are to be used this would seem to provide adequate safeguards for clients”.

The PCC continues: “if the terms are made available via a website this has the additional advantage that any change in the uses to which the tools are to be put can readily be communicated”.

What the PCC said: Question 2

According to the opinion, the PCC notes “implementing an internal AI code of conduct and stating in a public way that this exists is favorable and can be recommended”.

However, the PCC expressed concern about restricting access to detailed information only to existing clients, noting the result of this is that “the only information available to potential clients amounts to a generalised agreement set down in the Terms,” which would not appear adequate to allow potential clients to make informed decisions.

The PCC therefore states that “at least the main features of such a code should be available to non-clients, in order to allow them to make informed decisions over whether a firm's AI policies are acceptable”.

The PCC's summary

In its summary, the PCC stated its belief that “compliance with the epi Guidelines on use of AI can be achieved through the inclusion of consent provisions in terms of engagement on a firm's website”, provided such terms identify the principal areas in which generative AI is to be used.

The PCC continued: “more detailed information on the likely areas of use should be readily available in the event of an enquiry to the firm. Such detailed information should not be restricted only to entities that are clients of the firm in question, and should be available so that clients and potential clients alike can make informed decisions about whether to accept the firm’s AI usage policies”. 

The full opinion can be found here.

Observations

This opinion provides welcome clarification for epi members on a potential approach to informing clients of their intended use of generative AI, one which may satisfy the associated epi guidelines.

As the use of AI in patent work becomes increasingly prevalent, it makes sense for professional representatives to adopt an approach to keeping clients and potential clients informed that is transparent, efficient and repeatable.

At Solve Intelligence, our team of patent attorneys keeps abreast of the law and regulations surrounding the use of AI in patent practice. Furthermore, we design all of our products from the perspective of the attorney, focusing on keeping them in the driving seat and making it explicitly clear when and how AI is being used to augment their work and expertise.


Disclaimer
This article is for general information only and does not constitute legal advice. Professional representatives may wish to seek their own appropriate guidance on professional conduct matters.
