PTAB Case Studies of AI Disclosure Requirements: Part I
Artificial intelligence (AI) is a fast-evolving field with new technical methods, systems, and products constantly being developed. This growth is reflected in a dramatic increase in patent filings for AI-related inventions. According to Patents and Artificial Intelligence: A Primer from the Center for Security and Emerging Technology, more than ten times as many AI-related patent applications were published worldwide in 2019 as in 2013, and the upward trend has only continued since.
Although AI-related patent applications have been on the rise, explicit guidance on patentability requirements has only recently begun to be published by patent offices around the world. Indeed, as a burgeoning field of technology, AI inventions have unique features, such as the importance of training data and the lack of explainability and predictability of trained AI models, that differentiate such innovations from traditional types of computer-implemented inventions (CII).
These features raise questions about the interpretation of disclosure requirements, among other patentability requirements, for AI-related inventions. For example, how much information, such as source code, training data sets, or machine learning model architectures, should be provided to satisfy the written description and enablement requirements of 35 U.S.C. § 112(a) or their analogs in other patent jurisdictions?
As we await further official guidance from the U.S. Patent & Trademark Office (USPTO) on disclosure requirements for AI-related inventions, we can gather initial indications from recent patent prosecution decisions from the Patent Trial & Appeal Board (PTAB) on such issues. In this article, we study a selection of PTAB appeal decisions for applications for AI-related inventions rejected under § 112. To set the background, we first review a classification of AI inventions and USPTO guidelines on disclosure requirements for computer-implemented inventions. After analyzing three case studies, we conclude with general takeaways and best practices, which emphasize that applicants must disclose specific algorithms and implementation details, not just desired outcomes, to satisfy written description requirements.
AI-related Inventions
Artificial intelligence (AI) is a broad field, spanning many subfields such as rule-based systems, computer vision, machine learning, robotics, and generative AI. AI-related inventions can be roughly classified into three categories, although a given invention may include elements that fall into more than one category:
- Applied AI: e.g., using machine learning (ML) predictions for healthcare;
- Core AI: e.g., new architectures, training methods, or data sets;
- AI-enabled inventions: e.g., using AI for drug discovery.
In this article, we focus on the first two categories of AI-related inventions, applied AI and core AI. The third category, also referred to as AI-assisted inventions, covers inventions developed in part or entirely by AI; questions of inventorship and patentability for such inventions are being actively considered by the USPTO, e.g., in the February 2024 Inventorship Guidance for AI-Assisted Inventions, which was revised in November 2025.
Methodology and Data (PTAB Decisions Database)
Case studies were selected from the PTAB decisions database by first specifying “Issue type = 112” and searching for keywords in art units related to AI inventions, such as 2100 (computer architecture, software, and information security). We primarily entered the keyword “neural network.” While this search does not cover all possible AI-related inventions, neural networks form the basis of many techniques and applications in machine learning, itself one of the dominant subfields of artificial intelligence.
Over this series of articles, we study several recent cases with a variety of § 112(a) rejections to understand the level of disclosure that is necessary to satisfy written description and enablement requirements for AI-related inventions. It is important to note that while PTAB decisions are useful for understanding how patentability requirements are applied to specific technology areas, the U.S. Court of Appeals for the Federal Circuit (“CAFC” or “Fed. Cir.”) does not treat such decisions as precedential.
Ex Parte El-Masri: § 112(a) Rejection Affirmed
U.S. patent application no. 14/327,257 entitled “Systems and Methods for Providing Information to an Audience in a Defined Space” was filed on July 9, 2014, with Visio Media Inc. as the assignee (the current assignee is Vertical City Inc.). After seven office actions with § 112(a) rejections, the case was appealed to the PTAB, which affirmed the examiner’s decision in Ex Parte El-Masri. Ultimately, the application was abandoned.
The ’257 patent application covered methods and systems for providing digital information and advertising in an elevator based on elevator passenger data. Data about individuals in the elevator is collected from cameras, microphones, and motion detectors. Based on these data, information such as facial expressions, brands worn by individuals, topics of discussion between individuals, and network addresses is extracted and used to determine which advertisements or information to display on a screen in the elevator.
Claim 1 at the time of the PTAB appeal read:
1. A computer-implemented method for providing information in a defined space, the method comprising:
[…]
(f) identifying, for each individual identified in the defined space using the at least one processor, one or more brands associated with the identified individual based on visual representations of the one or more brands within the received data, wherein the visual representations include images of clothing worn by the identified individual and of accessories carried by the identified individual, and identifying the one or more brands associated with the identified individual also includes determining respective brands of products discussed by the identified individual using data obtained by the microphone;
[…]
(j) after providing the information to the one or more individuals, determining via a motion detector and the at least one processor, a hand gesture from the one or more individuals, and after determining that the hand gesture has been made, providing additional information on the display unit about previously-displayed information based on the determination that the hand gesture has been made. (Emphasis added.)
The examiner rejected the claims under § 112(a) for lack of written description. Specifically, the claimed steps of “identifying a brand based on camera data, determining a topic of discussion via a microphone, and determining a hand gesture” were not sufficiently described in the specification. The examiner wrote, “although the Specification describes that ‘the system takes inputs and produces outputs,’ [the Specification] does not describe the actual processing to make the determination.”
The appellants argued that “a person of ordinary skill in image processing art would understand a ‘logo detection’ to be an adequate ‘step’ in identifying data… employing any generically used and widely available text detecting functions/tools, such as optical character recognition (OCR).” Similar arguments were proffered for the steps of speech recognition for extracting brands of products discussed by individuals, and determining hand gestures from motion data.
However, the USPTO Manual of Patent Examining Procedure (MPEP) states in § 2161 that “It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement.”
This issue arose in Vasudevan Software, Inc. v. MicroStrategy, Inc. (Fed. Cir. 2015), where the Federal Circuit stated that “the written description requirement is not met if the specification merely describes a ‘desired result.’”
The PTAB cited this principle, affirming the Examiner’s rejection:
“[A]t best Appellant’s specification either (1) teaches a desired output or (2) alternatively discusses using a general field of speech recognition, not an algorithm or steps/procedure to understand how the inventor intended the function to be performed.”
By specifying only the desired brand detection based on visual or audio inputs, or hand-gesture determination from motion data, the application stated the desired outputs of a system without showing how those outputs are achieved.
From this case, we take away that it is necessary to disclose algorithms that achieve claimed functions. Although many AI inventions are driven by their inputs and outputs (e.g., detecting objects or speech from images and audio), it is not sufficient to simply state these inputs and outputs. Rather, applicants should also disclose how to get the desired outputs from the inputs.
In the Ex Parte El-Masri case, the specification could have listed a few off-the-shelf image recognition algorithms, including neural networks, and explicitly described the steps to apply the neural networks to detect brands from images (and likewise for speech and motion detection).
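To make the point concrete, the following sketch illustrates the level of step-by-step disclosure the Board found missing. Nothing here comes from the ’257 application: `classify_region` stands in for any named off-the-shelf recognizer, and the toy bit-vector “logo templates” are purely hypothetical, kept only so the sketch is self-contained.

```python
# Illustrative sketch (not from the '257 application) of explicit,
# step-by-step brand detection. The "templates" are toy bit vectors.

KNOWN_BRAND_LOGOS = {
    "acme": [0, 1, 1, 0],    # hypothetical logo template
    "globex": [1, 0, 0, 1],  # hypothetical logo template
}

def hamming(a, b):
    """Mismatch count between two equal-length templates."""
    return sum(p != q for p, q in zip(a, b))

def classify_region(region):
    """Stub for an off-the-shelf recognizer: nearest template by Hamming distance."""
    return min(KNOWN_BRAND_LOGOS, key=lambda brand: hamming(region, KNOWN_BRAND_LOGOS[brand]))

def detect_brands(frame_regions, threshold=1):
    """Step 1: receive candidate image regions (segmentation assumed upstream).
    Step 2: classify each region against the known logo templates.
    Step 3: keep matches within a distance threshold, deduplicated."""
    brands = []
    for region in frame_regions:
        brand = classify_region(region)
        if hamming(region, KNOWN_BRAND_LOGOS[brand]) <= threshold and brand not in brands:
            brands.append(brand)
    return brands
```

A specification reciting even this level of detail, naming the recognizer, the matching metric, and the thresholding step, describes how the outputs are obtained rather than merely that they are obtained.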
Ex Parte Kirti: § 112(a) Rejection Reversed, § 101 Rejection Affirmed
We next explore a case in which the PTAB reversed a § 112(a) rejection, finding that the patent application provided sufficient disclosure for a claimed function, but, along the same line of reasoning, affirmed a § 101 rejection for lack of patent-eligible subject matter.
This case illustrates that with ML-based inventions, there is a delicate balance between satisfying disclosure requirements by referring to well-known algorithms and ensuring that the ML invention is more than simply applying ML to implement a well-established method or abstract idea.
U.S. patent application no. 14/616,543 entitled “Determining a Number of Cluster Groups Associated with Content Identifying Users Eligible to Receive the Content” was filed on February 6, 2015, with Facebook, Inc. as the assignee. After four office actions with rejections under §§ 101, 103, and 112(a), the case was appealed to the PTAB, which reversed the §§ 103 and 112(a) rejections and affirmed the § 101 rejection in Ex Parte Kirti. Ultimately, the application was abandoned.
The ’543 patent application covered methods and systems for a social networking system to determine groups of users eligible to be presented an advertisement with multiple targeting criteria. Given targeting criteria for an advertisement, the social network finds users with characteristics that satisfy the targeting criteria, then finds clusters of similar users around the initial target group.
Different cluster groups for different targeting criteria can be combined when a group overlap threshold is exceeded, and the combined cluster groups form the overall group that sees the advertisement.
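The overlap-combination step described above can be sketched as follows. The application does not publish code, so the overlap metric (Jaccard similarity) and the threshold value are assumptions made purely for illustration.

```python
# Hypothetical sketch of the overlap-merge step; metric and threshold
# are illustrative assumptions, not from the '543 application.

def jaccard(a, b):
    """Overlap between two cluster groups: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def merge_cluster_groups(groups, overlap_threshold=0.5):
    """Greedily combine cluster groups whose pairwise overlap exceeds the threshold."""
    merged = []
    for group in groups:
        group = set(group)
        for existing in merged:
            if jaccard(existing, group) > overlap_threshold:
                existing |= group  # combine into the existing cluster group
                break
        else:
            merged.append(group)
    return merged
```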
Claim 1 at the time of the PTAB appeal read:
1. A computer-implemented method comprising:
[…]
generating a first cluster group by applying a first cluster model to the characteristics of at least some of the plurality of users of the online system who are not in the first targeting group, where the first cluster model is a machine learning model that is trained to determine membership in the first cluster group using the first subset of users as a training set, wherein the first cluster model outputs a score for inclusion in the first cluster group based on the users in the first targeting group … (Emphasis added.)
The examiner rejected most of the claims, including Claim 1, under § 112(a) for lack of written description. Specifically, with respect to the claims of “a machine learning model that is trained to determine membership in the first [or second] cluster group using the first [or second] subset of users as a training set,” the examiner found that the Specification did not supply a machine learning algorithm to determine cluster membership, but merely supplies “a generic statement that basically says we can do cluster modeling with machine learning,” and that this is insufficient for a person having ordinary skill in the art (PHOSITA) to implement the invention without undue experimentation.
The appellants argued that the Specification does indeed provide sufficient implementation details, including the training input, desired output, and usage of the clustering model. The PTAB agreed, stating that ML algorithms that fit training data and determine scores for new data are known in the art, and citing Hybritech Inc. v. Monoclonal Antibodies, Inc. (Fed. Cir. 1986) that “information that is conventional or well known in the art need not be described in detail in the specification.”
The specification also had additional support for ML cluster models, providing various examples of implementing cluster models: “For example, the cluster model is a statistical classifier using a weighted linear combination of values… As another example, the cluster model is an unsupervised machine learning algorithm, such as an artificial neural network, and the cluster model parameters are weights of connections between input, hidden, and output layers of the neural network” (Para. 70, specification).
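The first example quoted from the specification, a weighted linear combination, can be sketched minimally as below. The feature names, weights, and membership threshold are hypothetical; only the general form of the scorer follows the quoted passage.

```python
# Minimal sketch of a cluster model as a weighted linear combination of
# user-characteristic values (per the quoted spec example). Feature names,
# weights, and the threshold are hypothetical.

def cluster_score(characteristics, weights, bias=0.0):
    """Score a user for cluster membership as a weighted linear combination."""
    return bias + sum(weights.get(name, 0.0) * value
                      for name, value in characteristics.items())

def in_cluster(characteristics, weights, threshold=1.0):
    """Include the user when the score clears a membership threshold."""
    return cluster_score(characteristics, weights) >= threshold
```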
However, the Examiner also rejected Claim 1 under § 101, stating that the claim is directed to an abstract idea: “all of the grouping and data analysis are in fact a mental process,” and “generically [machine learning] is little more than automating the same process that can be done by a human in the exact same manner.” The appellants argued that their claimed ML technique was integrated into a practical application for “addressing overlap among cluster groups, which is a problem that arises due to the use of machine learning to generate the expanded cluster.”
The PTAB was not persuaded and affirmed the § 101 rejection, stating that the specification did not describe how the claimed ML model was different from known machine learning algorithms, and thus did not indicate evidence of any improvement to a technology. Rather, the PTAB agreed with the examiner that the problem addressed by the application was not an ML issue but “a grouping issue which is related to the abstract idea.”
In this case, the reliance on ML clustering algorithms that are known in the art allowed the applicants to satisfy § 112(a) requirements, but also hindered their appeal of the § 101 rejection, since their application of known ML algorithms precluded specific technical improvements and could be seen simply as another way to implement an abstract idea (grouping).
If the applicants had specified how their ML models differ from, and improve upon, existing models, their § 101 arguments might have been more persuasive. Thus, we see a potential downside of relying on known machine learning methods to satisfy disclosure requirements: using known ML methods may satisfy written description and enablement requirements, but may also limit the patentable subject matter of the invention.
Thus, the level of disclosure for an AI-related invention will depend on the category of the invention (e.g., applied AI, core AI, or AI-enabled) and should be carefully considered when drafting a patent application. For example, for a core AI invention, one should provide sufficient implementation details, especially those differentiating the invention from known methods. Meanwhile, for an applied AI invention that relies on off-the-shelf ML models, one should emphasize the technical improvements and applications of the invention, and if relevant, its integration with hardware, in order to avoid § 101 patentability rejections.
Ex Parte Allen: § 112(a) Rejection Affirmed
We next examine an example of insufficient written description in a case regarding U.S. patent application no. 15/339,973 entitled “Cognitive Medication Reconciliation,” which was filed on November 1, 2016, and assigned to IBM Corporation. After two office actions with rejections under §§ 101, 103, and 112(a), the case was appealed to the PTAB, which affirmed the §§ 101 and 112(a) rejections and most of the § 103 rejections in Ex Parte Allen.
Ultimately, the application was abandoned. While we focus our attention on the affirmed § 112(a) rejections, it should be noted that they only apply to some dependent claims, while the §§ 101 and 103 rejections applied to the independent claims and thus were the more significant hurdles for the approval of the application.
The ’973 patent application covered methods and systems to reconcile patient medication data from multiple different sources, such as different health care providers, to generate a medication listing data structure and to determine whether a medication prescription should be removed or updated, e.g., in the case of a duplicate prescription or a contraindication. To do so, a data processing system generates various scores from data and uses a weighted aggregation of the scores to determine whether a medication should be removed from the listing.
Claim 1 at the time of the PTAB appeal read:
1. A method, in a data processing system comprising at least one processor and at least one memory, …, comprising:
[…]
determining, for each medication in a same class of medication in the medication listing data structure, whether the medication is a valid duplicate or an invalid duplicate medication relative to one or more other medications in the same class of medication based on an application of evaluation rules implemented in logic of the duplicate medication analysis engine to characteristics of the medication and characteristics of the one or more medications in the same class … (Emphasis added.)
This claim was rejected under § 101 for claiming an abstract method of organizing human activity or a mental process, as all steps in the method could be interpreted as mental processes and the claim lacks sufficient additional elements to transform the judicial exception into patent-eligible subject matter. The PTAB affirmed, citing Alice Corp. Pty. Ltd. v. CLS Bank Int’l (2014): “claims that amount to nothing significantly more than an instruction to apply an abstract idea,” and Intellectual Ventures I LLC v. Capital One Bank (USA) (Fed. Cir. 2015): “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer does not provide a sufficient inventive concept.”
We now focus on aspects of dependent claim 27 that are representative of the § 112(a) written description rejections. Dependent claim 27 recites a method that includes generating a duplicate medication score “based on the determination of whether the medication is a valid duplicate or an invalid duplicate medication.”
As in previous cases we analyzed, this claim was rejected for lack of written description because only a desired outcome was specified, without explicit disclosure on how to obtain that outcome. The Examiner found that the specification failed to adequately disclose the duplicate medication score (among other scores listed in the claim) because “the specification fails to portray how the scores are generated or how the scores are compared.”
Once again, with reference to Vasudevan Software, Inc. v. MicroStrategy, Inc. (Fed. Cir. 2015), the claims recite a desired result without sufficient support in the specification about how to achieve that result. The appellant cited descriptions of a “likeliness” value in the specification as supporting disclosure for the duplicate medication score, but only its use and desired properties, i.e., the desired outcomes, are described, without details on how it is calculated or obtained:
The duplicate medication analysis engine 132 generates a determination, …, whether that instance of the medication is likely a duplicate of another medication in the aggregate patient EMR data 126, and if so, whether that duplicate is likely a valid or invalid duplicate. The likeliness may be evaluated relative to one or more threshold values indicating a threshold degree of confidence necessary for determining that the medication instance is a duplicate and the duplicate is valid/invalid. (Specification, Para. 110.)
This case is also a reminder that under U.S. patent law, written description and enablement are related but distinct requirements, though other jurisdictions may blur the distinction. Dependent claim 27 additionally claimed “generat[ing] an aggregate score” by combining several individual scores, but the specification only listed generic methods for how such an aggregate score would be calculated: “[t]he weights applied to the various scores may be specified by a subject matter expert, may be learned through a machine learning process, or the like.”
Unfortunately, even if enablement is satisfied because a PHOSITA would be able to implement one of the generic methods listed, the inventors still need to specify a particular implementation to show possession of the invention and satisfy written description. Indeed, citing Vasudevan Software, Inc. v. MicroStrategy, Inc. (Fed. Cir. 2015), the USPTO MPEP § 2161 states that “it is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement.”
The fact that there are well-known ways to combine and evaluate scores to obtain the claimed results is irrelevant to the written description requirement, which demands a description of how the inventors actually intended to achieve those results.
From this case, we find that it is not sufficient to specify a desired result of a process (e.g., generating various scores for medication records), nor is it sufficient to list a generic method (e.g., “learned through a machine learning process”) for a specific application. Rather, one needs to provide an algorithm to show how to get the desired outputs from the inputs. For example, the specification could have included mathematical formulae or described computational steps used to calculate the various claimed scores and weights for combining scores.
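For instance, such a disclosure could be as simple as the following hypothetical weighted-average formula. The score names, weights, and removal threshold below are illustrative only and do not come from the ’973 application.

```python
# Illustrative only: one concrete formula a specification could recite for
# combining individual scores into an aggregate score. Names, weights, and
# the removal threshold are hypothetical.

def aggregate_score(scores, weights):
    """Weighted sum of named scores, normalized by the total weight used."""
    total_weight = sum(weights[name] for name in scores)
    return sum(weights[name] * scores[name] for name in scores) / total_weight

def should_remove(scores, weights, removal_threshold=0.7):
    """Flag a medication for removal when the aggregate score clears a threshold."""
    return aggregate_score(scores, weights) >= removal_threshold
```

Reciting a formula of this kind, together with how each component score is computed, would describe a possessed implementation rather than only a desired result.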
Conclusion
We have reviewed three PTAB decisions with § 112(a) rejections in order to understand the form and extent of disclosure requirements for AI-related inventions. The case of Vasudevan Software, Inc. v. MicroStrategy, Inc. (Fed. Cir. 2015) was repeatedly cited, emphasizing the core principle that even when the enablement requirement is satisfied (i.e., a PHOSITA can write a program to achieve a claimed function), more is needed to satisfy the written description requirement.
Specifically, it is advisable that applicants provide technical implementation details for machine learning algorithms, such as training data, processing steps, or architectures, to show how a claimed function is actually achieved. If there are any aspects of the algorithms that differ from conventional approaches, these aspects should be sufficiently detailed, as we saw in the Ex Parte El-Masri and the Ex Parte Allen cases.
There is also a balance to be struck between §§ 101 and 112 requirements, as seen in the Ex Parte Kirti case. If part of an invention uses off-the-shelf machine learning methods, listing such methods may be sufficient to satisfy § 112 requirements. At the same time, care must be taken to avoid § 101 rejections, which may occur when an AI-related invention is simply an implementation of an abstract idea. As such, applicants with applied AI inventions may want to emphasize technical improvements and concrete applications of their invention, and if relevant, its integration with hardware.