🤖 AI Summary
This work addresses the lack of dedicated datasets and evaluation protocols for patent claim revision. We introduce Patent-CR, the first open-source English dataset pairing initial patent claims rejected by examiners with their final granted versions, designed to assess legal compliance (clarity of scope, technical accuracy, language precision, and legal robustness) rather than generic language refinement. We formally define the patent claim revision task for the first time and empirically expose critical limitations of current large language models (LLMs) in legally rigorous revision: domain-specific and fine-tuned models show promising results, GPT-4 achieves the highest performance among the tested models, and GPT-4-based automated evaluation correlates most strongly with human judgment, yet even the best revisions fall short of practical patent examination standards. Combining expert human review with GPT-4-assisted evaluation, our study establishes a foundational benchmark dataset, a principled evaluation framework, and empirically grounded insights for legal AI and intelligent patent examination.
📝 Abstract
This paper presents Patent-CR, the first dataset created for the patent claim revision task in English. It pairs initial patent applications rejected by patent examiners with their final granted versions. Unlike typical text revision tasks, which predominantly focus on improving sentence quality through grammar correction and coherence improvement, patent claim revision aims to ensure that claims meet stringent legal criteria. These criteria go beyond novelty and inventiveness to include clarity of scope, technical accuracy, language precision, and legal robustness. We assess various large language models (LLMs) through professional human evaluation, including general-purpose LLMs of different sizes and architectures, text revision models, and domain-specific models. Our results indicate that LLMs often produce ineffective edits that deviate from the target revisions, while domain-specific models and fine-tuning show promising results. Notably, GPT-4 outperforms the other tested LLMs, but its revisions still require further refinement to reach the examination standard. Furthermore, we demonstrate the inconsistency between automated and human evaluation results and find that GPT-4-based automated evaluation correlates most strongly with human judgment. This dataset, along with our preliminary empirical research, offers valuable insights for further exploration of patent claim revision.
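The abstract's agreement analysis between automated and human evaluation can be illustrated with a minimal sketch. The snippet below computes rank and linear correlations between automated (e.g., GPT-4-assigned) scores and human expert ratings; the score arrays and the 1-5 scale are illustrative placeholders, not data from Patent-CR.

```python
# Minimal sketch of an automated-vs-human agreement analysis.
# All scores below are hypothetical placeholders, not Patent-CR data.
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-claim quality scores on a shared scale (e.g., 1-5).
human_scores = [4, 2, 5, 3, 3, 1, 4, 2]  # expert judgments
gpt4_scores = [4, 3, 5, 3, 2, 1, 4, 3]   # automated (LLM-based) scores

rho, rho_p = spearmanr(human_scores, gpt4_scores)  # rank correlation
r, r_p = pearsonr(human_scores, gpt4_scores)       # linear correlation

print(f"Spearman rho = {rho:.3f} (p = {rho_p:.3f})")
print(f"Pearson  r   = {r:.3f} (p = {r_p:.3f})")
```

Under this setup, a higher correlation for one automated evaluator (e.g., GPT-4-based scoring) than another would support the paper's finding that it aligns best with human judgment.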