Fine-tuning Large Language Models for Entity Matching

📅 2024-09-12
🏛️ arXiv.org
📈 Citations: 5
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how fine-tuning large language models (LLMs) affects entity matching performance and cross-domain generalization. Method: We propose a dual-dimensional optimization framework: (1) leveraging LLM-generated structured explanations as training sample representations, and (2) designing an LLM-driven sample selection and synthesis strategy. We conduct systematic zero-shot and cross-domain generalization evaluations across multiple domains. Contribution/Results: Fine-tuning substantially improves matching accuracy and in-domain generalization for small-scale LLMs (e.g., Llama 3.1 8B). Structured explanations boost performance in 75% of evaluated models. However, cross-domain transfer exhibits strong model dependency: GPT-4o-mini suffers performance degradation under specific sampling strategies. This work provides the first empirical evidence characterizing both the generalization benefits of structured explanations for small LLMs and their transferability limits. It establishes a reproducible methodology for lightweight LLM adaptation to entity matching tasks.

๐Ÿ“ Abstract
Generative large language models (LLMs) are a promising alternative to pre-trained language models for entity matching due to their high zero-shot performance and ability to generalize to unseen entities. Existing research on using LLMs for entity matching has focused on prompt engineering and in-context learning. This paper explores the potential of fine-tuning LLMs for entity matching. We analyze fine-tuning along two dimensions: 1) the representation of training examples, where we experiment with adding different types of LLM-generated explanations to the training set, and 2) the selection and generation of training examples using LLMs. In addition to the matching performance on the source dataset, we investigate how fine-tuning affects the model's ability to generalize to other in-domain datasets as well as across topical domains. Our experiments show that fine-tuning significantly improves the performance of the smaller models, while the results for the larger models are mixed. Fine-tuning also improves generalization to in-domain datasets while hurting cross-domain transfer. We show that adding structured explanations to the training set has a positive impact on the performance of three out of four LLMs, while the proposed example selection and generation methods only improve the performance of Llama 3.1 8B and decrease the performance of GPT-4o-mini.
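The abstract's first dimension, adding LLM-generated explanations to training examples, amounts to a serialization step before fine-tuning. The sketch below illustrates one way this could look; the chat-style schema, the prompt wording, and the `make_example` helper are illustrative assumptions, not the paper's exact format:

```python
import json
from typing import Optional

def make_example(rec_a: dict, rec_b: dict, label: bool,
                 explanation: Optional[str] = None) -> dict:
    """Serialize an entity pair as a chat-style fine-tuning record.

    If an LLM-generated structured explanation is supplied, it is placed
    before the gold answer so the model learns to justify its decision.
    """
    prompt = (
        "Do the two entity descriptions refer to the same real-world entity?\n"
        f"Entity A: {json.dumps(rec_a)}\n"
        f"Entity B: {json.dumps(rec_b)}\n"
        "Answer Yes or No."
    )
    answer = "Yes" if label else "No"
    if explanation:
        # Explanation-augmented target (the paper's first fine-tuning dimension)
        answer = f"{explanation}\nAnswer: {answer}"
    return {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": answer},
    ]}
```

One record per entity pair would then be written to a JSONL file and passed to the fine-tuning API or trainer of choice.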
Problem

Research questions and friction points this paper is trying to address.

Exploring fine-tuning LLMs for entity matching performance
Analyzing impact of training example representation on generalization
Investigating effects of LLM-generated explanations on model accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning LLMs for entity matching
Adding LLM-generated explanations to training
Selecting training examples using LLMs
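As a rough illustration of the example-selection idea above, one could rank candidate pairs by how borderline they look and keep the hardest ones. The sketch below substitutes a cheap string-similarity heuristic (`difflib`) for the paper's LLM-driven scoring; `select_hard_examples` is a hypothetical helper, not the authors' method:

```python
from difflib import SequenceMatcher

def select_hard_examples(pairs, k=2):
    """Pick the k most borderline pairs (similarity closest to 0.5).

    Each pair is (text_a, text_b, label). A string-similarity ratio
    stands in for an LLM confidence score for illustration only.
    """
    def score(pair):
        a, b, _label = pair
        return SequenceMatcher(None, a, b).ratio()

    # Pairs whose similarity is near 0.5 are neither clear matches
    # nor clear non-matches, so they make informative training data.
    return sorted(pairs, key=lambda p: abs(score(p) - 0.5))[:k]
```

Trivially easy pairs (identical strings, or strings with nothing in common) are ranked last and dropped.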