Fine-Tuning Vision-Language Models for Multimodal Polymer Property Prediction

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models (VLMs) exhibit insufficient multimodal modeling capability for materials science, particularly polymer property prediction. Method: We introduce the first polymer-specific multimodal image–text dataset and propose an instruction-tuned, multi-task VLM framework. Built on LoRA for parameter-efficient adaptation, the framework jointly encodes the image and text modalities and unifies prediction across diverse polymer properties, including glass transition temperature (Tg), tensile strength, and solubility. Contribution/Results: Experiments demonstrate that our approach significantly outperforms unimodal baselines and conventional machine learning models, achieving an average 18.7% reduction in mean absolute error (MAE) across multiple polymer property prediction tasks. This work constitutes the first systematic validation of multimodal foundation models for materials property prediction, establishing a scalable, low-cost deployment paradigm for intelligent materials design.
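The paper does not publish its data schema, but as a rough sketch, an image–text instruction-tuning record for property prediction might look like the following. All field names, the file path, and the SMILES string are illustrative assumptions, not the paper's actual format:

```python
import json

# Hypothetical instruction-tuning record for multimodal polymer
# property prediction; schema and values are illustrative only.
record = {
    "image": "structures/polystyrene.png",  # rendered structure image (hypothetical path)
    "instruction": (
        "Given the polymer structure image and its SMILES string, "
        "predict the glass transition temperature in Kelvin."
    ),
    "input": "SMILES: *CC(*)c1ccccc1",  # polystyrene repeat unit, * = attachment points
    "output": "373",                    # approximate literature Tg of polystyrene, in K
}

# Such records are commonly stored one per line (JSON Lines) for
# instruction-tuning pipelines; a round-trip checks serializability.
line = json.dumps(record)
parsed = json.loads(line)
print(parsed["output"])  # -> 373
```

Pairing a rendered structure image with a textual prompt in each record is what lets a single instruction-tuned VLM cover multiple properties: only the instruction and target change per task.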

📝 Abstract
Vision-Language Models (VLMs) have shown strong performance in tasks like visual question answering and multimodal text generation, but their effectiveness in scientific domains such as materials science remains limited. While some machine learning methods have addressed specific challenges in this field, there is still a lack of foundation models designed for broad tasks like polymer property prediction using multimodal data. In this work, we present a multimodal polymer dataset to fine-tune VLMs through instruction-tuning pairs and assess the impact of multimodality on prediction performance. Our fine-tuned models, using LoRA, outperform unimodal and baseline approaches, demonstrating the benefits of multimodal learning. Additionally, this approach reduces the need to train separate models for different properties, lowering deployment and maintenance costs.
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning VLMs for polymer property prediction using multimodal data
Addressing limited VLM effectiveness in scientific domains like materials science
Reducing the need for separate per-property models through a single multimodal framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning VLMs with multimodal polymer dataset
Using LoRA for efficient model adaptation
Instruction-tuning pairs enhance prediction performance
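The parameter efficiency of LoRA comes from training a low-rank update W' = W + (alpha/r)·BA in place of the full weight matrix. A back-of-the-envelope sketch in pure Python, with illustrative dimensions (not the paper's actual model sizes):

```python
# LoRA trains a low-rank update W' = W + (alpha / r) * (B @ A)
# instead of updating the full d_out x d_in weight matrix.
# Dimensions below are illustrative, not taken from the paper.

d_in, d_out = 4096, 4096   # a typical large-model attention projection
r = 8                      # LoRA rank (a common small value)

full_params = d_out * d_in           # trainable params under full fine-tuning
lora_params = r * d_in + d_out * r   # A is (r x d_in), B is (d_out x r)

print(full_params)                 # -> 16777216
print(lora_params)                 # -> 65536
print(full_params // lora_params)  # -> 256 (x fewer trainable parameters)
```

At rank 8 this single layer trains 256x fewer parameters than full fine-tuning, which is what makes the "low-cost deployment" claim plausible: one frozen backbone plus small per-task adapters.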
An Vuong, Department of EECS, University of Arkansas, Fayetteville, AR, USA
Minh-Hao Van, University of Arkansas (Trustworthy ML, Large (Vision) Language Models)
Prateek Verma, Department of EECS, University of Arkansas, Fayetteville, AR, USA
Chen Zhao, Department of CS, Baylor University, Waco, TX, USA
Xintao Wu, University of Arkansas (Data Mining, Privacy and Security, Trustworthy AI, AI4Science)