🤖 AI Summary
Existing vision-language models (VLMs) exhibit insufficient multimodal modeling capability for materials science, particularly in polymer property prediction. Method: We introduce the first polymer-specific multimodal image–text dataset and propose an instruction-tuned, multi-task VLM framework. Built on LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning, the framework jointly encodes image and text modalities and unifies prediction across diverse polymer properties, including glass transition temperature (T<sub>g</sub>), tensile strength, and solubility. Contribution/Results: Experiments demonstrate that our approach significantly outperforms unimodal baselines and conventional machine learning models, achieving an average 18.7% reduction in mean absolute error (MAE) across multiple polymer property prediction tasks. This work constitutes the first systematic validation of multimodal foundation models for materials property prediction, establishing a scalable, low-cost deployment paradigm for intelligent materials design.
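As a concrete illustration of the parameter-efficient setup described in the summary, below is a minimal sketch of attaching LoRA adapters to an open-source VLM with Hugging Face `peft`. The summary does not name the base model or hyperparameters, so the LLaVA-1.5 checkpoint, adapter rank, and target modules here are placeholder assumptions, not the paper's actual configuration.

```python
# Hypothetical LoRA setup for instruction-tuning a VLM; the base checkpoint
# and all hyperparameters below are illustrative assumptions.
from transformers import AutoProcessor, LlavaForConditionalGeneration
from peft import LoraConfig, get_peft_model

base = "llava-hf/llava-1.5-7b-hf"  # assumed base VLM, not specified by the paper
processor = AutoProcessor.from_pretrained(base)
model = LlavaForConditionalGeneration.from_pretrained(base)

# Inject low-rank adapters into the attention projections; only these small
# matrices are trained while the full backbone stays frozen.
lora_cfg = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # LoRA scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # adapted projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the adapter weights are updated, one shared multi-task adapter can cover several properties at once, consistent with the low-cost deployment paradigm the summary claims.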
📝 Abstract
Vision-Language Models (VLMs) have shown strong performance in tasks like visual question answering and multimodal text generation, but their effectiveness in scientific domains such as materials science remains limited. While some machine learning methods have addressed specific challenges in this field, there is still a lack of foundation models designed for broad tasks such as polymer property prediction from multimodal data. In this work, we present a multimodal polymer dataset and use it to fine-tune VLMs through instruction-tuning pairs, assessing the impact of multimodality on prediction performance. Our LoRA-fine-tuned models outperform unimodal and baseline approaches, demonstrating the benefits of multimodal learning. Additionally, this approach reduces the need to train separate models for each property, lowering deployment and maintenance costs.
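To make the instruction-tuning pairs concrete, here is one way a single image-text training example might be laid out. The abstract does not describe the dataset schema, so the field names, file path, and the polystyrene example below are illustrative assumptions (the repeat-unit SMILES `*CC(*)c1ccccc1` and T<sub>g</sub> ≈ 100 °C are textbook values for polystyrene, not data from the paper).

```python
# Hypothetical instruction-tuning pair; the real dataset schema is not
# given in the abstract, so this layout is an assumption.
example = {
    "image": "images/polystyrene_structure.png",  # hypothetical path to a structure drawing
    "instruction": (
        "Given the polymer structure shown in the image and its repeat-unit "
        "SMILES *CC(*)c1ccccc1, predict the glass transition temperature "
        "(Tg) in degrees Celsius."
    ),
    "response": "The predicted glass transition temperature is approximately 100 °C.",
}
```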