🤖 AI Summary
This study investigates how syntactic enhancement improves BERT’s language understanding capabilities and alters the geometric structure of its internal representations.
Method: We introduce a "polypersonal" perspective to characterize representational diversity across layers after fine-tuning. The analysis combines manifold geometry analysis, inter-layer cosine similarity spectra, SVD-based dimensionality reduction, and interpretability-aware visualizations to systematically examine the geometric shifts induced by integrating a syntactic module and by training on novel structured data; a hedged sketch of the geometric probes follows below.
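To make the probes concrete, the sketch below is an illustration only (not the released code): the model name, mean pooling, and example sentences are placeholder assumptions. It extracts per-layer sentence representations from a stock BERT checkpoint and computes two of the quantities named above, inter-layer cosine similarity and the per-layer singular value spectrum.

```python
# A minimal sketch, not the authors' code: layer-geometry probes of the kind
# the method describes (inter-layer cosine similarity and an SVD-based view
# of each layer's representation space), applied to off-the-shelf BERT.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

sentences = [
    "The cat sat on the mat.",
    "Syntax shapes the geometry of hidden representations.",
    "She gave him the book yesterday.",
    "Fine-tuning reorders the upper layers.",
]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**batch).hidden_states  # embeddings + one tensor per layer

mask = batch["attention_mask"].unsqueeze(-1).float()  # ignore padding tokens

def pool(layer: torch.Tensor) -> torch.Tensor:
    """Mean-pool token vectors into one sentence vector per example."""
    return (layer * mask).sum(dim=1) / mask.sum(dim=1)

layers = [pool(h) for h in hidden_states]  # each: (batch, hidden_dim)

# Inter-layer cosine similarity: how far each layer moves the representation.
for i in range(1, len(layers)):
    cos = torch.nn.functional.cosine_similarity(layers[i - 1], layers[i]).mean()
    print(f"layer {i - 1} -> {i}: mean cosine similarity = {cos:.3f}")

# Per-layer singular value spectrum: a rough proxy for how spread out
# ("directionally separated") the layer's representations are.
for i, reps in enumerate(layers):
    centered = reps - reps.mean(dim=0, keepdim=True)
    svals = torch.linalg.svdvals(centered)
    print(f"layer {i}: top singular values = {svals[:3].tolist()}")
```

Running the same probes on the fine-tuned checkpoint and comparing the spectra layer by layer would reproduce the kind of before/after geometric comparison the method describes.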
Contribution/Results: First, we show that fine-tuning the higher layers significantly strengthens directional separation along semantic dimensions. Second, we identify a positive correlation between polypersonal diversity and generalization robustness. Third, we establish an implicit link between inter-layer geometric displacement and task-specific adaptability, yielding an interpretable geometric criterion for efficient fine-tuning (see the sketch below). Collectively, these findings provide principled, geometry-grounded insights into how syntactic priors reshape transformer representation spaces and improve downstream performance.
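The third finding relies on a notion of inter-layer geometric displacement that the summary does not define; one hypothetical way to operationalize it (an assumption, not the paper's definition) is the average representational shift of a layer between the pre- and post-fine-tuning models:

```latex
% Hypothetical displacement measure, assumed for illustration:
% D_l is how far layer-l representations of the same inputs x_i move
% when going from the pre-trained model to the fine-tuned model.
D_\ell = 1 - \frac{1}{N}\sum_{i=1}^{N}
  \cos\!\bigl(h_\ell^{\text{pre}}(x_i),\, h_\ell^{\text{post}}(x_i)\bigr)
```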