🤖 AI Summary
Existing Swedish-language benchmarks are predominantly translations of U.S.-centric datasets, failing to assess models' mastery of knowledge specific to Sweden. Method: We introduce SwedQA, a manually curated bilingual (Swedish–English) question-answering benchmark focusing on Swedish public figures, history, popular culture, and sports, designed for diagnostic evaluation of localized factual knowledge and cross-lingual consistency. Contribution/Results: Experiments reveal that smaller models with strong Swedish-language pretraining perform comparably on Swedish factual recall to multilingual large language models three times their size. Moreover, while continued pretraining enhances retention of targeted knowledge, it also induces substantial catastrophic forgetting. SwedQA systematically exposes the trade-off between linguistic specialization and factual knowledge retention, providing an evaluation tool for regional language adaptation and fact-aware language modeling.
📝 Abstract
Many Swedish benchmarks are translations of US-centric benchmarks and are therefore not suitable for testing knowledge that is particularly relevant, or even specific, to Sweden. We therefore introduce a manually written question-answering benchmark specifically targeting Sweden-related personalities and events, many of which receive very limited coverage in international media. Our annotators drew inspiration from a popular radio program featuring public figures from culture and media, as well as major sports events in Sweden. The dataset can be used to measure factual recall across models of varying sizes and degrees of Swedish coverage, and, since it contains English translations, it also allows probing cross-lingual factual consistency. Using the dataset, we find that smaller models with stronger Swedish coverage perform comparably to a multilingual model three times their size in recalling Sweden-related facts. We also observe that continued pre-training on Swedish generally improves factual knowledge but also leads to forgetting of some previously known information. These results demonstrate the dataset's potential as a diagnostic tool for studying knowledge retention in multilingual models during language adaptation.