Red Teaming Contemporary AI Models: Insights from Spanish and Basque Perspectives

📅 2025-03-13
🤖 AI Summary
This study identifies systemic biases and safety risks in mainstream multilingual large language models (LLMs) when deployed in Spanish–Basque bilingual contexts. We conduct the first bilingual, collaborative red-teaming evaluation—targeting OpenAI’s o3-mini, DeepSeek R1, and Spain’s indigenous ALIA Salamandra—using a novel adversarial probing framework comprising 670 multi-turn dialogues and cross-lingual safety assessments. Results reveal significant vulnerabilities across all models, with unsafe or biased response rates ranging from 29.5% (o3-mini) to 50.6% (Salamandra). Our key contribution lies in systematically extending red-teaming methodologies to the Spanish–Basque bilingual setting—a previously unexplored domain—and establishing the first empirical credibility assessment for small-parameter LMs within multilingual public AI infrastructure. This work fills a critical gap in trustworthy AI evaluation for low-resource languages and provides both empirical evidence and methodological guidance for AI governance in linguistically marginalized communities.

📝 Abstract
The battle for AI leadership is on, with OpenAI in the United States and DeepSeek in China as key contenders. In response to these global trends, the Spanish government has proposed ALIA, a public and transparent AI infrastructure incorporating small language models designed to support Spanish and co-official languages such as Basque. This paper presents the results of Red Teaming sessions, where ten participants applied their expertise and creativity to manually test three of the latest models from these initiatives – OpenAI o3-mini, DeepSeek R1, and ALIA Salamandra – focusing on biases and safety concerns. The results, based on 670 conversations, revealed vulnerabilities in all the models under test, with biased or unsafe responses ranging from 29.5% in o3-mini to 50.6% in Salamandra. These findings underscore the persistent challenges in developing reliable and trustworthy AI systems, particularly those intended to support Spanish and Basque languages.
❓ Problem

Research questions and friction points this paper is trying to address.

Evaluating biases and safety risks in multilingual AI models
Testing the latest models from OpenAI, DeepSeek, and ALIA
Addressing vulnerabilities in AI systems serving Spanish and Basque speakers
💡 Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative Red Teaming sessions probe AI model vulnerabilities
ALIA's small language models target Spanish and Basque support
Manual, multi-turn testing surfaces biases and safety issues
👥 Authors
Miguel Romero-Arjona
PhD Student at Universidad de Sevilla, Spain
Software Engineering, AI4SE
Pablo Valle
Mondragon University, Mondragon, Spain
Juan C. Alonso
SCORE Lab, I3US Institute, Universidad de Sevilla, Seville, Spain
Ana B. Sánchez
SCORE Lab, I3US Institute, Universidad de Sevilla, Seville, Spain
Miriam Ugarte
Mondragon University, Mondragon, Spain
Antonia Cazalilla
SCORE Lab, I3US Institute, Universidad de Sevilla, Seville, Spain
Vicente Cambrón
SCORE Lab, I3US Institute, Universidad de Sevilla, Seville, Spain
José A. Parejo
SCORE Lab, I3US Institute, Universidad de Sevilla, Seville, Spain
Aitor Arrieta
Mondragon University, Mondragon, Spain
Sergio Segura
Professor of Software Engineering at Universidad de Sevilla, Spain
Software Testing, Software Engineering, AI4SE, Trustworthy AI