Juru: Legal Brazilian Large Language Model from Reputable Sources

📅 2024-03-26
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Pretraining large language models (LLMs) incurs prohibitive computational costs, which limits research, particularly in resource-constrained vertical domains such as Brazilian law. Method: We propose Juru, an LLM specialized in Brazilian legal knowledge, built by continuing the pretraining of the Sabiá-2 Small model on 1.9 billion unique tokens drawn from reputable Brazilian legal sources, and evaluated in a few-shot setting on legal and general-knowledge exams. Contribution/Results: Juru demonstrates the benefits of domain specialization with a reduced amount of pretraining data, showing that carefully selected, authoritative legal data at modest scale yields gains on Brazilian legal exams. We further characterize the trade-off this specialization induces: improved legal proficiency comes at the cost of degraded performance in other knowledge areas within the same language. These results add to the evidence that pretraining data selection can improve LLM performance while reducing pretraining costs.

📝 Abstract
The high computational cost associated with pretraining large language models limits their research. Two strategies have emerged to address this issue: domain specialization and pretraining with high-quality data. To explore these strategies, we specialized the Sabiá-2 Small model with 1.9 billion unique tokens from reputable Brazilian legal sources and conducted few-shot evaluations on legal and general knowledge exams. Our model, Juru, demonstrates the benefits of domain specialization with a reduced amount of pretraining data. However, this specialization comes at the expense of degrading performance in other knowledge areas within the same language. This study contributes to the growing body of scientific evidence showing that pretraining data selection may enhance the performance of large language models, enabling the exploration of these models at a lower cost.
Problem

Research questions and friction points this paper is trying to address.

The high computational cost of pretraining limits large language model research
Can domain specialization improve performance on legal benchmarks?
Can pretraining data selection enhance model performance cost-effectively?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Specialized Sabiá-2 Small with legal data
Used 1.9B unique tokens from Brazilian legal sources
Few-shot evaluations on legal benchmarks
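The few-shot evaluation mentioned above can be illustrated with a minimal sketch: each exam question is preceded by a handful of solved examples so the model can infer the answer format in context. The example items and the prompt template below are hypothetical placeholders, not the paper's actual evaluation pipeline or data.

```python
# Minimal sketch of few-shot prompt construction for exam-style evaluation.
# The (question, answer) pairs below are illustrative placeholders only.

def build_few_shot_prompt(examples, question):
    """Prepend solved (question, answer) pairs to a new exam question."""
    parts = []
    for q, a in examples:
        parts.append(f"Questão: {q}\nResposta: {a}")
    # The final block has an empty answer slot for the model to complete.
    parts.append(f"Questão: {question}\nResposta:")
    return "\n\n".join(parts)

shots = [
    ("Qual é a capital do Brasil?", "Brasília"),
    ("Quantos estados tem o Brasil?", "26"),
]
prompt = build_few_shot_prompt(shots, "O que é habeas corpus?")
print(prompt.count("Questão:"))  # prints 3: two worked examples plus the query
```

Accuracy is then computed by comparing each model completion against the exam's answer key; the prompt-building step itself is model-agnostic.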
Roseval Malaquias Junior
Computer Science Department, University of São Paulo (USP), São Carlos, Brazil
Ramon Pires
Ph.D. in Computer Science at University of Campinas
Natural Language Processing · Computer Vision · Machine Learning · Pattern Recognition
R. Romero
Computer Science Department, University of São Paulo (USP), São Carlos, Brazil
Rodrigo Nogueira
Founder and CEO of Maritaca AI
Deep Learning · Natural Language Processing · Information Retrieval