Yongwei Zhou
Scholar


Google Scholar ID: 9uGWNycAAAAJ
Meituan / Harbin Institute of Technology
Research interests: LLMs · Pre-Training · Reasoning
Citations & Impact (all-time)
  • Citations: 94
  • H-index: 5
  • i10-index: 3
  • Publications: 11
  • Co-authors: 8
Resume (English only)
Academic Achievements
  • 2025: Core contributor to the release of the LongCat-Flash-Chat model (560B-A26B parameters).
  • 2025: Paper 'Cross-Lingual Semantic Information Fusion for Word Translation Enhancement' accepted by Information Fusion (CAS Q1, JCR Q1).
  • 2025: Two pretraining papers accepted by ACL Findings 2025: 'FRAMES: Boosting LLMs with A Four-Quadrant Multi-Stage Pretraining Strategy' and 'Preference Curriculum: LLMs Should Always Be Pretrained on Their Preferred Data'.
  • 2024: Paper 'Operation-Augmented Numerical Reasoning for Question Answering' published in IEEE/ACM Transactions on Audio, Speech, and Language Processing (CAS Q1, CCF-B, Tsinghua A, IF=5.4).
  • 2022: UniRPG (EMNLP 2022) achieved #1 on the TAT-QA leaderboard.
  • 2022: OPERA++ (NAACL 2022) achieved #1 on the DROP leaderboard.
  • 2021: RoR (EMNLP Findings 2021) achieved #1 on the QuAC leaderboard.
  • Published multiple papers at top venues including EMNLP, NAACL, NLPCC, ACL Findings, IEEE/ACM TASLP, and Information Fusion, covering program generation, numerical reasoning, and cross-lingual semantic fusion.