Jianwei Li

Google Scholar ID: 6lFRe2MAAAAJ
PhD Student at North Carolina State University
AI Safety · Efficient AI
Citations & Impact (all-time)
  • Citations: 172
  • H-index: 7
  • i10-index: 4
  • Publications: 14
  • Co-authors: 9
Academic Achievements
  • Paper 'Safety Alignment Can Be Not Superficial With Explicit Safety Signals' accepted at ICML 2025
  • Paper 'Greedy Output Approximation: Towards Efficient Structured Pruning for LLMs Without Retraining' accepted at CPAL 2025
  • Paper 'Beyond Gradient and Priors in Privacy Attacks' selected as an Oral at FL@FM-NeurIPS 2023
  • Two papers accepted at EMNLP 2023
  • Paper 'FP8-BERT: Post-Training Quantization for Transformer' published at DCAA@AAAI 2023
  • Authored multiple papers on LLM safety, efficient pruning, privacy attacks, and keystroke dynamics authentication
  • Served as a reviewer for DCAA 2023 and AAAI 2024
  • Publicity Chair for KDD 2023 Workshop on Resource-Efficient Learning
  • Publicity Chair for AAAI 2023 Workshop on DL-Hardware Co-Design for AI Acceleration
  • Passed PhD Written Preliminary Exam at NCSU (Oct 2025)
  • Participated in the AI Hardware Summit 2022, where the Moffett S30 accelerator won in the MLPerf v2.1 benchmark
Background
  • Third-year PhD student at North Carolina State University, focusing on AI Safety and AI Efficiency
  • Specializes in data privacy, adversarial attacks, model robustness and uncertainty, and Large Language Model (LLM) alignment
  • Also involved in model compression and red-teaming research
  • Actively seeking summer 2026 internship opportunities in Artificial Intelligence, especially in AI Safety and Efficient AI
  • Open to industry collaborations in Shadow LLM-related research