
Nandan Kumar Jha

Google Scholar ID: NX7zp18AAAAJ
New York University
LLM · Privacy · Deep Learning
Citations & Impact (all-time)
  • Citations: 501
  • H-index: 11
  • i10-index: 11
  • Publications: 20
  • Co-authors: 15
Resume
Academic Achievements
  • 1. Published paper: Entropy-Guided Attention for Private LLMs (AAAI’25 PPAI Workshop)
  • 2. Delivered talk: ReLU in norm-free LLMs (NeurIPS’24 ATTRIB)
  • 3. Released preprint: AERO: Softmax-only private LLMs with entropy regularization (arXiv)
  • 4. Published paper: DeepReShape (TMLR)
  • 5. Published paper: End-to-end private inference system (ASPLOS)
  • 6. Published paper: Circa (NeurIPS)
  • 7. Released preprint: CryptoNite: throughput limits under realistic load (arXiv)
  • 8. Published paper: Sisyphus (ACM CCS PPML)
  • 9. Published paper: DeepReDuce: criticality-based ReLU dropping (ICML Spotlight); the rank-then-drop idea is sketched below this list.
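
The criticality-then-drop idea behind DeepReDuce (item 9) can be sketched compactly: score each ReLU by a criticality proxy and permanently replace the least critical ones with identity, shrinking the nonlinearity count that dominates private-inference cost. The PyTorch sketch below is illustrative only: the loss-increase proxy is an assumed stand-in, not the paper's exact criticality metric, and it assumes ReLUs are reachable as named module attributes (e.g. "block1.relu").

    import torch
    import torch.nn as nn

    def relu_criticality(model, val_loader, loss_fn, relu_names):
        # Score each ReLU by how much validation loss grows when it is
        # replaced with identity (an illustrative proxy for criticality).
        def eval_loss():
            model.eval()
            total, n = 0.0, 0
            with torch.no_grad():
                for x, y in val_loader:
                    total += loss_fn(model(x), y).item() * len(x)
                    n += len(x)
            return total / n

        modules = dict(model.named_modules())
        base = eval_loss()
        scores = {}
        for name in relu_names:
            parent_name, _, attr = name.rpartition(".")
            parent = modules[parent_name] if parent_name else model
            saved = getattr(parent, attr)
            setattr(parent, attr, nn.Identity())  # temporarily drop this ReLU
            scores[name] = eval_loss() - base     # loss increase = criticality
            setattr(parent, attr, saved)          # restore it
        return scores

    def drop_least_critical(model, scores, k):
        # Permanently replace the k least-critical ReLUs with identity.
        modules = dict(model.named_modules())
        for name, _ in sorted(scores.items(), key=lambda kv: kv[1])[:k]:
            parent_name, _, attr = name.rpartition(".")
            parent = modules[parent_name] if parent_name else model
            setattr(parent, attr, nn.Identity())

The actual method also retrains the network after dropping ReLUs, which this sketch omits; the loop above only conveys the rank-then-drop structure.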
Research Experience
  • 1. Contributed to the DARPA DPRIVE project, proposing new architectures and algorithms for efficient inference on encrypted data.
  • 2. Developed DeepReDuce (ICML'21) and DeepReShape (TMLR'24), redefining the state of the art in private inference.
  • 3. Proposed NerVE, an eigenspectral framework that characterizes the nonlinear transformations of FFNs and quantifies how fully an FFN uses its width via spectral-utilization metrics (EMNLP 2025); see the first sketch after this list.
  • 4. Developed AERO, an information-theoretic framework that studies how nonlinearities shape the entropy budgets of attention mechanisms and introduces entropy-guided attention for private LLM architectures (PPAI@AAAI'25, ATTRIB@NeurIPS'24); see the second sketch after this list.
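
As a concrete reading of item 3's spectral-utilization metric, the sketch below computes the entropy-based effective rank (Roy & Vetterli, 2007) of the FFN hidden-activation covariance spectrum, normalized by FFN width. Treating "spectral utilization" this way is an assumption for illustration; NerVE's exact definition may differ.

    import numpy as np

    def spectral_utilization(h):
        # h: (num_tokens, d_ffn) post-activation FFN hidden states.
        # Returns the effective rank of the activation covariance
        # spectrum divided by the FFN width, so 1.0 means the full
        # width is used and values near 0 mean few directions are.
        h = h - h.mean(axis=0, keepdims=True)      # center over tokens
        cov = h.T @ h / h.shape[0]                 # (d_ffn, d_ffn) covariance
        eig = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
        p = eig / eig.sum()                        # spectrum as a distribution
        p = p[p > 0]
        eff_rank = np.exp(-(p * np.log(p)).sum())  # exp of spectral entropy
        return eff_rank / h.shape[1]

    # Example: a width-1024 FFN whose activations live in ~64 directions
    rng = np.random.default_rng(0)
    h = rng.standard_normal((4096, 64)) @ rng.standard_normal((64, 1024))
    print(f"utilization ~ {spectral_utilization(h):.3f}")  # well below 1.0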
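
For item 4, an attention head's entropy budget can be measured directly from its softmax distribution, and an entropy-guided penalty can discourage both collapse (near one-hot heads) and washout (near-uniform heads). The quadratic pull-to-target below is an illustrative form, not necessarily AERO's regularizer, and the target value is a hypothetical choice.

    import torch
    import torch.nn.functional as F

    def attention_entropy(logits, dim=-1):
        # Shannon entropy (nats) of each attention distribution.
        # logits: raw attention scores, shape (..., q_len, k_len);
        # apply any causal mask (-inf) before calling this.
        logp = F.log_softmax(logits, dim=dim)
        return -(logp.exp() * logp).sum(dim=dim)

    def entropy_penalty(logits, target, weight=1e-3):
        # Pull per-query attention entropy toward a target value,
        # penalizing both entropy collapse and near-uniform heads.
        ent = attention_entropy(logits)
        return weight * ((ent - target) ** 2).mean()

    # Usage: add the penalty to the training loss.
    logits = torch.randn(2, 8, 128, 128)            # (batch, heads, q, k)
    target = 0.5 * torch.log(torch.tensor(128.0))   # half of max entropy ln(k_len)
    extra_loss = entropy_penalty(logits, target)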
Education
  • 1. Ph.D. in Electrical and Computer Engineering (neural architectures for efficient private inference), 2020 - present, New York University
  • 2. M.Tech. (Research) in Computer Science and Engineering, 2017 - 2020, Indian Institute of Technology Hyderabad
  • 3. B.Tech. in Electronics and Communication Engineering, 2009 - 2013, National Institute of Technology Surat
Background
  • Ph.D. candidate in Electrical and Computer Engineering at New York University, advised by Prof. Brandon Reagen. Research interests include the mathematical and scientific foundations of large language models, representation integrity, and high-dimensional learning dynamics.