Julia Ive
Google Scholar ID: WMYcG5EAAAAJ
University College London
Citations & Impact (all-time)
  • Citations: 786
  • H-index: 14
  • i10-index: 19
  • Publications: 20
  • Co-authors: 6
Academic Achievements
  • Safe Training with Sensitive In-domain Data: Leveraging Data Fragmentation To Mitigate Linkage Attacks
  • Source Code is a Graph, Not a Sequence: A Cross-Lingual Perspective on Code Clone Detection
  • Classifying Social Media Users Before and After Depression Diagnosis via their Language Usage: A Dataset and Study
  • Using Large Language Models (LLMs) to Extract Evidence from Pre-Annotated Social Media Data
  • Embracing the Uncertainty in Human-Machine Collaboration to Support Clinical Decision Making for Mental Health Conditions
  • Medical Scientific Table-to-Text Generation with Human-in-the-Loop under the Data Sparsity Constraint
  • Leveraging the Potential of Synthetic Text for AI in Mental Healthcare
  • SURF: Semantic-level Unsupervised Reward Function for Machine Translation
  • Modeling Disagreement in Automatic Data Labelling for Semi-Supervised Learning in Clinical Natural Language Processing
  • Exploiting Multimodal Reinforcement Learning for Simultaneous Machine Translation
  • Generation and Evaluation of Artificial Mental Health Records for Natural Language Processing
  • Distilling Translations with Visual Awareness
Research Experience
  • Conducts research applying Large Language Models (LLMs) to domains such as mental health and legal data; applies Reinforcement Learning methods to LLMs; explores Bayesian Deep Learning techniques to improve the transparency of model decision-making.
Background
  • Research interests include ethical aspects of human-AI collaboration, such as bias, privacy, and transparency, with a focus on developing text-rewriting techniques for normalizing, de-biasing, and de-identifying text in order to build responsible AI models that protect sensitive personal information.
Miscellany
  • Participates in discussions about the future direction of AI governance; represents the Responsible AI UK project at international conferences.