Dominick Reilly
Google Scholar ID: YlFKOTkAAAAJ
UNC Charlotte
video understanding
multimodal learning
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 88
H-index: 6
i10-index: 3
Publications: 12
Co-authors: 0
Contact
Email: dreilly1@charlotte.edu
CV
GitHub
Publications
5 items
UniLACT: Depth-Aware RGB Latent Action Learning for Vision-Language-Action Models
2026, Cited: 0
VisCoP: Visual Probing for Video Domain Adaptation of Vision Language Models
2025, Cited: 0
SKI Models: Skeleton Induced Vision-Language Embeddings for Understanding Activities of Daily Living
2025, Cited: 0
From My View to Yours: Ego-Augmented Learning in Large Vision Language Models for Understanding Exocentric Daily Living Activities
2025, Cited: 0
LLAVIDAL: A Large LAnguage VIsion Model for Daily Activities of Living
2024, Cited: 2
Resume
Background
Fourth-year PhD student in Computer Science at the University of North Carolina at Charlotte
Advised by Dr. Srijan Das
Member of the Charlotte Machine Learning Lab (CharMLab)
Current research focus: multimodal learning in Vision Language Models (VLMs) for video understanding and robotic control
Worked on ego-exo viewpoint transfer, cross-modal domain adaptation, and fine-grained action understanding
Interested in developing simple, generalizable, and scalable methods