Research Interests

My main line of research is in statistical machine learning, with an emphasis on trustworthy deep learning. I equally enjoy working on theoretical and applied projects. Overall, my research focuses on understanding and controlling deep learning systems to make them reliable, interpretable, and deployable in real-world settings, with particular interests in optimization dynamics, multi-modal reasoning, and privacy-preserving learning.

Below you will find a list of my published work in journals and conferences, as well as ongoing projects.

Journal papers

  • Spherical Perspective on Learning with Normalization Layers - Published in Neurocomputing in 2022.
    📄 PDF 🧑‍💻 GitHub

Conference papers

  • Spherical Perspective on Learning with Batch Normalization - Published in the NeurIPS workshop on Optimization for Machine Learning in 2021.
    📄 PDF
  • Localizing Objects with Self-Supervised Transformers and no Labels - Published in BMVC in 2021.
    📄 PDF 🧑‍💻 GitHub
  • Take One Gram of Neural Features, Get Enhanced Group Robustness - Published in the ECCV workshop on Out-of-Distribution Detection in 2022.
    📄 PDF
  • Retrieval-Based Interleaved Visual Chain-of-Thought in Real-World Driving Scenarios - Published in EACL in 2026.
    🔗 Project Page 📄 PDF 🧑‍💻 GitHub 🤗 Hugging Face
  • Privacy Amplification by Missing Data - arXiv preprint, 2026.
    📄 PDF

Ongoing Projects

  • Differential Privacy and Efficient Training: developing a theoretical framework and methods to jointly improve the privacy guarantees and training efficiency of machine learning models.
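
To give a flavor of this line of work, the standard DP-SGD recipe (clip each per-example gradient, then add calibrated Gaussian noise) is the usual starting point for private training. Below is a minimal illustrative sketch of that aggregation step; the function name and parameters are my own for illustration, not code from the project:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step.

    Each per-example gradient is clipped to L2 norm <= clip_norm, the
    clipped gradients are summed, Gaussian noise scaled to the clip
    bound is added, and the result is averaged over the batch.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clip bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The clipping bounds each example's influence on the update, which is what makes the added noise sufficient for a differential-privacy guarantee; the efficiency question is how to keep this per-example processing cheap at scale.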