Research Interests

My main line of research is in statistical machine learning, with an emphasis on deep learning and its applications in computer vision and natural language processing. I equally enjoy working on theoretical and applied projects. Overall, I focus on deepening our understanding of the optimisation process in deep learning and on improving the performance of models in real-world applications.

Below you will find a list of my published work in journals and conferences, as well as ongoing projects.

Journal papers

  • Spherical Perspective on Learning with Normalization Layers - Published in Neurocomputing in 2022.
    📄 PDF 🧑‍💻 GitHub 📊 Slides

Conference papers

  • Spherical Perspective on Learning with Batch Normalization - Published in NeurIPS workshop on Optimization in Machine Learning in 2021.
    📄 PDF
  • Localizing Objects with Self-Supervised Transformers and no Labels - Published in BMVC in 2021.
    📄 PDF 🧑‍💻 GitHub
  • Take One Gram of Neural Features, Get Enhanced Group Robustness - Published in ECCV workshop on Out of Distribution Detection in 2022.
    📄 PDF
  • Retrieval-Based Interleaved Visual Chain-of-Thought in Real-World Driving Scenarios - arXiv preprint, 2025.
    🔗 Project Page 📄 PDF 🧑‍💻 GitHub 🤗 Hugging Face

Ongoing Projects

  • Differential Privacy and Efficient Training: developing a theoretical framework and methodology to simultaneously improve the privacy guarantees and training efficiency of machine learning models, with a particular empirical focus on large language models.