Matteo Pirotta

About Me

I am a research scientist on the FAIR team at Meta in Paris. Previously, I was a postdoc at INRIA Lille - Nord Europe in the SequeL team for almost two years. Before that, I was a postdoc at Politecnico di Milano. I received my PhD in computer science from Politecnico di Milano, under the supervision of Luca Bascetta and Marcello Restelli.

My research interest is machine learning. In particular, I am interested in reinforcement learning, transfer learning, and online learning.

CV (Oct 2022)

matteo DOT pirotta AT

News


  • I've been invited to speak at the TTIC Chicago Summer Workshop on Online Decision Making in July (13-15). See you in Chicago. I'm also a co-organizer of the Responsible Decision Making in Dynamic Environments workshop at ICML 2022. 25.05.2022
  • Jean Tarbouriech will present our recent paper "Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret" at the RL Theory Seminar. 25.06.2021
  • 1 Paper accepted at AISTATS'21, 1 at ALT'21 and 2 at ICML'21. 25.06.2021
  • Happy to announce that I will be talking about exploration-exploitation in Deep RL at the virtual school RLVS-ANITI. Here is a draft of the slides. 02.04.2021
  • It's been a long time since the last update! I will be a guest host of the RL Theory Seminar organized by Gergely Neu, Ciara Pike-Burke and Csaba Szepesvari. 01.02.2021
  • Busy February! I spent the last two weeks in Ghana teaching Reinforcement Learning at AIMS AMMI. It has been a wonderful and enriching experience. Please visit the AMMI website to learn more about this very nice initiative. 01.03.2020
  • I gave a tutorial on exploration-exploitation in RL with M. Ghavamzadeh and A. Lazaric at AAAI'20. You can find the material on this page. 20.02.2020
  • 1 Paper accepted at AAAI'20 and 1 at AISTATS'20. 16.01.2020
  • 2 Papers accepted at NeurIPS'19. 10.9.2019
  • I gave a tutorial on policy gradient and actor-critic methods at the Reinforcement Learning Summer School (RLSS) in Lille. It is always nice to be back in Lille and meet the amazing people in SequeL! A very well organized summer school. 15.7.2019
  • Heading to Chicago where, together with Ronan and Alessandro, I will give a tutorial on regret minimization in reinforcement learning at ALT'19. Visit our website for more info. 20.3.2019
  • I've been invited to give a talk at ARWL'18 in Beijing, China. I will talk about regret minimization (exploration-exploitation) in RL with prior knowledge (slides). I've also been invited to give the same talk at MSRA in Beijing. 6.11.2018
  • Going to NeurIPS! I've received a free registration as one of the "top" reviewers. Moreover, I have one paper accepted at NeurIPS'18. 29.9.2018
  • I'm really happy to announce that I've been selected for a research position (CR) at INRIA - Lille (link). I'm even happier to announce that I will join Facebook AI Research (Paris) in October 2018. 30.7.2018
  • Busy April! I gave several talks on exploration-exploitation in RL: Politecnico di Milano (Apr 03), Facebook Paris (Apr 17) and Google Zurich (Apr 27). 1.6.2018
  • 3 papers accepted at ICML'18.
  • I am organizing the 14th European Workshop on Reinforcement Learning (EWRL 2018).
  • ICML/IJCAI workshop on Prediction and Generative Modeling in Reinforcement Learning (PGMRL).
    Organizers: Me, Roberto Calandra (UC Berkeley), Sergey Levine (UC Berkeley), Martin Riedmiller (DeepMind), Alessandro Lazaric (Facebook).
  • Ronan Fruit and I are developing a Python library for exploration-exploitation in reinforcement learning.
    It is available on GitHub.
  • I'm going to visit Berlin and I'll give a talk at Amazon (Mar 19, 2018) on Efficient Exploration-Exploitation in RL. 2.3.2018
  • 3 papers accepted at NIPS 2017.
  • I'm going to spend two weeks in California. I will visit UC Berkeley and I'll give a talk on Regret Minimization in MDPs with Options (Jul 14, 2017). I will then spend one week at Stanford University. 1.6.2017