Mathurin Massias


Junior researcher
Inria
E-mail: mathurin.massias [@] gmail [DOT] com

About me

I am a junior researcher (Chargé de recherche) at Inria Lyon, in the OCKHAM (formerly DANTE) team led by Rémi Gribonval, working on optimization for Machine Learning.

From January 2020 to October 2021, I was a post-doc at the University of Genova, where I worked with Lorenzo Rosasco and Silvia Villa on new algorithms for implicit regularization. I hold a PhD from Télécom Paris and Inria Saclay (Parietal team), under the supervision of Joseph Salmon and Alexandre Gramfort. During my PhD, I improved the efficiency of brain signal reconstruction algorithms (more details here), work that involved optimization, sparsity, and high-dimensional statistics.

I have a keen interest in the Python programming language: I am the lead developer of celer (the fastest Lasso solver) and skglm (fast and flexible sklearn GLMs). I am also a core developer of benchopt, a benchmarking suite that makes optimization benchmarks easy, transparent and reproducible.
To foster scientific reproducibility, my papers usually come with Python packages that reproduce my experiments and make my code available to the community (e.g., Anderson acceleration for coordinate descent and Iterative regularization for convex regularizers).
I also contribute to scikit-learn, MNE, and sparse-ho.


You can find more details on my résumé (French (01/2021)/English (07/2022)) and my list of publications.

Job offers

  • M2 internship offer on improving sparse penalties with Emmanuel Soubiès here

News

  • I gave a course on convex optimization at the Computation and Modelling school in Wrocław; resources are on my teaching page.

  • celer 0.7 is released, with a fast ElasticNet solver thanks to Badr Moufad!

  • We have integrated skglm with scikit-learn, providing a customizable and accelerated solver for sparse GLMs in Python.

  • Slides for my talk at ML-MTP (Montpellier)

  • Our two papers on Iterative regularization for convex regularizers (w. C. Molinari, L. Rosasco and S. Villa) and Anderson acceleration of coordinate descent (w. Q. Bertrand) got accepted to AISTATS 2021.

  • Slides for my presentation Efficient approaches to regularized inverse problems in the DANTE team (Lyon)

  • celer 0.6 is released: along with the fast sklearn Group Lasso solver, the Lasso class now supports weights in the penalty, paving the way for an efficient Adaptive Lasso (iterative reweighted L1) which should be released in version 0.7.

  • Slides for my presentation Efficient approaches to regularized inverse problems in the GAIA team of the GIPSA-lab (Grenoble)

  • I was awarded the Programme Gaspard Monge Optimisation PhD prize!

  • I was awarded Telecom Paris’ 2020 PhD prize! A warm thank you to my advisors Alexandre Gramfort and Joseph Salmon, and to coauthor Quentin Bertrand.

  • We recently started the benchopt package to automate benchmarks of optimization algorithms on popular Machine Learning tasks

  • I defended my PhD! Slides are here, and here is the manuscript. Starting next January, I'll be a postdoc in Lorenzo Rosasco's lab in Genova.

  • I joined Taiji Suzuki's lab for an internship in Tokyo from Feb. 2019 to May 2019