news

all news

02/2026 New preprint: ECHOSAT: Estimating Canopy Height Over Space And Time, the first global-scale height map that accurately quantifies tree growth and disturbances over time. We expect ECHOSAT to advance global efforts in carbon monitoring and disturbance assessment. Blogpost coming soon! In the meantime, check out the map on Google Earth Engine.
02/2026 Christophe Roux and I are visiting Alexandre d’Aspremont at INRIA Paris from February to April 2026.
02/2026 Three new preprints online!
01/2026 Two papers have been accepted at ICLR 2026! Congratulations to Nico and Alonso!
09/2025 Excited to announce that our paper Computational Algebra with Attention: Transformer Oracles for Border Basis Algorithms has been accepted at NeurIPS 2025!
06/2025 Neural Discovery in Mathematics: Do Machines Dream of Colored Planes? was selected for an oral (top 1%) presentation at ICML25!
05/2025 Happy to announce that four papers have been accepted at ICML25!
02/2025 Two new preprints on arXiv!
01/2025 On the Byzantine-Resilience of Distillation-Based Federated Learning has been accepted at ICLR25!
11/2024 Our work on the Hadwiger-Nelson problem has been featured in the Spanish newspaper El País!
10/2024 Happy to share that I am now the Research Area Lead of the Machine Learning subgroup of IOL, iol.LEARN, alongside Sai Ganesh Nagarajan!
07/2024 Join our team in Berlin - We are seeking highly motivated PhD students to work on (efficient) Deep Learning, preferably with a strong math/CS background and PyTorch experience. Happy to answer questions via email or at ICML2024 in Vienna! Directly apply here!
06/2024 Our paper Estimating Canopy Height at Scale has been accepted to ICML24 and is now available on arXiv! Check out the map on the Earth Engine!
04/2024 Our group was featured on German local TV regarding the use of generative AI! You can watch the interview with Sebastian and me here (in German, available until 04/2026).
02/2024 Two papers accepted as posters: Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging at ICLR24 and Interpretability Guarantees with Merlin-Arthur Classifiers at AISTATS24! Camera-ready versions will be linked soon, and I will be in Vienna.
12/2023 Our new preprint is on arXiv: PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs!
07/2023 We published a new paper: Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging! You can find the corresponding blogpost here.
05/2023 I presented our poster “How I Learned to Stop Worrying and Love Retraining” at ICLR2023 in Kigali, Rwanda! You can find all information in the list of publications below.