Max Zimmer

Berlin, Germany


I am a third-year PhD candidate at the Institute of Mathematics, TU Berlin, under the supervision of Prof. Dr. Sebastian Pokutta, and a doctoral researcher in the Interactive Optimization and Learning (IOL) research lab at the Zuse Institute Berlin (ZIB). My research interests center around Machine Learning and Optimization. In particular, I am interested in Compression of Neural Networks, Explainability (XAI), Fairness in AI, and first-order optimization methods. Since 2022, I have also been a member of the BMS graduate school, which is part of the MATH+ Cluster of Excellence.

Before joining IOL in 2020 as a student research assistant, I worked on Nash Flows over Time together with Leon Sering in the Combinatorial Optimization and Graph Algorithms Group (COGA) at TU Berlin. During my BSc and MSc in Mathematics, both of which I received from TU Berlin, I had the chance to intern in the research groups of Prof. Sergio de Rosa at Università degli Studi di Napoli Federico II and Prof. Marco Mondelli at IST Austria.

You can email me at zimmer [at] zib.de.

news

Feb 4, 2024 Two papers accepted as conference posters: Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging at ICLR 2024 and Interpretability Guarantees with Merlin-Arthur Classifiers at AISTATS 2024! I will add the camera-ready versions soon and will be in both Valencia and Vienna.
Dec 27, 2023 Our new preprint is on arXiv: PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs!
Jul 1, 2023 We published a new paper: Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging! You can find the corresponding blog post here.
May 10, 2023 I presented our poster “How I Learned to Stop Worrying and Love Retraining” at ICLR 2023 in Kigali, Rwanda! You can find all information in the list of publications below.

recent publications

  1. Extending the Continuum of Six-Colorings
     arXiv preprint arXiv:2404.05509, 2024
  2. Neural Parameter Regression for Explicit Representations of PDE Solution Operators
     Konrad Mundinger, Max Zimmer, and Sebastian Pokutta
     ICLR 2024 Workshop on AI4DifferentialEquations In Science, 2024
  3. On the Byzantine-Resilience of Distillation-Based Federated Learning
     Christophe Roux, Max Zimmer, and Sebastian Pokutta
     arXiv preprint arXiv:2402.12265, 2024
  4. PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs
     Max Zimmer, Megi Andoni, Christoph Spiegel, and Sebastian Pokutta
     arXiv preprint arXiv:2312.15230, 2023
  5. Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging
     Max Zimmer, Christoph Spiegel, and Sebastian Pokutta
     In International Conference on Learning Representations, 2024
  6. How I Learned To Stop Worrying And Love Retraining
     Max Zimmer, Christoph Spiegel, and Sebastian Pokutta
     In International Conference on Learning Representations, 2023
  7. Compression-aware Training of Neural Networks using Frank-Wolfe
     Max Zimmer, Christoph Spiegel, and Sebastian Pokutta
     arXiv preprint arXiv:2205.11921, 2022
  8. Interpretability Guarantees with Merlin-Arthur Classifiers
     In International Conference on Artificial Intelligence and Statistics, 2024
  9. Flows over time as continuous limits of packet-based network simulations
     Theresa Ziemke, Leon Sering, Laura Vargas Koch, Max Zimmer, Kai Nagel, and Martin Skutella
     Transportation Research Procedia, 2021
  10. Deep Neural Network training with Frank-Wolfe
      Sebastian Pokutta, Christoph Spiegel, and Max Zimmer
      arXiv preprint arXiv:2010.07243, 2020