
The Calgary Machine Learning Lab is a research group led by Yani Ioannou within the Schulich School of Engineering at the University of Calgary. The lab's research focuses on improving Deep Neural Network (DNN) training and models. Topics of research include: Sparse Neural Network Training, Bias and Robustness of Efficient Deep Learning Methods, and Efficient Inference with Large Language Models.

CML had 5 works presented by 6 students at the premier machine learning conference, Neural Information Processing Systems (NeurIPS) 2024, across the main conference and workshops.

news

May 01, 2025 Adnan Mohammed and Rohan Jain’s work “Sparse Training from Random Initialization: Aligning Lottery Ticket Masks using Weight Symmetry” (Adnan et al., 2025) has been accepted at the International Conference on Machine Learning (ICML), 2025. This work explores the Lottery Ticket Hypothesis (LTH) and sparse training from a random initialization through the lens of weight and permutation symmetry, proposing a novel approach to improve LTH mask generalization across new random initializations.
Apr 27, 2025 Yani Ioannou and Adnan Mohammed co-organized the Workshop on Sparsity in LLMs (SLLM), accepted at the International Conference on Learning Representations (ICLR), 2025. The workshop aimed to bring together researchers and practitioners to discuss the latest advancements in efficient inference and training for large language models (LLMs). The workshop accepted 71 papers as posters, 4 of which were presented as orals, alongside 4 invited talks and a panel of 6 leading figures in the research area. Please see more about the workshop in the following LinkedIn post: Sparsity in LLMs Workshop at ICLR 2025.
Feb 24, 2025 Yufan Feng has been awarded an Alberta Graduate Excellence Scholarship (AGES) to support her MSc studies. The AGES scholarships are awarded to students enrolled in an Alberta-based graduate degree program, and are based on outstanding academic achievement.

latest blog posts

selected publications

  1. Sparse Training from Random Initialization: Aligning Lottery Ticket Masks using Weight Symmetry
    Mohammed Adnan, Rohan Jain, Ekansh Sharma, Rahul Krishnan, and Yani Ioannou
    In Forty-second International Conference on Machine Learning (ICML), 2025
  2. What is Left After Distillation? How Knowledge Transfer Impacts Fairness and Bias
    Aida Mohammadshahi, and Yani Ioannou
    Transactions on Machine Learning Research (TMLR), 2025
  3. Navigating Extremes: Dynamic Sparsity in Large Output Spaces
    Nasib Ullah, Erik Schultheis, Mike Lasby, Yani Ioannou, and Rohit Babbar
    In 38th Annual Conference on Neural Information Processing Systems (NeurIPS), 2024
  4. Dynamic Sparse Training with Structured Sparsity
    Mike Lasby, Anna Golubeva, Utku Evci, Mihai Nica, and Yani Ioannou
    In International Conference on Learning Representations (ICLR), 2024