Collaboration, Efficiency, Responsibility and Trust Lab (CERT Lab)

About Us

The CERT Lab at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) is dedicated to advancing the frontiers of Trustworthy, Collaborative and Private Machine Learning. Our research focuses on enabling effective collaboration through intelligence sharing across device ecosystems while maintaining privacy, security, safety, trust, and regulatory compliance.

Principal Investigator

Prof. Praneeth Vepakomma

  • Assistant Professor, Mohamed bin Zayed University of Artificial Intelligence
  • Visiting Assistant Professor, Institute for Data, Systems, and Society (IDSS), Massachusetts Institute of Technology (MIT)
  • Research Page: https://sites.mit.edu/praneeth/

Prof. Vepakomma leads research initiatives with a major focus on trustworthy, responsible, and collaborative ML. The ultimate goal is to harness collaborative and trustworthy intelligence from networks of organizations and people in data-driven economies, achieving scale while upholding ethical standards.

Research Focus

Our overarching research question is: "How can one effectively enable individual, organizational, regional, and global collaboration through intelligence sharing across device ecosystems without infringing on privacy, security, safety, trust, and regulation, while incentivizing the entire workflow?"

Key research areas include:

  • Responsible/Trustworthy AI
  • Distributed and private computation for machine learning
  • Statistical inference
  • Privacy-preserving data science

Notable Research

LoRA-SB: Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning

LoRA Silver Bullet (LoRA-SB) is a method for extremely efficient fine-tuning of large language models. It approximates the updates of full fine-tuning within low-rank subspaces by means of a carefully designed initialization strategy. Key results include:

  • A theoretical characterization, within the LoRA-XS architecture, of the conditions under which this initialization is optimal
  • Optimal scaling for high-rank gradient updates without any hyperparameter tuning
  • A 27-90x reduction in trainable parameters compared to standard approaches while maintaining performance
  • Consistent improvements over LoRA-XS across benchmarks
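To make the architecture concrete, here is a minimal NumPy sketch. In a LoRA-XS-style setup only a small r x r core matrix is trained on top of frozen low-rank factors; here a random matrix G stands in for the approximated full fine-tuning update that the real method estimates, and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 16, 16, 4

# Stand-in for an approximation of the full fine-tuning update.
G = rng.normal(size=(d, k))

# Truncated SVD gives the best rank-r approximation of G.
U, S, Vt = np.linalg.svd(G, full_matrices=False)
B = U[:, :r]          # frozen left factor  (d x r)
R = np.diag(S[:r])    # trainable r x r core, initialized from G
A = Vt[:r, :]         # frozen right factor (r x k)

# Training then updates only R: r*r parameters instead of r*(d + k).
delta_W = B @ R @ A   # the low-rank weight update applied to the model
```

The parameter savings come from training only `R`: with d = k = 4096 and r = 8, that is 64 trainable values versus 65,536 for a standard LoRA pair at the same rank.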

FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models

This research addresses a core difficulty of applying LoRA in federated learning: averaging clients' low-rank adapter factors separately does not equal averaging their products, so standard aggregation is inexact. FedEx-LoRA introduces:

  • A novel residual error term, folded into the frozen pretrained weights, that makes aggregation exact
  • Minimal computational and communication overhead
  • Consistent performance improvements across NLU and NLG tasks
  • Practical solution for accurate federated fine-tuning of foundation models
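The aggregation idea can be sketched in a few lines of NumPy. The dimensions, client count, and random factors below are illustrative stand-ins rather than the paper's setup; the point is only that folding the averaging residual into the frozen weight recovers the exact averaged update:

```python
import numpy as np

# Naive federated averaging of LoRA factors is inexact because
# mean_i(B_i @ A_i) != mean_i(B_i) @ mean_i(A_i) in general.
rng = np.random.default_rng(0)
d, k, r, n_clients = 8, 8, 2, 3
W = rng.normal(size=(d, k))                       # frozen base weight

As = [rng.normal(size=(r, k)) for _ in range(n_clients)]
Bs = [rng.normal(size=(d, r)) for _ in range(n_clients)]

A_avg = sum(As) / n_clients                       # naive factor averages
B_avg = sum(Bs) / n_clients

exact_update = sum(B @ A for B, A in zip(Bs, As)) / n_clients
residual = exact_update - B_avg @ A_avg           # error of naive averaging

W_new = W + residual                              # fold residual into the frozen weight

# The merged model W_new + B_avg @ A_avg now equals W + exact_update,
# at the cost of one extra d x k correction per aggregation round.
```

Because the residual is absorbed by the weight matrix that clients already hold, the low-rank factors stay low-rank and the extra communication is minimal.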

Power-Learning: Differentially Private and Model Agnostic Tabular Data Embeddings

Power-Learning presents an innovative approach to collaborative learning through:

  • Privacy-preserving activation sharing instead of traditional weight sharing
  • Co-designed collaborative and private learning framework
  • Single-round privatized communication
  • Model-agnostic privatized activations compatible with various server-side models (deep learning, random forests, XGBoost)
  • Reduced client-side computational requirements
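A rough sketch of the single-round privatized activation sharing workflow, assuming a Gaussian-noise mechanism over norm-clipped client embeddings. The embedding map, clipping threshold, and noise scale below are illustrative placeholders and do not constitute a calibrated differential-privacy guarantee:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))       # client's raw tabular features
W_embed = rng.normal(size=(10, 4))   # client-side embedding map (placeholder)

Z = np.tanh(X @ W_embed)             # client activations (embeddings)

# Bound each row's norm so the noise scale controls sensitivity.
clip = 1.0
row_norms = np.linalg.norm(Z, axis=1, keepdims=True)
Z_clipped = Z / np.maximum(1.0, row_norms / clip)

# One privatized message: the server never sees raw data or model weights,
# and can fit any model (deep net, random forest, XGBoost) on Z_private.
sigma = 0.5                          # illustrative noise multiplier
Z_private = Z_clipped + rng.normal(scale=sigma * clip, size=Z.shape)
```

Sharing activations rather than weights is what makes the server side model-agnostic: any learner that accepts a feature matrix can consume `Z_private`, and the client's compute ends after this one embedding pass.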

Repositories

  • lora-sb (Python): Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning
  • fedex-lora (Python): FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models
  • fed-sb (Python): Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning
  • Power-Mechanism-new (Jupyter Notebook)
  • .github
