The CERT Lab at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) is dedicated to advancing the frontiers of Trustworthy, Collaborative and Private Machine Learning. Our research focuses on enabling effective collaboration through intelligence sharing across device ecosystems while maintaining privacy, security, safety, trust, and regulatory compliance.
Prof. Praneeth Vepakomma
- Assistant Professor, Mohamed bin Zayed University of Artificial Intelligence
- Visiting Assistant Professor, Institute for Data, Systems, and Society (IDSS), Massachusetts Institute of Technology (MIT)
- Research Page: https://sites.mit.edu/praneeth/
Prof. Vepakomma leads research initiatives with a major focus on trustworthy/responsible and collaborative ML. The ultimate goal is to harness collaborative and trustworthy intelligence from networks of organizations and people in data-driven economies while achieving scale and maintaining ethics.
Our overarching research question is: "How can one effectively enable individual, organizational, regional, and global collaboration through intelligence sharing across device ecosystems without infringing privacy, security, safety, trust, and regulation, while incentivizing the entire workflow?"
Key research areas include:
- Responsible/Trustworthy AI
- Distributed and private computation for machine learning
- Statistical inference
- Privacy-preserving data science
LoRA-SB: Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning
LoRA Silver Bullet (LoRA-SB) is a method for extremely efficient fine-tuning of large language models. It approximates full fine-tuning within low-rank subspaces using a novel initialization strategy that matches an approximation of the full fine-tuning update. Key results include:
- A theoretical characterization of the conditions under which full fine-tuning updates can be optimally approximated, realized within the LoRA-XS architecture
- Optimal scaling for high-rank gradient updates without additional hyperparameter tuning
- 27-90x fewer trainable parameters than standard LoRA while maintaining performance
- Consistent outperformance of LoRA-XS across benchmarks
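The core idea can be sketched in a few lines. In a LoRA-XS-style parameterization, the frozen projections B and A come from the SVD of the pretrained weight, and only a tiny r x r core is trained; LoRA-SB's contribution is how that core is initialized. The dimensions, the stand-in gradient estimate `G`, and the variable names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for a single weight matrix.
d_out, d_in, r = 64, 48, 4

W = rng.standard_normal((d_out, d_in))        # pretrained weight (frozen)

# LoRA-XS-style factors: B and A are fixed (here, from the SVD of W),
# and only the tiny r x r core is trained -- hence the large parameter
# reduction relative to training B and A themselves.
U, _, Vt = np.linalg.svd(W, full_matrices=False)
B, A = U[:, :r], Vt[:r, :]                    # frozen orthonormal projections

# LoRA-SB's idea (sketched): initialize the core so that B @ R @ A best
# approximates an estimate G of the full fine-tuning update. With
# orthonormal B and A, the least-squares solution is a simple projection.
G = rng.standard_normal((d_out, d_in)) * 0.01  # stand-in update estimate
R_init = B.T @ G @ A.T                         # (r, r) trainable core

# Effective weight seen by the model after initialization.
W_eff = W + B @ R_init @ A
print(R_init.shape)  # only r*r parameters are trained
```

Only `R_init` is subsequently updated during fine-tuning, which is where the parameter savings come from.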
FedEx-LoRA addresses the challenges of applying LoRA in federated learning environments. It introduces:
- A novel residual error term for exact updates
- Minimal computational and communication overhead
- Consistent performance improvements across NLU and NLG tasks
- Practical solution for accurate federated fine-tuning of foundation models
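The need for a residual term comes from a simple algebraic fact: averaging clients' LoRA factors is not the same as averaging their weight updates, since the mean of products differs from the product of means. The sketch below illustrates this gap and one way a server-side residual can restore exactness; the shapes and names are assumptions for illustration, not the paper's exact protocol:

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in, r, n_clients = 32, 24, 4, 5

# Each client trains its own LoRA factors B_i (d_out x r) and A_i (r x d_in).
Bs = [rng.standard_normal((d_out, r)) for _ in range(n_clients)]
As = [rng.standard_normal((r, d_in)) for _ in range(n_clients)]

# Naive federated averaging of the factors is inexact, because
# mean_i(B_i @ A_i) != mean_i(B_i) @ mean_i(A_i).
B_avg = sum(Bs) / n_clients
A_avg = sum(As) / n_clients
exact_update = sum(B @ A for B, A in zip(Bs, As)) / n_clients

# Residual-correction idea (sketched): fold the aggregation error into
# the frozen base weight so the global model matches the exact update.
residual = exact_update - B_avg @ A_avg

W = rng.standard_normal((d_out, d_in))
W_corrected = W + residual  # one extra server-side addition per round

# The corrected model reproduces the exact average update.
lhs = W_corrected + B_avg @ A_avg
rhs = W + exact_update
print(np.allclose(lhs, rhs))
```

The correction costs only one matrix addition on the server, which is consistent with the minimal-overhead claim above.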
Power-Learning presents an innovative approach to collaborative learning through:
- Privacy-preserving activation sharing instead of traditional weight sharing
- Co-designed collaborative and private learning framework
- Single-round privatized communication
- Model-agnostic privatized activations compatible with various server-side models (deep learning, random forests, XGBoost)
- Reduced client-side computational requirements
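A minimal sketch of the activation-sharing pattern, under loudly stated assumptions: the client encoder, the plain Gaussian noise (standing in for a properly calibrated privacy mechanism), and the least-squares server model below are all illustrative choices, not Power-Learning's actual construction. The point is the workflow: clients send privatized activations once, and the server is then free to fit any model on them:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy client data and a small frozen client-side encoder (one linear
# layer with tanh), standing in for a lightweight client model.
n, d_in, d_act = 200, 16, 8
X = rng.standard_normal((n, d_in))
y = (X[:, 0] > 0).astype(int)

W_enc = rng.standard_normal((d_in, d_act)) / np.sqrt(d_in)
acts = np.tanh(X @ W_enc)  # client-side activations, bounded in [-1, 1]

# Privatize once before the single round of communication. Gaussian
# noise with scale `sigma` is a stand-in for a calibrated DP mechanism;
# the tanh bound caps each activation's sensitivity.
sigma = 0.5
acts_priv = acts + rng.normal(0.0, sigma, size=acts.shape)

# The server sees only (acts_priv, y) and may fit any model it likes --
# deep nets, random forests, XGBoost. A least-squares classifier here:
w, *_ = np.linalg.lstsq(acts_priv, 2 * y - 1, rcond=None)
preds = (acts_priv @ w > 0).astype(int)
print((preds == y).mean())  # accuracy on the training split
```

Because the server consumes plain feature vectors, the privatized activations are model-agnostic by construction, and the client's work ends after the single encoding-and-noising pass.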