Fashion Recommenders is a research-oriented platform designed to facilitate the development and deployment of advanced fashion recommendation systems. Built on PyTorch, it provides researchers and practitioners with the tools needed to explore and implement cutting-edge techniques in fashion-recommendation modeling.
Although numerous advanced recommendation methods have been proposed in the literature since 2018, practical implementations remain scarce. This repository bridges that gap by offering a robust foundation, complete with a growing collection of pre-implemented models inspired by recent research. While we strive to reproduce methods from the literature faithfully, some customizations reflect the experimental nature of this project. Contributions from the community are highly encouraged to further enrich the platform.
- Pre-Implemented Models: A diverse collection of recommendation models ready for use and experimentation, saving you the effort of starting from scratch.
- Streamlined Input Processing: Standardized tools for structuring item data into formats optimized for model input.
- Modular Design: Flexible components for data preprocessing, model design, training, and evaluation that integrate seamlessly with PyTorch (see the sketch after this list).
- Multimodal Support: Easily incorporate images, text, and metadata to enhance recommendation performance.
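
To make the modular design concrete, below is a minimal, self-contained sketch of the intended workflow in plain PyTorch: a dataset that bundles image and text features per item, a small compatibility model that fuses both modalities, and a standard training loop. All names here (`OutfitDataset`, `CompatibilityModel`, the random toy features) are hypothetical illustrations of the pattern, not the actual `fashion_recommenders` API.

```python
# Illustrative sketch only; class and field names are hypothetical,
# not the fashion_recommenders API.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader


class OutfitDataset(Dataset):
    """Toy dataset yielding (image tensor, text feature, compatibility label)."""

    def __init__(self, num_samples: int = 128):
        self.images = torch.randn(num_samples, 3, 64, 64)   # item images (random stand-ins)
        self.texts = torch.randn(num_samples, 768)           # precomputed text features
        self.labels = torch.randint(0, 2, (num_samples,)).float()

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.images[idx], self.texts[idx], self.labels[idx]


class CompatibilityModel(nn.Module):
    """Encodes image and text features separately, then predicts a compatibility score."""

    def __init__(self, text_dim: int = 768, emb_dim: int = 64):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, emb_dim),
        )
        self.text_encoder = nn.Linear(text_dim, emb_dim)
        self.head = nn.Linear(2 * emb_dim, 1)

    def forward(self, image, text):
        z = torch.cat([self.image_encoder(image), self.text_encoder(text)], dim=-1)
        return self.head(z).squeeze(-1)


loader = DataLoader(OutfitDataset(), batch_size=32, shuffle=True)
model = CompatibilityModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

for images, texts, labels in loader:   # one training epoch
    optimizer.zero_grad()
    loss = criterion(model(images, texts), labels)
    loss.backward()
    optimizer.step()
```

The point of this structure is that each piece (dataset, encoders, training loop) can be swapped independently, which is what the modular-design and multimodal-support bullets above refer to.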
We welcome community contributions! From adding new models and features to optimizing existing implementations or exploring innovative ideas, your input is invaluable to the growth of Fashion Recommenders.
pip install fashion_recommenders==0.1.1
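
After installing, you can confirm the package version from Python. The check below uses only the standard library; the final import assumes the module is importable under the same name as the distribution, which may differ in other releases.

```python
# Verify the installed distribution version (standard library only).
from importlib.metadata import version

print(version("fashion_recommenders"))  # expected: 0.1.1

# Import name is assumed to match the distribution name.
import fashion_recommenders  # noqa: F401
```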
| Model | Paper | FITB Acc. (Ours, %) | FITB Acc. (Original, %) |
|---|---|---|---|
| siamese-net | Baseline | 50.7 (32, ResNet18 Image) | 54.0 (64, ResNet18 Image) |
| type-aware-net | [ECCV 2018] Learning Type-Aware Embeddings for Fashion Compatibility | 52.6 (32, ResNet18 Image) | 54.5 (64, ResNet18 Image + Text) |
| csa-net | [CVPR 2020] Category-based Subspace Attention Network (CSA-Net) | 55.8 (32, ResNet18 Image) | 59.3 (64, ResNet18 Image) |
| fashion-swin | [IEEE 2023] Fashion Compatibility Learning Via Triplet-Swin Transformer | ? (32, Swin-t Image) | 60.7 (64, Swin-t Image + Text) |
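
All numbers above are accuracy on the fill-in-the-blank (FITB) task: given a partial outfit and a small set of candidate items (four per question in the standard Polyvore Outfits benchmark), the model must select the candidate that best completes the outfit. The snippet below is a minimal, model-agnostic sketch of that metric; the `score` callable stands in for whichever compatibility model is being evaluated, and placing the ground-truth answer at index 0 is a convention of this sketch, not of the library.

```python
import torch


def fitb_accuracy(score, questions):
    """Fill-in-the-blank accuracy (sketch).

    `score(partial_outfit, candidate)` returns a compatibility score
    (higher means a better match); `questions` yields
    (partial_outfit, candidates) pairs, with the ground-truth item
    at candidates[0] by convention in this sketch.
    """
    correct = total = 0
    for partial_outfit, candidates in questions:
        scores = torch.tensor([float(score(partial_outfit, c)) for c in candidates])
        correct += int(scores.argmax().item() == 0)  # did the true item score highest?
        total += 1
    return correct / total
```

With four candidates per question, random guessing yields 25%, which puts the baseline's roughly 50% and the transformer-based models' roughly 60% in context.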