recombee/ReALM


DOI: 10.1145/3705328.3759313 · RecSys 2025 · Recombee Research

Overview

ReALM is a simple, scalable, and interpretable model for next-basket recommendation (NBR).
Instead of relying on deep neural networks or heavy sequential models, ReALM directly learns item-item dependency matrices across multiple temporal and sequential lags, using:

  • a recurrent linear formulation,
  • a closed-form (ridge regression) objective, and
  • a sparse approximate matrix-inversion procedure for efficient computation on large item catalogs.

This allows ReALM to capture short-term and long-term purchasing patterns (e.g., replenishment cycles, substitution preferences, delayed effects), while training orders of magnitude faster than deep learning approaches such as DNNTSP or Sets2Sets. Overall,

  • ReALM achieves state-of-the-art or near-SOTA accuracy on TaFeng and Recombee's production NBR1 dataset.
  • Training is 100×–1,000× faster than deep baselines (seconds vs. hours).
  • The model scales gracefully to hundreds of thousands of items.
  • Weight matrices offer direct interpretability (temporal triggers, replenishment cycles, lagged effects).

💡 Check out the conference poster for more details.

Model Architecture

ReALM Objective

ReALM formulates next-basket prediction as a recurrent linear objective:
the next basket $X^{(t+1)}$ is predicted by linearly combining a user’s past baskets across both temporal and sequential windows.
Each dependency type is represented by a learned item–item matrix, making the model fully interpretable.

1. Temporal Dependencies

Matrices $W^{(0)}, W^{(1)}, \ldots, W^{(L-1)}$ capture how baskets from $L$, $L-1$, … months ago influence the next basket.

Interpretation: replenishment cycles, seasonal repeat patterns, long-term delayed effects.

2. Sequential Dependencies

Matrices $W_{(0)}, W_{(1)}, \ldots, W_{(K-1)}$ capture relationships with the most recent non-empty baskets, regardless of time gaps.

Interpretation: short-term triggers, co-purchase patterns, quick follow-up buys.
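Put together, the two dependency types above yield a prediction rule that is just a sum of matrix–vector products. Below is a minimal NumPy sketch of that rule; the array names, toy sizes, and random stand-in weights are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50          # catalog size (toy)
L, K = 3, 2     # number of temporal and sequential lags

# Learned item-item dependency matrices (random stand-ins here).
W_temporal = [rng.standard_normal((n, n)) * 0.1 for _ in range(L)]
W_sequential = [rng.standard_normal((n, n)) * 0.1 for _ in range(K)]

# Binary basket vectors: one per calendar month (temporal lags) and
# one per most recent non-empty basket (sequential lags).
monthly_baskets = [rng.integers(0, 2, n) for _ in range(L)]
recent_baskets = [rng.integers(0, 2, n) for _ in range(K)]

def predict_next_basket(monthly, recent):
    """Score the next basket as a linear combination of past baskets."""
    scores = np.zeros(n)
    for l in range(L):
        scores += monthly[l] @ W_temporal[l]
    for k in range(K):
        scores += recent[k] @ W_sequential[k]
    return scores

scores = predict_next_basket(monthly_baskets, recent_baskets)
top_items = np.argsort(scores)[::-1][:10]  # recommend the 10 highest-scoring items
```

Because the model is linear, each trained weight matrix can be read directly: entry $(i, j)$ of a lag's matrix says how much purchasing item $i$ at that lag raises the score of item $j$ in the next basket.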

⚡️ Efficient Training (Approximate Closed-Form Solution)

Training ReALM requires solving a large linear system of size $(L+K)n$.
Instead of explicitly inverting this system, ReALM uses the sparse approximate inverse method from
SANSA, enabling scalable, memory-efficient optimization for large item catalogs.
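For intuition, the exact (dense) closed form is ordinary ridge regression over the stacked lagged baskets; ReALM replaces the dense matrix inverse with SANSA's sparse approximate inverse. A toy dense sketch, with all names and sizes as assumptions rather than the repository's code:

```python
import numpy as np

rng = np.random.default_rng(1)
n, L, K = 40, 2, 2
m = 500  # number of (history, next-basket) training pairs

# Design matrix: each row concatenates the L temporal and K sequential
# lagged baskets, giving (L + K) * n columns.
A = rng.integers(0, 2, (m, (L + K) * n)).astype(float)
Y = rng.integers(0, 2, (m, n)).astype(float)  # observed next baskets

lam = 10.0  # ridge regularization strength

# Dense closed-form ridge solution. ReALM avoids this explicit solve by
# approximating the inverse of G sparsely (via SANSA) at scale.
G = A.T @ A + lam * np.eye((L + K) * n)  # Gram matrix of size (L+K)n
W = np.linalg.solve(G, A.T @ Y)          # stacked weight blocks, one per lag
```

The Gram matrix has $(L+K)n$ rows, so the dense solve scales cubically in $(L+K)n$; the sparse approximate inverse is what makes catalogs with hundreds of thousands of items tractable.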

Citation

If you find this work useful, please consider citing our paper:

@inproceedings{10.1145/3705328.3759313,
author = {Zme\v{s}kalov\'{a}, Tereza and Ledent, Antoine and Spi\v{s}\'{a}k, Martin and Kord\'{\i}k, Pavel and Alves, Rodrigo},
title = {Recurrent Autoregressive Linear Model for Next-Basket Recommendation},
year = {2025},
isbn = {9798400713644},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3705328.3759313},
doi = {10.1145/3705328.3759313},
abstract = {Next-basket recommendation aims to predict the (sets of) items that a user is most likely to purchase during their next visit, capturing both short-term sequential patterns and long-term user preferences. However, effectively modeling these dynamics remains a challenge for traditional methods, which often struggle with interpretability and computational efficiency, particularly when dealing with intricate temporal dependencies and inter-item relationships. In this paper, we propose ReALM, a Recurrent Autoregressive Linear Model that explicitly captures temporal item-to-item dependencies across multiple time steps. By leveraging a recurrent loss function and a closed-form optimization solution, our approach offers both interpretability and scalability while maintaining competitive accuracy. Experimental results on real-world datasets demonstrate that ReALM outperforms several state-of-the-art baselines in both recommendation quality and efficiency, offering a robust and interpretable solution for modern personalization systems.},
booktitle = {Proceedings of the Nineteenth ACM Conference on Recommender Systems},
pages = {1273–1278},
numpages = {6},
keywords = {Next-basket Recommendation, Sparse Approximation, Scalability},
series = {RecSys '25}
}

License

MIT License
