(RecSys 2025) By Recombee Research
ReALM is a simple, scalable, and interpretable model for next-basket recommendation (NBR).
Instead of relying on deep neural networks or heavy sequential models, ReALM directly learns item-item dependency matrices across multiple temporal and sequential lags, using:
- a recurrent linear formulation,
- a closed-form (ridge regression) objective, and
- a sparse approximate matrix-inversion procedure for efficient computation on large item catalogs.
This allows ReALM to capture short-term and long-term purchasing patterns (e.g., replenishment cycles, substitution preferences, delayed effects), while training orders of magnitude faster than deep learning approaches such as DNNTSP or Sets2Sets. Overall:
- ReALM achieves state-of-the-art or near-SOTA accuracy on TaFeng and Recombee's production NBR1 dataset.
- Training is 100×–1,000× faster than deep baselines (seconds vs. hours).
- The model scales gracefully to hundreds of thousands of items.
- Weight matrices offer direct interpretability (temporal triggers, replenishment cycles, lagged effects).
💡 Check out the conference poster for more details
ReALM formulates next-basket prediction as a recurrent linear objective: the score vector for the next basket is a sum of the user's past basket vectors, each multiplied by a lag-specific learned item–item weight matrix.
Each dependency type is represented by its own learned item–item matrix, making the model fully interpretable.
Temporal-lag matrices relate the next basket to baskets purchased a fixed amount of time in the past.
Interpretation: replenishment cycles, seasonal repeat patterns, long-term delayed effects.
Sequential-lag matrices relate the next basket to the immediately preceding baskets in the purchase sequence.
Interpretation: short-term triggers, co-purchase patterns, quick follow-up buys.
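The recurrent linear prediction described above can be sketched in a few lines of NumPy. This is an illustrative toy only: the variable names, the number of lags, and the use of random stand-in weight matrices are assumptions, not ReALM's exact formulation.

```python
import numpy as np

# Toy sketch of a ReALM-style linear prediction (illustrative only).
# Baskets are multi-hot vectors over an item catalog of size n_items.
n_items = 5   # catalog size (tiny, for demonstration)
n_lags = 2    # number of lags considered

rng = np.random.default_rng(0)
# One learned item-item weight matrix per lag (random stand-ins here;
# in ReALM these are fit in closed form).
W = [rng.normal(scale=0.1, size=(n_items, n_items)) for _ in range(n_lags)]

# A user's two most recent baskets, most recent first.
past_baskets = [
    np.array([1, 0, 1, 0, 0], dtype=float),  # basket at step t
    np.array([0, 1, 0, 0, 1], dtype=float),  # basket at step t-1
]

# Next-basket scores: sum of lag-specific linear maps of past baskets.
scores = sum(W[l] @ past_baskets[l] for l in range(n_lags))
top_k = np.argsort(-scores)[:3]  # recommend the 3 highest-scoring items
print(top_k)
```

Because each `W[l]` acts directly on item indicators, any entry `W[l][i, j]` can be read off as "how much buying item `j` at lag `l` raises the score of item `i` next time", which is what makes the matrices interpretable.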
Training ReALM requires solving a large linear system whose dimension grows with the number of items in the catalog.
Instead of explicitly inverting this system, ReALM uses the sparse approximate inverse method from
SANSA, enabling scalable, memory-efficient optimization for large item catalogs.
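For intuition, here is a minimal sketch of the closed-form ridge-regression solve behind a single-lag linear model. The names and data are made up, and the full recurrent multi-lag objective and SANSA's sparse approximate inverse are not reproduced; at this toy scale we simply solve the regularized normal equations exactly.

```python
import numpy as np

# Minimal single-lag ridge sketch (illustrative; not ReALM's full objective).
rng = np.random.default_rng(1)
n_items, n_baskets = 6, 200

# X: rows are previous baskets; Y: rows are the baskets that followed them.
X = (rng.random((n_baskets, n_items)) < 0.3).astype(float)
Y = (rng.random((n_baskets, n_items)) < 0.3).astype(float)

lam = 1.0  # ridge regularization strength
# Closed-form ridge solution: W = (X^T X + lam*I)^(-1) X^T Y.
# The system matrix is n_items x n_items; at catalog scale SANSA replaces
# the explicit inverse with a sparse approximate one. Here we solve exactly.
gram = X.T @ X + lam * np.eye(n_items)
W = np.linalg.solve(gram, X.T @ Y)

print(W.shape)  # one interpretable item-to-item weight matrix
```

The key scalability point is that the system matrix is item-by-item, so its size depends on the catalog, not on the number of users or baskets; for hundreds of thousands of items the exact inverse is infeasible, which is where the sparse approximate inversion comes in.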
If you find this work useful, please consider citing our paper:
@inproceedings{10.1145/3705328.3759313,
author = {Zme\v{s}kalov\'{a}, Tereza and Ledent, Antoine and Spi\v{s}\'{a}k, Martin and Kord\'{\i}k, Pavel and Alves, Rodrigo},
title = {Recurrent Autoregressive Linear Model for Next-Basket Recommendation},
year = {2025},
isbn = {9798400713644},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3705328.3759313},
doi = {10.1145/3705328.3759313},
abstract = {Next-basket recommendation aims to predict the (sets of) items that a user is most likely to purchase during their next visit, capturing both short-term sequential patterns and long-term user preferences. However, effectively modeling these dynamics remains a challenge for traditional methods, which often struggle with interpretability and computational efficiency, particularly when dealing with intricate temporal dependencies and inter-item relationships. In this paper, we propose ReALM, a Recurrent Autoregressive Linear Model that explicitly captures temporal item-to-item dependencies across multiple time steps. By leveraging a recurrent loss function and a closed-form optimization solution, our approach offers both interpretability and scalability while maintaining competitive accuracy. Experimental results on real-world datasets demonstrate that ReALM outperforms several state-of-the-art baselines in both recommendation quality and efficiency, offering a robust and interpretable solution for modern personalization systems.},
booktitle = {Proceedings of the Nineteenth ACM Conference on Recommender Systems},
pages = {1273–1278},
numpages = {6},
keywords = {Next-basket Recommendation, Sparse Approximation, Scalability},
series = {RecSys '25}
}