Releases: xhluca/bm25s
0.2.10
What's Changed
- fix: update tokenize docstring to avoid SyntaxWarning - invalid escape sequence \w by @yaminivibha in #124
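The fix above addresses newer Python versions (3.12+), which emit a SyntaxWarning for unrecognized escape sequences such as `\w` inside ordinary string literals, including docstrings. A minimal sketch of the pattern (the function name and body here are illustrative, not the actual bm25s code):

```python
import re

def tokenize(text):
    r"""Split text into lowercase word tokens using the regex ``\w+``.

    The docstring is a raw string, so the ``\w`` escape sequence does
    not trigger a SyntaxWarning on Python 3.12 and later.
    """
    return re.findall(r"\w+", text.lower())
```

Prefixing the docstring with `r` (or doubling the backslash as `\\w`) silences the warning without changing the documented text.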
New Contributors
- @yaminivibha made their first contribution in #124
Full Changelog: 0.2.9...0.2.10
0.2.9
What's Changed
- fix: raise ValueError when the corpus size is less than k by @emmanuel-stone in #117
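A sketch of what such a guard looks like: requesting more results than the corpus contains is a caller error, so it is better to fail loudly than to return a short or padded result. Function and parameter names below are illustrative, not the actual bm25s API:

```python
def retrieve_top_k(scores, k):
    # Hypothetical guard in the spirit of #117: BM25 retrieval cannot
    # return more results than there are documents in the corpus.
    if k > len(scores):
        raise ValueError(
            f"k ({k}) cannot exceed the corpus size ({len(scores)})"
        )
    # Return the indices of the k highest-scoring documents.
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
```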
New Contributors
- @emmanuel-stone made their first contribution in #117
Full Changelog: 0.2.8...0.2.9
0.2.8
What's Changed
New Contributors
- @Restodecoca made their first contribution in #102
Full Changelog: 0.2.7...0.2.8
0.2.7post1
What's Changed
- Fix query filtering and vocabulary dict by @mossbee in #92 (1/2)
- Fix query filtering and vocabulary dict by @xhluca in #96 (2/2)
- Update corpus.py by @Restodecoca in #102
- Add pypi and pepy badges by @xhluca in #103
Notes
The behavior of tokenizers has changed with respect to the null token: it is now added to the vocabulary first rather than at the end, since the previous approach was inconsistent with the general convention that the "" string should map to 0. The change is backward compatible in the sense that tokenizers work the same way as before with the retriever object, but expect tokenizers created before 0.2.7 to assign the null token a different id than tokenizers from 0.2.7 onward.
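To illustrate the new convention, here is a minimal vocabulary builder that reserves id 0 for the null token before assigning ids to real tokens (illustrative code, not the bm25s implementation):

```python
def build_vocab(tokens):
    # Reserve index 0 for the null token "" first, matching the
    # post-0.2.7 convention, then assign ids to tokens in order of
    # first appearance.
    word_to_id = {"": 0}
    for tok in tokens:
        if tok not in word_to_id:
            word_to_id[tok] = len(word_to_id)
    return word_to_id
```

Under the pre-0.2.7 behavior, "" would instead have received the last id, so saved vocabularies from the two eras differ even though both work with the retriever.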
New Contributors
- @Restodecoca made their first contribution in #102
Full Changelog: 0.2.6...0.2.7
0.2.7pre3
What's Changed
- Update corpus.py by @Restodecoca in #102
New Contributors
- @Restodecoca made their first contribution in #102
Full Changelog: 0.2.7pre2...0.2.7pre3
0.2.7pre2
Full Changelog: 0.2.7pre1...0.2.7pre2
0.2.7pre1
What's Changed
Notes
- The behavior of tokenizers has changed with respect to the null token: it is now added to the vocabulary first rather than at the end, since the previous approach was inconsistent with the general convention that the "" string should map to 0. The change is backward compatible in the sense that tokenizers work the same way as before with the retriever object, but expect tokenizers created before 0.2.7 to assign the null token a different id than tokenizers from 0.2.7 onward.
Full Changelog: 0.2.6...0.2.7
0.2.6
0.2.5
0.2.4
What's Changed
- Fix crash tokenizing with empty word_to_id by @mgraczyk in #72
- Create nltk_stemmer.py by @aflip in #77
aa31a23: This commit focused on improving the handling of unknown tokens during tokenization and retrieval, strengthening error handling, and improving logging for easier debugging.
- bm25s/__init__.py: Added a check in the get_scores_from_ids method to raise a ValueError if max_token_id exceeds the number of tokens in the index, and improved handling of empty queries in the _get_top_k_results method by returning zero scores for all documents.
- bm25s/tokenization.py: Fixed the behavior of streaming_tokenize to correctly handle adding new tokens and updating word_to_id, word_to_stem, and stem_to_sid.
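Taken together, the scoring checks described above can be sketched as follows. The names and data layout (a mapping from token id to per-document score array) are illustrative assumptions, not the actual bm25s internals:

```python
def get_scores_from_ids(query_token_ids, token_scores):
    # token_scores: dict mapping token id -> list of per-document scores.
    num_docs = len(next(iter(token_scores.values()), []))

    if not query_token_ids:
        # Empty query: return zero scores for every document.
        return [0.0] * num_docs

    # Guard against token ids that are not in the index.
    max_token_id = max(query_token_ids)
    if max_token_id >= len(token_scores):
        raise ValueError(
            f"token id {max_token_id} exceeds the number of tokens "
            f"in the index ({len(token_scores)})"
        )

    # Sum each query token's contribution across all documents.
    scores = [0.0] * num_docs
    for tid in query_token_ids:
        for doc, s in enumerate(token_scores[tid]):
            scores[doc] += s
    return scores
```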
New Contributors
Full Changelog: 0.2.3...0.2.4