Fix spelling errors #351

Open: wants to merge 4 commits into base: main
11 changes: 10 additions & 1 deletion .pre-commit-config.yaml
@@ -5,4 +5,13 @@ repos:
        name: Run pre-commit checks (format, sort-imports, check-mypy)
        entry: bash -c 'poetry run format && poetry run sort-imports && poetry run check-mypy'
        language: system
-        pass_filenames: false
+        pass_filenames: false
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.2.6
+    hooks:
+      - id: codespell
+        name: Check spelling
+        args:
+          - --write-changes
+          - --skip=*.pyc,*.pyo,*.lock,*.git,*.mypy_cache,__pycache__,*.egg-info,.pytest_cache,docs/_build,env,venv,.venv
+          - --ignore-words-list=enginee
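The new codespell hook fixes typos from a dictionary of known misspellings; `--write-changes` applies fixes in place, `--skip` excludes generated and environment directories, and `--ignore-words-list=enginee` whitelists the intentional `enginee*` prefix used in the docs. A toy Python sketch of the idea (not codespell's actual implementation — the word list here is just the typos fixed in this PR):

```python
import re

# Toy stand-in for codespell's dictionary (the real tool ships thousands of entries).
TYPOS = {
    "soley": "solely",
    "threshhold": "threshold",
    "embdding": "embedding",
    "asyncronous": "asynchronous",
    "noticable": "noticeable",
    "hinderance": "hindrance",
    "everthing": "everything",
}
# Mirrors --ignore-words-list=enginee: never "correct" this token.
IGNORE = {"enginee"}

def fix_line(line: str) -> str:
    """Replace known misspellings in a line, leaving ignored words alone."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        if word.lower() in IGNORE:
            return word
        return TYPOS.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", repl, line)
```

With the hook installed, the same check runs locally via `pre-commit run codespell --all-files`.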
6 changes: 3 additions & 3 deletions docs/user_guide/02_hybrid_queries.ipynb
@@ -1090,7 +1090,7 @@
"source": [
"## Non-vector Queries\n",
"\n",
-"In some cases, you may not want to run a vector query, but just use a ``FilterExpression`` similar to a SQL query. The ``FilterQuery`` class enable this functionality. It is similar to the ``VectorQuery`` class but soley takes a ``FilterExpression``."
+"In some cases, you may not want to run a vector query, but just use a ``FilterExpression`` similar to a SQL query. The ``FilterQuery`` class enable this functionality. It is similar to the ``VectorQuery`` class but solely takes a ``FilterExpression``."
]
},
{
@@ -1434,7 +1434,7 @@
],
"metadata": {
"kernelspec": {
-"display_name": "redisvl-VnTEShF2-py3.13",
+"display_name": "env",
"language": "python",
"name": "python3"
},
@@ -1448,7 +1448,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.13.2"
+"version": "3.12.8"
},
"orig_nbformat": 4
},
25 changes: 7 additions & 18 deletions docs/user_guide/03_llmcache.ipynb
@@ -1,16 +1,5 @@
{
"cells": [
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"# Semantic Caching for LLMs\n",
-"\n",
-"RedisVL provides a ``SemanticCache`` interface to utilize Redis' built-in caching capabilities AND vector search in order to store responses from previously-answered questions. This reduces the number of requests and tokens sent to the Large Language Models (LLM) service, decreasing costs and enhancing application throughput (by reducing the time taken to generate responses).\n",
-"\n",
-"This notebook will go over how to use Redis as a Semantic Cache for your applications"
-]
-},
{
"cell_type": "markdown",
"metadata": {},
@@ -111,7 +100,7 @@
" name=\"llmcache\", # underlying search index name\n",
" redis_url=\"redis://localhost:6379\", # redis connection url string\n",
" distance_threshold=0.1, # semantic cache distance threshold\n",
-" vectorizer=HFTextVectorizer(\"redis/langcache-embed-v1\"), # embdding model\n",
+" vectorizer=HFTextVectorizer(\"redis/langcache-embed-v1\"), # embedding model\n",
")"
]
},
@@ -316,12 +305,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"## Customize the Distance Threshhold\n",
+"## Customize the Distance Threshold\n",
"\n",
-"For most use cases, the right semantic similarity threshhold is not a fixed quantity. Depending on the choice of embedding model,\n",
-"the properties of the input query, and even business use case -- the threshhold might need to change. \n",
+"For most use cases, the right semantic similarity threshold is not a fixed quantity. Depending on the choice of embedding model,\n",
+"the properties of the input query, and even business use case -- the threshold might need to change. \n",
"\n",
-"Fortunately, you can seamlessly adjust the threshhold at any point like below:"
+"Fortunately, you can seamlessly adjust the threshold at any point like below:"
]
},
{
@@ -924,7 +913,7 @@
],
"metadata": {
"kernelspec": {
-"display_name": "redisvl-VnTEShF2-py3.13",
+"display_name": "env",
"language": "python",
"name": "python3"
},
@@ -938,7 +927,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.13.2"
+"version": "3.12.8"
},
"orig_nbformat": 4
},
6 changes: 3 additions & 3 deletions docs/user_guide/04_vectorizers.ipynb
@@ -175,7 +175,7 @@
}
],
"source": [
-"# openai also supports asyncronous requests, which we can use to speed up the vectorization process.\n",
+"# openai also supports asynchronous requests, which we can use to speed up the vectorization process.\n",
"embeddings = await oai.aembed_many(sentences)\n",
"print(\"Number of Embeddings:\", len(embeddings))\n"
]
@@ -495,7 +495,7 @@
"\n",
"mistral = MistralAITextVectorizer()\n",
"\n",
-"# embed a sentence using their asyncronous method\n",
+"# embed a sentence using their asynchronous method\n",
"test = await mistral.aembed(\"This is a test sentence.\")\n",
"print(\"Vector dimensions: \", len(test))\n",
"print(test[:10])"
@@ -775,7 +775,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.11.11"
+"version": "3.12.8"
},
"orig_nbformat": 4
},
2 changes: 1 addition & 1 deletion docs/user_guide/05_hash_vs_json.ipynb
@@ -282,7 +282,7 @@
"from redisvl.query import VectorQuery\n",
"from redisvl.query.filter import Tag, Text, Num\n",
"\n",
-"t = (Tag(\"credit_score\") == \"high\") & (Text(\"job\") % \"enginee*\") & (Num(\"age\") > 17)\n",
+"t = (Tag(\"credit_score\") == \"high\") & (Text(\"job\") % \"enginee*\") & (Num(\"age\") > 17) # codespell:ignore enginee\n",
"\n",
"v = VectorQuery(\n",
" vector=[0.1, 0.1, 0.5],\n",
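The `enginee*` term kept in the query above is a RediSearch prefix wildcard in the `Text` filter, matching tokens such as `engineer` or `engineering` — which is why the PR whitelists `enginee` in the codespell config instead of "fixing" it. A rough plain-Python illustration of trailing-wildcard matching (an analogy, not RediSearch's implementation):

```python
def prefix_match(pattern: str, token: str) -> bool:
    """Trailing-* wildcard match, in the spirit of RediSearch's 'enginee*'."""
    if pattern.endswith("*"):
        # "enginee*" matches any token that starts with "enginee".
        return token.startswith(pattern[:-1])
    # Without a wildcard, require an exact token match.
    return token == pattern
```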
4 changes: 2 additions & 2 deletions docs/user_guide/07_message_history.ipynb
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Large Language Models are inherently stateless and have no knowledge of previous interactions with a user, or even of previous parts of the current conversation. While this may not be noticable when asking simple questions, it becomes a hinderance when engaging in long running conversations that rely on conversational context.\n",
+"Large Language Models are inherently stateless and have no knowledge of previous interactions with a user, or even of previous parts of the current conversation. While this may not be noticeable when asking simple questions, it becomes a hindrance when engaging in long running conversations that rely on conversational context.\n",
"\n",
"The solution to this problem is to append the previous conversation history to each subsequent call to the LLM.\n",
"\n",
@@ -276,7 +276,7 @@
"source": [
"You can adjust the degree of semantic similarity needed to be included in your context.\n",
"\n",
-"Setting a distance threshold close to 0.0 will require an exact semantic match, while a distance threshold of 1.0 will include everthing."
+"Setting a distance threshold close to 0.0 will require an exact semantic match, while a distance threshold of 1.0 will include everything."
]
},
{