
Commit f5f54b7

add codespell, fix more spelling issues
1 parent 3aaf7e7 commit f5f54b7

File tree

6 files changed: +216 / -346 lines changed


docs/user_guide/02_hybrid_queries.ipynb

Lines changed: 3 additions & 3 deletions
```diff
@@ -1090,7 +1090,7 @@
 "source": [
 "## Non-vector Queries\n",
 "\n",
-"In some cases, you may not want to run a vector query, but just use a ``FilterExpression`` similar to a SQL query. The ``FilterQuery`` class enable this functionality. It is similar to the ``VectorQuery`` class but soley takes a ``FilterExpression``."
+"In some cases, you may not want to run a vector query, but just use a ``FilterExpression`` similar to a SQL query. The ``FilterQuery`` class enable this functionality. It is similar to the ``VectorQuery`` class but solely takes a ``FilterExpression``."
 ]
 },
 {
@@ -1434,7 +1434,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "redisvl-VnTEShF2-py3.13",
+"display_name": "env",
 "language": "python",
 "name": "python3"
 },
@@ -1448,7 +1448,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.13.2"
+"version": "3.12.8"
 },
 "orig_nbformat": 4
 },
```
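For context, the corrected cell above describes redisvl's ``FilterQuery``, which runs a ``FilterExpression`` on its own, with no query vector involved. A minimal sketch of that usage follows; the index name, field names, and filter values are illustrative assumptions, not taken from the notebook:

```python
# Hedged sketch: a pure-filter (non-vector) query with redisvl's FilterQuery.
# The index name and schema fields below are assumptions for illustration.
from redisvl.index import SearchIndex
from redisvl.query import FilterQuery
from redisvl.query.filter import Tag

# Connect to an index that already exists in Redis
index = SearchIndex.from_existing("user_index", redis_url="redis://localhost:6379")

# Build a SQL-like filter: match documents whose "job" tag equals "engineer"
job_filter = Tag("job") == "engineer"

# FilterQuery solely takes a FilterExpression -- no query vector required
query = FilterQuery(filter_expression=job_filter, return_fields=["user", "job", "age"])
results = index.query(query)
```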

docs/user_guide/03_llmcache.ipynb

Lines changed: 7 additions & 18 deletions
```diff
@@ -1,16 +1,5 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"# Semantic Caching for LLMs\n",
-"\n",
-"RedisVL provides a ``SemanticCache`` interface to utilize Redis' built-in caching capabilities AND vector search in order to store responses from previously-answered questions. This reduces the number of requests and tokens sent to the Large Language Models (LLM) service, decreasing costs and enhancing application throughput (by reducing the time taken to generate responses).\n",
-"\n",
-"This notebook will go over how to use Redis as a Semantic Cache for your applications"
-]
-},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -111,7 +100,7 @@
 " name=\"llmcache\", # underlying search index name\n",
 " redis_url=\"redis://localhost:6379\", # redis connection url string\n",
 " distance_threshold=0.1, # semantic cache distance threshold\n",
-" vectorizer=HFTextVectorizer(\"redis/langcache-embed-v1\"), # embdding model\n",
+" vectorizer=HFTextVectorizer(\"redis/langcache-embed-v1\"), # embedding model\n",
 ")"
 ]
 },
@@ -316,12 +305,12 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Customize the Distance Threshhold\n",
+"## Customize the Distance Threshold\n",
 "\n",
-"For most use cases, the right semantic similarity threshhold is not a fixed quantity. Depending on the choice of embedding model,\n",
-"the properties of the input query, and even business use case -- the threshhold might need to change. \n",
+"For most use cases, the right semantic similarity threshold is not a fixed quantity. Depending on the choice of embedding model,\n",
+"the properties of the input query, and even business use case -- the threshold might need to change. \n",
 "\n",
-"Fortunately, you can seamlessly adjust the threshhold at any point like below:"
+"Fortunately, you can seamlessly adjust the threshold at any point like below:"
 ]
 },
 {
@@ -924,7 +913,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "redisvl-VnTEShF2-py3.13",
+"display_name": "env",
 "language": "python",
 "name": "python3"
 },
@@ -938,7 +927,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.13.2"
+"version": "3.12.8"
 },
 "orig_nbformat": 4
 },
```
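The hunks above touch both the ``SemanticCache`` constructor and the "Customize the Distance Threshold" cell. A minimal sketch combining the two, using the constructor arguments shown in the diff; the stored prompt/response pair and the new threshold value are illustrative assumptions:

```python
# Sketch of the SemanticCache setup shown in the diff, plus a runtime
# threshold adjustment. Requires a local Redis Stack instance.
from redisvl.extensions.llmcache import SemanticCache
from redisvl.utils.vectorize import HFTextVectorizer

llmcache = SemanticCache(
    name="llmcache",                     # underlying search index name
    redis_url="redis://localhost:6379",  # redis connection url string
    distance_threshold=0.1,              # semantic cache distance threshold
    vectorizer=HFTextVectorizer("redis/langcache-embed-v1"),  # embedding model
)

# Illustrative entry -- the prompt/response pair is an assumption
llmcache.store(prompt="What is the capital of France?", response="Paris")

# Adjust the threshold at any point, as the corrected cell describes
llmcache.set_threshold(0.3)
hit = llmcache.check(prompt="What really is the capital of France?")
```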

docs/user_guide/04_vectorizers.ipynb

Lines changed: 3 additions & 3 deletions
```diff
@@ -175,7 +175,7 @@
 }
 ],
 "source": [
-"# openai also supports asyncronous requests, which we can use to speed up the vectorization process.\n",
+"# openai also supports asynchronous requests, which we can use to speed up the vectorization process.\n",
 "embeddings = await oai.aembed_many(sentences)\n",
 "print(\"Number of Embeddings:\", len(embeddings))\n"
 ]
@@ -495,7 +495,7 @@
 "\n",
 "mistral = MistralAITextVectorizer()\n",
 "\n",
-"# embed a sentence using their asyncronous method\n",
+"# embed a sentence using their asynchronous method\n",
 "test = await mistral.aembed(\"This is a test sentence.\")\n",
 "print(\"Vector dimensions: \", len(test))\n",
 "print(test[:10])"
@@ -775,7 +775,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.11"
+"version": "3.12.8"
 },
 "orig_nbformat": 4
 },
```
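Both corrected comments refer to the vectorizers' asynchronous methods. A self-contained sketch of the OpenAI variant, wrapped for use outside a notebook's implicit event loop; the model name is an assumption, and ``OPENAI_API_KEY`` must be set in the environment:

```python
# Hedged sketch: asynchronous embedding with redisvl's OpenAITextVectorizer.
# Assumes OPENAI_API_KEY is exported; the model name is an assumption.
import asyncio
from redisvl.utils.vectorize import OpenAITextVectorizer

async def main():
    oai = OpenAITextVectorizer(model="text-embedding-ada-002")
    sentences = ["This is a test sentence.", "Here is another one."]
    # aembed_many issues the embedding requests asynchronously,
    # which speeds up vectorization of many inputs
    embeddings = await oai.aembed_many(sentences)
    print("Number of Embeddings:", len(embeddings))

asyncio.run(main())
```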

docs/user_guide/07_message_history.ipynb

Lines changed: 2 additions & 2 deletions
```diff
@@ -11,7 +11,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Large Language Models are inherently stateless and have no knowledge of previous interactions with a user, or even of previous parts of the current conversation. While this may not be noticable when asking simple questions, it becomes a hinderance when engaging in long running conversations that rely on conversational context.\n",
+"Large Language Models are inherently stateless and have no knowledge of previous interactions with a user, or even of previous parts of the current conversation. While this may not be noticeable when asking simple questions, it becomes a hindrance when engaging in long running conversations that rely on conversational context.\n",
 "\n",
 "The solution to this problem is to append the previous conversation history to each subsequent call to the LLM.\n",
 "\n",
@@ -276,7 +276,7 @@
 "source": [
 "You can adjust the degree of semantic similarity needed to be included in your context.\n",
 "\n",
-"Setting a distance threshold close to 0.0 will require an exact semantic match, while a distance threshold of 1.0 will include everthing."
+"Setting a distance threshold close to 0.0 will require an exact semantic match, while a distance threshold of 1.0 will include everything."
 ]
 },
 {
```
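The corrected cell explains the distance threshold for semantic message history: near 0.0 demands an almost exact semantic match, while 1.0 admits everything. A hedged sketch of adjusting it; the class name and import path follow recent redisvl releases and should be checked against the installed version, and the history name and messages are assumptions:

```python
# Hedged sketch: tuning the semantic-similarity threshold for message history.
# Import path and method names follow recent redisvl releases -- verify
# against your installed version. History name and messages are assumptions.
from redisvl.extensions.message_history import SemanticMessageHistory

history = SemanticMessageHistory(name="chat", redis_url="redis://localhost:6379")
history.add_message({"role": "user", "content": "My favorite color is blue."})

# Close to 0.0 -> near-exact semantic match required; 1.0 -> everything matches
history.set_distance_threshold(0.35)
context = history.get_relevant("what color do I like?")
```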
