
Commit 04976db

docs: fix typos (ggml-org#7124)
* fix typo
* fix typos
* fix typo
* fix typos
* fix typo
* fix typos
1 parent 947d3ad commit 04976db

6 files changed, 8 insertions(+), 8 deletions(-)


docs/BLIS.md (+1 -1)

@@ -23,7 +23,7 @@ Install BLIS:
 sudo make install
 ```
 
-We recommend using openmp since it's easier to modify the cores been used.
+We recommend using openmp since it's easier to modify the cores being used.
 
 ### llama.cpp compilation
 
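The corrected line is about controlling which cores the openmp runtime uses at run time. As a minimal sketch, assuming a llama.cpp build linked against BLIS with openmp enabled and a hypothetical model path:

```sh
# Restrict the openmp worker pool (and thus BLIS) to 8 threads.
# The model path is an assumption for illustration.
OMP_NUM_THREADS=8 ./main -m models/7B/ggml-model-q4_0.gguf -p "Hello" -n 64
```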

docs/HOWTO-add-model.md (+2 -2)

@@ -96,9 +96,9 @@ NOTE: The dimensions in `ggml` are typically in the reverse order of the `pytorc
 
 This is the funniest part, you have to provide the inference graph implementation of the new model architecture in `llama_build_graph`.
 
-Have a look to existing implementation like `build_llama`, `build_dbrx` or `build_bert`.
+Have a look at existing implementation like `build_llama`, `build_dbrx` or `build_bert`.
 
-When implementing a new graph, please note that the underlying `ggml` backends might not support them all, support of missing backend operations can be added in another PR.
+When implementing a new graph, please note that the underlying `ggml` backends might not support them all, support for missing backend operations can be added in another PR.
 
 Note: to debug the inference graph: you can use [eval-callback](../examples/eval-callback).
 
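The `eval-callback` example mentioned in the last context line prints intermediate tensor data as the graph is evaluated, which is handy when validating a new `build_*` function. A hypothetical invocation, assuming the example binary has been built and the model path below:

```sh
# Evaluate a short prompt and dump per-operation tensor values.
# Binary location and model path are assumptions.
./eval-callback -m models/7B/ggml-model-q4_0.gguf -p "hello" -n 1
```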

examples/llava/README.md (+1 -1)

@@ -56,7 +56,7 @@ python ./examples/llava/convert-image-encoder-to-gguf.py -m ../clip-vit-large-pa
 python ./convert.py ../llava-v1.5-7b --skip-unknown
 ```
 
-Now both the LLaMA part and the image encoder is in the `llava-v1.5-7b` directory.
+Now both the LLaMA part and the image encoder are in the `llava-v1.5-7b` directory.
 
 ## LLaVA 1.6 gguf conversion
 1) First clone a LLaVA 1.6 model:
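With the LLaMA part and the image encoder both converted by the LLaVA 1.5 steps above, the model can be exercised end to end with the `llava-cli` example. A minimal sketch, assuming the GGUF file names produced by the conversion run (they may differ) and a hypothetical image path:

```sh
# Run the combined model against one image.
# Both .gguf file names and the image path are assumptions.
./llava-cli -m ../llava-v1.5-7b/ggml-model-f16.gguf \
  --mmproj ../llava-v1.5-7b/mmproj-model-f16.gguf \
  --image some-image.jpg -p "Describe this image."
```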

examples/main/README.md (+2 -2)

@@ -143,7 +143,7 @@ The `--ctx-size` option allows you to set the size of the prompt context used by
 
 ### Extended Context Size
 
-Some fine-tuned models have extended the context length by scaling RoPE. For example, if the original pre-trained model have a context length (max sequence length) of 4096 (4k) and the fine-tuned model have 32k. That is a scaling factor of 8, and should work by setting the above `--ctx-size` to 32768 (32k) and `--rope-scale` to 8.
+Some fine-tuned models have extended the context length by scaling RoPE. For example, if the original pre-trained model has a context length (max sequence length) of 4096 (4k) and the fine-tuned model has 32k. That is a scaling factor of 8, and should work by setting the above `--ctx-size` to 32768 (32k) and `--rope-scale` to 8.
 
 - `--rope-scale N`: Where N is the linear scaling factor used by the fine-tuned model.
 
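The arithmetic in the corrected sentence can be made concrete: 32768 / 4096 = 8, so the two flags travel together. A minimal sketch, assuming a hypothetical 32k fine-tune:

```sh
# 4k base model fine-tuned to 32k context: linear scaling factor 32768/4096 = 8.
# The model path is an assumption.
./main -m models/llama-32k-finetune.gguf --ctx-size 32768 --rope-scale 8 -p "Hello"
```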

@@ -286,7 +286,7 @@ These options help improve the performance and memory usage of the LLaMA models.
 
 - `--numa distribute`: Pin an equal proportion of the threads to the cores on each NUMA node. This will spread the load amongst all cores on the system, utilitizing all memory channels at the expense of potentially requiring memory to travel over the slow links between nodes.
 - `--numa isolate`: Pin all threads to the NUMA node that the program starts on. This limits the number of cores and amount of memory that can be used, but guarantees all memory access remains local to the NUMA node.
-- `--numa numactl`: Pin threads to the CPUMAP that is passed to the program by starting it with the numactl utility. This is the most flexible mode, and allow arbitraty core usage patterns, for example a map that uses all the cores on one NUMA nodes, and just enough cores on a second node to saturate the inter-node memory bus.
+- `--numa numactl`: Pin threads to the CPUMAP that is passed to the program by starting it with the numactl utility. This is the most flexible mode, and allow arbitrary core usage patterns, for example a map that uses all the cores on one NUMA nodes, and just enough cores on a second node to saturate the inter-node memory bus.
 
 These flags attempt optimizations that help on some systems with non-uniform memory access. This currently consists of one of the above strategies, and disabling prefetch and readahead for mmap. The latter causes mapped pages to be faulted in on first access instead of all at once, and in combination with pinning threads to NUMA nodes, more of the pages end up on the NUMA node where they are used. Note that if the model is already in the system page cache, for example because of a previous run without this option, this will have little effect unless you drop the page cache first. This can be done by rebooting the system or on Linux by writing '3' to '/proc/sys/vm/drop_caches' as root.
 
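For the `--numa numactl` mode, the CPUMAP comes from the numactl invocation that launches the program. A minimal sketch, assuming a two-node machine and a hypothetical model path:

```sh
# Pin the process to cores 0-7 with numactl; llama.cpp then pins its
# threads within that CPUMAP. Core range and model path are assumptions.
numactl --physcpubind=0-7 ./main -m models/7B/ggml-model-q4_0.gguf --numa numactl -p "Hello"
```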

examples/sycl/README.md (+1 -1)

@@ -1,6 +1,6 @@
 # llama.cpp/example/sycl
 
-This example program provide the tools for llama.cpp for SYCL on Intel GPU.
+This example program provides the tools for llama.cpp for SYCL on Intel GPU.
 
 ## Tool
 
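The `## Tool` section is cut off by the hunk, but the tool this README describes is likely `ls-sycl-device`, which lists the SYCL devices visible to the runtime. A hypothetical invocation, assuming a SYCL-enabled build with the default build directory layout:

```sh
# List SYCL devices (build directory layout is an assumption).
./build/bin/ls-sycl-device
```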

grammars/README.md (+1 -1)

@@ -51,7 +51,7 @@ single-line ::= [^\n]+ "\n"`
 
 ## Sequences and Alternatives
 
-The order of symbols in a sequence matter. For example, in `"1. " move " " move "\n"`, the `"1. "` must come before the first `move`, etc.
+The order of symbols in a sequence matters. For example, in `"1. " move " " move "\n"`, the `"1. "` must come before the first `move`, etc.
 
 Alternatives, denoted by `|`, give different sequences that are acceptable. For example, in `move ::= pawn | nonpawn | castle`, `move` can be a `pawn` move, a `nonpawn` move, or a `castle`.
 
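To see sequence order and alternatives enforced during generation, a grammar file can be passed to the `main` example. A minimal sketch, assuming the chess-style rules quoted above are saved as `grammars/chess.gbnf` and a hypothetical model path:

```sh
# Constrain sampling with a GBNF grammar; output must match the `root` rule.
# The model path is an assumption.
./main -m models/7B/ggml-model-q4_0.gguf --grammar-file grammars/chess.gbnf -p "1."
```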
