docs/source/en/serialization.md (65 additions, 54 deletions)
rendered properly in your Markdown viewer.
-->
# Export to production
Export Transformers' models to different formats for optimized runtimes and devices. Deploy the same model to cloud providers or run it on mobile and edge devices. You don't need to rewrite the model from scratch for each deployment environment. Freely deploy across any inference ecosystem.
## ExecuTorch
[ExecuTorch](https://pytorch.org/executorch/stable/index.html) runs PyTorch models on mobile and edge devices. It exports a model into a graph of standardized operators, compiles the graph into an ExecuTorch program, and executes it on the target device. The runtime is lightweight and calculates the execution plan ahead of time.
Install [Optimum ExecuTorch](https://huggingface.co/docs/optimum-executorch/en/index) from source.
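A typical from-source install looks like the following; the GitHub repository URL is an assumption based on the `huggingface/optimum-executorch` project name, not quoted from this page:

```shell
# Install Optimum ExecuTorch from source (repo URL assumed)
pip install git+https://github.com/huggingface/optimum-executorch.git
```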
Export a Transformers model to ExecuTorch with the CLI tool.

```bash
optimum-cli export executorch \
    --model "Qwen/Qwen3-8B" \
    --task "text-generation" \
    --recipe "xnnpack" \
    --use_custom_sdpa \
    --use_custom_kv_cache \
    --qlinear 8da4w \
    --qembedding 8w \
    --output_dir="hf_smollm2"
```
Run the following command to view all export options.

```bash
optimum-cli export executorch --help
```
## ONNX
[ONNX](http://onnx.ai) is a shared language for describing models from different frameworks. It represents models as a graph of standardized operators with well-defined types, shapes, and metadata. Models serialize into compact protobuf files that you can deploy across optimized runtimes and engines.
[Optimum ONNX](https://huggingface.co/docs/optimum-onnx/index) exports models to ONNX with configuration objects. It supports many [architectures](https://huggingface.co/docs/optimum-onnx/onnx/overview) and is easily extendable. Export models through the CLI tool or programmatically.
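The package is installed with pip; the package name below comes from the install step elsewhere on this page:

```shell
pip install optimum-onnx
```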

### optimum-cli

Specify a model to export and the output directory with the `--model` argument. After the export runs, you should see logs indicating the progress and showing where the resulting `model.onnx` is saved.

```text
Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx...
	-[✓] ONNX model output names match reference model (start_logits, end_logits)
	- Validating ONNX Model output "start_logits":
		-[✓] (2, 16) matches (2, 16)
		-[✓] all values close (atol: 0.0001)
	- Validating ONNX Model output "end_logits":
		-[✓] (2, 16) matches (2, 16)
		-[✓] all values close (atol: 0.0001)
The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx
```
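For illustration, a CLI export takes this general shape; the DistilBERT checkpoint and output directory are examples rather than values prescribed by this page:

```shell
optimum-cli export onnx \
    --model distilbert/distilbert-base-uncased-distilled-squad \
    distilbert_base_uncased_squad_onnx/
```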
Run the following command to view all available arguments or refer to the [Export a model to ONNX with optimum.exporters.onnx](https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model) guide for more details.
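The help invocation mirrors the ExecuTorch one shown earlier:

```shell
optimum-cli export onnx --help
```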
To export a local model, save the weights and tokenizer files in the same directory. Pass the directory path to the `--model` argument and use the `--task` argument to specify the [task](https://huggingface.co/docs/optimum/exporters/task_manager#transformers). If you don't provide `--task`, the system auto-infers it from the model or uses an architecture without a task-specific head.

Deploy the model with any [runtime](https://onnx.ai/supported-tools.html#deployModel) that supports ONNX, including ONNX Runtime. The example below demonstrates loading and running a model with ONNX Runtime.

```py
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering

>>> # load the exported ONNX model and tokenizer from the export directory
>>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt")
>>> outputs = model(**inputs)
```

Export Transformers' models programmatically with Optimum ONNX. Instantiate a [`~optimum.onnxruntime.ORTModel`] with a model and set `export=True`. Save the ONNX model with [`~optimum.onnxruntime.ORTModel.save_pretrained`].

```py
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

model_id = "Qwen/Qwen3-8B"     # example checkpoint
save_directory = "qwen3_onnx/"  # example output directory

# export=True converts the model to ONNX while loading it
ort_model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

ort_model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)
```