
Commit 24aafdf

project name change

1 parent 1f9cfcd commit 24aafdf

2 files changed (+4 −4 lines changed)

README.md (+3 −3)
@@ -1,7 +1,7 @@
-Intel Low Precision Inference Tool (iLiT)
+Intel® Low Precision Inference Toolkit (iLiT)
 =========================================
 
-Intel Low Precision Inference Tool (iLiT) is an open-source python library which is intended to deliver a unified low-precision inference interface cross multiple Intel optimized DL frameworks on both CPU and GPU. It supports automatic accuracy-driven tuning strategies, along with additional objectives like performance, model size, or memory footprint. It also provides the easy extension capability for new backends, tuning strategies, metrics and objectives.
+Intel® Low Precision Inference Toolkit (iLiT) is an open-source python library which is intended to deliver a unified low-precision inference interface cross multiple Intel optimized DL frameworks on both CPU and GPU. It supports automatic accuracy-driven tuning strategies, along with additional objectives like performance, model size, or memory footprint. It also provides the easy extension capability for new backends, tuning strategies, metrics and objectives.
 
 > **WARNING**
 >
@@ -132,7 +132,7 @@ If you use iLiT in your research or wish to refer to the tuning results publishe
 ```
 @misc{iLiT,
 author = {Feng Tian, Chuanqi Wang, Guoming Zhang, Penghui Cheng, Pengxin Yuan, Haihao Shen, and Jiong Gong},
-title = {Intel Low Precision Inference Tool},
+title = {Intel® Low Precision Inference Toolkit},
 howpublished = {\url{https://github.com/intel/lp-inference-kit}},
 year = {2020}
 }

setup.py (+1 −1)
@@ -6,7 +6,7 @@
 version="1.0a0",
 author="Intel MLP/MLPC Team",
-description="Repository of low precision inference toolkit",
+description="Repository of intel low precision inference toolkit",
 long_description=open("README.md", "r", encoding='utf-8').read(),
 long_description_content_type="text/markdown",
 keywords='quantization, auto-tuning, post-training static quantization, post-training dynamic quantization, quantization-aware training, tuning strategy',
