# Sign Language Interpreter using Deep Learning
> A sign language interpreter using live video feed from the camera.

The project was completed in 24 hours as part of HackUNT19, the University of North Texas's annual hackathon.
## Table of contents
## General info
The theme at HACK UNT 19 was to use technology to improve accessibility by finding a creative solution to benefit the lives of those with a disability.

We wanted to make it easy for the 70 million deaf people across the world to be independent of translators for their daily communication needs, so we designed the app to work as a 24/7 personal translator for deaf people.
## Screenshots


## Setup

* Use the command prompt to set up the environment using the `requirements_cpu.txt` or `requirements_gpu.txt` file.

`python -m pip install -r requirements_cpu.txt`

This installs all the libraries required for the project.
## Process
* Run `set_hand_hist.py` to set the hand histogram for creating gestures.
* Once you get a good histogram, save it in the code folder, or you can use the histogram created by us that can be found [here](https://github.com/harshbg/Sign-Language-Interpreter-using-Deep-Learning/blob/master/Code/hist).
* Add gestures and label them by running `create_gestures.py`, which uses OpenCV to capture the webcam feed and stores the gestures in a database. Alternatively, you can use the gestures created by us [here]().
* Add different variations to the captured gestures by flipping all the images using `flip_images.py`.
* Run `load_images.py` to split all the captured gestures into training, validation and test sets.
* To view all the gestures, run `display_all_gestures.py`.
* Train the model using Keras by running `cnn_keras.py`.
* Run `fun_util.py`. This will open up the gesture recognition window which will use your webcam to interpret the trained American Sign Language gestures.
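
Conceptually, setting the hand histogram amounts to building a colour histogram of a sampled hand region so that similar skin-tone pixels can be recognised in later frames. The sketch below illustrates just the histogram idea with NumPy; the sample hue values are made up, and the real `set_hand_hist.py` most likely relies on OpenCV routines such as `calcHist`/`calcBackProject` instead:

```python
import numpy as np

# Hypothetical hue values (0-179, OpenCV's hue range) sampled from a hand region
hand_hues = np.array([10, 12, 11, 13, 10, 12, 170, 11])

# Build a normalized hue histogram; during back-projection, pixels whose hue
# falls into a high-valued bin are treated as likely hand pixels
hist, _ = np.histogram(hand_hues, bins=18, range=(0, 180))
hist = hist / hist.max()
print(hist.nonzero()[0].tolist())  # [1, 17] -- bins holding the sampled skin tones
```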
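The flipping step in `flip_images.py` is, in essence, a horizontal mirror of every stored gesture image, which doubles the dataset with plausible variations. A minimal sketch of that idea using NumPy (the actual script presumably reads and writes the saved image files, e.g. via OpenCV's `cv2.flip`):

```python
import numpy as np

def flip_horizontal(image: np.ndarray) -> np.ndarray:
    """Mirror an image left-to-right (equivalent to cv2.flip(img, 1))."""
    return np.fliplr(image)

# Toy 2x3 "image": flipping reverses the column order
img = np.array([[1, 2, 3],
                [4, 5, 6]])
print(flip_horizontal(img).tolist())  # [[3, 2, 1], [6, 5, 4]]
```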
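The split performed by `load_images.py` boils down to a shuffled three-way partition of the gesture images. Here is a generic sketch; the 70/15/15 ratios, seed, and function name are illustrative assumptions, not the project's actual values:

```python
import random

def split_dataset(items, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle deterministically, then slice into train/validation/test sets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```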
## Code Examples
## Features
Our model was able to predict all 44 ASL characters with a prediction accuracy above 95%.

Features that can be added:
* Deploy the project to the cloud and create an API for using it.
* Increase the vocabulary of our model.
* Incorporate a feedback mechanism to make the model more robust.