A gesture-based presentation helper that uses an ML model to detect and recognize various hand gestures.
Requires a working camera (and your permission to use it).
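The original doesn't name the ML stack; as one minimal sketch, hand detection could run on an OpenCV camera feed with MediaPipe Hands (both libraries are assumptions here, not confirmed project dependencies):

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Open the default camera (index 0); the real app would use the camera
# and camera mode the user picked in the GUI.
cap = cv2.VideoCapture(0)

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mirror the frame in "selfie" camera mode so gestures feel natural.
        frame = cv2.flip(frame, 1)
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # 21 landmarks per detected hand; a gesture classifier would map
            # these to one of the user's configured gestures.
            landmarks = results.multi_hand_landmarks[0].landmark
            print(f"hand detected, wrist at ({landmarks[0].x:.2f}, {landmarks[0].y:.2f})")

cap.release()
```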
- Using the GUI, the user chooses the camera mode, the presentation app they are using, and the two hand gestures that will signal enabling and disabling gesture shortcuts, and calibrates the ML model if needed.
- Once the chosen presentation app enters presentation mode, the ML model starts watching for the signal hand gestures.
- When the chosen "enable shortcuts" hand gesture is detected, the user can trigger presentation keyboard shortcuts via simple hand gestures (see the sketch after this list).
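As a sketch of the enable/disable flow above, a small state machine could gate shortcut mode on the two signal gestures and forward mapped key presses with `pyautogui`; the library choice, gesture names, and key mapping are all illustrative assumptions:

```python
import pyautogui  # sends the actual keyboard shortcuts to the foreground app

# Hypothetical gesture labels; the real ones come from the user's GUI choices.
ENABLE_GESTURE = "thumbs_up"
DISABLE_GESTURE = "fist"

# Hypothetical mapping for one app profile (arrow keys work in most apps).
GESTURE_TO_KEY = {
    "swipe_right": "right",  # next slide
    "swipe_left": "left",    # previous slide
    "open_palm": "b",        # blank screen in many presentation apps
}

shortcuts_enabled = False

def handle_gesture(gesture: str) -> None:
    """Toggle shortcut mode on the signal gestures; otherwise forward mapped keys."""
    global shortcuts_enabled
    if gesture == ENABLE_GESTURE:
        shortcuts_enabled = True
    elif gesture == DISABLE_GESTURE:
        shortcuts_enabled = False
    elif shortcuts_enabled and gesture in GESTURE_TO_KEY:
        pyautogui.press(GESTURE_TO_KEY[gesture])
```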
- Camera mode (selfie or normal)
- Presentation app (dropdown menu? preconfigured list?)
- "Toggle" hand gestures
- Calibration menu
- Settings to customize the gesture-to-shortcut mapping (a sample config sketch follows this list)
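The options above could be collected into a single settings object; a sketch with illustrative field names and defaults, none of which come from the original:

```python
from dataclasses import dataclass, field

@dataclass
class Settings:
    """Everything the GUI lets the user configure; names are hypothetical."""
    camera_mode: str = "selfie"        # "selfie" or "normal"
    presentation_app: str = "generic"  # key into a preconfigured app list
    enable_gesture: str = "thumbs_up"  # signal gesture to turn shortcuts on
    disable_gesture: str = "fist"      # signal gesture to turn shortcuts off
    # Per-app gesture-to-shortcut mapping, editable in the settings menu.
    gesture_map: dict[str, str] = field(default_factory=lambda: {
        "swipe_right": "right",
        "swipe_left": "left",
        "open_palm": "b",
    })
```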