# Emoticon

A real-time facial expression recognition system optimized for NVIDIA Jetson hardware using deep learning and computer vision techniques.

## Features

- Real-time facial expression detection
- Support for 7 basic emotions: Happy, Sad, Angry, Fear, Surprise, Disgust, Neutral
- Optimized for NVIDIA Jetson Nano/Xavier/Orin
- Web-based interface for easy interaction
- REST API for integration with other systems
- High accuracy using pre-trained deep learning models
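Turning a model's raw output scores into one of these seven labels is typically a softmax followed by an argmax. A minimal, illustrative sketch — the label order and the 0.7 threshold are assumptions for demonstration, not taken from the bundled model:

```python
import math

# Hypothetical class order; the actual model's label order may differ.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def softmax(scores):
    """Convert raw scores (logits) into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_emotion(scores, threshold=0.7):
    """Return (label, confidence), or ("Uncertain", p) below the threshold."""
    probs = softmax(scores)
    idx = max(range(len(probs)), key=probs.__getitem__)
    if probs[idx] < threshold:
        return ("Uncertain", probs[idx])
    return (EMOTIONS[idx], probs[idx])
```

The threshold mirrors the `confidence_threshold: 0.7` default in `config/model_config.yaml`.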

## Hardware Requirements

- NVIDIA Jetson Nano/Xavier/Orin
- USB Camera or CSI Camera
- At least 4GB RAM (8GB recommended)
- MicroSD card with at least 16GB storage

## Software Requirements

- JetPack 4.6+ or JetPack 5.0+
- Python 3.8+
- OpenCV 4.5+
- TensorFlow 2.x or PyTorch 1.x
- CUDA 10.2+ (for GPU acceleration)

## Installation

Clone the repository:

```bash
git clone https://github.com/vipul-sindha/Emoticon.git
cd Emoticon
```

Update system packages and install the system dependencies:

```bash
sudo apt update && sudo apt upgrade -y

sudo apt install -y python3-pip python3-dev python3-venv
sudo apt install -y libopencv-dev python3-opencv
sudo apt install -y libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
sudo apt install -y libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base
sudo apt install -y gstreamer1.0-plugins-good gstreamer1.0-plugins-bad
sudo apt install -y gstreamer1.0-plugins-ugly gstreamer1.0-libav
sudo apt install -y gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa
sudo apt install -y gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5
sudo apt install -y gstreamer1.0-pulseaudio
```

Create a virtual environment and install the Python dependencies:

```bash
python3 -m venv emoticon_env
source emoticon_env/bin/activate

pip install -r requirements.txt
```

Download the pre-trained models:

```bash
# Ensure the models directory exists before downloading into it
mkdir -p models

# Download emotion recognition model
wget https://github.com/vipul-sindha/Emoticon/releases/download/v1.0/emotion_model.pth -O models/emotion_model.pth

# Download face detection model
wget https://github.com/vipul-sindha/Emoticon/releases/download/v1.0/face_detection_model.pth -O models/face_detection_model.pth
```

Edit `config/camera_config.yaml` to set your camera parameters:
```yaml
camera:
  device: 0       # USB camera index or CSI camera path
  width: 640
  height: 480
  fps: 30
  codec: "MJPG"
```

## Usage

Activate the virtual environment and run the main application:

```bash
source emoticon_env/bin/activate

python src/main.py
```

Open your browser and navigate to `http://localhost:8080` to access the web interface.

## REST API

The application provides a REST API for integration:

```bash
# Get current emotion
curl http://localhost:8080/api/emotion

# Get emotion history
curl http://localhost:8080/api/emotions/history

# Get system status
curl http://localhost:8080/api/status
```

## Project Structure

```
Emoticon/
├── src/
│   ├── main.py                  # Main application entry point
│   ├── emotion_detector.py      # Emotion recognition module
│   ├── face_detector.py         # Face detection module
│   ├── camera_manager.py        # Camera interface
│   ├── web_server.py            # Web server and API
│   └── utils/
│       ├── preprocessing.py     # Image preprocessing utilities
│       └── visualization.py     # Visualization utilities
├── models/
│   ├── emotion_model.pth        # Pre-trained emotion model
│   └── face_detection_model.pth # Face detection model
├── config/
│   ├── camera_config.yaml       # Camera configuration
│   └── model_config.yaml        # Model configuration
├── data/
│   ├── training/                # Training data
│   └── validation/              # Validation data
├── tests/
│   ├── test_emotion_detector.py
│   └── test_face_detector.py
├── docs/
│   ├── api.md                   # API documentation
│   └── deployment.md            # Deployment guide
├── requirements.txt             # Python dependencies
├── setup.py                     # Package setup
└── README.md                    # This file
```
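As an illustration of programmatic use, the REST API above can be queried with nothing but the Python standard library. The JSON field names (`emotion`, `confidence`) are assumptions about the response shape, not documented guarantees:

```python
import json
from urllib.request import urlopen

BASE_URL = "http://localhost:8080"

def get_current_emotion(base_url=BASE_URL, timeout=2.0):
    """Fetch the current emotion from a running Emoticon server.

    Assumes the endpoint returns JSON such as:
        {"emotion": "Happy", "confidence": 0.93}
    """
    with urlopen(f"{base_url}/api/emotion", timeout=timeout) as resp:
        return json.load(resp)
```

Pointing `base_url` at the Jetson's address lets the same client run from another machine on the network.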

## Configuration

Edit `config/camera_config.yaml`:

```yaml
camera:
  device: 0        # Camera device index
  width: 640       # Frame width
  height: 480      # Frame height
  fps: 30          # Frames per second
  codec: "MJPG"    # Video codec
  buffer_size: 1   # Buffer size
```

Edit `config/model_config.yaml`:

```yaml
model:
  emotion_model_path: "models/emotion_model.pth"
  face_detection_model_path: "models/face_detection_model.pth"
  confidence_threshold: 0.7
  gpu_acceleration: true
  batch_size: 1
```

## Performance Optimization

Put the Jetson into its maximum-performance state before running:

```bash
# Select the maximum performance power model
sudo nvpmodel -m 0

# Lock CPU and GPU clocks to their maximum frequencies
sudo jetson_clocks
```
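To check whether these settings actually help, it is useful to measure end-to-end throughput. A small hypothetical helper (not part of the project) that estimates FPS over a rolling window of frame timestamps:

```python
import time
from collections import deque

class FPSMeter:
    """Rolling frames-per-second estimate over the last `window` frames."""

    def __init__(self, window=30):
        self.times = deque(maxlen=window)

    def tick(self, now=None):
        """Record one frame; `now` overrides the clock for testing."""
        self.times.append(time.monotonic() if now is None else now)

    @property
    def fps(self):
        if len(self.times) < 2:
            return 0.0
        span = self.times[-1] - self.times[0]
        return (len(self.times) - 1) / span if span > 0 else 0.0
```

Call `tick()` once per processed frame in the capture loop and log `fps` periodically.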

## Testing

```bash
# Run all tests
python -m pytest tests/

# Run a specific test
python -m pytest tests/test_emotion_detector.py
```

## Code Style

This project follows PEP 8 style guidelines. Use the provided linting tools:

```bash
# Install development dependencies
pip install -r requirements-dev.txt

# Run linting and formatting
flake8 src/
black src/
isort src/
```

## Troubleshooting

- Camera not detected: Check camera permissions and the device index
- Low FPS: Reduce resolution or disable GPU acceleration
- Memory issues: Reduce batch size or model complexity
- Model loading errors: Verify model file paths and permissions
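For the camera issue in particular, it helps to see which V4L2 device nodes actually exist. A Linux-specific, hypothetical helper (not part of the project):

```python
import glob
import os

def list_video_devices():
    """Return the /dev/video* device nodes present on this system."""
    return sorted(glob.glob("/dev/video*"))

def readable_video_devices():
    """Return only the nodes the current user has permission to read."""
    return [d for d in list_video_devices() if os.access(d, os.R_OK)]
```

If only `/dev/video1` exists but `device: 0` is configured, update `config/camera_config.yaml` accordingly; if a node exists but is not readable, check group membership (typically `video`).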

### Performance Tips

- Use a CSI camera for better performance
- Enable GPU acceleration when available
- Reduce frame resolution for higher FPS
- Use optimized models for Jetson hardware
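A CSI camera on Jetson is usually opened through a GStreamer pipeline rather than a bare device index. A sketch of building such a pipeline string for `cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)`; the element chain follows NVIDIA's `nvarguscamerasrc` convention, and whether `camera_manager.py` does exactly this is an assumption:

```python
def csi_pipeline(width=640, height=480, fps=30, flip=0):
    """Build a GStreamer pipeline string for a Jetson CSI camera.

    Pass the result to cv2.VideoCapture(..., cv2.CAP_GSTREAMER).
    """
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink drop=true max-buffers=1"
    )
```

`drop=true` with a single buffer keeps latency low, matching the `buffer_size: 1` camera setting above.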

## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- NVIDIA for the Jetson platform
- OpenCV community for computer vision tools
- PyTorch/TensorFlow communities for deep learning frameworks

## Support

For support and questions:
- Create an issue on GitHub
- Check the documentation
- Review the troubleshooting guide

## Changelog

- v1.0.0 - Initial release with basic emotion recognition
- v1.1.0 - Added web interface and REST API
- v1.2.0 - Performance optimizations for Jetson hardware