🔗 Published paper: https://ieeexplore.ieee.org/document/10932090/
The Enigma Sound App provides a seamless, intuitive music experience by using AI to enhance personalization. It addresses common shortcomings of existing music apps, such as poor sound quality, a lack of personalized recommendations, and an inefficient user experience.
It uses AI models to analyze text, voice, and facial expressions to detect the user's emotion, then either generates a melody with Music21 and FluidSynth or recommends matching Spotify songs.
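The emotion-to-melody idea can be sketched as below. The real app renders audio through Music21 and FluidSynth; this minimal, dependency-free sketch only illustrates the mapping step, and the `EMOTION_SCALES` table, scale choices, and `melody_for` helper are illustrative assumptions, not the app's actual logic.

```python
# Hypothetical emotion -> scale mapping (MIDI pitch numbers, illustrative only).
EMOTION_SCALES = {
    "happy":   [60, 62, 64, 65, 67, 69, 71, 72],  # C major
    "sad":     [60, 62, 63, 65, 67, 68, 70, 72],  # C natural minor
    "angry":   [60, 61, 64, 65, 67, 68, 71, 72],  # C harmonic-minor flavour
    "neutral": [60, 62, 64, 67, 69, 72],          # C major pentatonic
}

def melody_for(emotion: str, length: int = 8) -> list[int]:
    """Return a simple up-and-down MIDI pitch sequence for a detected emotion."""
    scale = EMOTION_SCALES.get(emotion, EMOTION_SCALES["neutral"])
    # Walk up the scale, then bounce back down, repeating to fill `length` notes.
    walk = scale + scale[-2::-1]
    return [walk[i % len(walk)] for i in range(length)]
```

In the real pipeline, a sequence like this would be wrapped in Music21 note objects, written to MIDI, and synthesized to audio with FluidSynth and the bundled SoundFont.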
-> Frontend: Flutter
-> Backend: Python (Flask, Music21, FluidSynth)
-> AI & Machine Learning: TensorFlow, Librosa, CNN-LSTM model for audio, FER model for face detection
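Since the stack combines separate models per modality (CNN-LSTM for audio, FER for faces), the per-modality predictions have to be fused into a single emotion. A hedged, dependency-free sketch is shown below; the label set and the average-the-softmax-scores fusion rule are assumptions for illustration, not necessarily the method used in the app.

```python
# Assumed label order shared by all modality models (illustrative).
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def fuse_predictions(*score_vectors: list[float]) -> str:
    """Average softmax-style score vectors from each modality
    (text, voice, face) and return the top-scoring emotion label."""
    n = len(score_vectors)
    fused = [sum(v[i] for v in score_vectors) / n for i in range(len(EMOTIONS))]
    return EMOTIONS[max(range(len(EMOTIONS)), key=fused.__getitem__)]
```

The fused label is what would then drive either melody generation or a Spotify recommendation.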
- Clone the repository:

  ```shell
  git clone https://github.com/ApurvaPatil2401/Enigma_Sound.git
  cd enigma-sound
  ```

- Install backend dependencies:

  ```shell
  cd enigmasoundbackend
  pip install -r requirements.txt
  ```
**Note:** Due to file size limits on GitHub, you must download the SoundFont manually and place the `.sf2` file in `enigmasoundbackend/soundfonts/`: 🔗 https://drive.google.com/drive/folders/1Afpft75F2IBZz-L-B_y5zrsaZIPeEBV0?usp=drive_link
- Run the backend server:

  ```shell
  python app.py
  ```

- Navigate to the Flutter frontend and install dependencies:

  ```shell
  cd emotion
  flutter pub get
  ```

- Run the Flutter app:

  ```shell
  flutter run
  ```
Demo video: `Video.Project.4.mp4`
> **Technical Note on Demo:** This demo was captured during a live test on a mid-range mobile device to demonstrate the model's efficiency on edge devices without cloud-side GPU acceleration. The focus is on the real-time emotional-mapping logic rather than high-fidelity recording.