Music Recommendation System using YOLO v11 For Facial Expression

DOI : 10.17577/IJERTV14IS020072


Harsh Agarwal
Computer Engineering, Pimpri Chinchwad College of Engineering
Pune, India

Shyam Borole
Computer Engineering, Pimpri Chinchwad College of Engineering
Pune, India

Sarvesh Apshete
Computer Engineering, Pimpri Chinchwad College of Engineering
Pune, India

Archana Kadam
Computer Engineering, Pimpri Chinchwad College of Engineering
Pune, India

Shivam Bhendekar
Computer Engineering, Pimpri Chinchwad College of Engineering
Pune, India

Abstract: A music recommendation system based on facial expression analysis offers a more personalized, real-time experience by suggesting songs that match the user's emotions. It uses the YOLO v11 detection model for facial expression recognition, which effectively detects emotions such as happiness, sadness, anger, or calmness. This allows the system to accurately identify the user's mood and create a playlist that best fits their emotional state, increasing user satisfaction and engagement. When integrated with existing music applications, the system delivers a dynamic, mood-based listening experience that makes music selection easy and enjoyable.

  1. INTRODUCTION

The Music Recommendation System as per Facial Expression is a feature aimed at enriching the user experience by dynamically analyzing emotions from facial expressions and curating personalized playlists. Music is essential to human life: it evokes emotions, reduces stress, and enhances productivity. However, prevailing music recommendation systems rely on the user's listening history or manual input, which limits their ability to respond to real-time changes in the user's emotional state. This feature uses facial recognition to identify whether a person is feeling happy, sad, angry, or relaxed, and then recommends music that fits that mood. It makes the music selection process effortless and ensures that the playlist connects with the listener's emotional state.

Many situations call for real-time, mood-based music suggestions: a stressful work meeting calls for calming music, while a celebration calls for upbeat, energetic music. Currently, users must find or select specific playlists themselves, which is time-consuming and inefficient. This system uses facial expression analysis to recognize the user's mood instantly and provides relevant music suggestions, saving time and increasing user satisfaction.

It also captures contextual information, including the date, place, and recurrence of particular emotions, so the system can refine its suggestions over time. This makes the experience more intuitive and personalized. Some moods are also tied to schedules, such as winding down at bedtime or starting a workout in the morning, so time-based mood presets further reduce the manual effort of choosing music. By combining emotion detection with music recommendation technology, the system enhances the listening experience while developing a deeper connection between music and the listener's emotions, a significant step forward for music applications.

  2. LITERATURE SURVEY

A machine learning-based music recommendation system that uses facial expressions to identify emotions is presented by Kedari et al. (2023). The authors propose making music recommendations by employing computer vision techniques to analyze facial features in real time. Through a camera, the system records facial expressions, which are then analyzed to determine the user's emotional state. Machine learning algorithms create tailored music recommendations that complement or elevate the user's mood based on this emotion identification. The study highlights how non-invasive techniques such as facial expression analysis can improve the precision of emotion-based recommendations, and demonstrates the growing potential of combining machine learning and computer vision to build user-focused music recommendation systems [1].

A face emotion detection-based music recommendation system is presented in Malik (2020). The author investigates the use of facial expression recognition to customize music recommendations based on the user's emotional state. The system classifies emotions such as happiness, sadness, or anger by analyzing camera-captured facial expressions using image processing and machine learning techniques. Music tracks that complement or elevate the user's mood are then chosen based on the identified emotion. This method demonstrates how emotion-aware computing can provide a more immersive and customized music experience, highlighting how facial expression analysis can improve user satisfaction on digital music platforms [2].

A facial expression-based music recommendation system using deep convolutional neural networks (CNNs) is presented by Ashwini et al. (2024). The algorithm examines facial expressions to categorize emotions and recommend music accordingly. The authors combine a recommendation model with deep learning to improve the accuracy of emotion recognition, enhancing the user experience with real-time, tailored music recommendations. The study also draws attention to dataset constraints and difficulties with emotion identification, and experimental results show how well the model matches musical tastes to emotional states [3].

To improve user experience through tailored song recommendations, this study investigates a face emotion-based music recommendation system. The authors examine facial expressions and categorize them into emotional states using emotion detection algorithms, and a machine learning-based method connects these emotions to suitable musical genres. The study draws attention to dataset constraints and difficulties in accurately identifying emotions. Compared to conventional techniques, the results show an improvement in recommendation efficiency [4].

Several music recommendation systems that use facial expressions to select songs based on emotions are covered in this review study. The authors examine the effects of face masks on the accuracy of emotion recognition and evaluate current techniques, such as deep learning and machine learning models. The study contrasts several datasets and emotion identification algorithms, and concludes by proposing enhancements to feature extraction approaches to increase music personalization, particularly in practical situations [5].

This study examines an intelligent movie recommendation system that makes emotional film recommendations by analyzing facial expressions. The authors categorize facial expressions into emotional groups using machine learning and image processing techniques. The study assesses how well the proposed methodology improves movie suggestions compared to traditional techniques, and also covers limitations that affect suggestion accuracy, including misclassified expressions and changing lighting conditions [6].

This research proposes an expression-based music recommendation system that uses facial expression recognition algorithms to personalize music choices. Deep learning and machine learning algorithms are combined to categorize emotions and link them to appropriate musical genres. The authors draw attention to issues including misclassified expressions and varying lighting conditions. Their findings show increased accuracy and user engagement compared to conventional recommendation methods [7].

The authors use CNN-based deep learning techniques to present a facial emotion recognition and music recommendation system. Real-time facial expressions are analyzed to ascertain the user's emotional state and recommend appropriate music. The study highlights how convolutional neural networks (CNNs) can increase the accuracy of facial recognition, and the experimental findings demonstrate a high accuracy rate in identifying emotions and making appropriate music recommendations [8].

This study focuses on a facial emotion-based music recommendation system that combines machine learning and computer vision methods. Through image processing, the model recognizes facial expressions and categorizes emotions to produce tailored music recommendations. The research also covers difficulties such as the impact of background noise on facial recognition accuracy and computational complexity. The suggested strategy outperforms traditional emotion-based recommendation techniques [9].

A related study examines a movie and music recommendation system driven by facial expressions [10]. This study introduces an Emotional Detection and Music Recommendation System that makes song recommendations based on facial expressions. The authors assess user emotions using deep learning-based emotion classification techniques and discuss how well various machine learning models recognize emotions. Their findings show that tailored music recommendations significantly increase user satisfaction [11].

To improve user experience, the authors create a music recommendation system that uses facial emotion recognition. Deep learning algorithms are used in the study to interpret facial expressions and select suitable music. In order to detect emotions, the study analyzes various classification models. The results point to a rise in user engagement and suggestion accuracy.[12]

A music recommendation system that uses facial expression recognition is presented in this PhD dissertation. In order to extract facial features and categorize emotions into predetermined groups, the study uses image processing techniques. After that, the system makes musical recommendations based on emotional analysis. The work draws attention to the limits of the dataset as well as possible deep learning improvements.[13]

Deep learning techniques for facial expression identification are highlighted in this study's discussion of an emotion-based music recommendation system. To increase the accuracy of emotion categorization, the authors examine a number of feature extraction strategies. The study draws attention to practical issues such as real-time processing and the impact of varying facial expressions on prediction results [14].

The study presents an emotion-based music recommendation system that selects music based on facial emotions. The authors categorize emotions using machine learning models and associate them with a pre-existing music collection. The study demonstrates the effectiveness of their method compared to traditional music recommendation systems, with outcomes showing improved user satisfaction and customization [15].

The mood-based music recommendation system presented in this paper uses user emotions to recommend appropriate music. The authors categorize emotions with machine learning techniques and suggest songs based on these classifications. By offering customized music selections, the system seeks to improve the user experience. The study addresses issues such as accuracy-affecting changes in facial expression and real-time emotion recognition [16].

This study examines how WhatsApp is used in design-based learning modules and its effect on student collaboration and communication. The authors emphasize how WhatsApp eases project coordination, resource sharing, and real-time discussion. The study covers the advantages and drawbacks of using instant messaging for work and school [17].

The authors propose an emotion-based music recommendation system built on wearable physiological sensors. The study incorporates physiological information such as skin conductance and heart rate to categorize emotions, and the algorithm then suggests music that fits the identified mood. The study emphasizes how physiological signals can outperform facial recognition for detecting emotions in uncontrolled settings [18].

This paper presents FaceFetch, a facial expression-based multimedia content recommendation system. Through facial expression analysis, the system determines the user's mood and recommends multimedia content such as films and music. The research aims to increase recommendation accuracy through emotion-driven customization, and the results point to possible applications in e-learning and entertainment [19].

In this paper, a facial expression-based music recommendation system is presented. The authors use deep learning algorithms to categorize emotions and provide tailored music choices. The paper covers the impact of dataset variety, facial occlusions, and lighting conditions on the accuracy of emotion identification. Experimental results show that more personalized music leads to higher user satisfaction [20].

  3. PROPOSED SOLUTION

Music Recommendation System as Per Facial Expression

This section describes the detailed functionality of the music recommendation system based on facial expression analysis. The procedure is explained in the context of a general music streaming application and can be generalized to other platforms.

The scenario is as follows:

Let us assume that a user is using a music streaming app while their emotional state changes throughout the day. For example, after a long day at work, the user feels stressed and wants to listen to calming music, or during a celebration, the user desires energetic tunes. Identifying and curating suitable music manually can be inconvenient. The proposed system solves this issue by analyzing the user's facial expressions in real time and suggesting music tailored to their emotions. The solution involves three primary processes: A] Facial Expression Analysis, B] Real-Time Playlist Generation, and C] Emotion Tracking for Personalized Recommendations.
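The three processes above can be sketched as a small pipeline. All function and variable names here are illustrative stubs introduced for clarity, not the paper's implementation; in the real system, stage A would run the YOLO v11 model on camera frames.

```python
# A minimal sketch of the three-stage flow (A, B, C); names are hypothetical.

def detect_emotion(frame):
    """Stage A (stub): the real system would run YOLO v11 on a camera
    frame and return a label such as 'happy', 'sad', 'angry', or 'calm'."""
    return "calm"

def generate_playlist(emotion, catalog):
    """Stage B: select tracks whose mood tag matches the detected emotion."""
    return [track for track in catalog if track["mood"] == emotion]

def log_emotion(history, emotion):
    """Stage C: record each detection so later suggestions can be refined."""
    history.append(emotion)

catalog = [
    {"title": "Evening Raga", "mood": "calm"},
    {"title": "Party Starter", "mood": "happy"},
]
history = []
emotion = detect_emotion(frame=None)  # stub call; no real camera frame here
log_emotion(history, emotion)
playlist = generate_playlist(emotion, catalog)
```

In a deployed app, the detect/log/recommend loop would run continuously on the camera feed rather than once as shown.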

A] Facial Expression Analysis

Step 1: Facial expression analysis begins when the user opens the music app and grants camera access. The system continuously captures the user's facial expressions using the device's camera.

Step 2: Using the YOLO v11 model, the system identifies emotions such as happiness, sadness, anger, or calmness. This process ensures real-time emotion detection with high accuracy.
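Per-frame predictions from a detector can flicker between labels. The paper does not describe a stabilization step, but a common assumption-level fix is a majority vote over a short sliding window of recent frames:

```python
from collections import Counter, deque

WINDOW = 15  # about half a second of frames at an assumed 30 fps
recent = deque(maxlen=WINDOW)

def smoothed_emotion(frame_label):
    """Record one per-frame prediction and return the majority label over
    the recent window, which steadies the mood reported to the app."""
    recent.append(frame_label)
    return Counter(recent).most_common(1)[0][0]
```

A single mislabeled frame then cannot flip the playlist; the mood only changes once the new label dominates the window.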

    Step 3: The detected emotion is immediately processed, and the system maps the emotion to a predefined music category. For example:

• Happiness → Upbeat and energetic tracks

• Sadness → Soothing or uplifting tracks

• Anger → Relaxing and mellow tracks

• Calmness → Soft and tranquil tunes
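The mapping above amounts to a lookup table. A minimal sketch follows; the fallback behavior for unrecognized labels is an assumption, not specified in the paper:

```python
# Emotion-to-category lookup mirroring the list above.
EMOTION_TO_CATEGORY = {
    "happiness": "upbeat and energetic tracks",
    "sadness": "soothing or uplifting tracks",
    "anger": "relaxing and mellow tracks",
    "calmness": "soft and tranquil tunes",
}

def pick_category(emotion):
    # Unknown labels fall back to tranquil music rather than failing.
    return EMOTION_TO_CATEGORY.get(emotion, "soft and tranquil tunes")
```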

B] Real-Time Playlist Generation

Step 1: Once the emotion is mapped to a music category, the system generates a playlist from the app's database. The playlist is curated dynamically and adapts to the user's emotional state.

Step 2: The user can also customize playlists based on their preferences within each emotion category. For example:

• Stress Relief → Classical or instrumental music

• Workout → Fast-paced tracks

• Celebration → Party anthems

Step 3: The system provides options for users to save or edit the generated playlists for future use.

C] Emotion Tracking for Personalized Recommendations

Step 1: The system records data such as the time, frequency, and type of emotions detected while using the app.

Step 2: Using this data, the system refines future recommendations by analyzing the user's emotional patterns and preferences.

Step 3: The system also integrates a Set Mood Reminder feature, allowing users to set specific times for mood-based playlists. For example, a user can schedule calming music at bedtime or energizing tracks in the morning.

Additional Features:

The system supports the following functionalities:

• Seamless Integration: Compatible with various devices, including smartphones, smart speakers, and wearables.

• Privacy and Security: Facial recognition data is processed locally on the user's device, ensuring privacy and security.

• Offline Mode: Pre-generated mood-based playlists can be downloaded for offline use.

This proposed solution not only personalizes the user's music experience but also enhances convenience and engagement by eliminating manual effort in music selection, making it a unique and user-friendly feature.
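The emotion-tracking log and the Set Mood Reminder can be sketched as follows. The data layout and the `reminder_category` helper are hypothetical illustrations of the described behavior, not the paper's design:

```python
from datetime import datetime, time

emotion_log = []  # (timestamp, emotion) pairs recorded while the app is open

def record_emotion(emotion, when):
    """Emotion tracking: store when and which emotion was detected,
    so future recommendations can use the user's emotional patterns."""
    emotion_log.append((when, emotion))

def reminder_category(now, schedule):
    """Set Mood Reminder: return the preset category whose time window
    contains `now`, or None when no reminder is scheduled."""
    for (start, end), category in schedule.items():
        if start <= now.time() <= end:
            return category
    return None

# Example schedule: calming music at bedtime, energizing tracks in the morning.
schedule = {
    (time(22, 0), time(23, 59)): "calming",
    (time(6, 0), time(9, 0)): "energizing",
}
```

A production version would persist the log on-device (matching the privacy requirement) and aggregate it, e.g. counting emotions per hour of day, to bias which category is pre-selected.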

  4. SYSTEM ARCHITECTURE

  5. CONCLUSION

"Music Recommendation System as per Facial Expression" is an innovative and efficient feature designed to enhance the user experience. It simplifies music selection by dynamically analyzing facial expressions and providing personalized, mood-based playlists. This feature reduces the manual effort of searching for songs and ensures a seamless, engaging, and emotionally resonant listening experience.

By integrating real-time emotion detection with advanced recommendation algorithms, this system gives users access to music that matches their current mood while also offering personalization and convenience. We believe this feature will change how people interact with music applications, making them intuitive, personalized, and impactful in a user-friendly manner.

  6. REFERENCES

  1. Kedari, P., Rengade, P., Deshmukh, S., Adsure, M.S. and Jaiswal, M.D., 2023. Machine Learning Based Music Recommendation System Using Facial Expression. International Journal for Research in Applied Science and Engineering Technology.

  2. Malik, S. Music Recommendation based on Facial Emotion Detection. International Journal of Computer Applications, 975, p.8887.

  3. Ashwini, P., Dammalapati, P., Ramineni, N. and Adilakshmi, T., 2024, April. Facial Expression based Music Recommendation System using Deep Convolutional Neural Network. In 2024 International Conference on Expert Clouds and Applications (ICOECA) (pp. 992-999). IEEE.

  4. Sharath, P., Senthil Kumar, G. and Vishnu, B.K., 2023. Music Recommendation System Using Facial Emotions. Advances in Science and Technology, 124, pp.44-52.

  5. Wimalaweera, R. and Gammedda, L., 2023. A Review on Music Recommendation System Based on Facial Expressions, With or Without Face Mask. Authorea Preprints.

  6. Chauhan, S., Mangrola, R. and Viji, D., 2021, April. Analysis of Intelligent Movie Recommender System from Facial Expression. In 2021 5th International Conference on Computing Methodologies and Communication (ICCMC) (pp. 1454-1461). IEEE.

  7. Sharma, V.P., Gaded, A.S., Chaudhary, D., Kumar, S. and Sharma, S., 2021, September. Emotion-Based Music Recommendation System. In 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO) (pp. 1-5). IEEE.

  8. Bakariya, B., Singh, A., Singh, H., Raju, P., Rajpoot, R. and Mohbey, K.K., 2024. Facial emotion recognition and music recommendation system using CNN-based deep learning techniques. Evolving Systems, 15(2), pp.641-658.

  9. Shalini, S.K., Jaichandran, R., Leelavathy, S., Raviraghul, R., Ranjitha, J. and Saravanakumar, N., 2021. Facial Emotion Based Music Recommendation System using computer vision and machine learning techniques. Turkish Journal of Computer and Mathematics Education, 12(2), pp.912-917.

  10. Patil, M. and Bodhe, H., 2024. Movie and Music Recommendation System based on Facial Expressions. Available at SSRN 4716278.

  11. Florence, S.M. and Uma, M., 2020, August. Emotional detection and music recommendation system based on user facial expression. In IOP Conference Series: Materials Science and Engineering (Vol. 912, No. 6, p. 062007). IOP Publishing.

  12. Visnu Dharsini, S., Balaji, B. and Kirubha Hari, K.S., 2020. Music recommendation system based on facial emotion recognition. Journal of Computational and Theoretical Nanoscience, 17(4), pp.1662-1665.

  13. Parmar, P.H., 2020. Music recommendation system using facial expression recognition (Doctoral dissertation, BIRLA Vishvakarma Mahavidyalaya).

  14. Florence, S.M. and Uma, M., 2020, August. Emotional detection and music recommendation system based on user facial expression. In IOP Conference Series: Materials Science and Engineering (Vol. 912, No. 6, p. 062007). IOP Publishing.

  15. James, H.I., Arnold, J.J.A., Ruban, J.M.M., Tamilarasan, M. and Saranya, R., 2019. Emotion based music recommendation system. Emotion, 6(3), pp.2096-2101.

  16. Mahadik, A., Milgir, S., Patel, J., Jagan, V.B. and Kavathekar, V., 2021. Mood based music recommendation system. International Journal of Engineering Research & Technology (IJERT), 10(06).

  17. Hertzog, P.E. and Swart, A.J., 2018. The use of WhatsApp in design-based modules. IEEE.

  18. Ayata, D., Yaslan, Y. and Kamasak, M.E., 2018. Emotion based music recommendation system using wearable physiological sensors. IEEE Transactions on Consumer Electronics, 64(2), pp.196-203.

  19. Mariappan, M.B., Suk, M. and Prabhakaran, B., 2012, December. FaceFetch: A user emotion driven multimedia content recommendation system based on facial expression recognition. In 2012 IEEE International Symposium on Multimedia (pp. 84-87). IEEE.

  20. Shabu, S.J., Janaardhan, C., Bhaskar, K., Mary, A.V.A., Refonaa, J. and Dhamodaran, S., 2023, July. Music Recommendation System based on Facial Expression. In 2023 4th International Conference on Electronics and Sustainable Communication Systems (ICESC) (pp. 908-912). IEEE.