- Open Access
- Authors : O.I. Koekina , A. E. Kuziaev
- Paper ID : IJERTV9IS120258
- Volume & Issue : Volume 09, Issue 12 (December 2020)
- Published (First Online): 06-01-2021
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
EEG Spectral Response to Listening to Musical Melodies Expressing Emotional States
O. I. Koekina
Scientific Center for Consciousness Studies, Moscow, Russia
A. E. Kuziaev
Chaos Research Laboratory, Moscow, Russia
Abstract: The article presents research into the emotional influence of musical melodies on a person. As part of the development and testing of a methodology for rapid assessment of the mood of melodies, the electrical potentials of the brain were recorded while subjects listened to them. The goal of the subsequent analysis was to match the statistically reliable changes to an algorithm for predicting the development of dominant emotions in a person's psychoemotional background under acoustic exposure.
The results revealed patterns of alternating brain-activity rhythms common to the emotional perception of each melody. Accompanying melodies (audio tracks) are important not only in engineering psychology, where they are used to increase labor productivity, the efficiency of decision-making, and the development of creative ideas and intuition. The data on the mechanisms of the influence of melodies on human emotions can also be used to develop the concept of the emotional component of artificial intelligence.
Keywords: Emotions, EEG, music, psychoacoustics, electroencephalogram, artificial intelligence.
INTRODUCTION
Any listening to music involves its emotional perception. To adjust mood, a person often resorts to listening to certain melodies. Indeed, listening to music evokes emotions. However, people may perceive them in different ways [1, 2, 3]. There are certain strategies for the perception of emotions in target groups [4] and in individuals with different national cultures [5, 6, 7]. Therefore, an objective assessment of the emotional state of a person evoked by the perception of a musical melody requires developing special techniques. Using already known techniques (autonomic reactions, psychoemotional tests, behavioral reactions, and EEG recording), researchers have so far been able to determine the sign (positive or negative) and the strength of emotions [7, 8]. Nevertheless, a need remains for an objective assessment of the type of basic emotion manifested in a person. To solve this problem, the following tasks were formulated.
- Find melodies that most adequately reflect basic emotions.
- Determine and standardize the conditions for listening to the melodies.
- Use objective methods of recording a person's state when listening to the melodies.
- Based on the obtained objective data, analyze the state of the person (operator, subject), evaluate individual and group variability, and outline the objective parameters that reflect the characteristic features of basic emotions.
- Assess the possibility of using the developed methodology for evaluating the emotional state of a person while listening to various melodies.
METHODS
The study involved a group of subjects engaged in intellectual work (13 people: 5 women and 8 men), aged 30 to 60 years.
An acoustic exposure system was used to test and assess the emotional state. It was assumed that subjects identify emotional meanings much more accurately through auditory perception than through reliance on, for example, facial or other visual representations of emotional meanings [9]. Therefore, musical melodies with a definite emotional content were preferred. At the same time, the authors took into account that music can evoke two types of emotions: recognized and experienced [10]. This means that when listening to any kind of emotional music, study participants may experience ambivalent feelings. For example, if subjects recognize the emotion of sadness while listening to sad music, this does not mean that they fall into a state of sadness and depression. On the other hand, an emotional melody can trigger a transition to the corresponding emotional state in the subject. The degree of this transition to emotional engagement with the melody probably depends on how developed this feeling, in other words empathy, initially is. What matters here is the expansion of the perceptual field.
To test the emotional perception of the subjects, musical audio tracks were used, each containing one dominant emotion: sadness, joy, inspiration, anxiety, or euphoria. The selection of melodies was based on the assumption that the verbal content of a song is the main indicator of its reference to some basic emotion, and that the composer and the singer (sometimes one and the same person) seek to express this emotion in the music and its sound as they feel and understand it. In this study, during the presentation of a song melody, its verbal plot either sounded in a language unfamiliar to the subject or was excluded, so that the subject could assess the emotional content of the melody only by its sound and musical structure.
Neurophysiological brain recording, namely the electroencephalogram (EEG), was chosen as the most adequate method for obtaining objective data on the changes in emotional states of consciousness.
EEG was recorded with a 24-channel NVX24 neuroimaging device (Medical Computer Systems LLC), using standard monopolar leads placed according to the accepted international 10-20 scheme [11].
The studies were carried out under standard conditions, the same for each subject. Brain biopotentials were recorded in the waking state with eyes closed, under mental and muscular relaxation, while subjects listened to melodies selected according to their dominant emotional content. Headphones were used to eliminate noise interference and random sound signals.
Objective data in the form of changes in brain activity were obtained during the test trials, that is, while listening to melodies expressing the following emotions:
- sadness: Creedence Clearwater Revival – Hideaway
- joy: Creedence Clearwater Revival – Ooby Dooby
- inspiration: Jethro Tull – Moths
- anxiety: Sweet – No You Don't
- euphoria: Simon & Garfunkel – El Condor Pasa
To assess EEG changes during the test trials, we compared them with the EEG in the background state, that is, during mental and muscular relaxation with closed eyes (to reduce the level of signals coming from the body and through the visual channel). The background indicators of brain activity served as the reference point, as the recording conditions (lighting, noise level, air temperature, body condition, etc.) remained the same both in the background and during the tests. Therefore, the differences in indicators during the tests were attributed directly to the tested processes. As functional variability of the EEG is characteristic even within normal limits, only statistically significant differences were taken into account.
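This comparison step can be illustrated with a short sketch (the data shapes, values, and function names below are hypothetical; the actual processing used the software described in the next section): for each electrode, the band power recorded during a test trial is compared against the background recording across subjects with a paired Student's t-test, and only channels passing the significance threshold are retained.

```python
# Sketch of the background-vs-test comparison described above.
# Hypothetical shapes and names: each power array is (n_subjects,
# n_channels), one band-power value per subject per electrode.
import numpy as np
from scipy import stats

def significant_channels(bg_power, test_power, ch_names, alpha=0.05):
    """Paired t-test per channel: melody trial vs. eyes-closed background."""
    t, p = stats.ttest_rel(test_power, bg_power, axis=0)
    return [(ch, t[i], p[i]) for i, ch in enumerate(ch_names) if p[i] <= alpha]

rng = np.random.default_rng(0)
channels = ["F7", "F8", "T3", "T4", "T5", "T6", "O1", "O2"]
bg = rng.normal(10.0, 2.0, size=(13, len(channels)))    # 13 subjects
test = bg + rng.normal(0.5, 2.0, size=bg.shape)         # simulated trial
for ch, t, p in significant_channels(bg, test, channels):
    print(f"{ch}: t = {t:+.2f}, p = {p:.3f}")
```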
Data processing system
The main purpose of data processing was to compare the subjects' self-reports, the observed patterns in the brain rhythms, and the localization of the centers of electrical activity responsible for the origin of these patterns. For this, software packages were used that provide spectral analysis of the recorded signals and topographic mapping of the spectral characteristics of the EEG [12]. As a result, the distribution of the power of the frequency spectrum over individual surface areas of the cerebral hemispheres was obtained. Statistical analysis was used to determine the reliability of both the individual and the group changes.
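As a generic illustration of the spectral step (the study itself used dedicated mapping software [12]; the sampling rate, array shapes, and names below are assumptions), per-channel power in the standard EEG rhythm ranges can be estimated from a Welch periodogram:

```python
# Generic sketch: per-channel spectral power in the standard EEG bands
# from a Welch periodogram. Sampling rate and shapes are assumptions;
# the study itself used dedicated mapping software [12].
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (7.5, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs):
    """eeg: (n_channels, n_samples); returns {band: power per channel}."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))   # 2 s windows
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = np.trapz(psd[:, mask], freqs[mask], axis=1)
    return out

fs = 250.0                                  # assumed sampling rate, Hz
eeg = np.random.randn(24, int(fs * 60))     # 24 channels, 60 s of signal
for band, power in band_powers(eeg, fs).items():
    print(band, power[:3].round(3))
```

Mapping then amounts to projecting these per-channel values onto the electrode positions of the 10-20 scheme.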
To identify the clearest reactions during individual tests, the localization and distribution of equivalent dipole sources (EDS) of electrical activity in the deep brain structures were calculated. To do this, the localization of EDSs, computed at each time point (equal to the discrete readout interval of data input into the computer), was determined and tracked within the human brain. The program traced the emergence of centers of electrical activity in brain structures under various emotional states. The correspondence between the localization of sources in the deep structures of the brain and the bioelectric activity recorded on the surface of the head was established by a special algorithm that treats the brain as a volumetric electrical conductor [13, 14].
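The EDS idea can be sketched as follows: assume one current dipole, compute its forward potentials at the electrodes, and search for the position and moment that best explain the measured potentials at a given time sample. The sketch below uses an infinite homogeneous conductor as a deliberately simplified stand-in for the realistic volume-conductor models of [13, 14]; all positions, values, and names are illustrative.

```python
# Minimal sketch of equivalent-dipole-source (EDS) localization: grid
# search for one current dipole whose forward potentials best explain
# the electrode potentials at one time sample. An infinite homogeneous
# conductor replaces the realistic volume-conductor models of [13, 14].
import numpy as np

def forward(dip_pos, elec_pos, sigma=0.33):
    """Lead field (n_elec, 3) of a unit dipole at dip_pos (meters)."""
    d = elec_pos - dip_pos
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return d / (4 * np.pi * sigma * r ** 3)

def fit_dipole(v, elec_pos, grid):
    """Best (position, moment, goodness-of-fit) over candidate positions."""
    best = (None, None, -np.inf)
    for pos in grid:
        L = forward(pos, elec_pos)
        q, *_ = np.linalg.lstsq(L, v, rcond=None)    # optimal moment
        gof = 1.0 - np.sum((v - L @ q) ** 2) / np.sum(v ** 2)
        if gof > best[2]:
            best = (pos, q, gof)
    return best

rng = np.random.default_rng(1)
dirs = rng.normal(size=(24, 3))
elec = 0.09 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)  # ~9 cm scalp
true_pos = np.array([0.02, 0.0, 0.04])
v = forward(true_pos, elec) @ np.array([1e-8, 0.0, 2e-8])   # synthetic sample
grid = [np.array([x, y, z])
        for x in np.linspace(-0.05, 0.05, 11)
        for y in np.linspace(-0.05, 0.05, 11)
        for z in np.linspace(0.0, 0.06, 7)]
pos, q, gof = fit_dipole(v, elec, grid)
print("position:", pos, "fit:", round(float(gof), 4))
```

Here the goodness-of-fit plays the role of the dipole coefficient (DC) reported in Fig. 1: a source is accepted only when the model explains nearly all of the measured signal.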
RESULTS & DISCUSSION
Auditory perception of a piece of music is a complex multi-stage process. The main physical parameters of music perceived first are tempo/rhythm, tonality/melody, loudness/expression/dynamics, and spectrum. They are recognized by the primary centers of auditory perception located in the temporal lobes of the cerebral cortex – cytoarchitectonic fields 41 and 42 according to Brodmann [15]. However, the projections of the sensory systems into the cortex are much more complicated. They include simultaneous streams of primary information of different functionality, so the core of the auditory analyzer is not their only destination [16].
Primary responses in the cerebral cortex are recorded with a latency of 10 to 100 ms. Accordingly, changes in the spectral characteristics of the EEG are expected in the gamma (30-45 Hz), beta1,2 (13-30 Hz), and alpha (7.5-13 Hz) ranges, mainly in the zones of primary representation, both in the cerebral cortex and in the subcortex.
The auditory analyzer nucleus tends to respond to pure tones, with finely tuned low-frequency ranges and tonotopic isofrequency conduction. Most of its signals come directly from the ventral thalamus. Around the central nucleus lies the zone of the auditory cortex; its main thalamic inputs arrive from the dorsal and medial nuclei, as well as from the ventral thalamic nucleus. The subcortical structures at this level respond to specific features of acoustic stimuli, and the auditory cortex integrates them, providing recognition of the physical characteristics of sound. The third region, the parazone, lies ventral to the zone and is tightly connected with it, but has almost no connections with the nucleus. The parazone receives cortical inputs from non-auditory areas adjacent to the superior temporal sulcus. This circuitry can play an important role in polysensory processing (e.g., audiovisual interactions). From the parazone and zone, sound signals proceed to a fourth level of neural processing within the temporal, parietal, and frontal lobes. This implies that many areas of the brain, even those not strictly considered sound-processing centers, receive sound impulses and are critical for their correct processing.
Both slow (<30 Hz) and accelerated (>50 Hz) temporal structures of sound can be distinguished. An accelerated pace that outstrips the neural response rate requires other processing strategies. With a high intensity of incoming signals, oscillations in the millisecond range arise in the temporal structures, which is important for the perception of music.
While listening to the melody, the subject's brain first perceives the physical, psychoacoustic, and musical characteristics of the audio stream tuned to the expression of an emotion (in this case, sadness). Since each melody contains a unique sequence of physical characteristics and their combinations, the brain likewise perceives this sequence of sounds and their combinations first.
These responses of the brain to sound signals are expressed by the appearance of numerous EDSs not only in the nucleus of the auditory analyzer but also in its zone and parazone. As expected, the responses and their EDSs are most pronounced in the high-frequency range of the EEG gamma rhythm. Column 4 of Figure 1 lists not only Brodmann's fields in the cerebral cortex but also the nuclei of the thalamus, the hippocampus, the fornix and mammillary bodies, the cingulate gyrus, the amygdala, the striatum, and other formations related to the emergence and regulation of emotions. This means that sound signals arrive in several simultaneous streams into various structures of the brain, exciting the unique space-time sequence and unique combinations of reactions to sounds laid down in the piece of music.
| EEG frequency | Dipole coefficient (DC) | Localization of equivalent dipole sources in brain structures |
|---|---|---|
| 45-70 Hz | 0.95 | Brodmann areas 41, 42 (pink), 20, 21, 22, 28; gyrus parahippocampalis, hippocampus, corpus amygdaloideum, corpus mamillare, nucleus lentiformis |
| 30-45 Hz | 0.96 | Brodmann areas 41, 42 (pink), 20, 21, 22, 28, 36, 38; putamen, substantia nigra, hippocampus, corpus amygdaloideum, corpus mamillare |
| 14-30 Hz | 0.97 | Brodmann areas 41, 42 (pink), 17, 18, 19, 21, 23, 27, 29, 30, 31, 36, 37; mesencephalon, putamen, nucleus lentiformis, hippocampus, pulvinar thalami, cauda nuclei caudati |
| 11-13 Hz | 0.98 | Brodmann areas 17, 18, 19, 22, 27, 28, 37; mesencephalon, putamen, nucleus lentiformis, pulvinar thalami, nucleus posterior lateralis, nucleus posterior ventromedialis, hippocampus |
| 9-11 Hz | 0.96 | Brodmann areas 41, 42 (pink), 5, 7, 17, 18, 19, 27, 30, 31, 35, 36, 37; putamen, gyrus parahippocampalis, nucleus posterior lateralis, nucleus posterior ventromedialis |
| 7-9 Hz | 0.98 | Brodmann areas 17, 18, 19, 21, 23, 24, 27, 28, 29, 30, 31, 36, 37; cauda nuclei caudati, corpus mamillare, pulvinar thalami, nucleus medialis dorsalis, substantia nigra, nucleus ruber, gyrus parahippocampalis |
| 4-7 Hz | 0.98 | Brodmann areas 11, 17, 18, 19, 22, 25, 28, 47; corpus mamillare, putamen, cauda nuclei caudati, globus pallidus lateralis & medialis, hippocampus, pulvinar thalami, nuclei ventrolateralis and medialis dorsalis, nucleus ruber |
| 0.5-4 Hz | 0.97 | Brodmann areas 3, 4, 10, 11, 17, 18, 20, 22, 25, 28, 31, 34, 35, 38; mesencephalon, corpus mamillare, thalamus, nuclei ventrolateralis and medialis dorsalis, globus pallidus lateralis & medialis, nucleus ruber, corpus amygdaloideum |
Fig. 1. Distribution of equivalent dipole sources of highly active electrical potentials in brain structures while listening to a melody tuned to the emotion of sadness (example). Columns 1 and 2 – source parameters (EEG frequency and dipole coefficient); column 3 (image not reproduced) – the distribution of sources in the brain structures, shown in the sagittal projection of the head, with the primary auditory zones (Brodmann areas 41 and 42) marked in pink; column 4 – names of the brain structures containing sources of increased activity.
For each subject, the picture of the distribution of the EDSs of the EEG rhythms is, on the one hand, unique and, on the other, does not go beyond the designated brain structures. At the same time, each EDS may be a product of forecasting the subsequent development of the script of the musical work.
The above-mentioned structures of the brain are known to have multiple neural connections, both among themselves and ascending to the cerebral cortex and descending through the spinal cord to the interoceptive fields of individual organs. Ascending influences can activate the corresponding area of the cerebral cortex, which can be accompanied by a change in the psychoemotional state. The degree of excitation of the interoceptive fields by descending impulses, through feedback involving the cortex of the left hemisphere, produces a reaction assessing the changes in the person's state. This corresponds to the body budgeting described by L.F. Barrett [2].
As expected, each subject had clear emotional reactions when listening to a melody; these were noted during the survey and reflected in statistically significant changes in the spectral characteristics of the EEG. In all test trials, significant changes in spectral power related primarily to the temporal lobes and the zones of auditory perception. The occipital region with the zones of visual perception, the frontal regions, and the associative zones of the parietal regions also participated. Listening to some dance melodies caused activation of the motor and sensory zones in the central region.
The statistical analysis of the group data was expected to reproduce the reliability of the individual changes in the spectral characteristics. However, only well-defined trends were obtained.
| Emotion | Standard beta range, 13-30 Hz | Standard gamma range, 30-45 Hz |
|---|---|---|
| SADNESS | F8 (28-29); Fz, F4, F8 (29-30); T3 (15-16); T5 (21-22) | C4 (36-37); Oz, O2 (41-42); T4 (37-38); T5 (36-37); T3, F7 (38-39) |
| JOY | – | F8 (36-37); C4, P4 (39-40); O2, T6 (38-39); F8 (41-42); F7, Pz (39-40); T3 (42-43); T3 (44-45); F7, Oz (37-38) |
| INSPIRATION | F7 (22-23); Oz, O2 (22-23) | F7 (39-40); T3, O1 (42-43); Cz (35-36); Fz (40-41) |
| ANXIETY | C4 (22-23) | C3, Cz, Pz (35-36); T4 (32-33); F8 (41-42); T6 (43-44) |
| EUPHORIA | – | Pz, Cz, Fz (34-35); O1, Oz, T5 (34-35); F8 (41-42); F7 (33-34); T5 (39-40); Fz (25-26); F8 (37-38); F8 (38-39) |

Each entry lists the electrode followed by the narrow frequency band in Hz.
Fig. 2. Neuro-mapping of the cerebral cortex zones: red – an increase, blue – a decrease in the auto-spectra of percentage power (% of the standard beta or gamma range) in the narrower frequency ranges indicated at the right of each map. Statistically significant changes (p <= 0.05) are observed mainly in the base gamma range. The base beta range shows no reliable group changes when listening to certain melodies (JOY, EUPHORIA).
The group analysis, in turn, revealed statistically significant deviations of the spectral power of individual narrow-band rhythms. These deviations were assessed relative to the spectral power of the standard delta, beta, and gamma ranges that include the individual narrow bands. The statistic thus reflects the percentage ratio of the spectral power of a narrow frequency band to the spectral power of its standard range, as the sketch below illustrates.
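In code form, this percentage-power statistic might look as follows (a sketch only; the narrow-band edges, sampling rate, and names are assumptions):

```python
# Percentage power of a narrow band relative to its standard range,
# the group statistic described above (a sketch; names and the chosen
# narrow band are assumptions).
import numpy as np
from scipy.signal import welch

def percent_power(eeg, fs, narrow, standard):
    """Narrow-band power as % of its standard-range power, per channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    def band(lo, hi):
        m = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[:, m], freqs[m], axis=1)
    return 100.0 * band(*narrow) / band(*standard)

fs = 250.0
eeg = np.random.randn(24, int(fs * 60))
# e.g. the 36-37 Hz narrow band inside the standard 30-45 Hz gamma range
print(percent_power(eeg, fs, narrow=(36, 37), standard=(30, 45)).round(2))
```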
The beta and gamma rhythm ranges show significant changes in percentage power; no significant changes were observed in the standard theta and alpha ranges. The reliability of these changes suggests that the identified narrow-band oscillations reflect direct reactions to the physical features of sound in the melody, which is why they fall within the standard gamma and beta ranges. Due to the stable research conditions, the results obtained when listening to melodies with different emotions showed a specific response to each melody. Notably, this specificity is measured as a percentage of the standard range. This means that the spectral characteristics of the standard rhythm ranges themselves remain within certain limits of stability: the overall system remains balanced.
Against this balanced background, changes occur most often in the temporal and frontal areas of the cortex (the number of cases is indicated in parentheses).
For selected narrow-band oscillations of the standard gamma range:
- increase in percentage power: T3(6), F8(4), T5(3), Pz(3), F7(2), C3(2), Cz(2), F4(2), C4(2), P4(2), Oz(2), Fz, O1, O2, T4;
- decrease in percentage power: F7(3), Oz(3), O1(4), T5(2), T6(2), F8(5), Cz(2), Fz(2), P4(2), C4, T4, O2, Pz, T3, P3, C3, F3.
For selected narrow-band oscillations of the standard beta range:
- increase in percentage power: F8, Fz, F4, F8, C4, F7;
- decrease in percentage power: T3, T5, O2, Oz.
It is important to note that, in the standard beta range, melodies with negative emotions (sadness, anxiety) produce a significant increase in percentage power in narrow bands of the right hemisphere, while melodies with positive emotions (joy, inspiration, euphoria) cause the same in the left hemisphere in the standard beta and gamma ranges.
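A minimal sketch of such a hemispheric comparison, assuming a conventional 10-20 channel grouping and simulated per-channel percentage-power changes, is:

```python
# Sketch of a left/right asymmetry check on per-channel changes in
# percentage power (assumed 10-20 channel grouping; data are simulated).
import numpy as np

LEFT = ["Fp1", "F3", "F7", "C3", "T3", "P3", "T5", "O1"]
RIGHT = ["Fp2", "F4", "F8", "C4", "T4", "P4", "T6", "O2"]

def asymmetry(change, idx):
    """Mean change over left-hemisphere minus right-hemisphere channels."""
    left = np.mean([change[idx[c]] for c in LEFT])
    right = np.mean([change[idx[c]] for c in RIGHT])
    return left - right          # > 0: left-hemisphere dominance

channels = LEFT + RIGHT + ["Fz", "Cz", "Pz", "Oz"]
idx = {c: i for i, c in enumerate(channels)}
change = np.random.default_rng(2).normal(0.0, 1.0, len(channels))
print(f"asymmetry index: {asymmetry(change, idx):+.2f}")
```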
Thus, we can speak of specific schemes of increase and decrease in the spectral power of a set of narrow-band rhythms for each melody listened to. This makes it possible to predict the impact of a soundtrack on a wider audience. The question remains, however, why individual listening to emotional melodies gives vivid, reliable results, while the group analysis shows only trends.
What matters is the initial emotional level of perception and the general set of interactions between the zones of the cortex and the deeper structures within each hemisphere, as well as between the hemispheres themselves.
Each person has his or her own psychophysiological and neurophysiological characteristics of perception. The task of this study is to determine the signs of the neurophysiological influence of emotional melodies, that is, the direction of the influence of the basic emotions expressed in melodies, regardless of the initial psychoemotional state.
Fig. 3. Neuro-mapping of changes in the spectral power of EEG rhythms of 0.5-45 Hz while listening to melodies with different emotional content. The headings of panels 1-5 indicate the dominant emotion of the melody being listened to: 1. Sadness; 2. Joy; 3. Inspiration; 4. Anxiety; 5. Euphoria. Panel 6 presents the EEG derivation diagram (the standard 10/20 arrangement of electrodes on the head surface). Each panel presents maps of the cerebral cortex with the distribution of spectral power at a 1 Hz pitch. On the right is the general scale of the reliability coefficients according to Student's t-test, with the values of the confidence levels below it.
Group statistics show that listening to each musical composition causes a unique combination of activated cortical zones and oscillation frequencies. Tracking changes in the spectral power of EEG oscillations at a 1 Hz pitch by mapping over the entire surface of the cerebral cortex offers several advantages in evaluating the data and their reliability, owing to their constant and sufficiently high reproducibility.
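A sketch of this group mapping step (assumed shapes: 13 subjects, 24 channels, 1 Hz bins over 0.5-45 Hz; the real computation was done with the mapping software [12]):

```python
# Sketch of the group mapping in Fig. 3: a Student's t-test of the
# background-to-test change in every 1 Hz bin and channel (assumed
# shapes: 13 subjects, 24 channels, 0.5-45 Hz at a 1 Hz pitch).
import numpy as np
from scipy import stats

def tmap_1hz(delta_power):
    """delta_power: (n_subjects, n_channels, n_bins) changes in power."""
    return stats.ttest_1samp(delta_power, popmean=0.0, axis=0)

rng = np.random.default_rng(3)
delta = rng.normal(0.0, 1.0, (13, 24, 45))   # simulated power changes
t, p = tmap_1hz(delta)
significant = np.argwhere(p <= 0.05)         # (channel, bin) cells to map
print(f"{len(significant)} significant (channel, 1 Hz bin) cells")
```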
The tendencies of changes in the spectral power of the EEG over the surface of the cerebral cortex, given the controlled conditions of the study, relate directly to listening to the melodies and reflect changes in the psychoemotional state of the subject. The temporal lobes and frontal areas of both hemispheres are known to be primarily responsible for the quality of the psychoemotional effect; other parts of the brain can additionally interact with these areas [16].
Figure 3.1 shows a series of maps of the cerebral cortex in which, while listening to a melody expressing sadness, an increase in the power of gamma rhythms occurs in the mid- and posterior temporal zones, as well as in the frontal regions and along the sagittal line. This distribution is a hallmark of the brain's response to the melody of sadness.
While listening to a melody of inspiration (Figure 3.3) or euphoria (Figure 3.5), an increase in spectral power prevails in the anterior cerebral cortex and in the left hemisphere. This distribution is typical of positive emotions. For the melody conditionally calibrated as inspiration, the increase in rhythm power starts at 3-4 Hz and above; for euphoria, at 7-8 Hz and above. The most active zones are the frontotemporal F7 and F8, with T3 more active on the left when listening to inspiration; the visual zones O1 and Oz additionally become active. The combination of auditory and visual zones adds nuances to the emotional perception of the melodies. Thus, there are both similarities and differences in the brain responses to the two positive emotions. Listening to a melody expressing joy (Figure 3.2) is accompanied by an increase in the spectral power of beta and gamma rhythms in the sensorimotor zones (C3, C4). The activation of the cortical sensorimotor zones occurs under the influence of the ascending systems of the central nuclei of the thalamus and possibly reflects imagined movement or the desire to move. This is typical for dance tunes and for some emotions such as anger and aggression.
Listening to a melody expressing anxiety (Figure 3.4) is accompanied by an increase in the spectral power of delta and theta rhythms, mainly in the right hemisphere. At the same time, there is a decrease in spectral power in the ranges of beta and gamma rhythms. The distribution of the spectral power of the EEG rhythms is unique for a given melody and does not repeat with other melodies.
The results obtained showed clear and ordered tendencies of changes in the spectral characteristics of the EEG in certain areas of the cerebral cortex. An increase in the spectral power of EEG rhythms was observed mainly in the temporal zones and, for some musical compositions, additionally in the central (sensorimotor), occipital (visual), and frontal regions. There are clear differences in EEG characteristics when listening to different audio tracks expressing different emotional states. Given the randomized composition of the group, the similarity of the neurophysiological reactions and of the psychoemotional reports of the perception of melodies (expressing not only the sign of emotion but also selected classical emotional states) allows us to acknowledge the effectiveness of the developed technique. It can be used to create group statistical patterns of the perception of each specific melody.
CONCLUSION
The changes in the spectral characteristics of the EEG when listening to musical melodies expressing such emotional states as sadness, joy, inspiration, anxiety, and euphoria were studied, and a research methodology was developed.
The study revealed the ambiguity of the individual emotions evoked by the test melodies. However, each emotional state was reflected in features of music perception characteristic of the group as a whole, regardless of the personal level of empathy of each operator.
The use of group statistical analysis with neuro-mapping made it possible to identify algorithms of brain activity that correspond to the general signs of emotional states characteristic of the group and that arise when listening to a specific musical melody expressing a dominant emotion.
FUTURE WORK
Objective neurophysiological methods of studying the emotional responses of a group of subjects to musical influences confirm the possibility of developing and using an express method for predicting the brain's reaction to a musical audio track. These studies are relevant for developments in engineering psychology, psychoacoustics, and highly reliable algorithms for emotion recognition. The data obtained on the mechanisms of the influence of music on human emotions can also be used to develop the concept of the emotional component in artificial intelligence systems.
REFERENCES
[1] Lapshina T.N. Psychophysiological Diagnostics of Human Emotions by EEG Indicators. Author's abstract, candidate of psychological sciences, Moscow, 2007, 26 p.
[2] Barrett L.F. How Emotions Are Born: A Revolution in Understanding the Brain and Managing Emotions. Mann, Ivanov and Ferber LLC, 2018, 1168 p.
[3] Hafsa Mahin, Bi Bi Hajira, Meghana B.L., Arpitha Y.A., Niveditha H.R. (Dept. of ECE, MIT, Mysore, Karnataka, India). Human Basic Emotion Recognition from EEG Signals using IOT. International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, 2020, Volume 8, Issue 11: 47-50. Published by www.ijert.org
[4] Sviriaev A. How Music Controls Our Emotions. https://a-viryaev.livejournal.com/18986.html
[5] Dzhandoletova B.S. Expression and Perception of Emotions in Multi-System Languages: Experimental Phonetic Research Based on the Material of the Kyrgyz and German Languages. Author's abstract, candidate of philological sciences, 2019, 215 p.
[6] Anwar Y. Ooh là là! Music Evokes 13 Key Emotions; Scientists Have Mapped Them. Berkeley News, January 6, 2020.
[7] Whiteman H. Some Emotional Responses to Music Are Universal, Study Finds (on the study by Kawakami A., Furukawa K., Katahira K., Okanoya K.). Medical News Today, 2015. https://www.medicalnewstoday.com/articles/287680
[8] Lapshina T.N. EEG Indication of Human Emotional States. Bulletin of Moscow State University, Ser. 14 "Psychology", 2004, No. 2, pp. 101-102.
[9] Kanai R., Lloyd H., Bueti D., Walsh V. Modality-Independent Role of the Primary Auditory Cortex in Time Estimation. Exp Brain Res, 2011 Mar; 209(3): 465-471.
[10] Kawakami A., Furukawa K., Katahira K., Okanoya K. Sad Music Induces Pleasant Emotion. Front. Psychol., 13 June 2013. https://doi.org/10.3389/fpsyg.2013.00311
[11] Homan R.W., Herman J., Purdy P. Cerebral Location of International 10-20 System Electrode Placement. EEG and Clinical Neurophysiology, 1987; 66: 376-382.
[12] Mitrofanov A.A. Brainsys Brain Analysis and Mapping Computer System. Statokin, 1999, 65 p.
[13] Gnezditskii V.V. EEG Inverse Problem and Clinical Electroencephalography (Mapping and Localization of Sources of the Electrical Activity of the Brain). Moscow, MEDpress-inform, 2004, 624 p.
[14] Koptelov Iu.M. Research and Numerical Solution of Some Inverse Problems of Electroencephalography. Author's abstract, candidate of physical and mathematical sciences, Moscow, 1988, 29 p.
[15] Brodmann K. Vergleichende Lokalisationslehre der Grosshirnrinde: in ihren Principien dargestellt auf Grund des Zellenbaues. Leipzig: Johann Ambrosius Barth Verlag, 1909.
[16] Alluri V., Toiviainen P., Jääskeläinen I.P., Glerean E., Sams M., Brattico E. Large-Scale Brain Networks Emerge from Dynamic Processing of Musical Timbre, Key and Rhythm. NeuroImage, Volume 59, Issue 4, 15 February 2012, pp. 3677-3689.
BIOGRAPHICAL NOTES
Koekina Olga, Candidate of Medical Sciences, Director of the Scientific Center for Consciousness Research. Her area of interest is EEG signal processing and networking applications.
Kuziaev Aleksandr, Head of the Chaos Laboratory, author of the EMUSE emotion recognition concept.
Research interests:
- studying hidden patterns in chaotic flows, forecasting;
- algorithms for recognizing emotions in music;
- synthesis of interactive music.