References
[1] K. Hevner, “Experimental studies of the elements of expression in music,” The American Journal of Psychology, vol. 48, no. 2, pp. 246-268, 1936.
[2] J. A. Russell, “A circumplex model of affect,” Journal of Personality and Social Psychology, vol. 39, no. 6, p. 1161, 1980.
[3] R. E. Thayer, The biopsychology of mood and arousal. Oxford University Press, 1990.
[4] E. Schubert, “Update of the Hevner adjective checklist,” Perceptual and Motor Skills, vol. 96, no. 3 suppl, pp. 1117-1122, 2003.
[5] C. M. Whissell, “The dictionary of affect in language,” in The Measurement of Emotions, Elsevier, 1989, pp. 113-131.
[6] D. Baum, “EmoMusic: Classifying music according to emotion,” in Proceedings of the 7th Workshop on Data Analysis (WDA2006), Citeseer, 2006.
[7] K. C. Dewi and A. Harjoko, “Kid's song classification based on mood parameters using k-nearest neighbor classification method and self organizing map,” in 2010 International Conference on Distributed Frameworks for Multimedia Applications, IEEE, pp. 1-5, 2010.
[8] Y.-H. Yang, Y.-C. Lin, Y.-F. Su, and H. H. Chen, “A regression approach to music emotion recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, no. 2, pp. 448-457, 2008.
[9] T. Li and M. Ogihara, “Content-based music similarity search and emotion detection,” in 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, IEEE, vol. 5, pp. V-705, 2004.
[10] D. Cabrera et al., “Psysound: A computer program for psychoacoustical analysis,” in Proceedings of the Australian Acoustical Society Conference, vol. 24, pp. 47-54, 1999.
[11] G. Tzanetakis and P. Cook, “Musical genre classification of audio signals,” IEEE Transactions on Speech and Audio Processing, vol. 10, no. 5, pp. 293-302, 2002.
[12] A. Sen and M. Srivastava, Regression analysis: theory, methods, and applications. Springer Science & Business Media, 2012.
[13] A. J. Smola and B. Schölkopf, “A tutorial on support vector regression,” Statistics and Computing, vol. 14, no. 3, pp. 199-222, 2004.
[14] D. P. Solomatine and D. L. Shrestha, “AdaBoost.RT: A boosting algorithm for regression problems,” in 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541), IEEE, vol. 2, pp. 1163-1168, 2004.
[15] E. Schubert, Measurement and time series analysis of emotion in music. University of New South Wales Sydney, vol. 1, 1999.
[16] A. Hanjalic and L.-Q. Xu, “Affective video content representation and modeling,” IEEE Transactions on Multimedia, vol. 7, no. 1, pp. 143-154, 2005.
[17] Y.-H. Yang, C.-C. Liu, and H. H. Chen, “Music emotion classification: A fuzzy approach,” in Proceedings of the 14th ACM International Conference on Multimedia, pp. 81-84, 2006.
[18] M. D. Korhonen, D. A. Clausi, and M. E. Jernigan, “Modeling emotional content of music using system identification,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 36, no. 3, pp. 588-599, 2006.
[19] B.-J. Han, S. Rho, R. B. Dannenberg, and E. Hwang, “SMERS: Music emotion recognition using support vector regression,” in ISMIR, Citeseer, pp. 651-656, 2009.
[20] P. N. Juslin and J. A. Sloboda, Music and emotion: Theory and research. Oxford University Press, 2001.
[21] R. Sridharan, “Gaussian mixture models and the EM algorithm,” Available at: http://people.csail.mit.edu/rameshvs/content/gmm-em.pdf, 2014.
[22] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, pp. 1-27, 2011.
[23] Y. E. Kim, E. M. Schmidt, R. Migneco, B. G. Morton, P. Richardson, J. Scott, J. A. Speck, and D. Turnbull, “Music emotion recognition: A state of the art review,” in Proc. ISMIR, vol. 86, pp. 937-952, 2010.
[24] A. Mehrabian and J. A. Russell, An Approach to Environmental Psychology. The MIT Press, 1974.
[25] M. M. Bradley and P. J. Lang, “Affective norms for English words (ANEW): Instruction manual and affective ratings,” The Center for Research in Psychophysiology, Tech. Rep. C-1, 1999.
[26] Y. Hu, X. Chen, and D. Yang, “Lyric-based song emotion detection with affective lexicon and fuzzy clustering method,” in ISMIR, pp. 123-128, 2009.
[27] L. Mion and G. De Poli, “Score-independent audio features for description of music expression,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, no. 2, pp. 458-466, 2008.
[28] D. Yang and W.-S. Lee, “Disambiguating music emotion using software agents,” in ISMIR, vol. 4, pp. 218-223, 2004.
[29] K. Bischoff, C. S. Firan, R. Paiu, W. Nejdl, C. Laurier, and M. Sordo, “Music mood and theme classification: A hybrid approach,” in ISMIR, pp. 657-662, 2009.
[30] M. Soleymani, M. N. Caro, E. M. Schmidt, and Y.-H. Yang, “The MediaEval 2013 brave new task: Emotion in music,” in MediaEval, Citeseer, 2013.
[31] F. Zhang, H. Meng, and M. Li, “Emotion extraction and recognition from music,” in 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), IEEE, pp. 1728-1733, 2016.
[32] A. Aljanaki, F. Wiering, and R. C. Veltkamp, “MIRUtrecht participation in MediaEval 2013: Emotion in music task,” in MediaEval, Citeseer, 2013.
[33] J. Brownlee, Logistic Regression for Machine Learning, Mar. 2016. [Online]. Available: https://machinelearningmastery.com/logistic-regression-for-machine-learning