Publications by Roberto Bresin

Peer-reviewed

Articles

[1]
Latupeirissa, A. B. & Bresin, R. (2023). PepperOSC: enabling interactive sonification of a robot's expressive movement. Journal on Multimodal User Interfaces, 17(4), 231-239.
[2]
Latupeirissa, A. B., Panariello, C. & Bresin, R. (2023). Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification. ACM Transactions on Human-Robot Interaction.
[3]
Orthmann, B., Leite, I., Bresin, R. & Torre, I. (2023). Sounding Robots : Design and Evaluation of Auditory Displays for Unintentional Human-robot Interaction. ACM Transactions on Human-Robot Interaction, 12(4).
[4]
Favero, F., Lowden, A., Bresin, R. & Ejhed, J. (2023). Study of the Effects of Daylighting and Artificial Lighting at 59° Latitude on Mental States, Behaviour and Perception. Sustainability, 15(2), 1-21.
[6]
Sköld, M. & Bresin, R. (2022). Sonification of Complex Spectral Structures. Frontiers in Neuroscience, 16.
[7]
Panariello, C. & Bresin, R. (2022). Sonification of Computer Processes : The Cases of Computer Shutdown and Idle Mode. Frontiers in Neuroscience, 16.
[8]
Misdariis, N., Özcan, E., Grassi, M., Pauletto, S., Barrass, S., Bresin, R. & Susini, P. (2022). Sound experts’ perspectives on astronomy sonification projects. Nature Astronomy, 6(11), 1249-1255.
[10]
Bresin, R., Mancini, M., Elblaus, L. & Frid, E. (2020). Sonification of the self vs. sonification of the other : Differences in the sonification of performed vs. observed simple hand movements. International journal of human-computer studies, 144.
[11]
Frid, E., Elblaus, L. & Bresin, R. (2019). Interactive sonification of a fluid dance movement : an exploratory study. Journal on Multimodal User Interfaces, 13(3), 181-189.
[12]
Frid, E., Moll, J., Bresin, R. & Sallnäs Pysander, E.-L. (2018). Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task. Journal on Multimodal User Interfaces, 13(4), 279-290.
[14]
Elblaus, L., Tsaknaki, V., Lewandowski, V., Bresin, R., Hwang, S., Song, J. ... Taylor, A. (2015). Demo Hour. interactions, 22(5), 6-9.
[15]
Turchet, L. & Bresin, R. (2015). Effects of interactive sonification on emotionally expressive walking styles. IEEE Transactions on Affective Computing, 6(2), 152-164.
[16]
Dubus, G. & Bresin, R. (2015). Exploration and evaluation of a system for interactive sonification of elite rowing. Sports Engineering, 18(1), 29-41.
[17]
Goebl, W., Bresin, R. & Fujinaga, I. (2014). Perception of touch quality in piano tones. Journal of the Acoustical Society of America, 136(5), 2839-2850.
[20]
Eerola, T., Friberg, A. & Bresin, R. (2013). Emotional expression in music : Contribution, linearity, and additivity of primary musical cues. Frontiers in Psychology, 4, 487.
[21]
Hansen, K. F., Dravins, C. & Bresin, R. (2012). Active Listening and Expressive Communication for Children with Hearing Loss Using Getatable Environments for Creativity. Journal of New Music Research, 41(4), 365-375.
[22]
Bresin, R., Hermann, T. & Hunt, A. (2012). Interactive sonification. Journal on Multimodal User Interfaces, 5(3-4), 85-86.
[23]
Fabiani, M., Bresin, R. & Dubus, G. (2012). Interactive sonification of expressive hand gestures on a handheld device. Journal on Multimodal User Interfaces, 6(1-2), 49-57.
[24]
Varni, G., Dubus, G., Oksanen, S., Volpe, G., Fabiani, M., Bresin, R. ... Camurri, A. (2012). Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices. Journal on Multimodal User Interfaces, 5(3-4), 157-173.
[25]
Hansen, K. F. & Bresin, R. (2012). Sonification of distance between stations in train journeys. TMH-QPSR special issue: Proceedings of SMC Sweden 2012 Sound and Music Computing, Understanding and Practicing in Sweden, 52(1), 13-14.
[26]
Bolíbar, J. & Bresin, R. (2012). Sound feedback for the optimization of performance in running. TMH-QPSR special issue: Proceedings of SMC Sweden 2012 Sound and Music Computing, Understanding and Practicing in Sweden, 52(1), 39-40.
[27]
Hansen, K. F., Dubus, G. & Bresin, R. (2012). Using modern smartphones to create interactive listening experiences for hearing impaired. TMH-QPSR special issue: Proceedings of SMC Sweden 2012 Sound and Music Computing, Understanding and Practicing in Sweden, 52(1), 42.
[28]
Hansen, K. F., Fabiani, M. & Bresin, R. (2011). Analysis of the acoustics and playing strategies of turntable scratching. Acta Acustica united with Acustica, 97(2), 303-314.
[29]
Bresin, R. & Friberg, A. (2011). Emotion rendering in music : Range and characteristic values of seven musical variables. Cortex, 47(9), 1068-1081.
[30]
Burger, B. & Bresin, R. (2010). Communication of Musical Expression by Means of Mobile Robot Gestures. Journal on Multimodal User Interfaces, 3(1), 109-118.
[31]
Hansen, K. F. & Bresin, R. (2010). The Skipproof virtual turntable for high-level control of scratching. Computer music journal, 34(2), 39-50.
[32]
Visell, Y., Fontana, F., Giordano, B. L., Nordahl, R., Serafin, S. & Bresin, R. (2009). Sound design and perception in walking interactions. International journal of human-computer studies, 67(11), 947-959.
[33]
Mancini, M., Bresin, R. & Pelachaud, C. (2007). A virtual head driven by music expressivity. IEEE Transactions on Audio, Speech, and Language Processing, 15(6), 1833-1841.
[34]
Serra, X., Bresin, R. & Camurri, A. (2007). Sound and music computing : Challenges and strategies. Journal of New Music Research, 36(3), 185-190.
[35]
Friberg, A., Bresin, R. & Sundberg, J. (2006). Overview of the KTH rule system for musical performance. Advances in Cognitive Psychology, 2(2-3), 145-161.
[36]
Laukka, P., Juslin, P. N. & Bresin, R. (2005). A dimensional approach to vocal expression of emotion. Cognition & Emotion, 19(5), 633-653.
[37]
Schoonderwaldt, E. & Bresin, R. (2005). Book Review : Freedom and Constraints in Timing and Ornamentation: Investigations of Music Performance. Psychology of Music, 33(1), 122-128.
[38]
Goebl, W., Bresin, R. & Galembo, A. (2005). Touch and temporal behavior of grand piano actions. Journal of the Acoustical Society of America, 118(2), 1154-1165.
[39]
Hansen, K. F. & Bresin, R. (2004). Analysis of a genuine scratch performance. Lecture Notes in Computer Science, 2915, 477-478.
[40]
Sundberg, J., Friberg, A. & Bresin, R. (2003). Attempts to reproduce a pianist's expressive timing with Director Musices performance rules. Journal of New Music Research, 32(3), 317-325.
[41]
Lindström, E., Juslin, P. N., Bresin, R. & Williamon, A. (2003). Expressivity comes from within your soul: A questionnaire study of students' perspectives on musical expressivity. Research Studies in Music Education, 20, 23-47.
[42]
Goebl, W. & Bresin, R. (2003). Measurement and reproduction accuracy of computer-controlled grand pianos. Journal of the Acoustical Society of America, 114(4), 2273-2283.
[43]
Rocchesso, D., Bresin, R. & Fernström, M. (2003). Sounding objects. IEEE Multimedia, 10(2), 42-52.
[44]
Schoonderwaldt, E., Friberg, A., Bresin, R. & Juslin, P. N. (2002). A system for improving the communication of emotion in music performance by feedback learning. Journal of the Acoustical Society of America, 111(5), 2471.
[45]
Juslin, P. N., Friberg, A. & Bresin, R. (2002). Toward a computational model of expression in music performance: The GERM model. Musicae scientiae, Special Issue 2001-2002, 63-122.
[48]
Bresin, R. & Friberg, A. (2000). Emotional coloring of computer controlled music performance. Computer music journal, 24(4), 44-63.
[49]
Bresin, R. & Widmer, G. (2000). Production of staccato articulation in Mozart sonatas played on a grand piano : Preliminary results. Speech Music and Hearing Quarterly Progress and Status Report, 41(4), 001-006.
[50]
Bresin, R. (1998). Artificial neural networks based models for automatic performance of musical scores. Journal of New Music Research, 27(3), 239-270.
[51]
Friberg, A., Bresin, R., Frydén, L. & Sundberg, J. (1998). Musical punctuation on the microlevel : Automatic identification and performance of small melodic units. Journal of New Music Research, 27(3), 271-292.

Conference papers

[53]
Telang, S., Marques, M., Latupeirissa, A. B., Bresin, R. (2023). Emotional Feedback of Robots : Comparing the perceived emotional feedback by an audience between masculine and feminine voices in robots in popular media. In HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction. (pp. 434-436). Association for Computing Machinery (ACM).
[54]
Zhang, B. J., Orthmann, B., Torre, I., Bresin, R., Fick, J., Leite, I., Fitter, N. T. (2023). Hearing it Out : Guiding Robot Sound Design through Design Thinking. In 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). (pp. 2064-2071). Institute of Electrical and Electronics Engineers (IEEE).
[55]
Rafi, A. K., Murdeshwar, A., Latupeirissa, A. B., Bresin, R. (2023). Investigating the Role of Robot Voices and Sounds in Shaping Perceived Intentions. In HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction. (pp. 425-427). Association for Computing Machinery (ACM).
[56]
Goina, M., Bresin, R., Rodela, R. (2023). Our sound space (OSS) : an installation for participatory and interactive exploration of soundscapes. In SMC 2023: Proceedings of the Sound and Music Computing Conference 2023. (pp. 255-260). Sound and Music Computing Network.
[57]
Zojaji, S., Latupeirissa, A. B., Leite, I., Bresin, R., Peters, C. (2023). Persuasive polite robots in free-standing conversational groups. In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023). (pp. 1-8). Institute of Electrical and Electronics Engineers (IEEE).
[58]
Maranhao, T., Berrez, P., Kihl, M., Bresin, R. (2023). What is the color of choro? : Color preferences for an instrumental Brazilian popular music genre. In SMC 2023: Proceedings of the Sound and Music Computing Conference 2023. (pp. 370-376). Sound and Music Computing Network.
[59]
van den Broek, G., Bresin, R. (2022). Concurrent sonification of different percentage values : the case of database values about statistics of employee engagement. In Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022.
[60]
Larson Holmgren, D., Särnell, A., Bresin, R. (2022). Facilitating reflection on climate change using interactive sonification. In Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022.
[61]
Kantan, P. R., Dahl, S., Spaich, E. G., Bresin, R. (2022). Sonifying Walking : A Perceptual Comparison of Swing Phase Mapping Schemes. In Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022.
[62]
Bresin, R., Frid, E., Latupeirissa, A. B., Panariello, C. (2021). Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound. Presented at Workshop on Sound in Human-Robot Interaction at HRI 2021.
[63]
Myresten, E., Larson Holmgren, D., Bresin, R. (2021). Sonification of Twitter Hashtags Using Earcons Based on the Sound of Vowels. In Proceedings of the 2nd Nordic Sound and Music Computing Conference. Zenodo.
[64]
Latupeirissa, A. B., Panariello, C., Bresin, R. (2020). Exploring emotion perception in sonic HRI. In 17th Sound and Music Computing Conference. (pp. 434-441). Torino: Zenodo.
[65]
Bresin, R., Pauletto, S., Laaksolahti, J., Gandini, E. (2020). Looking for the soundscape of the future : preliminary results applying the design fiction method. In Sound and Music Computing Conference 2020.
[66]
Latupeirissa, A. B., Bresin, R. (2020). Understanding non-verbal sound of humanoid robots in films. Presented at Workshop on Mental Models of Robots at HRI 2020 in Cambridge, UK, Mar 23rd 2020.
[67]
Panariello, C., Sköld, M., Frid, E., Bresin, R. (2019). From vocal sketching to sound models by means of a sound-based musical transcription system. In Proceedings of the Sound and Music Computing Conferences. (pp. 167-173). CERN.
[68]
Han, X., Bresin, R. (2019). Performance of piano trills: effects of hands, fingers, notes and emotions. In Combined proceedings of the Nordic Sound and Music Computing Conference 2019 and the Interactive Sonification Workshop 2019. (pp. 9-15). Stockholm.
[69]
Latupeirissa, A. B., Frid, E., Bresin, R. (2019). Sonic characteristics of robots in films. In Proceedings of the 16th Sound and Music Computing Conference. (pp. 1-6). Malaga, Spain.
[70]
Frid, E., Lindetorp, H., Hansen, K. F., Elblaus, L., Bresin, R. (2019). Sound Forest - Evaluation of an Accessible Multisensory Music Installation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. (pp. 1-12). ACM.
[71]
Hansen, K. F., Bresin, R., Holzapfel, A., Pauletto, S., Gulz, T., Lindetorp, H., Misgeld, O., Sköld, M. (2019). Student involvement in sound and music computing research : Current practices at KTH and KMH. In Combined proceedings of the Nordic Sound and Music Computing Conference 2019 and the Interactive Sonification Workshop 2019. (pp. 36-42). Stockholm.
[72]
Serafin, S., Dahl, S., Bresin, R., Jensenius, A. R., Unnthorsson, R., Välimäki, V. (2018). NordicSMC : A Nordic university hub on sound and music computing. In Proceedings of the 15th Sound and Music Computing Conference: Sonic Crossings, SMC 2018. (pp. 124-128). Sound and Music Computing Network.
[73]
Frid, E., Bresin, R., Alexanderson, S. (2018). Perception of Mechanical Sounds Inherent to Expressive Gestures of a NAO Robot - Implications for Movement Sonification of Humanoids. In Proceedings of the 15th Sound and Music Computing Conference. Limassol, Cyprus.
[74]
Frid, E., Bresin, R., Sallnäs Pysander, E.-L., Moll, J. (2017). An Exploratory Study On The Effect Of Auditory Feedback On Gaze Behavior In a Virtual Throwing Task With and Without Haptic Feedback. In Proceedings of the 14th Sound and Music Computing Conference. (pp. 242-249). Espoo, Finland.
[75]
Paloranta, J., Lundström, A., Elblaus, L., Bresin, R., Frid, E. (2016). Interaction with a large sized augmented string instrument intended for a public setting. In Sound and Music Computing 2016. (pp. 388-395). Hamburg: Zentrum für Mikrotonale Musik und Multimediale Komposition (ZM4).
[76]
Singh, A., Tajadura-Jiménez, A., Bianchi-Berthouze, N., Marquardt, N., Tentori, M., Bresin, R., Kulic, D. (2016). Mind the Gap: A SIG on Bridging the Gap in Research on Body Sensing, Body Perception and Multisensory Feedback. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. (pp. 1092-1095). New York, NY, USA.
[77]
Bresin, R., Elblaus, L., Frid, E., Favero, F., Annersten, L., Berner, D., Morreale, F. (2016). Sound Forest/Ljudskogen : A large-scale string-based interactive musical instrument. In Sound and Music Computing 2016. (pp. 79-84). Sound and Music Computing Network.
[78]
Frid, E., Elblaus, L., Bresin, R. (2016). Sonification of fluidity : An exploration of perceptual connotations of a particular movement feature. In Proceedings of ISon 2016, 5th Interactive Sonification Workshop. (pp. 11-17). Bielefeld, Germany.
[79]
Elblaus, L., Tsaknaki, V., Lewandowski, V., Bresin, R. (2015). Nebula: An Interactive Garment Designed for Functional Aesthetics. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. (pp. 275-278). New York, NY, USA: ACM.
[80]
Goina, M., Robitaille, M.-A., Bresin, R. (2014). Interactive sonification in circus performance at Uniarts and KTH : ongoing research. In Proceedings of the Sound and Music Computing Sweden Conference 2014. (pp. 23-24). KTH Royal Institute of Technology.
[81]
Elblaus, L., Goina, M., Robitaille, M. A., Bresin, R. (2014). Modes of sonic interaction in circus : Three proofs of concept. In Proceedings of Sound and Music Computing Conference 2014. (pp. 1698-1706). Athens.
[82]
Bresin, R., Elblaus, L., Falkenberg Hansen, K., Månsson, L., Tardat, B. (2014). Musikcyklarna/Music bikes : An installation for enabling children to investigate the relationship between expressive music performance and body motion. In Proceedings of the Sound and Music Computing Sweden Conference 2014. (pp. 1-2). KTH Royal Institute of Technology.
[83]
Elblaus, L., Hansen, K. F., Bresin, R. (2014). NIME Design and Contemporary Music Practice : Benefits and Challenges. Presented at Workshop on Practice-Based Research in New Interfaces for Musical Expression, NIME 2014.
[84]
Frid, E., Bresin, R., Moll, J., Sallnäs Pysander, E.-L. (2014). Sonification of haptic interaction in a virtual scene. In Sound and Music Computing Sweden 2014, Stockholm, December 4-5, 2014. (pp. 14-16).
[85]
Dubus, G., Hansen, K. F., Bresin, R. (2012). An overview of sound and music applications for Android available on the market. In Proceedings of the 9th Sound and Music Computing Conference, SMC 2012. (pp. 541-546). Sound and Music Computing Network.
[86]
Hansen, K. F., Bresin, R. (2012). Use of soundscapes for providing information about distance left in train journeys. In Proceedings of the 9th Sound and Music Computing Conference, SMC 2012. (pp. 79-84). Sound and Music Computing Network.
[87]
Hansen, K. F., Dravins, C., Bresin, R. (2011). Ljudskrapan/The Soundscraper : Sound exploration for children with complex needs, accommodating hearing aids and cochlear implants. In Proceedings of the 8th Sound and Music Computing Conference, SMC 2011. (pp. 70-76). Sound and Music Computing Network.
[88]
Fabiani, M., Dubus, G., Bresin, R. (2011). MoodifierLive: Interactive and collaborative music performance on mobile devices. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME11).
[89]
Dubus, G., Bresin, R. (2011). Sonification of physical quantities throughout history: a meta-study of previous mapping strategies. In Proceedings of the 17th International Conference on Auditory Display (ICAD 2011). Budapest, Hungary: OPAKFI Egyesület.
[90]
Bresin, R., de Witt, A., Papetti, S., Civolani, M., Fontana, F. (2010). Expressive sonification of footstep sounds. In Proceedings of ISon 2010: 3rd Interactive Sonification Workshop. (pp. 51-54). Stockholm, Sweden: KTH Royal Institute of Technology.
[91]
Eriksson, M., Bresin, R. (2010). Improving running mechanics by use of interactive sonification. In Proceedings of ISon 2010, 3rd Interactive Sonification Workshop. (pp. 95-98). Stockholm, Sweden: KTH Royal Institute of Technology.
[92]
Fabiani, M., Dubus, G., Bresin, R. (2010). Interactive sonification of emotionally expressive gestures by means of music performance. In Proceedings of ISon 2010, 3rd Interactive Sonification Workshop. (pp. 113-116). Stockholm, Sweden: KTH Royal Institute of Technology.
[93]
Dubus, G., Bresin, R. (2010). Sonification of sculler movements, development of preliminary methods. In Proceedings of ISon 2010, 3rd Interactive Sonification Workshop. (pp. 39-43). Stockholm, Sweden: KTH Royal Institute of Technology.
[94]
Camurri, A., Bevilacqua, F., Bresin, R., Maestre, E., Penttinen, H., Seppänen, J., Välimäki, V., Volpe, G., Warusfel, O. (2009). Embodied music listening and making in context-aware mobile applications : the EU-ICT SAME Project. Presented at The 8th International Gesture Workshop. Bielefeld, Germany. Feb 25-27, 2009.
[95]
Friberg, A., Bresin, R., Hansen, K. F., Fabiani, M. (2009). Enabling emotional expression and interaction with new expressive interfaces. In Front. Hum. Neurosci. Conference Abstract: Tuning the Brain for Music.
[96]
Bresin, R., Delle Monache, S., Fontana, F., Papetti, S., Polotti, P., Visell, Y. (2008). Auditory feedback through continuous control of crumpling sound synthesis. In Proceedings of Sonic Interaction Design: Sound, Information and Experience. A CHI 2008 Workshop organized by COST Action IC0601. (pp. 23-28). IUAV University of Venice.
[97]
Hansen, K. F., Bresin, R., Friberg, A. (2008). Describing the emotional content of hip-hop DJ recordings. In The Neurosciences and Music III. (p. 565). Montreal: New York Academy of Sciences.
[98]
Vitale, R., Bresin, R. (2008). Emotional cues in knocking sounds. In Proc. of the 10th International Conference on Music Perception and Cognition. (p. 276).
[99]
Bresin, R., Friberg, A. (2008). Influence of Acoustic Cues on the Expressive Performance of Music. In Proceedings of the 10th International Conference on Music Perception and Cognition. Sapporo, Japan.
[100]
Rocchesso, D., Serafin, S., Behrendt, F., Bernardini, N., Bresin, R., Eckel, G., Franinovic, K., Hermann, T., Pauletto, S., Susini, P., Visell, Y. (2008). Sonic Interaction Design : Sound, Information and Experience. In Conference on Human Factors in Computing Systems - Proceedings. (pp. 3969-3972). New York, NY, USA: ACM.
[101]
Bjurling, J., Bresin, R. (2008). Timing in piano music : Testing a model of melody lead. In Proc. of the 10th International Conference on Music Perception and Cognition. Sapporo, Japan.
[102]
Hansen, K. F., Bresin, R. (2008). Verbal Description of DJ Recordings. In Proc. of the 10th International Conference on Music Perception and Cognition. (p. 20). Sapporo.
[103]
Burger, B., Bresin, R. (2007). Displaying expression in musical performance by means of a mobile robot. In Affective Computing And Intelligent Interaction, Proceedings. (pp. 753-754).
[104]
Castellano, G., Bresin, R., Camurri, A., Volpe, G. (2007). Expressive Control of Music and Visual Media by Full-Body Movement. In Proceedings of the 7th International Conference on New Interfaces for Musical Expression, NIME '07. (pp. 390-391). New York, NY, USA: ACM Press.
[105]
De Witt, A., Bresin, R. (2007). Sound design for affective interaction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). (pp. 523-533).
[106]
Lindström, M., Ståhl, A., Höök, K., Sundström, P., Laaksolahti, J., Combetto, M., Taylor, A., Bresin, R. (2006). Affective diary : designing for bodily expressiveness and self-reflection. In CHI 2006 · Work-in-Progress. (pp. 1037-1042). New York, NY, USA: ACM Press.
[107]
Puiggròs, M., Gómez, E., Ramírez, R., Serra, X., Bresin, R. (2006). Automatic characterization of ornamentation from bassoon recordings for expressive synthesis. In 9th International Conference on Music Perception & Cognition. (pp. 1533-1538). Bologna: Bononia University Press.
[108]
Mancini, M., Bresin, R., Pelachaud, C. (2006). From acoustic cues to an expressive agent. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). (pp. 280-291).
[109]
Luis, I. F., Bresin, R. (2006). Influence of expressive music on the perception of short text messages. In Proceedings of the 9th International Conference on Music Perception & Cognition (ICMPC9). (p. 739). Bologna: Bononia University Press (abstract).
[110]
Hansen, K. F., Bresin, R. (2006). Mapping strategies in DJ scratching. In Proc. of the Conference on New Interfaces for Musical Expression. (pp. 188-191). IRCAM.
[111]
Hansen, K. F., Bresin, R., Friberg, A. (2006). Principles for expressing emotional content in turntable scratching. In Proc. 9th International Conference on Music Perception & Cognition. (pp. 532-533). Bologna: Bononia University Press.
[112]
Hiraga, R., Bresin, R., Katayose, H. (2006). Rencon 2005. In Proceedings of the 20th Annual Conference of the Japanese Society for Artificial Intelligence. (pp. 1D2-1).
[113]
Giordano, B., Bresin, R. (2006). Walking and playing: What's the origin of emotional expressiveness in music? In Proceedings of the 9th International Conference on Music Perception & Cognition (ICMPC9), Bologna/Italy, August 22-26 2006. (p. 436). Bologna: Bononia University Press.
[114]
Mancini, M., Pelachaud, C., Bresin, R. (2005). Greta Listening to Expressive Music. Presented at Gathering of Animated Lifelike Agents - GALA 2005. IVA.
[115]
Bresin, R. (2005). What is the color of that music performance? In Proceedings of the International Computer Music Conference - ICMC 2005. (pp. 367-370). Barcelona.
[116]
Rocchesso, D., Avanzini, F., Rath, M., Bresin, R., Serafin, S. (2004). Contact sounds for continuous feedback. In Proceedings of the International Workshop on Interactive Sonification (Human Interaction with Auditory Displays).
[117]
Goebl, W., Bresin, R., Galembo, A. (2004). Once again: The perception of piano touch and tone : Can touch audibly change piano sound independently of intensity? In Proceedings of the International Symposium on Musical Acoustics, March 31st to April 3rd 2004 (ISMA2004), Nara, Japan. (pp. 332-335). Nara, Japan: The Acoustical Society of Japan, CD-ROM.
[118]
Bresin, R. (2004). Real-time visualization of musical expression. In Proceedings of Network of Excellence HUMAINE Workshop "From Signals to Signs of Emotion and Vice Versa". (pp. 19-23).
[119]
Hiraga, R., Bresin, R., Hirata, K., Katayose, H. (2004). Rencon 2004: Turing Test for Musical Expression. In Proceedings of the 4th international conference on New interfaces for musical expression. (pp. 120-123). Hamamatsu, Shizuoka, Japan: National University of Singapore.
[120]
Hansen, K. F., Bresin, R. (2003). DJ scratching performance techniques : Analysis and synthesis. In Proc. Stockholm Music Acoustics Conference. (pp. 693-696).
[121]
Bresin, R., Hansen, K. F., Dahl, S. (2003). The Radio Baton as configurable musical instrument and controller. In Proc. Stockholm Music Acoustics Conference. (pp. 689-691).
[122]
Friberg, A., Schoonderwaldt, E., Juslin, P. N., Bresin, R. (2002). Automatic real-time extraction of musical expression. In Proceedings of the International Computer Music Conference, ICMC 2002. (pp. 365-367).
[123]
Bresin, R., Friberg, A., Sundberg, J. (2002). Director Musices : The KTH performance rules system. In Proceedings of SIGMUS-46. (pp. 43-48). Information Processing Society of Japan.
[124]
Bresin, R., Friberg, A. (2001). Expressive musical icons. In Proceedings of the International Conference on Auditory Display - ICAD 2001. (pp. 141-143).
[125]
Bresin, R., Friberg, A., Dahl, S. (2001). Toward a new model for sound control. In Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-01), Limerick, Ireland, December 6-8, 2001. (pp. 45-49).
[126]
Bresin, R., Friberg, A. (2000). Rule-based emotional colouring of music performance. In Proceedings of the International Computer Music Conference - ICMC 2000. (pp. 364-367). San Francisco: ICMA.
[127]
Bresin, R., Friberg, A. (2000). Software tools for musical expression. In Proceedings of the International Computer Music Conference 2000. (pp. 499-502). San Francisco, USA: Computer Music Association.
[128]
Bresin, R., Friberg, A. (1999). Synthesis and decoding of emotionally expressive music performance. In Proceedings of the IEEE 1999 Systems, Man and Cybernetics Conference - SMC’99. (pp. 317-322).
[129]
Bresin, R., Friberg, A. (1997). A multimedia environment for interactive music performance. In Proceedings of KANSEI - The Technology of Emotion, AIMI International Workshop. (pp. 64-67).
[130]
Friberg, A., Bresin, R. (1997). Automatic musical punctuation : A rule system and a neural network approach. In Proceedings of KANSEI - The Technology of Emotion, AIMI International Workshop. (pp. 159-163).
[131]
Battel, G. U., Bresin, R. (1993). Analysis by synthesis in piano performance : A study on the theme of Brahms’ "Variations on a Theme of Paganini", op. 35. In Proceedings of SMAC 93 (Stockholm Music Acoustics Conference). (pp. 69-73). Stockholm: KTH Royal Institute of Technology.
[132]
Battel, G. U., Bresin, R., De Poli, G., Vidolin, A. (1993). Automatic performance of musical scores by means of neural networks : evaluation with listening tests. In X CIM Colloquium on Musical Informatics. (pp. 97-101).
[133]
Bresin, R., De Poli, G., Torelli, G. (1991). Applicazione delle reti neurali alla classificazione dei registri dell’organo a canne [Application of neural networks to the classification of pipe organ stops]. In Colloquio di Informatica Musicale - IX CIM. (pp. 112-114).
[134]
Bresin, R., Manduchi, R. (1989). Una sorgente di melodie con controllo di entropia [A source of melodies with entropy control]. In Colloquio di Informatica Musicale - VIII CIM. (pp. 213-215). Cagliari, Italy.

Book chapters

[136]
Falkenberg, K., Bresin, R., Holzapfel, A. & Pauletto, S. (2021). Musikkommunikation och ljudinteraktion [Music communication and sound interaction]. In Pernilla Falkenberg Josefsson, Mikael Wiberg (Ed.), Introduktion till medieteknik (pp. 155-166). Lund: Studentlitteratur AB.
[137]
Pauletto, S. & Bresin, R. (2021). Sonification Research and Emerging Topics. In Michael Filimowicz (Ed.), Doing Research in Sound Design (pp. 238-254). Routledge.
[138]
Friberg, A., Bresin, R. & Sundberg, J. (2014). Analysis by synthesis. In Thompson, W. F. (Ed.), Music in the Social and Behavioral Sciences. Los Angeles: Sage Publications.
[139]
Friberg, A., Bresin, R. & Sundberg, J. (2014). Expressive timing. In Thompson, W. F. (Ed.), Music in the Social and Behavioral Sciences (pp. 440-442). Los Angeles: Sage Publications.
[140]
Bresin, R. & Friberg, A. (2013). Evaluation of computer systems for expressive music performance. In Kirke, Alexis; Miranda, Eduardo R. (Ed.), Guide to Computing for Expressive Music Performance (pp. 181-203). Springer.
[141]
Giordano, B. L., Susini, P. & Bresin, R. (2013). Perceptual evaluation of sound-producing objects. In Franinovic, Karmen; Serafin, Stefania (Ed.), Sonic Interaction Design (pp. 151-197). Boston, MA: MIT Press.
[142]
Fabiani, M., Friberg, A. & Bresin, R. (2013). Systems for Interactive Control of Computer Generated Music Performance. In Kirke, A., & Miranda, E. (Ed.), Guide to Computing for Expressive Music Performance (pp. 49-73). Springer Berlin/Heidelberg.
[143]
Giordano, B. L., Susini, P. & Bresin, R. (2012). Experimental methods for the perceptual evaluation of sound-producing objects and interfaces. In Franinovic, Karmen; Serafin, Stefania (Ed.), Sonic Interaction Design. Boston, MA: MIT Press.
[144]
Dahl, S., Bevilacqua, F., Bresin, R., Clayton, M., Leante, L., Poggi, I. & Rasamimanana, N. (2009). Gestures in performance. In Godøy, Rolf Inge; Leman, Marc (Ed.), Musical Gestures: Sound, Movement, and Meaning (pp. 36-68). New York: Routledge.
[145]
Camurri, A., Volpe, G., Vinet, H., Bresin, R., Fabiani, M., Dubus, G. ... Seppanen, J. (2009). User-centric context-aware mobile applications for embodied music listening. In Akan, Ozgur; Bellavista, Paolo; Cao, Jiannong; Dressler, Falko; Ferrari, Domenico; Gerla, Mario; Kobayashi, Hisashi; Palazzo, Sergio; Sahni, Sartaj; Shen, Xuemin (Sherman); Stan, Mircea; Xiaohua, Jia; Zomaya, Albert; Coulson, Geoffrey; Daras, Petros; Ibarra, Oscar Mayora (Ed.), User Centric Media (pp. 21-30). Heidelberg: Springer Berlin.
[146]
Bresin, R., Hansen, K. F., Karjalainen, M., Mäki-Patola, T., Kanerva, A., Huovilainen, A. ... Rocchesso, D. (2008). Controlling sound production. In Polotti, Pietro; Rocchesso, Davide (Ed.), Sound to Sense, Sense to Sound: A state of the art in Sound and Music Computing (pp. 447-486). Berlin: Logos Verlag.
[147]
Friberg, A. & Bresin, R. (2008). Real-time control of music performance. In Polotti, Pietro; Rocchesso, Davide (Ed.), Sound to Sense - Sense to Sound: A state of the art in Sound and Music Computing (pp. 279-302). Berlin: Logos Verlag.
[148]
Goebl, W., Dixon, S., De Poli, G., Friberg, A., Bresin, R. & Widmer, G. (2008). Sense in expressive music performance: Data acquisition, computational studies, and models. In Polotti, Pietro; Rocchesso, Davide (Ed.), Sound to Sense - Sense to Sound: A state of the art in Sound and Music Computing (pp. 195-242). Berlin: Logos Verlag.
[149]
Rocchesso, D. & Bresin, R. (2007). Emerging sounds for disappearing computers. In Streitz, Norbert; Kameas, Achilles; Mavrommati, Irene (Ed.), The Disappearing Computer (pp. 233-254). Berlin / Heidelberg: Springer.
[150]
Castellano, G., Bresin, R., Camurri, A. & Volpe, G. (2007). User-Centered Control of Audio and Visual Expressive Feedback by Full-Body Movements. In Paiva, Ana; Prada, Rui; Picard, Rosalind W. (Ed.), Affective Computing and Intelligent Interaction (pp. 501-510). Berlin / Heidelberg: Springer Berlin/Heidelberg.
[152]
Hansen, K. F. & Bresin, R. (2003). Complex gestural audio control : The case of scratching. In Rocchesso, Davide; Fontana, Federico (Ed.), The Sounding Object (pp. 221-269). Mondo Estremo.
[153]
Bresin, R., Falkenberg Hansen, K., Dahl, S., Rath, M., Marshall, M. & Moynihan, B. (2003). Devices for manipulation and control of sounding objects: The Vodhran and the Invisiball. In Rocchesso, D., & Fontana, F. (Ed.), The Sounding Object (pp. 271-295). Mondo Estremo.

Non-peer-reviewed

Articles

[154]
Yang, J., Hermann, T. & Bresin, R. (2019). Introduction to the special issue on interactive sonification. Journal on Multimodal User Interfaces, 13(3), 151-153.
[155]
Bresin, R., Askenfelt, A., Friberg, A., Hansen, K. & Ternström, S. (2012). Sound and Music Computing at KTH. Trita-TMH, 52(1), 33-35.
[156]
Bresin, R. & Friberg, A. (1998). Emotional expression in music performance : synthesis and decoding. TMH-QPSR, 39(4), 085-094.
[157]
Bresin, R. & Friberg, A. (1997). A multimedia environment for interactive music performance. TMH-QPSR, 38(2-3), 029-032.

Conference papers

[158]
Bresin, R., Falkenberg, K., Holzapfel, A., Pauletto, S. (2021). KTH Royal Institute of Technology - Sound and Music Computing (SMC) Group. In Proceedings of the Sound and Music Computing Conferences 2021. (pp. xxv-xxvi). Sound and Music Computing Network.

Book chapters

[159]
Dahl, S., Bevilacqua, F., Bresin, R., Clayton, M., Leante, L., Poggi, I. & Rasamimanana, N. (2010). Gestures in Performance. In Musical Gestures: Sound, Movement, and Meaning (pp. 36-68). Taylor and Francis.

Theses

[160]
Bresin, R. (2000). Virtual virtuosity (Doctoral thesis, KTH, Stockholm, Trita-TMH 2000:9). Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3049.

Proceedings (editorships)

[161]
Bresin, R. (Ed.). (2014). SMC Sweden 2014 : Sound and Music Computing: Bridging science, art, and industry. Stockholm: KTH Royal Institute of Technology.
[162]
Bresin, R., Hermann, T., Hunt, A. (Eds.). (2010). Proceedings of ISon 2010 - Interactive Sonification Workshop : Human Interaction with Auditory Displays. Stockholm: KTH School of Computer Science and Communication (CSC).

Other

[164]
Latupeirissa, A. B., Murdeshwar, A., Bresin, R. Semiotic analysis of robot sounds in films: implications for sound design in social robotics. (Manuscript).
Last synchronized with DiVA: 2024-05-05 01:17:28