
Publications by Gabriel Skantze

Peer-reviewed

Articles

[1]
B. Irfan, S. Kuoppamäki and G. Skantze, "Recommendations for designing conversational companion robots with older adults through foundation models," Frontiers in Robotics and AI, vol. 11, 2024.
[2]
C. Mishra et al., "Does a robot's gaze aversion affect human gaze aversion?," Frontiers in Robotics and AI, vol. 10, 2023.
[3]
C. Mishra et al., "Real-time emotion generation in human-robot dialogue using large language models," Frontiers in Robotics and AI, vol. 10, 2023.
[4]
[5]
P. Blomsma, G. Skantze and M. Swerts, "Backchannel Behavior Influences the Perceived Personality of Human and Artificial Communication Partners," Frontiers in Artificial Intelligence, vol. 5, 2022.
[6]
S. Ahlberg et al., "Co-adaptive Human-Robot Cooperation : Summary and Challenges," Unmanned Systems, vol. 10, no. 2, pp. 187-203, 2022.
[7]
G. Skantze and B. Willemsen, "CoLLIE : Continual Learning of Language Grounding from Language-Image Embeddings," The Journal of Artificial Intelligence Research, vol. 74, pp. 1201-1223, 2022.
[8]
A. Axelsson, H. Buschmeier and G. Skantze, "Modeling Feedback in Interaction With Conversational Agents—A Review," Frontiers in Computer Science, vol. 4, 2022.
[9]
A. Axelsson och G. Skantze, "Multimodal User Feedback During Adaptive Robot-Human Presentations," Frontiers in Computer Science, vol. 3, 2022.
[10]
G. Skantze, "Turn-taking in Conversational Systems and Human-Robot Interaction : A Review," Computer speech & language (Print), vol. 67, 2021.
[11]
G. Skantze, "Real-Time Coordination in Human-Robot Interaction Using Face and Voice," AI Magazine, vol. 37, no. 4, s. 19-31, 2016.
[12]
H. Cuayahuitl, K. Komatani och G. Skantze, "Introduction for Speech and language for interactive robots," Computer speech & language (Print), vol. 34, no. 1, s. 83-86, 2015.
[13]
R. Meena, G. Skantze och J. Gustafsson, "Data-driven models for timing feedback responses in a Map Task dialogue system," Computer speech & language (Print), vol. 28, no. 4, s. 903-922, 2014.
[14]
G. Skantze, A. Hjalmarsson och C. Oertel, "Turn-taking, feedback and joint attention in situated human-robot interaction," Speech Communication, vol. 65, s. 50-66, 2014.
[15]
N. Mirnig et al., "Face-To-Face With A Robot : What do we actually talk about?," International Journal of Humanoid Robotics, vol. 10, no. 1, s. 1350011, 2013.
[16]
S. Al Moubayed, G. Skantze och J. Beskow, "The Furhat Back-Projected Humanoid Head-Lip Reading, Gaze And Multi-Party Interaction," International Journal of Humanoid Robotics, vol. 10, no. 1, s. 1350005, 2013.
[17]
G. Skantze och A. Hjalmarsson, "Towards incremental speech generation in conversational systems," Computer speech & language (Print), vol. 27, no. 1, s. 243-262, 2013.
[18]
D. Schlangen och G. Skantze, "A General, Abstract Model of Incremental Dialogue Processing," Dialogue and Discourse, vol. 2, no. 1, s. 83-111, 2011.
[19]
G. Skantze, "Exploring human error recovery strategies : Implications for spoken dialogue systems," Speech Communication, vol. 45, no. 3, s. 325-341, 2005.

Conference papers

[20]
A. Borg, I. Parodis and G. Skantze, "Creating Virtual Patients using Robots and Large Language Models : A Preliminary Study with Medical Students," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 273-277.
[21]
S. Ashkenazi et al., "Goes to the Heart: Speaking the User's Native Language," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 214-218.
[22]
Y. Wang et al., "How Much Does Nonverbal Communication Conform to Entropy Rate Constancy?: A Case Study on Listener Gaze in Interaction," in 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Proceedings of the Conference, 2024, pp. 3533-3545.
[23]
K. Inoue et al., "Multilingual Turn-taking Prediction Using Voice Activity Projection," in 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings, 2024, pp. 11873-11883.
[24]
A. Axelsson et al., "Robots in autonomous buses: Who hosts when no human is there?," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 1278-1280.
[25]
E. Ekstedt et al., "Automatic Evaluation of Turn-taking Cues in Conversational Speech Synthesis," in Interspeech 2023, 2023, pp. 5481-5485.
[26]
C. Figueroa, M. Ochs and G. Skantze, "Classification of Feedback Functions in Spoken Dialog Using Large Language Models and Prosodic Features," in 27th Workshop on the Semantics and Pragmatics of Dialogue, 2023, pp. 15-24.
[27]
T. Offrede et al., "Do Humans Converge Phonetically When Talking to a Robot?," in Proceedings of the 20th International Congress of Phonetic Sciences, Prague 2023, 2023, pp. 3507-3511.
[28]
A. Axelsson and G. Skantze, "Do you follow? : A fully automated system for adaptive robot presenters," in HRI 2023 : Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 2023, pp. 102-111.
[29]
A. M. Kamelabad and G. Skantze, "I Learn Better Alone! Collaborative and Individual Word Learning With a Child and Adult Robot," in Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 2023, pp. 368-377.
[30]
C. Figueroa, Š. Beňuš and G. Skantze, "Prosodic Alignment in Different Conversational Feedback Functions," in Proceedings of the 20th International Congress of Phonetic Sciences, Prague 2023, 2023, pp. 154-1518.
[31]
B. Willemsen, L. Qian and G. Skantze, "Resolving References in Visually-Grounded Dialogue via Text Generation," in Proceedings of the 24th Meeting of the Special Interest Group on Discourse and Dialogue, 2023, pp. 457-469.
[32]
B. Jiang, E. Ekstedt and G. Skantze, "Response-conditioned Turn-taking Prediction," in Findings of the Association for Computational Linguistics, ACL 2023, 2023, pp. 12241-12248.
[33]
E. Ekstedt and G. Skantze, "Show & Tell : Voice Activity Projection and Turn-taking," in Interspeech 2023, 2023, pp. 2020-2021.
[34]
G. Skantze and A. S. Doğruöz, "The Open-domain Paradox for Chatbots : Common Ground as the Basis for Human-like Dialogue," in Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2023, pp. 605-614.
[35]
K. Inoue et al., "Towards Objective Evaluation of Socially-Situated Conversational Robots : Assessing Human-Likeness through Multimodal User Behaviors," in ICMI 2023 Companion : Companion Publication of the 25th International Conference on Multimodal Interaction, 2023, pp. 86-90.
[36]
A. Axelsson and G. Skantze, "Using Large Language Models for Zero-Shot Natural Language Generation from Knowledge Graphs," in Proceedings of the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge (MM-NLG 2023), 2023, pp. 39-54.
[37]
B. Jiang, E. Ekstedt and G. Skantze, "What makes a good pause? Investigating the turn-holding effects of fillers," in Proceedings of the 20th International Congress of Phonetic Sciences (ICPhS), 2023.
[38]
M. P. Aylett et al., "Why is my Agent so Slow? Deploying Human-Like Conversational Turn-Taking," in HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, 2023, pp. 490-492.
[39]
C. Figueroa et al., "Annotation of Communicative Functions of Short Feedback Tokens in Switchboard," i 2022 Language Resources and Evaluation Conference, LREC 2022, 2022.
[40]
B. Willemsen, D. Kalpakchi och G. Skantze, "Collecting Visually-Grounded Dialogue with A Game Of Sorts," i Proceedings of the 13th Conference on Language Resources and Evaluation, 2022, s. 2257-2268.
[41]
M. Elgarf et al., "CreativeBot : a Creative Storyteller robot to stimulate creativity in children," i ICMI '22: Proceedings of the 2022 International Conference on Multimodal Interaction, 2022, s. 540-548.
[42]
E. Ekstedt och G. Skantze, "How Much Does Prosody Help Turn-taking?Investigations using Voice Activity Projection Models," i Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2022, s. 541-551.
[43]
G. Skantze och C. Mishra, "Knowing where to look : A planning-based architecture to automate the gaze behavior of social robots," i 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2022.
[44]
E. Ekstedt och G. Skantze, "Voice Activity Projection: Self-supervised Learning of Turn-taking Events," i INTERSPEECH 2022, 2022, s. 5190-5194.
[45]
G. Skantze, "Conversational interaction with social robots," i ACM/IEEE International Conference on Human-Robot Interaction, 2021.
[46]
A. S. Dogruoz och G. Skantze, "How "open" are the conversations with open-domain chatbots? : A proposal for Speech Event based evaluation," i SIGDIAL 2021 : 22Nd Annual Meeting Of The Special Interest Group On Discourse And Dialogue (Sigdial 2021), 2021, s. 392-402.
[47]
M. Elgarf, G. Skantze och C. Peters, "Once Upon a Story : Can a Creative Storyteller Robot Stimulate Creativity in Children?," i Proceedings of the 21st ACM international conference on intelligent virtual agents (IVA), 2021, s. 60-67.
[48]
E. Ekstedt och G. Skantze, "Projection of Turn Completion in Incremental Spoken Dialogue Systems," i SIGDIAL 2021 : 22ND ANNUAL MEETING OF THE SPECIAL INTEREST GROUP ON DISCOURSE AND DIALOGUE (SIGDIAL 2021), 2021, s. 431-437.
[49]
O. Ibrahim och G. Skantze, "Revisiting robot directed speech effects in spontaneous Human-Human-Robot interactions," i Human Perspectives on Spoken Human-Machine Interaction, 2021.
[50]
E. Ekstedt och G. Skantze, "TurnGPT : a Transformer-based Language Model for Predicting Turn-taking in Spoken Dialog," i Findings of the Association for Computational Linguistics : EMNLP 2020, 2020, s. 2981-2990.
[51]
N. Axelsson och G. Skantze, "Using knowledge graphs and behaviour trees for feedback-aware presentation agents," i Proceedings of Intelligent Virtual Agents 2020, 2020.
[52]
T. Shore och G. Skantze, "Using lexical alignment and referring ability to address data sparsity in situated dialog reference resolution," i Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, 2020, s. 2288-2297.
[53]
P. Jonell et al., "Crowdsourcing a self-evolving dialog graph," i CUI '19: Proceedings of the 1st International Conference on Conversational User Interfaces, 2019.
[55]
T. Shore, T. Androulakaki och G. Skantze, "KTH Tangrams: A Dataset for Research on Alignment and Conceptual Pacts in Task-Oriented Dialogue," i LREC 2018 - 11th International Conference on Language Resources and Evaluation, 2019, s. 768-775.
[56]
N. Axelsson och G. Skantze, "Modelling Adaptive Presentations in Human-Robot Interaction using Behaviour Trees," i 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue : Proceedings of the Conference, 2019, s. 345-352.
[57]
D. Kontogiorgos et al., "The Effects of Embodiment and Social Eye-Gaze in Conversational Agents," i Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci), 2019.
[58]
D. Kontogiorgos et al., "A Multimodal Corpus for Mutual Gaze and Joint Attention in Multiparty Situated Interaction," i Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018, s. 119-127.
[59]
C. Li et al., "Effects of Posture and Embodiment on Social Distance in Human-Agent Interaction in Mixed Reality," i Proceedings of the 18th International Conference on Intelligent Virtual Agents, 2018, s. 191-196.
[60]
C. Peters et al., "Investigating Social Distances between Humans, Virtual Humans and Virtual Robots in Mixed Reality," i Proceedings of 17th International Conference on Autonomous Agents and MultiAgent Systems, 2018, s. 2247-2249.
[61]
M. Roddy, G. Skantze och N. Harte, "Investigating speech features for continuous turn-taking prediction using LSTMs," i Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2018, s. 586-590.
[62]
M. Roddy, G. Skantze och N. Harte, "Multimodal Continuous Turn-Taking Prediction Using Multiscale RNNs," i ICMI 2018 - Proceedings of the 2018 International Conference on Multimodal Interaction, 2018, s. 186-190.
[63]
D. Kontogiorgos et al., "Multimodal reference resolution in collaborative assembly tasks," i Multimodal reference resolution in collaborative assembly tasks, 2018.
[64]
C. Peters et al., "Towards the use of mixed reality for hri design via virtual robots," i HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot InteractionMarch 2020, 2018.
[65]
J. Lopes, O. Engwall och G. Skantze, "A First Visit to the Robot Language Café," i Proceedings of the ISCA workshop on Speech and Language Technology in Education, 2017.
[66]
R. Johansson, G. Skantze och A. Jönsson, "A psychotherapy training environment with virtual patients implemented using the furhat robot platform," i 17th International Conference on Intelligent Virtual Agents, IVA 2017, 2017, s. 184-187.
[67]
V. Avramova et al., "A virtual poster presenter using mixed reality," i 17th International Conference on Intelligent Virtual Agents, IVA 2017, 2017, s. 25-28.
[68]
T. Shore och G. Skantze, "Enhancing reference resolution in dialogue using participant feedback," i Proc. GLU 2017 International Workshop on Grounding Language Understanding, 2017, s. 78-82.
[69]
G. Skantze, "Predicting and Regulating Participation Equality in Human-robot Conversations : Effects of Age and Gender," i Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 2017, s. 196-204.
[70]
G. Skantze, "Towards a General, Continuous Model of Turn-taking in Spoken Dialogue using LSTM Recurrent Neural Networks," i Proceedings of SIGDIAL 2017 - 18th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference, 2017.
[71]
M. Johansson et al., "Making Turn-Taking Decisions for an Active Listening Robot for Memory Training," i SOCIAL ROBOTICS, (ICSR 2016), 2016, s. 940-949.
[72]
[73]
G. Skantze, M. Johansson och J. Beskow, "A Collaborative Human-Robot Game as a Test-bed for Modelling Multi-party, Situated Interaction," i INTELLIGENT VIRTUAL AGENTS, IVA 2015, 2015, s. 348-351.
[74]
R. Meena et al., "Automatic Detection of Miscommunication in Spoken Dialogue Systems," i Proceedings of 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 2015, s. 354-363.
[75]
J. Lopes et al., "Detecting Repetitions in Spoken Dialogue Systems Using Phonetic Distances," i INTERSPEECH-2015, 2015, s. 1805-1809.
[76]
G. Skantze, M. Johansson och J. Beskow, "Exploring Turn-taking Cues in Multi-party Human-robot Discussions about Objects," i Proceedings of the 2015 ACM International Conference on Multimodal Interaction, 2015.
[77]
G. Skantze och M. Johansson, "Modelling situated human-robot interaction using IrisTK," i Proceedings of the SIGDIAL 2015 Conference, 2015, s. 165-167.
[78]
M. Johansson och G. Skantze, "Opportunities and obligations to take turns in collaborative multi-party human-robot interaction," i SIGDIAL 2015 - 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference, 2015, s. 305-314.
[79]
M. Johansson, G. Skantze och J. Gustafson, "Comparison of human-human and human-robot Turn-taking Behaviour in multi-party Situated interaction," i UM3I '14 : Proceedings of the 2014 workshop on Understanding and Modeling Multiparty, Multimodal Interactions, 2014, s. 21-26.
[80]
R. Meena et al., "Crowdsourcing Street-level Geographic Information Using a Spoken Dialogue System," i Proceedings of the SIGDIAL 2014 Conference, 2014, s. 2-11.
[81]
S. Al Moubayed et al., "Human-robot Collaborative Tutoring Using Multiparty Multimodal Spoken Dialogue," i 9th Annual ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 2014.
[82]
S. Al Moubayed, J. Beskow och G. Skantze, "Spontaneous spoken dialogues with the Furhat human-like robot head," i HRI '14 Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction, 2014, s. 326.
[83]
S. Al Moubayed et al., "Tutoring Robots: Multiparty Multimodal Social Dialogue With an Embodied Tutor," i 9th International Summer Workshop on Multimodal Interfaces, Lisbon, Portugal, 2014.
[84]
S. Al Moubayed et al., "UM3I 2014 : International workshop on understanding and modeling multiparty, multimodal interactions," i ICMI 2014 - Proceedings of the 2014 International Conference on Multimodal Interaction, 2014, s. 537-538.
[85]
G. Skantze, C. Oertel och A. Hjalmarsson, "User Feedback in Human-Robot Dialogue : Task Progression and Uncertainty," i Proceedings of the HRI Workshop on Timing in Human-Robot Interaction, 2014.
[86]
R. Meena et al., "Using a Spoken Dialogue System for Crowdsourcing Street-level Geographic Information," i 2nd Workshop on Action, Perception and Language, SLTC 2014, 2014.
[87]
R. Meena, G. Skantze och J. Gustafson, "A Data-driven Model for Timing Feedback in a Map Task Dialogue System," i 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue - SIGdial, 2013, s. 375-383.
[88]
G. Skantze, A. Hjalmarsson och C. Oertel, "Exploring the effects of gaze and pauses in situated human-robot interaction," i 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue : SIGDIAL 2013, 2013.
[89]
M. Johansson, G. Skantze och J. Gustafson, "Head Pose Patterns in Multiparty Human-Robot Team-Building Interactions," i Social Robotics : 5th International Conference, ICSR 2013, Bristol, UK, October 27-29, 2013, Proceedings, 2013, s. 351-360.
[90]
R. Meena, G. Skantze och J. Gustafson, "Human Evaluation of Conceptual Route Graphs for Interpreting Spoken Route Descriptions," i Proceedings of the 3rd International Workshop on Computational Models of Spatial Language Interpretation and Generation (CoSLI), 2013, s. 30-35.
[91]
S. Al Moubayed, J. Beskow och G. Skantze, "The Furhat Social Companion Talking Head," i Interspeech 2013 - Show and Tell, 2013, s. 747-749.
[92]
R. Meena, G. Skantze och J. Gustafson, "The Map Task Dialogue System : A Test-bed for Modelling Human-Like Dialogue," i 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue - SIGdial, 2013, s. 366-368.
[93]
G. Skantze, C. Oertel och A. Hjalmarsson, "User feedback in human-robot interaction : Prosody, gaze and timing," i Proceedings of Interspeech 2013, 2013, s. 1901-1905.
[94]
R. Meena, G. Skantze och J. Gustafson, "A Chunking Parser for Semantic Interpretation of Spoken Route Directions in Human-Robot Dialogue," i Proceedings of the 4th Swedish Language Technology Conference (SLTC 2012), 2012, s. 55-56.
[95]
G. Skantze, "A Testbed for Examining the Timing of Feedback using a Map Task," i Proceedings of the Interdisciplinary Workshop on Feedback Behaviors in Dialog, 2012.
[96]
R. Meena, G. Skantze och J. Gustafson, "A data-driven approach to understanding spoken route directions in human-robot dialogue," i 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012, 2012, s. 226-229.
[98]
S. Al Moubayed et al., "Furhat : A Back-projected Human-like Robot Head for Multiparty Human-Machine Interaction," i Cognitive Behavioural Systems : COST 2102 International Training School, Dresden, Germany, February 21-26, 2011, Revised Selected Papers, 2012, s. 114-130.
[99]
G. Skantze et al., "Furhat at Robotville : A Robot Head Harvesting the Thoughts of the Public through Multi-party Dialogue," i Proceedings of the Workshop on Real-time Conversation with Virtual Agents IVA-RCVA, 2012.
[100]
[101]
G. Skantze and S. Al Moubayed, "IrisTK : A statechart-based toolkit for multi-party face-to-face interaction," in ICMI'12 - Proceedings of the ACM International Conference on Multimodal Interaction, 2012, pp. 69-75.
[102]
S. Al Moubayed, G. Skantze and J. Beskow, "Lip-reading : Furhat audio visual intelligibility of a back projected animated face," in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2012, pp. 196-203.
[103]
S. Al Moubayed et al., "Multimodal Multiparty Social Interaction with the Furhat Head," in 14th ACM International Conference on Multimodal Interaction, Santa Monica, CA, 2012, pp. 293-294.
[104]
S. Al Moubayed and G. Skantze, "Perception of Gaze Direction for Situated Interaction," in Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction, Gaze-In 2012, 2012.
[105]
S. Al Moubayed and G. Skantze, "Effects of 2D and 3D Displays on Turn-taking Behavior in Multiparty Human-Computer Dialog," in SemDial 2011 : Proceedings of the 15th Workshop on the Semantics and Pragmatics of Dialogue, 2011, pp. 192-193.
[106]
M. Johnson-Roberson et al., "Enhanced Visual Scene Understanding through Human-Robot Dialog," in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 3342-3348.
[107]
S. Al Moubayed and G. Skantze, "Turn-taking Control Using Gaze in Multiparty Human-Computer Dialogue : Effects of 2D and 3D Displays," in Proceedings of the International Conference on Audio-Visual Speech Processing 2011, 2011, pp. 99-102.
[108]
M. Johansson, G. Skantze and J. Gustafson, "Understanding route directions in human-robot dialogue," in Proceedings of SemDial, 2011, pp. 19-27.
[109]
M. Johnson-Roberson et al., "Enhanced visual scene understanding through human-robot dialog," in Dialog with Robots : AAAI 2010 Fall Symposium, 2010.
[110]
D. Schlangen et al., "Middleware for Incremental Processing in Conversational Agents," in Proceedings of SIGDIAL 2010 : the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2010, pp. 51-54.
[111]
G. Skantze och A. Hjalmarsson, "Towards Incremental Speech Generation in Dialogue Systems," i Proceedings of the SIGDIAL 2010 Conference : 11th Annual Meeting of the Special Interest Group onDiscourse and Dialogue, 2010, s. 1-8.
[112]
D. Schlangen och G. Skantze, "A general, abstract model of incremental dialogue processing," i EACL 2009 - 12th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings, 2009, s. 710-718.
[113]
G. Skantze och J. Gustafson, "Attention and interaction control in a human-human-computer dialogue setting," i Proceedings of SIGDIAL 2009 : the 10th Annual Meeting of the Special Interest Group in Discourse and Dialogue, 2009, s. 310-313.
[114]
G. Skantze and D. Schlangen, "Incremental dialogue processing in a micro-domain," in Proceedings of the 12th Conference of the European Chapter of the ACL, 2009, pp. 745-753.
[115]
G. Skantze and J. Gustafson, "Multimodal interaction control in the MonAMI Reminder," in Proceedings of DiaHolmia : 2009 Workshop on the Semantics and Pragmatics of Dialogue, 2009, pp. 127-128.
[116]
J. Beskow et al., "The MonAMI Reminder : a spoken dialogue system for face-to-face interaction," in Proceedings of the 10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009, 2009, pp. 300-303.
[117]
J. Beskow et al., "Innovative interfaces in MonAMI : The Reminder," in Perception in Multimodal Dialogue Systems, Proceedings, 2008, pp. 272-275.
[118]
G. Skantze, "Making grounding decisions : Data-driven estimation of dialogue costs and confidence thresholds," in Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue, 2007, pp. 206-210.
[119]
G. Skantze, J. Edlund and R. Carlson, "Talking with Higgins : Research challenges in a spoken dialogue system," in Perception and Interactive Technologies, Proceedings, 2006, pp. 193-196.
[120]
Å. Wallers, J. Edlund and G. Skantze, "The effect of prosodic features on the interpretation of synthesised backchannels," in Perception and Interactive Technologies, Proceedings, 2006, pp. 183-187.
[121]
G. Skantze, D. House och J. Edlund, "User Responses to Prosodic Variation in Fragmentary Grounding Utterances in Dialog," i INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, 2006, s. 2002-2005.
[122]
G. Skantze, "Galatea: a discourse modeller supporting concept-level error handling in spoken dialogue systems," i 6th SIGdial Workshop on Discourse and Dialogue, 2005, s. 178-189.
[123]
J. Edlund, D. House och G. Skantze, "The effects of prosodic features on the interpretation of clarification ellipses," i Proceedings of Interspeech 2005 : Eurospeech, 2005, s. 2389-2392.
[124]
G. Skantze och J. Edlund, "Early error detection on word level," i Proceedings of ISCA Tutorial and Research Workshop (ITRW) on Robustness Issues in Conversational Interaction, 2004.
[125]
J. Edlund, G. Skantze och R. Carlson, "Higgins : a spoken dialogue system for investigating error handling techniques," i Proceedings of the International Conference on Spoken Language Processing, ICSLP 04, 2004, s. 229-231.
[126]
G. Skantze och J. Edlund, "Robust interpretation in the Higgins spoken dialogue system," i Proceedings of ISCA Tutorial and Research Workshop (ITRW) on Robustness Issues in Conversational Interaction, 2004.

Book chapters

[127]
G. Skantze, J. Gustafson and J. Beskow, "Multimodal Conversational Interaction with Robots," in The Handbook of Multimodal-Multisensor Interfaces, Volume 3 : Language Processing, Software, Commercialization, and Emerging Directions, Sharon Oviatt, Björn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Krüger, eds. ACM Press, 2019.
[128]
J. Beskow et al., "Multimodal Interaction Control," in Computers in the Human Interaction Loop, Waibel, Alexander; Stiefelhagen, Rainer, eds. Berlin/Heidelberg: Springer Berlin/Heidelberg, 2009, pp. 143-158.
[129]
G. Skantze, "Galatea : A discourse modeller supporting concept-level error handling in spoken dialogue systems," in Recent Trends in Discourse and Dialogue, Dybkjær, L.; Minker, W., eds. Dordrecht: Springer Science + Business Media B.V., 2008.

Non-peer-reviewed

Articles

[130]
D. Traum et al., "Special issue on multimodal processing and robotics for dialogue systems (Part II)," Advanced Robotics, vol. 38, no. 4, s. 193-194, 2024.
[131]
D. Traum et al., "Special Issue on Multimodal processing and robotics for dialogue systems (Part 1)," Advanced Robotics, vol. 37, no. 21, s. 1347-1348, 2023.

Conference papers

[132]
S. Al Moubayed et al., "UM3I 2014 chairs' welcome," i UM3I 2014 - Proceedings of the 2014 ACM Workshop on Understanding and Modeling Multiparty, Multimodal Interactions, Co-located with ICMI 2014, 2014, s. iii.
[133]
S. Al Moubayed et al., "Talking with Furhat - multi-party interaction with a back-projected robot head," i Proceedings of Fonetik 2012, 2012, s. 109-112.
[134]
J. Beskow et al., "Speech technology in the European project MonAMI," i Proceedings of FONETIK 2008, 2008, s. 33-36.
[135]
G. Skantze, D. House och J. Edlund, "Grounding and prosody in dialog," i Working Papers 52 : Proceedings of Fonetik 2006, 2006, s. 117-120.
[136]
R. Carlson et al., "Towards human-like behaviour in spoken dialog systems," i Proceedings of Swedish Language Technology Conference (SLTC 2006), 2006.
[137]
J. Edlund, D. House och G. Skantze, "Prosodic Features in the Perception of Clarification Ellipses," i Proceedings of Fonetik 2005 : The XVIIIth Swedish Phonetics Conference, 2005, s. 107-110.

Theses

[138]
G. Skantze, "Error Handling in Spoken Dialogue Systems : Managing Uncertainty, Grounding and Miscommunication," Doktorsavhandling Stockholm : KTH, Trita-CSC-A, 2007:14, 2007.
Latest sync with DiVA:
2024-10-10 00:08:13