
Publications by Gabriel Skantze

Peer-reviewed

Articles

[1]
C. Mishra et al., "Does a robot's gaze aversion affect human gaze aversion?," Frontiers in Robotics and AI, vol. 10, 2023.
[2]
C. Mishra et al., "Real-time emotion generation in human-robot dialogue using large language models," Frontiers in Robotics and AI, vol. 10, 2023.
[4]
P. Blomsma, G. Skantze and M. Swerts, "Backchannel Behavior Influences the Perceived Personality of Human and Artificial Communication Partners," Frontiers in Artificial Intelligence, vol. 5, 2022.
[5]
S. Ahlberg et al., "Co-adaptive Human-Robot Cooperation : Summary and Challenges," Unmanned Systems, vol. 10, no. 02, pp. 187-203, 2022.
[6]
G. Skantze and B. Willemsen, "CoLLIE : Continual Learning of Language Grounding from Language-Image Embeddings," Journal of Artificial Intelligence Research, vol. 74, pp. 1201-1223, 2022.
[7]
A. Axelsson, H. Buschmeier and G. Skantze, "Modeling Feedback in Interaction With Conversational Agents—A Review," Frontiers in Computer Science, vol. 4, 2022.
[8]
A. Axelsson and G. Skantze, "Multimodal User Feedback During Adaptive Robot-Human Presentations," Frontiers in Computer Science, vol. 3, 2022.
[9]
G. Skantze, "Turn-taking in Conversational Systems and Human-Robot Interaction : A Review," Computer speech & language (Print), vol. 67, 2021.
[10]
G. Skantze, "Real-Time Coordination in Human-Robot Interaction Using Face and Voice," AI Magazine, vol. 37, no. 4, pp. 19-31, 2016.
[11]
H. Cuayahuitl, K. Komatani and G. Skantze, "Introduction for Speech and language for interactive robots," Computer Speech & Language, vol. 34, no. 1, pp. 83-86, 2015.
[12]
R. Meena, G. Skantze and J. Gustafson, "Data-driven models for timing feedback responses in a Map Task dialogue system," Computer Speech & Language, vol. 28, no. 4, pp. 903-922, 2014.
[13]
G. Skantze, A. Hjalmarsson and C. Oertel, "Turn-taking, feedback and joint attention in situated human-robot interaction," Speech Communication, vol. 65, pp. 50-66, 2014.
[14]
N. Mirnig et al., "Face-To-Face With A Robot : What do we actually talk about?," International Journal of Humanoid Robotics, vol. 10, no. 1, pp. 1350011, 2013.
[15]
S. Al Moubayed, G. Skantze and J. Beskow, "The Furhat Back-Projected Humanoid Head-Lip Reading, Gaze And Multi-Party Interaction," International Journal of Humanoid Robotics, vol. 10, no. 1, pp. 1350005, 2013.
[16]
G. Skantze and A. Hjalmarsson, "Towards incremental speech generation in conversational systems," Computer speech & language (Print), vol. 27, no. 1, pp. 243-262, 2013.
[17]
D. Schlangen and G. Skantze, "A General, Abstract Model of Incremental Dialogue Processing," Dialogue and Discourse, vol. 2, no. 1, pp. 83-111, 2011.
[18]
G. Skantze, "Exploring human error recovery strategies : Implications for spoken dialogue systems," Speech Communication, vol. 45, no. 3, pp. 325-341, 2005.

Conference papers

[19]
A. Borg, I. Parodis and G. Skantze, "Creating Virtual Patients using Robots and Large Language Models: A Preliminary Study with Medical Students," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 273-277.
[20]
S. Ashkenazi et al., "Goes to the Heart: Speaking the User's Native Language," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 214-218.
[21]
A. Axelsson et al., "Robots in autonomous buses: Who hosts when no human is there?," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 1278-1280.
[22]
E. Ekstedt et al., "Automatic Evaluation of Turn-taking Cues in Conversational Speech Synthesis," in Interspeech 2023, 2023, pp. 5481-5485.
[23]
C. Figueroa, M. Ochs and G. Skantze, "Classification of Feedback Functions in Spoken Dialog Using Large Language Models and Prosodic Features," in 27th Workshop on the Semantics and Pragmatics of Dialogue, 2023, pp. 15-24.
[24]
T. Offrede et al., "Do Humans Converge Phonetically When Talking to a Robot?," in Proceedings of the 20th International Congress of Phonetic Sciences, Prague 2023, 2023, pp. 3507-3511.
[25]
A. Axelsson and G. Skantze, "Do you follow? : A fully automated system for adaptive robot presenters," in HRI 2023 : Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 2023, pp. 102-111.
[26]
A. M. Kamelabad and G. Skantze, "I Learn Better Alone! Collaborative and Individual Word Learning With a Child and Adult Robot," in Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 2023, pp. 368-377.
[27]
C. Figueroa, Š. Beňuš and G. Skantze, "Prosodic Alignment in Different Conversational Feedback Functions," in Proceedings of the 20th International Congress of Phonetic Sciences, Prague 2023, 2023, pp. 1514-1518.
[28]
B. Willemsen, L. Qian and G. Skantze, "Resolving References in Visually-Grounded Dialogue via Text Generation," in Proceedings of the 24th Meeting of the Special Interest Group on Discourse and Dialogue, 2023, pp. 457-469.
[29]
B. Jiang, E. Ekstedt and G. Skantze, "Response-conditioned Turn-taking Prediction," in Findings of the Association for Computational Linguistics : ACL 2023, 2023, pp. 12241-12248.
[30]
E. Ekstedt and G. Skantze, "Show & Tell : Voice Activity Projection and Turn-taking," in Interspeech 2023, 2023, pp. 2020-2021.
[31]
G. Skantze and A. S. Doğruöz, "The Open-domain Paradox for Chatbots: Common Ground as the Basis for Human-like Dialogue," in Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2023, pp. 605-614.
[32]
K. Inoue et al., "Towards Objective Evaluation of Socially-Situated Conversational Robots : Assessing Human-Likeness through Multimodal User Behaviors," in ICMI 2023 Companion : Companion Publication of the 25th International Conference on Multimodal Interaction, 2023, pp. 86-90.
[33]
A. Axelsson and G. Skantze, "Using Large Language Models for Zero-Shot Natural Language Generation from Knowledge Graphs," in Proceedings of the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge (MM-NLG 2023), 2023, pp. 39-54.
[34]
B. Jiang, E. Ekstedt and G. Skantze, "What makes a good pause? Investigating the turn-holding effects of fillers," in Proceedings 20th International Congress of Phonetic Sciences (ICPhS), 2023.
[35]
M. P. Aylett et al., "Why is my Agent so Slow? Deploying Human-Like Conversational Turn-Taking," in HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, 2023, pp. 490-492.
[36]
C. Figueroa et al., "Annotation of Communicative Functions of Short Feedback Tokens in Switchboard," in 2022 Language Resources and Evaluation Conference, LREC 2022, 2022.
[37]
B. Willemsen, D. Kalpakchi and G. Skantze, "Collecting Visually-Grounded Dialogue with A Game Of Sorts," in Proceedings of the 13th Conference on Language Resources and Evaluation, 2022, pp. 2257-2268.
[38]
M. Elgarf et al., "CreativeBot : a Creative Storyteller robot to stimulate creativity in children," in ICMI '22: Proceedings of the 2022 International Conference on Multimodal Interaction, 2022, pp. 540-548.
[39]
E. Ekstedt and G. Skantze, "How Much Does Prosody Help Turn-taking?Investigations using Voice Activity Projection Models," in Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 541–551, Edinburgh, UK. Association for Computational Linguistics., 2022, pp. 541-551.
[40]
G. Skantze and C. Mishra, "Knowing where to look : A planning-based architecture to automate the gaze behavior of social robots," in 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2022.
[41]
E. Ekstedt and G. Skantze, "Voice Activity Projection: Self-supervised Learning of Turn-taking Events," in INTERSPEECH 2022, 2022, pp. 5190-5194.
[42]
G. Skantze, "Conversational interaction with social robots," in ACM/IEEE International Conference on Human-Robot Interaction, 2021.
[43]
A. S. Doğruöz and G. Skantze, "How "open" are the conversations with open-domain chatbots? : A proposal for Speech Event based evaluation," in SIGDIAL 2021 : 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2021), 2021, pp. 392-402.
[44]
M. Elgarf, G. Skantze and C. Peters, "Once Upon a Story : Can a Creative Storyteller Robot Stimulate Creativity in Children?," in Proceedings of the 21st ACM international conference on intelligent virtual agents (IVA), 2021, pp. 60-67.
[45]
E. Ekstedt and G. Skantze, "Projection of Turn Completion in Incremental Spoken Dialogue Systems," in SIGDIAL 2021 : 22ND ANNUAL MEETING OF THE SPECIAL INTEREST GROUP ON DISCOURSE AND DIALOGUE (SIGDIAL 2021), 2021, pp. 431-437.
[46]
O. Ibrahim and G. Skantze, "Revisiting robot directed speech effects in spontaneous Human-Human-Robot interactions," in Human Perspectives on Spoken Human-Machine Interaction, 2021.
[47]
E. Ekstedt and G. Skantze, "TurnGPT : a Transformer-based Language Model for Predicting Turn-taking in Spoken Dialog," in Findings of the Association for Computational Linguistics : EMNLP 2020, 2020, pp. 2981-2990.
[48]
N. Axelsson and G. Skantze, "Using knowledge graphs and behaviour trees for feedback-aware presentation agents," in Proceedings of Intelligent Virtual Agents 2020, 2020.
[49]
T. Shore and G. Skantze, "Using lexical alignment and referring ability to address data sparsity in situated dialog reference resolution," in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, 2020, pp. 2288-2297.
[50]
P. Jonell et al., "Crowdsourcing a self-evolving dialog graph," in CUI '19: Proceedings of the 1st International Conference on Conversational User Interfaces, 2019.
[52]
T. Shore, T. Androulakaki and G. Skantze, "KTH Tangrams: A Dataset for Research on Alignment and Conceptual Pacts in Task-Oriented Dialogue," in LREC 2018 - 11th International Conference on Language Resources and Evaluation, 2019, pp. 768-775.
[53]
N. Axelsson and G. Skantze, "Modelling Adaptive Presentations in Human-Robot Interaction using Behaviour Trees," in 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue : Proceedings of the Conference, 2019, pp. 345-352.
[54]
D. Kontogiorgos et al., "The Effects of Embodiment and Social Eye-Gaze in Conversational Agents," in Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci), 2019.
[55]
D. Kontogiorgos et al., "A Multimodal Corpus for Mutual Gaze and Joint Attention in Multiparty Situated Interaction," in Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018, pp. 119-127.
[56]
C. Li et al., "Effects of Posture and Embodiment on Social Distance in Human-Agent Interaction in Mixed Reality," in Proceedings of the 18th International Conference on Intelligent Virtual Agents, 2018, pp. 191-196.
[57]
C. Peters et al., "Investigating Social Distances between Humans, Virtual Humans and Virtual Robots in Mixed Reality," in Proceedings of 17th International Conference on Autonomous Agents and MultiAgent Systems, 2018, pp. 2247-2249.
[58]
M. Roddy, G. Skantze and N. Harte, "Investigating speech features for continuous turn-taking prediction using LSTMs," in Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2018, pp. 586-590.
[59]
M. Roddy, G. Skantze and N. Harte, "Multimodal Continuous Turn-Taking Prediction Using Multiscale RNNs," in ICMI 2018 - Proceedings of the 2018 International Conference on Multimodal Interaction, 2018, pp. 186-190.
[60]
D. Kontogiorgos et al., "Multimodal reference resolution in collaborative assembly tasks," in Multimodal reference resolution in collaborative assembly tasks, 2018.
[61]
C. Peters et al., "Towards the use of mixed reality for hri design via virtual robots," in HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot InteractionMarch 2020, 2018.
[62]
J. Lopes, O. Engwall and G. Skantze, "A First Visit to the Robot Language Café," in Proceedings of the ISCA workshop on Speech and Language Technology in Education, 2017.
[63]
R. Johansson, G. Skantze and A. Jönsson, "A psychotherapy training environment with virtual patients implemented using the Furhat robot platform," in 17th International Conference on Intelligent Virtual Agents, IVA 2017, 2017, pp. 184-187.
[64]
V. Avramova et al., "A virtual poster presenter using mixed reality," in 17th International Conference on Intelligent Virtual Agents, IVA 2017, 2017, pp. 25-28.
[65]
T. Shore and G. Skantze, "Enhancing reference resolution in dialogue using participant feedback," in Proc. GLU 2017 International Workshop on Grounding Language Understanding, 2017, pp. 78-82.
[66]
G. Skantze, "Predicting and Regulating Participation Equality in Human-robot Conversations : Effects of Age and Gender," in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 2017, pp. 196-204.
[67]
G. Skantze, "Towards a General, Continuous Model of Turn-taking in Spoken Dialogue using LSTM Recurrent Neural Networks," in Proceedings of SIGDIAL 2017 - 18th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference, 2017.
[68]
M. Johansson et al., "Making Turn-Taking Decisions for an Active Listening Robot for Memory Training," in SOCIAL ROBOTICS, (ICSR 2016), 2016, pp. 940-949.
[70]
G. Skantze, M. Johansson and J. Beskow, "A Collaborative Human-Robot Game as a Test-bed for Modelling Multi-party, Situated Interaction," in INTELLIGENT VIRTUAL AGENTS, IVA 2015, 2015, pp. 348-351.
[71]
R. Meena et al., "Automatic Detection of Miscommunication in Spoken Dialogue Systems," in Proceedings of 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 2015, pp. 354-363.
[72]
J. Lopes et al., "Detecting Repetitions in Spoken Dialogue Systems Using Phonetic Distances," in INTERSPEECH-2015, 2015, pp. 1805-1809.
[73]
G. Skantze, M. Johansson and J. Beskow, "Exploring Turn-taking Cues in Multi-party Human-robot Discussions about Objects," in Proceedings of the 2015 ACM International Conference on Multimodal Interaction, 2015.
[74]
G. Skantze and M. Johansson, "Modelling situated human-robot interaction using IrisTK," in Proceedings of the SIGDIAL 2015 Conference, 2015, pp. 165-167.
[75]
M. Johansson and G. Skantze, "Opportunities and obligations to take turns in collaborative multi-party human-robot interaction," in SIGDIAL 2015 - 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference, 2015, pp. 305-314.
[76]
M. Johansson, G. Skantze and J. Gustafson, "Comparison of human-human and human-robot Turn-taking Behaviour in multi-party Situated interaction," in UM3I '14 : Proceedings of the 2014 workshop on Understanding and Modeling Multiparty, Multimodal Interactions, 2014, pp. 21-26.
[77]
R. Meena et al., "Crowdsourcing Street-level Geographic Information Using a Spoken Dialogue System," in Proceedings of the SIGDIAL 2014 Conference, 2014, pp. 2-11.
[78]
S. Al Moubayed et al., "Human-robot Collaborative Tutoring Using Multiparty Multimodal Spoken Dialogue," in 9th Annual ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 2014.
[79]
S. Al Moubayed, J. Beskow and G. Skantze, "Spontaneous spoken dialogues with the Furhat human-like robot head," in HRI '14 Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction, 2014, p. 326.
[80]
S. Al Moubayed et al., "Tutoring Robots: Multiparty Multimodal Social Dialogue With an Embodied Tutor," in 9th International Summer Workshop on Multimodal Interfaces, Lisbon, Portugal, 2014.
[81]
S. Al Moubayed et al., "UM3I 2014 : International workshop on understanding and modeling multiparty, multimodal interactions," in ICMI 2014 - Proceedings of the 2014 International Conference on Multimodal Interaction, 2014, pp. 537-538.
[82]
G. Skantze, C. Oertel and A. Hjalmarsson, "User Feedback in Human-Robot Dialogue : Task Progression and Uncertainty," in Proceedings of the HRI Workshop on Timing in Human-Robot Interaction, 2014.
[83]
R. Meena et al., "Using a Spoken Dialogue System for Crowdsourcing Street-level Geographic Information," in 2nd Workshop on Action, Perception and Language, SLTC 2014, 2014.
[84]
R. Meena, G. Skantze and J. Gustafson, "A Data-driven Model for Timing Feedback in a Map Task Dialogue System," in 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue - SIGdial, 2013, pp. 375-383.
[85]
G. Skantze, A. Hjalmarsson and C. Oertel, "Exploring the effects of gaze and pauses in situated human-robot interaction," in 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue : SIGDIAL 2013, 2013.
[86]
M. Johansson, G. Skantze and J. Gustafson, "Head Pose Patterns in Multiparty Human-Robot Team-Building Interactions," in Social Robotics : 5th International Conference, ICSR 2013, Bristol, UK, October 27-29, 2013, Proceedings, 2013, pp. 351-360.
[87]
R. Meena, G. Skantze and J. Gustafson, "Human Evaluation of Conceptual Route Graphs for Interpreting Spoken Route Descriptions," in Proceedings of the 3rd International Workshop on Computational Models of Spatial Language Interpretation and Generation (CoSLI), 2013, pp. 30-35.
[88]
S. Al Moubayed, J. Beskow and G. Skantze, "The Furhat Social Companion Talking Head," in Interspeech 2013 - Show and Tell, 2013, pp. 747-749.
[89]
R. Meena, G. Skantze and J. Gustafson, "The Map Task Dialogue System : A Test-bed for Modelling Human-Like Dialogue," in 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue - SIGdial, 2013, pp. 366-368.
[90]
G. Skantze, C. Oertel and A. Hjalmarsson, "User feedback in human-robot interaction : Prosody, gaze and timing," in Proceedings of Interspeech 2013, 2013, pp. 1901-1905.
[91]
R. Meena, G. Skantze and J. Gustafson, "A Chunking Parser for Semantic Interpretation of Spoken Route Directions in Human-Robot Dialogue," in Proceedings of the 4th Swedish Language Technology Conference (SLTC 2012), 2012, pp. 55-56.
[92]
G. Skantze, "A Testbed for Examining the Timing of Feedback using a Map Task," in Proceedings of the Interdisciplinary Workshop on Feedback Behaviors in Dialog, 2012.
[93]
R. Meena, G. Skantze and J. Gustafson, "A data-driven approach to understanding spoken route directions in human-robot dialogue," in 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012, 2012, pp. 226-229.
[95]
S. Al Moubayed et al., "Furhat : A Back-projected Human-like Robot Head for Multiparty Human-Machine Interaction," in Cognitive Behavioural Systems : COST 2102 International Training School, Dresden, Germany, February 21-26, 2011, Revised Selected Papers, 2012, pp. 114-130.
[96]
G. Skantze et al., "Furhat at Robotville : A Robot Head Harvesting the Thoughts of the Public through Multi-party Dialogue," in Proceedings of the Workshop on Real-time Conversation with Virtual Agents IVA-RCVA, 2012.
[97]
S. Al Moubayed et al., "Furhat goes to Robotville: a large-scale multiparty human-robot interaction data collection in a public space," in Proc of LREC Workshop on Multimodal Corpora, 2012.
[98]
G. Skantze and S. Al Moubayed, "IrisTK : A statechart-based toolkit for multi-party face-to-face interaction," in ICMI'12 - Proceedings of the ACM International Conference on Multimodal Interaction, 2012, pp. 69-75.
[99]
S. Al Moubayed, G. Skantze and J. Beskow, "Lip-reading : Furhat audio visual intelligibility of a back projected animated face," in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2012, pp. 196-203.
[100]
S. Al Moubayed et al., "Multimodal Multiparty Social Interaction with the Furhat Head," in 14th ACM International Conference on Multimodal Interaction, Santa Monica, CA, 2012, pp. 293-294.
[101]
S. Al Moubayed and G. Skantze, "Perception of Gaze Direction for Situated Interaction," in Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction, Gaze-In 2012, 2012.
[102]
S. Al Moubayed and G. Skantze, "Effects of 2D and 3D Displays on Turn-taking Behavior in Multiparty Human-Computer Dialog," in SemDial 2011 : Proceedings of the 15th Workshop on the Semantics and Pragmatics of Dialogue, 2011, pp. 192-193.
[103]
M. Johnson-Roberson et al., "Enhanced Visual Scene Understanding through Human-Robot Dialog," in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 3342-3348.
[104]
S. Al Moubayed and G. Skantze, "Turn-taking Control Using Gaze in Multiparty Human-Computer Dialogue : Effects of 2D and 3D Displays," in Proceedings of the International Conference on Audio-Visual Speech Processing 2011, 2011, pp. 99-102.
[105]
M. Johansson, G. Skantze and J. Gustafson, "Understanding route directions in human-robot dialogue," in Proceedings of SemDial, 2011, pp. 19-27.
[106]
M. Johnson-Roberson et al., "Enhanced visual scene understanding through human-robot dialog," in Dialog with Robots : AAAI 2010 Fall Symposium, 2010.
[107]
D. Schlangen et al., "Middleware for Incremental Processing in Conversational Agents," in Proceedings of SIGDIAL 2010 : the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2010, pp. 51-54.
[108]
G. Skantze and A. Hjalmarsson, "Towards Incremental Speech Generation in Dialogue Systems," in Proceedings of the SIGDIAL 2010 Conference : 11th Annual Meeting of the Special Interest Group onDiscourse and Dialogue, 2010, pp. 1-8.
[109]
D. Schlangen and G. Skantze, "A general, abstract model of incremental dialogue processing," in EACL 2009 - 12th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings, 2009, pp. 710-718.
[110]
G. Skantze and J. Gustafson, "Attention and interaction control in a human-human-computer dialogue setting," in Proceedings of SIGDIAL 2009 : the 10th Annual Meeting of the Special Interest Group in Discourse and Dialogue, 2009, pp. 310-313.
[111]
G. Skantze and D. Schlangen, "Incremental dialogue processing in a micro-domain," in Proceedings of the 12th Conference of the European Chapter of the ACL, 2009, pp. 745-753.
[112]
G. Skantze and J. Gustafson, "Multimodal interaction control in the MonAMI Reminder," in Proceedings of DiaHolmia : 2009 Workshop on the Semantics and Pragmatics of Dialogue, 2009, pp. 127-128.
[113]
J. Beskow et al., "The MonAMI Reminder : a spoken dialogue system for face-to-face interaction," in Proceedings of the 10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009, 2009, pp. 300-303.
[114]
J. Beskow et al., "Innovative interfaces in MonAMI : The Reminder," in Perception In Multimodal Dialogue Systems, Proceedings, 2008, pp. 272-275.
[115]
G. Skantze, "Making grounding decisions : Data-driven estimation of dialogue costs and confidence thresholds," in Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue, 2007, pp. 206-210.
[116]
G. Skantze, J. Edlund and R. Carlson, "Talking with Higgins : Research challenges in a spoken dialogue system," in Perception and Interactive Technologies, Proceedings, 2006, pp. 193-196.
[117]
Å. Wallers, J. Edlund and G. Skantze, "The effect of prosodic features on the interpretation of synthesised backchannels," in Perception And Interactive Technologies, Proceedings, 2006, pp. 183-187.
[118]
G. Skantze, D. House and J. Edlund, "User Responses to Prosodic Variation in Fragmentary Grounding Utterances in Dialog," in INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, 2006, pp. 2002-2005.
[119]
G. Skantze, "Galatea: a discourse modeller supporting concept-level error handling in spoken dialogue systems," in 6th SIGdial Workshop on Discourse and Dialogue, 2005, pp. 178-189.
[120]
J. Edlund, D. House and G. Skantze, "The effects of prosodic features on the interpretation of clarification ellipses," in Proceedings of Interspeech 2005 : Eurospeech, 2005, pp. 2389-2392.
[121]
G. Skantze and J. Edlund, "Early error detection on word level," in Proceedings of ISCA Tutorial and Research Workshop (ITRW) on Robustness Issues in Conversational Interaction, 2004.
[122]
J. Edlund, G. Skantze and R. Carlson, "Higgins : a spoken dialogue system for investigating error handling techniques," in Proceedings of the International Conference on Spoken Language Processing, ICSLP 04, 2004, pp. 229-231.
[123]
G. Skantze and J. Edlund, "Robust interpretation in the Higgins spoken dialogue system," in Proceedings of ISCA Tutorial and Research Workshop (ITRW) on Robustness Issues in Conversational Interaction, 2004.

Book chapters

[124]
G. Skantze, J. Gustafson and J. Beskow, "Multimodal Conversational Interaction with Robots," in The Handbook of Multimodal-Multisensor Interfaces, Volume 3 : Language Processing, Software, Commercialization, and Emerging Directions, Sharon Oviatt, Björn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos and Antonio Krüger, Eds. ACM Press, 2019.
[125]
J. Beskow et al., "Multimodal Interaction Control," in Computers in the Human Interaction Loop, Waibel, Alexander; Stiefelhagen, Rainer Ed., Berlin/Heidelberg : Springer Berlin/Heidelberg, 2009, pp. 143-158.
[126]
G. Skantze, "Galatea : A discourse modeller supporting concept-level error handling in spoken dialogue systems," in Recent Trends in Discourse and Dialogue, Dybkjær, L.; Minker, W. Ed., Dordrecht : Springer Science + Business Media B.V, 2008.

Non-peer-reviewed

Articles

[127]
D. Traum et al., "Special issue on multimodal processing and robotics for dialogue systems (Part II)," Advanced Robotics, vol. 38, no. 4, pp. 193-194, 2024.
[128]
D. Traum et al., "Special Issue on Multimodal processing and robotics for dialogue systems (Part 1)," Advanced Robotics, vol. 37, no. 21, pp. 1347-1348, 2023.

Conference papers

[129]
S. Al Moubayed et al., "UM3I 2014 chairs' welcome," in UM3I 2014 - Proceedings of the 2014 ACM Workshop on Understanding and Modeling Multiparty, Multimodal Interactions, Co-located with ICMI 2014, 2014, p. iii.
[130]
S. Al Moubayed et al., "Talking with Furhat - multi-party interaction with a back-projected robot head," in Proceedings of Fonetik 2012, 2012, pp. 109-112.
[131]
J. Beskow et al., "Speech technology in the European project MonAMI," in Proceedings of FONETIK 2008, 2008, pp. 33-36.
[132]
G. Skantze, D. House and J. Edlund, "Grounding and prosody in dialog," in Working Papers 52 : Proceedings of Fonetik 2006, 2006, pp. 117-120.
[133]
R. Carlson et al., "Towards human-like behaviour in spoken dialog systems," in Proceedings of Swedish Language Technology Conference (SLTC 2006), 2006.
[134]
J. Edlund, D. House and G. Skantze, "Prosodic Features in the Perception of Clarification Ellipses," in Proceedings of Fonetik 2005 : The XVIIIth Swedish Phonetics Conference, 2005, pp. 107-110.

Theses

[135]
G. Skantze, "Error Handling in Spoken Dialogue Systems : Managing Uncertainty, Grounding and Miscommunication," Doctoral thesis Stockholm : KTH, Trita-CSC-A, 2007:14, 2007.
Last synchronized with DiVA:
2024-04-28 01:04:52