Publications by Mårten Björkman

Peer-reviewed

Articles

[1]
T. Olugbade et al., "Human Movement Datasets : An Interdisciplinary Scoping Review," ACM Computing Surveys, vol. 55, no. 6, 2023.
[2]
W. Yin et al., "Multimodal dance style transfer," Machine Vision and Applications, vol. 34, no. 4, 2023.
[3]
A. Maki et al., "In Memoriam : Jan-Olof Eklundh," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 9, pp. 4488-4489, 2022.
[4]
A. Ghadirzadeh et al., "Training and Evaluation of Deep Policies Using Reinforcement Learning and Generative Models," Journal of Machine Learning Research, vol. 23, 2022.
[5]
M. M. N. Bienkiewicz et al., "Bridging the gap between emotion and joint action," Neuroscience and Biobehavioral Reviews, vol. 131, pp. 806-833, 2021.
[6]
A. Czeszumski et al., "Coordinating With a Robot Partner Affects Neural Processing Related to Action Monitoring," Frontiers in Neurorobotics, vol. 15, 2021.
[7]
A. Ghadirzadeh et al., "Human-Centered Collaborative Robots With Deep Reinforcement Learning," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 566-571, 2021.
[8]
J. Bütepage et al., "Imitating by Generating : Deep Generative Models for Imitation of Interactive Tasks," Frontiers in Robotics and AI, vol. 7, 2020.
[11]
V. Högman et al., "A sensorimotor learning framework for object categorization," IEEE Transactions on Cognitive and Developmental Systems, vol. 8, no. 1, pp. 15-25, 2016.
[12]
M. Björkman, N. Bergström and D. Kragic, "Detecting, segmenting and tracking unknown objects using multi-label MRF inference," Computer Vision and Image Understanding, vol. 118, pp. 111-127, 2014.
[13]
B. Rasolzadeh et al., "An Active Vision System for Detecting, Fixating and Manipulating Objects in the Real World," The International Journal of Robotics Research, vol. 29, no. 2-3, pp. 133-154, 2010.
[14]
M. Björkman och J.-O. Eklundh, "Vision in the real world : Finding, attending and recognizing objects," International journal of imaging systems and technology (Print), vol. 16, no. 5, s. 189-208, 2006.
[15]
J.-O. Eklundh and M. Björkman, "Recognition of Objects in the Real World from a Systems Perspective," Künstliche Intelligenz, vol. 19, no. 2, pp. 12-17, 2005.
[16]
D. Kragic et al., "Vision for robotic object manipulation in domestic settings," Robotics and Autonomous Systems, vol. 52, no. 1, pp. 85-100, 2005.
[17]
M. Björkman och J.-O. Eklundh, "Real-time epipolar geometry estimation of binocular stereo heads," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 3, s. 425-432, 2002.
[18]
F. Dahlgren, P. Stenström and M. Björkman, "Reducing the Read-Miss Penalty for Flat COMA Protocols," The Computer Journal, vol. 40, no. 4, pp. 208-219, 1997.

Conference papers

[19]
W. Yin et al., "Scalable Motion Style Transfer with Constrained Diffusion Generation," i The 38th Annual AAAI Conference on Artificial Intelligence, February 20-27, 2024, Vancouver, Canada, 2024.
[20]
P. Khanna, M. Björkman och C. Smith, "A Multimodal Data Set of Human Handovers with Design Implications for Human-Robot Handovers," i 2023 32nd IEEE international conference on robot and human interactive communication, RO-MAN, 2023, s. 1843-1850.
[21]
T. Rastogi and M. Björkman, "Automated Construction of Time-Space Diagrams for Traffic Analysis Using Street-View Video Sequences," in 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023, 2023, pp. 2282-2288.
[22]
J. Fu et al., "Component attention network for multimodal dance improvisation recognition," in Proceedings of the 25th International Conference on Multimodal Interaction, ICMI 2023, 2023, pp. 114-118.
[23]
W. Yin et al., "Controllable Motion Synthesis and Reconstruction with Autoregressive Diffusion Models," i 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, s. 1102-1108.
[24]
W. Yin et al., "Dance Style Transfer with Cross-modal Transformer," i 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, s. 5047-5056.
[25]
N. Rajabi et al., "Detecting the Intention of Object Handover in Human-Robot Collaborations : An EEG Study," i 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, s. 549-555.
[26]
F. Yang et al., "Diffusion-Based Time Series Data Imputation for Cloud Failure Prediction at Microsoft 365," i ESEC/FSE 2023 - Proceedings of the 31st ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2023, s. 2050-2055.
[27]
P. Khanna et al., "Effects of Explanation Strategies to Resolve Failures in Human-Robot Collaboration," i 2023 32nd IEEE international conference on robot and human interactive communication, RO-MAN, 2023, s. 1829-1836.
[28]
P. Khanna et al., "How do Humans take an Object from a Robot : Behavior changes observed in a User Study," i HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, 2023, s. 372-374.
[29]
N. Rajabi et al., "Mental Face Image Retrieval Based on a Closed-Loop Brain-Computer Interface," i Augmented Cognition : 17th International Conference, AC 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Proceedings, 2023, s. 26-45.
[30]
S. Sabzevari et al., "PG-3DVTON : Pose-Guided 3D Virtual Try-on Network," in VISIGRAPP 2023 - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Volume 4, 2023, pp. 819-829.
[31]
M. Tarle et al., "Safe Reinforcement Learning for Mitigation of Model Errors in FACTS Setpoint Control," in 2023 International Conference on Smart Energy Systems and Technologies, SEST 2023, 2023.
[32]
X. Zhu et al., "Surface Defect Detection with Limited Training Data : A Case Study on Crown Wheel Surface Inspection," i 56th CIRP International Conference on Manufacturing Systems, CIRP CMS 2023, 2023, s. 1333-1338.
[33]
X. Zhu et al., "Towards sim-to-real industrial parts classification with synthetic dataset," i Proceedings : 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023, 2023, s. 4454-4463.
[34]
M. Gamba et al., "Are All Linear Regions Created Equal?," i Proceedings 25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022, 2022.
[35]
J. Styrud et al., "Combining Planning and Learning of Behavior Trees for Robotic Assembly," in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 11511-11517.
[36]
P. Khanna, M. Björkman och C. Smith, "Human Inspired Grip-Release Technique for Robot-Human Handovers," i 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 2022, s. 694-701.
[37]
J. R. Baldvinsson et al., "IL-GAN : Rare Sample Generation via Incremental Learning in GANs," in 2022 IEEE Global Communications Conference (GLOBECOM 2022), 2022, pp. 621-626.
[38]
S. U. Demir Kanik et al., "Improving EEG-based Motor Execution Classification for Robot Control," in Proceedings 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022 : Social Computing and Social Media: Design, User Experience and Impact, 2022, pp. 65-82.
[39]
A. Ghadirzadeh et al., "Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms," i 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, s. 1274-1280.
[40]
W. Yin et al., "Graph-based Normalizing Flow for Human Motion Generation and Reconstruction," i 2021 30th IEEE international conference on robot and human interactive communication (RO-MAN), 2021, s. 641-648.
[41]
S. Chen et al., "Monte Carlo Filtering Objectives," i IJCAI International Joint Conference on Artificial Intelligence, 2021, s. 2256-2262.
[42]
X. Chen et al., "Adversarial Feature Training for Generalizable Robotic Visuomotor Control," i 2020 International Conference on Robotics And Automation (ICRA), 2020, s. 1142-1148.
[43]
S. Chen et al., "Amortized Variational Inference for Road Friction Estimation," i 2020 IEEE Intelligent Vehicles Symposium (IV), 2020, s. 1777-1784.
[44]
F. Yang et al., "Group Behavior Recognition Using Attention- and Graph-Based Neural Networks," i ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020.
[45]
F. Yang et al., "Impact of Trajectory Generation Methods on Viewer Perception of Robot Approaching Group Behaviors," i 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020, 2020, s. 509-516.
[46]
E. Sibirtseva et al., "Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality," in Virtual, Augmented and Mixed Reality. Multimodal Interaction 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, 2019, pp. 108-123.
[47]
X. Chen et al., "Meta-Learning for Multi-objective Reinforcement Learning," i Proceedings 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019, 2019, s. 977-983.
[48]
M. Gamba et al., "On the geometry of rectifier convolutional neural networks," i Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019, 2019, s. 793-797.
[49]
X. Chen et al., "Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments," i 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.
[50]
A. Ghadirzadeh et al., "Deep predictive policy training using reinforcement learning," i 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, 2017, s. 2351-2358.
[51]
A. Ghadirzadeh et al., "A sensorimotor reinforcement learning framework for physical human-robot interaction," i IEEE International Conference on Intelligent Robots and Systems, 2016, s. 2682-2688.
[52]
A. Ghadirzadeh et al., "Self-learning and adaptation in a sensorimotor framework," i Proceedings - IEEE International Conference on Robotics and Automation, 2016, s. 551-558.
[53]
A. Ghadirzadeh, A. Maki and M. Björkman, "A Sensorimotor Approach for Self-Learning of Hand-Eye Coordination," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, September 28 - October 02, 2015, 2015, pp. 4969-4975.
[54]
X. Gratal et al., "Integrating 3D features and virtual visual servoing for hand-eye and humanoid robot pose estimation," i IEEE-RAS International Conference on Humanoid Robots, 2015, s. 240-245.
[55]
I. Lundberg, M. Björkman and P. Ögren, "Intrinsic camera and hand-eye calibration for a robot vision system using a point marker," in IEEE-RAS International Conference on Humanoid Robots, 2015, pp. 59-66.
[56]
F. T. Pokorny et al., "Grasp Moduli Spaces, Gaussian Processes and Multimodal Sensor Data," in RSS 2014 Workshop: Information-based Grasp and Manipulation Planning, July 13, 2014, Berkeley, California, 2014.
[57]
M. Björkman and Y. Bekiroglu, "Learning to Disambiguate Object Hypotheses through Self-Exploration," in 14th IEEE-RAS International Conference on Humanoid Robots, 2014.
[58]
A. Ghadirzadeh et al., "Learning visual forward models to compensate for self-induced image motion," i 23rd IEEE International Conference on Robot and Human Interactive Communication : IEEE RO-MAN, 2014, s. 1110-1115.
[59]
M. Björkman et al., "Enhancing Visual Perception of Shape through Tactile Glances," i Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, 2013, s. 3180-3186.
[60]
V. Högman, M. Björkman and D. Kragic, "Interactive object classification using sensorimotor contingencies," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013, pp. 2799-2805.
[61]
L. Nalpantidis, M. Björkman and D. Kragic, "YES - YEt another object Segmentation : exploiting camera movement," in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, 2012, pp. 2116-2121.
[62]
J. Bohg et al., "Acting and Interacting in the Real World," i European Robotics Forum 2011: RGB-D Workshop on 3D Perception in Robotics. Västerås, Sweden. April 8, 2011, 2011.
[63]
N. Bergström, M. Björkman and D. Kragic, "Generating Object Hypotheses in Natural Scenes through Human-Robot Interaction," in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2011, pp. 827-833.
[64]
N. Bergström et al., "Scene Understanding through Autonomous Interactive Perception," i Computer Vision Systems : Lecture Notes in Computer Science, 2011, s. 153-162.
[65]
M. Björkman och D. Kragic, "Active 3D Segmentation through Fixation of Previously Unseen Objects," i British Machine Vision Conference (BMVC), Aberystwyth, UK, 2010, s. 119.1-119.11.
[66]
M. Björkman och D. Kragic, "Active 3D scene segmentation and detection of unknown objects," i IEEE International Conference on Robotics and Automation (ICRA), Anchorage, USA, 2010, s. 3114-3120.
[67]
N. Bergström et al., "Active Scene Analysis," i RSS Workshop on Towards Closing the Loop: Active Learning for Robotics. Univeridad de Zaragoza, Spain. June 27th 2010, 2010.
[68]
M. Johnson-Roberson et al., "Attention-based Active 3D Point Cloud Segmentation," in IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010), 2010, pp. 1165-1170.
[69]
X. Gratal et al., "Scene Representation and Object Grasping Using Active Vision," i IROS’10 Workshop on Defining and Solving Realistic Perception Problems in Personal Robotics, Taipei, Taiwan, 2010., 2010.
[70]
J. Bohg et al., "Strategies for Multi-Modal Scene Exploration," i IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), 2010, s. 4509-4515.
[71]
K. Hübner et al., "Integration of visual and shape attributes for object action complexes," in Computer Vision Systems, Proceedings, 2008, pp. 13-22.
[72]
P. Jensfelt et al., "A framework for vision based bearing only 3D SLAM," in Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, Florida - May 2006 : Vols 1-10, 2006, pp. 1944-1950.
[73]
B. Rasolzadeh, M. Björkman and J.-O. Eklundh, "An attentional system combining top-down and bottom-up influences," in 2nd International Cognitive Vision Workshop (ICVW '06), Graz, Austria, May 2006, 2006.
[74]
A. Tavakoli Targhi et al., "Real-time texture detection using the LU-transform," 2006.
[75]
D. Kragic and M. Björkman, "Strategies for object manipulation using foveal and peripheral vision," in International Conference on Computer Vision Systems (ICVS), New York, USA, 2006, p. 50.
[76]
M. Björkman och J.-O. Eklundh, "Foveated Figure-Ground Segmentation and Its Role in Recognition," i BMVC 2005 - Proceedings of the British Machine Vision Conference 2005, 2005, s. 819-828.
[77]
M. Björkman och J.-O. Eklundh, "Attending, Foveating and Recognizing Objects in Real World Scenes," i British Machine Vision Conference (BMVC), London, UK, 2004, s. 227-236.
[78]
M. Björkman och D. Kragic, "Combination of foveal and peripheral vision for object recognition and pose estimation," i 2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1- 5, PROCEEDINGS, 2004, s. 5135-5140.
[79]
M. Björkman och J.-O. Eklundh, "Visual Cues for a Fixating Active Agent," i Proc. Robot Vision, 2001.
[80]
M. Björkman och J.-O. Eklundh, "A Real-Time System for Epipolar Geometry and Ego-Motion Estimation," i Proc. IEEE Computer Vision and Pattern Recognition (CVPR’00), 2000, s. 506-513.
[81]
M. Björkman och J.-O. Eklundh, "Real-Time Epipolar Geometry Estimation and Disparity," i Proc. International Conference on Computer Vision (ICCV’99), 1999, s. 234-141.
[82]
M. Björkman, F. Dahlgren and P. Stenström, "Using Hints to Reduce the Read Miss Penalty for Flat COMA Protocols," in Proc. of the 28th Hawaii International Conference on System Sciences, 1995, pp. 242-251.

Non-peer-reviewed

Articles

[83]
A. L. Gert et al., "Coordinating with a Robot Partner Affects Action Monitoring Related Neural Processing," Psychophysiology, vol. 58, pp. S60-S60, 2021.
Last synchronized with DiVA:
2024-04-21 01:43:31