Publications by Mårten Björkman
Peer reviewed
Articles
[1] P. Khanna et al., "Hand it to me formally! Data-driven control for human-robot handovers with signal temporal logic," IEEE Robotics and Automation Letters, vol. 9, no. 10, pp. 9039-9046, 2024.
[2] M. Gamba et al., "Deep Double Descent via Smooth Interpolation," Transactions on Machine Learning Research, vol. 4, 2023.
[3] T. Olugbade et al., "Human Movement Datasets: An Interdisciplinary Scoping Review," ACM Computing Surveys, vol. 55, no. 6, 2023.
[4] W. Yin et al., "Multimodal dance style transfer," Machine Vision and Applications, vol. 34, no. 4, 2023.
[5] A. Maki et al., "In Memoriam: Jan-Olof Eklundh," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 9, pp. 4488-4489, 2022.
[6] A. Ghadirzadeh et al., "Training and Evaluation of Deep Policies Using Reinforcement Learning and Generative Models," Journal of Machine Learning Research, vol. 23, 2022.
[7] M. M. N. Bienkiewicz et al., "Bridging the gap between emotion and joint action," Neuroscience and Biobehavioral Reviews, vol. 131, pp. 806-833, 2021.
[8] A. Czeszumski et al., "Coordinating With a Robot Partner Affects Neural Processing Related to Action Monitoring," Frontiers in Neurorobotics, vol. 15, 2021.
[9] A. Ghadirzadeh et al., "Human-Centered Collaborative Robots With Deep Reinforcement Learning," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 566-571, 2021.
[10] J. Bütepage et al., "Imitating by Generating: Deep Generative Models for Imitation of Interactive Tasks," Frontiers in Robotics and AI, vol. 7, 2020.
[11] G. Z. Gandler et al., "Object shape estimation and modeling, based on sparse Gaussian process implicit surfaces, combining visual data and tactile exploration," Robotics and Autonomous Systems, vol. 126, 2020.
[12] Y. Bekiroglu et al., "Visual and tactile 3D point cloud data from real robots for shape modeling and completion," Data in Brief, vol. 30, 2020.
[13] V. Högman et al., "A sensorimotor learning framework for object categorization," IEEE Transactions on Cognitive and Developmental Systems, vol. 8, no. 1, pp. 15-25, 2016.
[14] M. Björkman, N. Bergström and D. Kragic, "Detecting, segmenting and tracking unknown objects using multi-label MRF inference," Computer Vision and Image Understanding, vol. 118, pp. 111-127, 2014.
[15] B. Rasolzadeh et al., "An Active Vision System for Detecting, Fixating and Manipulating Objects in the Real World," The International Journal of Robotics Research, vol. 29, no. 2-3, pp. 133-154, 2010.
[16] M. Björkman and J.-O. Eklundh, "Vision in the real world: Finding, attending and recognizing objects," International Journal of Imaging Systems and Technology, vol. 16, no. 5, pp. 189-208, 2006.
[17] J.-O. Eklundh and M. Björkman, "Recognition of Objects in the Real World from a Systems Perspective," Künstliche Intelligenz, vol. 19, no. 2, pp. 12-17, 2005.
[18] D. Kragic et al., "Vision for robotic object manipulation in domestic settings," Robotics and Autonomous Systems, vol. 52, no. 1, pp. 85-100, 2005.
[19] M. Björkman and J.-O. Eklundh, "Real-time epipolar geometry estimation of binocular stereo heads," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 3, pp. 425-432, 2002.
[20] F. Dahlgren, P. Stenström and M. Björkman, "Reducing the Read-Miss Penalty for Flat COMA Protocols," The Computer Journal, vol. 40, no. 4, pp. 208-219, 1997.
Conference papers
[21] A. Longhini et al., "Cloth-Splatting: 3D Cloth State Estimation from RGB Supervision," in 8th Annual Conference on Robot Learning, November 6-9, 2024, Munich, Germany, 2024.
[22] W. Yin et al., "Scalable Motion Style Transfer with Constrained Diffusion Generation," in Proceedings of the 38th AAAI Conference on Artificial Intelligence, 2024, pp. 10234-10242.
[23] Y. Zhang et al., "Will You Participate? Exploring the Potential of Robotics Competitions on Human-Centric Topics," in Human-Computer Interaction - Thematic Area, HCI 2024, Held as Part of the 26th HCI International Conference, HCII 2024, Proceedings, 2024, pp. 240-255.
[24] P. Khanna, M. Björkman and C. Smith, "A Multimodal Data Set of Human Handovers with Design Implications for Human-Robot Handovers," in 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2023, pp. 1843-1850.
[25] T. Rastogi and M. Björkman, "Automated Construction of Time-Space Diagrams for Traffic Analysis Using Street-View Video Sequences," in 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023, 2023, pp. 2282-2288.
[26] J. Fu et al., "Component attention network for multimodal dance improvisation recognition," in Proceedings of the 25th International Conference on Multimodal Interaction, ICMI 2023, 2023, pp. 114-118.
[27] W. Yin et al., "Controllable Motion Synthesis and Reconstruction with Autoregressive Diffusion Models," in 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2023, pp. 1102-1108.
[28] W. Yin et al., "Dance Style Transfer with Cross-modal Transformer," in 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 5047-5056.
[29] N. Rajabi et al., "Detecting the Intention of Object Handover in Human-Robot Collaborations: An EEG Study," in 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2023, pp. 549-555.
[30] F. Yang et al., "Diffusion-Based Time Series Data Imputation for Cloud Failure Prediction at Microsoft 365," in ESEC/FSE 2023 - Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2023, pp. 2050-2055.
[31] P. Khanna et al., "Effects of Explanation Strategies to Resolve Failures in Human-Robot Collaboration," in 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2023, pp. 1829-1836.
[32] P. Khanna et al., "How do Humans take an Object from a Robot: Behavior changes observed in a User Study," in HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, 2023, pp. 372-374.
[33] N. Rajabi et al., "Mental Face Image Retrieval Based on a Closed-Loop Brain-Computer Interface," in Augmented Cognition: 17th International Conference, AC 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Proceedings, 2023, pp. 26-45.
[34] M. Gamba, H. Azizpour and M. Björkman, "On the Lipschitz Constant of Deep Networks and Double Descent," in Proceedings of the 34th British Machine Vision Conference, BMVC 2023, 2023.
[35] S. Sabzevari et al., "PG-3DVTON: Pose-Guided 3D Virtual Try-on Network," in VISIGRAPP 2023 - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Volume 4, 2023, pp. 819-829.
[36] M. Tarle et al., "Safe Reinforcement Learning for Mitigation of Model Errors in FACTS Setpoint Control," in 2023 International Conference on Smart Energy Systems and Technologies, SEST 2023, 2023.
[37] X. Zhu et al., "Surface Defect Detection with Limited Training Data: A Case Study on Crown Wheel Surface Inspection," in 56th CIRP International Conference on Manufacturing Systems, CIRP CMS 2023, 2023, pp. 1333-1338.
[38] X. Zhu et al., "Towards sim-to-real industrial parts classification with synthetic dataset," in Proceedings: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023, 2023, pp. 4454-4463.
[39] M. Gamba et al., "Are All Linear Regions Created Equal?," in Proceedings of the 25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022, 2022.
[40] J. Styrud et al., "Combining Planning and Learning of Behavior Trees for Robotic Assembly," in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 11511-11517.
[41] P. Khanna, M. Björkman and C. Smith, "Human Inspired Grip-Release Technique for Robot-Human Handovers," in 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 2022, pp. 694-701.
[42] J. R. Baldvinsson et al., "IL-GAN: Rare Sample Generation via Incremental Learning in GANs," in 2022 IEEE Global Communications Conference (GLOBECOM 2022), 2022, pp. 621-626.
[43] S. U. Demir Kanik et al., "Improving EEG-based Motor Execution Classification for Robot Control," in Social Computing and Social Media: Design, User Experience and Impact - 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Proceedings, 2022, pp. 65-82.
[44] A. Ghadirzadeh et al., "Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms," in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 1274-1280.
[45] W. Yin et al., "Graph-based Normalizing Flow for Human Motion Generation and Reconstruction," in 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2021, pp. 641-648.
[46] S. Chen et al., "Monte Carlo Filtering Objectives," in IJCAI International Joint Conference on Artificial Intelligence, 2021, pp. 2256-2262.
[47] X. Chen et al., "Adversarial Feature Training for Generalizable Robotic Visuomotor Control," in 2020 International Conference on Robotics and Automation (ICRA), 2020, pp. 1142-1148.
[48] S. Chen et al., "Amortized Variational Inference for Road Friction Estimation," in 2020 IEEE Intelligent Vehicles Symposium (IV), 2020, pp. 1777-1784.
[49] F. Yang et al., "Group Behavior Recognition Using Attention- and Graph-Based Neural Networks," in ECAI 2020: 24th European Conference on Artificial Intelligence, 2020.
[50] F. Yang et al., "Impact of Trajectory Generation Methods on Viewer Perception of Robot Approaching Group Behaviors," in 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020, 2020, pp. 509-516.
[51] E. Sibirtseva et al., "Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality," in Virtual, Augmented and Mixed Reality. Multimodal Interaction: 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, 2019, pp. 108-123.
[52] X. Chen et al., "Meta-Learning for Multi-objective Reinforcement Learning," in Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019, 2019, pp. 977-983.
[53] M. Gamba et al., "On the geometry of rectifier convolutional neural networks," in Proceedings of the 2019 International Conference on Computer Vision Workshop, ICCVW 2019, 2019, pp. 793-797.
[54] X. Chen et al., "Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.
[55] A. Ghadirzadeh et al., "Deep predictive policy training using reinforcement learning," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, 2017, pp. 2351-2358.
[56] A. Ghadirzadeh et al., "A sensorimotor reinforcement learning framework for physical human-robot interaction," in IEEE International Conference on Intelligent Robots and Systems, 2016, pp. 2682-2688.
[57] A. Ghadirzadeh et al., "Self-learning and adaptation in a sensorimotor framework," in Proceedings of the IEEE International Conference on Robotics and Automation, 2016, pp. 551-558.
[58] A. Ghadirzadeh, A. Maki and M. Björkman, "A Sensorimotor Approach for Self-Learning of Hand-Eye Coordination," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, September 28 - October 2, 2015, 2015, pp. 4969-4975.
[59] X. Gratal et al., "Integrating 3D features and virtual visual servoing for hand-eye and humanoid robot pose estimation," in IEEE-RAS International Conference on Humanoid Robots, 2015, pp. 240-245.
[60] I. Lundberg, M. Björkman and P. Ögren, "Intrinsic camera and hand-eye calibration for a robot vision system using a point marker," in IEEE-RAS International Conference on Humanoid Robots, 2015, pp. 59-66.
[61] F. T. Pokorny et al., "Grasp Moduli Spaces, Gaussian Processes and Multimodal Sensor Data," in RSS 2014 Workshop: Information-based Grasp and Manipulation Planning, July 13, 2014, Berkeley, California, 2014.
[62] M. Björkman and Y. Bekiroglu, "Learning to Disambiguate Object Hypotheses through Self-Exploration," in 14th IEEE-RAS International Conference on Humanoid Robots, 2014.
[63] A. Ghadirzadeh et al., "Learning visual forward models to compensate for self-induced image motion," in 23rd IEEE International Conference on Robot and Human Interactive Communication, IEEE RO-MAN, 2014, pp. 1110-1115.
[64] M. Björkman et al., "Enhancing Visual Perception of Shape through Tactile Glances," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013, pp. 3180-3186.
[65] V. Högman, M. Björkman and D. Kragic, "Interactive object classification using sensorimotor contingencies," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013, pp. 2799-2805.
[66] L. Nalpantidis, M. Björkman and D. Kragic, "YES - YEt another object Segmentation: exploiting camera movement," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, pp. 2116-2121.
[67] J. Bohg et al., "Acting and Interacting in the Real World," in European Robotics Forum 2011: RGB-D Workshop on 3D Perception in Robotics, Västerås, Sweden, April 8, 2011, 2011.
[68] N. Bergström, M. Björkman and D. Kragic, "Generating Object Hypotheses in Natural Scenes through Human-Robot Interaction," in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011, pp. 827-833.
[69] N. Bergström et al., "Scene Understanding through Autonomous Interactive Perception," in Computer Vision Systems: Lecture Notes in Computer Science, 2011, pp. 153-162.
[70] M. Björkman and D. Kragic, "Active 3D Segmentation through Fixation of Previously Unseen Objects," in British Machine Vision Conference (BMVC), Aberystwyth, UK, 2010, pp. 119.1-119.11.
[71] M. Björkman and D. Kragic, "Active 3D scene segmentation and detection of unknown objects," in IEEE International Conference on Robotics and Automation (ICRA), Anchorage, USA, 2010, pp. 3114-3120.
[72] N. Bergström et al., "Active Scene Analysis," in RSS Workshop on Towards Closing the Loop: Active Learning for Robotics, Universidad de Zaragoza, Spain, June 27, 2010, 2010.
[73] M. Johnson-Roberson et al., "Attention-based Active 3D Point Cloud Segmentation," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), 2010, pp. 1165-1170.
[74] X. Gratal et al., "Scene Representation and Object Grasping Using Active Vision," in IROS'10 Workshop on Defining and Solving Realistic Perception Problems in Personal Robotics, Taipei, Taiwan, 2010.
[75] J. Bohg et al., "Strategies for Multi-Modal Scene Exploration," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), 2010, pp. 4509-4515.
[76] K. Hübner et al., "Integration of visual and shape attributes for object action complexes," in Computer Vision Systems, Proceedings, 2008, pp. 13-22.
[77] P. Jensfelt et al., "A framework for vision based bearing only 3D SLAM," in Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, Florida, May 2006, vols. 1-10, 2006, pp. 1944-1950.
[78] B. Rasolzadeh, M. Björkman and J.-O. Eklundh, "An attentional system combining top-down and bottom-up influences," in 2nd International Cognitive Vision Workshop (ICVW '06), Graz, Austria, May 2006, 2006.
[79] A. Tavakoli Targhi et al., "Real-time texture detection using the LU-transform," 2006.
[80] D. Kragic and M. Björkman, "Strategies for object manipulation using foveal and peripheral vision," in International Conference on Computer Vision Systems (ICVS), New York, USA, 2006, p. 50.
[81] M. Björkman and J.-O. Eklundh, "Foveated Figure-Ground Segmentation and Its Role in Recognition," in BMVC 2005 - Proceedings of the British Machine Vision Conference 2005, 2005, pp. 819-828.
[82] M. Björkman and J.-O. Eklundh, "Attending, Foveating and Recognizing Objects in Real World Scenes," in British Machine Vision Conference (BMVC), London, UK, 2004, pp. 227-236.
[83] M. Björkman and D. Kragic, "Combination of foveal and peripheral vision for object recognition and pose estimation," in Proceedings of the 2004 IEEE International Conference on Robotics and Automation, vols. 1-5, 2004, pp. 5135-5140.
[84] M. Björkman and J.-O. Eklundh, "Visual Cues for a Fixating Active Agent," in Proc. Robot Vision, 2001.
[85] M. Björkman and J.-O. Eklundh, "A Real-Time System for Epipolar Geometry and Ego-Motion Estimation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR'00), 2000, pp. 506-513.
[86] M. Björkman and J.-O. Eklundh, "Real-Time Epipolar Geometry Estimation and Disparity," in Proc. International Conference on Computer Vision (ICCV'99), 1999, pp. 234-241.
[87] M. Björkman, F. Dahlgren and P. Stenström, "Using Hints to Reduce the Read Miss Penalty for Flat COMA Protocols," in Proc. of the 28th Hawaii International Conference on System Sciences, 1995, pp. 242-251.
Non-peer reviewed
Articles
[88] A. L. Gert et al., "Coordinating with a Robot Partner Affects Action Monitoring Related Neural Processing," Psychophysiology, vol. 58, p. S60, 2021.
Other
[89] M. Gamba et al., "Different Faces of Model Scaling in Supervised and Self-Supervised Learning," (Manuscript).
[90] M. Gamba et al., "When Does Self-Supervised Pre-Training Yield Robust Representations?," (Manuscript).
Latest sync with DiVA: 2024-12-06 01:02:14