Publications
Listed here are the division's 50 most recent publications from KTH's publication portal DiVA.
A link to the full publication list for RPL in DiVA can be found at the bottom of this list.
Publications by RPL authors
[1]
S. Jin, R. Wang, and F. T. Pokorny,
"RealCraft: Attention Control as a Tool for Zero-Shot Consistent Video Editing,"
in Neural Information Processing - 32nd International Conference, ICONIP 2025, Proceedings, 2026, pp. 137-152.
[2]
P. Khanna et al.,
"Early detection of human handover intentions in human–robot collaboration: Comparing EEG, gaze, and hand motion,"
Robotics and Autonomous Systems, vol. 196, 2026.
[3]
A. Larsson Forsberg et al.,
"Temporal Intent-Aware Multi-agent Learning for Network Optimization,"
in Computer Safety, Reliability, and Security. SAFECOMP 2025 Workshops - CoC3CPS, DECSoS, SASSUR, SENSEI, SRToITS, and WAISE, 2025, Proceedings, 2026, pp. 29-40.
[4]
R. Johansson, P. Hammer, and T. Lofthouse,
"Arbitrarily Applicable Same/Opposite Relational Responding with NARS,"
in Artificial General Intelligence - 18th International Conference, AGI 2025, Proceedings, 2026, pp. 314-324.
[5]
R. Lanzino et al.,
"Neural Transcoding Vision Transformers for EEG-to-fMRI Synthesis,"
in Computer Vision - ECCV 2024 Workshops, Pt XX, 2025, pp. 53-70.
[6]
I. Hakkinen et al.,
"Medical Image Segmentation with SAM-Generated Annotations,"
in Computer Vision - ECCV 2024 Workshops, Pt XXII, 2025, pp. 51-62.
[7]
Y. Ma et al.,
"Measuring User Experience Through Speech Analysis : Insights from HCI Interviews,"
i Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, CHI EA 2025, 2025.
[8]
S. Jin et al.,
"PACA : Perspective-Aware Cross-Attention Representation for Zero-shot Scene Rearrangement,"
i Proceedings IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025, 2025, s. 6559-6569.
[9]
H. Azuma, Y. Matsui, and A. Maki,
"ZoDi: Zero-Shot Domain Adaptation with Diffusion-Based Image Transfer,"
in Computer Vision - ECCV 2024 Workshops, Pt XVIII, 2025, pp. 151-167.
[10]
Y. Ma et al.,
"Advancing User-Voice Interaction : Exploring Emotion-Aware Voice Assistants Through a Role-Swapping Approach,"
i Distributed, Ambient And Pervasive Interactions, Dapi 2025, Pt I, 2025, s. 303-320.
[11]
Y. Zhao, S. Gerard och Y. Ban,
"TS-SatFire : A Multi-Task Satellite Image Time-Series Dataset for Wildfire Detection and Prediction,"
Scientific Data, vol. 12, no. 1, 2025.
[12]
O. Zaland et al.,
"One-Shot Federated Learning with Classifier-Free Diffusion Models,"
in 2025 IEEE International Conference on Multimedia and Expo: Journey to the Center of Machine Imagination, ICME 2025 - Conference Proceedings, 2025.
[13]
F. Ahmad, J. Styrud, and V. Krueger,
"Addressing Failures in Robotics Using Vision-Based Language Models (VLMs) and Behavior Trees (BT),"
in European Robotics Forum 2025, 2025, pp. 281-287.
[14]
H. Fang and H. Azizpour,
"Leveraging Satellite Image Time Series for Accurate Extreme Event Detection,"
in 2025 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, WACVW, 2025, pp. 489-498.
[15]
C. Ceylan,
"Towards Unsupervised, Analysable and Scalable Node Embedding Models for Transaction Networks,"
Doctoral thesis, Stockholm: KTH Royal Institute of Technology, TRITA-EECS-AVL, 2025:100, 2025.
[16]
F. Zangeneh,
"Camera Relocalization through Distribution Modeling,"
Doctoral thesis, Stockholm: KTH Royal Institute of Technology, TRITA-EECS-AVL, 2025:106, 2025.
[17]
X. Zhu et al.,
"Towards Automated Assembly Quality Inspection with Synthetic Data and Domain Randomization,"
in Proceedings: IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2025, 2025, pp. 1395-1403.
[18]
X. Zhu,
"Towards Automated Parts Recognition in Manufacturing with Synthetic Data,"
Doctoral thesis, Stockholm: KTH Royal Institute of Technology, TRITA-EECS-AVL, 2025:105, 2025.
[19]
Q. Yang et al.,
"S2-Diffusion : Generalizing from Instance-level to Category-level Skills in Robot Manipulation,"
IEEE Robotics and Automation Letters, 2025.
[20]
Q. Zhang et al.,
"HiMo: High-Speed Objects Motion Compensation in Point Clouds,"
IEEE Transactions on Robotics, vol. 41, pp. 5896-5911, 2025.
[21]
S. Qamar et al.,
"ScaleFusionNet: transformer-guided multi-scale feature fusion for skin lesion segmentation,"
Scientific Reports, vol. 15, no. 1, 2025.
[22]
Y. Xu et al.,
"Skor-Xg : Skeleton-Oriented Expected Goal Estimation in Soccer,"
i Proceedings - 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2025, 2025, s. 5957-5967.
[23]
M. Kartašev et al.,
"SMaRCSim : Maritime Robotics Simulation Modules,"
i 2025 Symposium on Maritime Informatics and Robotics, MARIS 2025, 2025.
[24]
Z. Gong et al.,
"Bridging Cultures : A Framework for Facial Expression and Empathy,"
i IEEE International Conference on Multimedia and Expo Workshops: Journey to the Center of Machine Imagination, ICMEW 2025 - Proceedings, 2025.
[25]
L. Bruns et al.,
"ACE-G : Improving Generalization of Scene Coordinate Regression Through Query Pre-Training,"
i Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, s. 26751-26761.
[26]
L. Bruns, J. Zhang, and P. Jensfelt,
"Neural Graph Map: Dense Mapping with Efficient Loop Closure Integration,"
in 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 2900-2909.
[27]
L. Bruns,
"Improving Spatial Understanding Through Learning and Optimization,"
Doctoral thesis, Stockholm: KTH Royal Institute of Technology, TRITA-EECS-AVL, 2025:97, 2025.
[28]
P. Isaev and P. Hammer,
"NARS-GPT: An Integrated Reasoning System for Natural Language Interactions,"
in Intelligent Systems and Applications - Proceedings of the 2025 Intelligent Systems Conference IntelliSys, 2025, pp. 404-420.
[29]
X. Wang et al.,
"LineArt: A Knowledge-guided Training-free High-quality Appearance Transfer for Design Drawing with Diffusion Model,"
in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2025, pp. 2912-2923.
[30]
H. Lu et al.,
"Grasping a Handful: Sequential Multi-Object Dexterous Grasp Generation,"
IEEE Robotics and Automation Letters, vol. 10, no. 11, s. 11880-11887, 2025.
[31]
R. Wang et al.,
"Feature Extractor or Decision Maker: Rethinking the Role of Visual Encoders in Visuomotor Policies,"
in 2025 IEEE International Conference on Robotics and Automation, ICRA 2025, 2025, pp. 3654-3661.
[32]
C. Liu et al.,
"Message from the General and Program Chairs CVPR 2025,"
in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2025, pp. ccclxxxii-ccclxxxiii.
[33]
A. Sánchez Roncero, R. I. Cabral Muchacho, and P. Ögren,
"Multi-Agent Obstacle Avoidance Using Velocity Obstacles and Control Barrier Functions,"
in 2025 IEEE International Conference on Robotics and Automation, ICRA 2025, 2025, pp. 6638-6644.
[34]
R. I. Cabral Muchacho and F. T. Pokorny,
"Adaptive Distance Functions via Kelvin Transformation,"
in 2025 IEEE International Conference on Robotics and Automation, ICRA 2025, 2025, pp. 3015-3021.
[35]
M. Iovino et al.,
"Comparison between Behavior Trees and Finite State Machines,"
IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 21098-21117, 2025.
[36]
J. Styrud et al.,
"Automatic Behavior Tree Expansion with LLMs for Robotic Manipulation,"
in 2025 IEEE International Conference on Robotics and Automation, ICRA 2025, 2025, pp. 1225-1232.
[37]
M. Vahs et al.,
"Forward Invariance in Trajectory Spaces for Safety-Critical Control,"
in 2025 IEEE International Conference on Robotics and Automation, ICRA 2025, 2025, pp. 3926-3932.
[38]
A. Khoche et al.,
"SSF: Sparse Long-Range Scene Flow for Autonomous Driving,"
in 2025 IEEE International Conference on Robotics and Automation, ICRA 2025, 2025, pp. 6394-6400.
[39]
X. Zhu et al.,
"Domain Randomization for Object Detection in Manufacturing Applications Using Synthetic Data: A Comprehensive Study,"
in 2025 IEEE International Conference on Robotics and Automation, ICRA 2025, 2025, pp. 16715-16721.
[40]
A. Gorm Hoffmann et al.,
"Gaussian Process Regression for Value-Censored Functional and Longitudinal Data,"
Statistics in Medicine, vol. 44, no. 20-22, 2025.
[41]
I. Leite et al.,
"A Call for Deeper Collaboration Between Robotics and Game Development,"
in Proceedings of the IEEE 2025 Conference on Games, CoG 2025, 2025.
[42]
C. R. Sidrane and J. Tumova,
"TTT: A Temporal Refinement Heuristic for Tenuously Tractable Discrete Time Reachability Problems,"
in 2025 American Control Conference, ACC 2025, 2025, pp. 1288-1293.
[43]
A. Terán Espinoza et al.,
"A Consistent Dataset for Dynamic Underwater Proximity Operations,"
in OCEANS 2025 Brest, OCEANS 2025, 2025.
[44]
A. R. Asadi, Y. Zhang, and H. Said,
"What do personas say about privacy & security: a systematic literature review through human-AI collaboration,"
Journal of Ambient Intelligence and Humanized Computing, 2025.
[45]
C. Nguyen et al.,
"TinyKube : A Middleware for Dynamic Resource Management in Cloud-Edge Platforms for Large-Scale Cloud Robotics,"
i Proceedings of IEEE/IFIP Network Operations and Management Symposium 2025, NOMS 2025, 2025.
[46]
K. J. D'souza, S. A. Muthukumaraswamy, and G. Balasubramanian,
"On the Analysis of Swarm Robotics in Sensor-Based Environmental Monitoring for Sustainable Poultry Farming,"
in Intelligent Strategies for ICT - Proceedings of ICTCS 2024, 2025, pp. 73-84.
[47]
L. F. Wu et al.,
"Airborne Underwater Vehicle Recovery System: Eagle-Inspired Trajectory Generation and Control for UAV-Assisted Recovery of AUVs,"
IEEE Access, vol. 13, s. 149087-149099, 2025.
[48]
S. Hafner et al.,
"DisasterAdaptiveNet: A robust network for multi-hazard building damage detection from very-high-resolution satellite imagery,"
International Journal of Applied Earth Observation and Geoinformation, vol. 143, 2025.
[49]
H. Ding et al.,
"Fast and Robust Visuomotor Riemannian Flow Matching Policy,"
IEEE Transactions on Robotics, vol. 41, pp. 5327-5343, 2025.
[50]
E. Banzuzi, J. D'Ciofalo Khodaverdian, and K. Deckenbach,
"A Reproducibility Study of Decoupling Feature Extraction and Classification Layers for Calibrated Neural Networks,"
Transactions on Machine Learning Research, vol. June-2025, 2025.