
Maciej Wozniak


DOCTORAL STUDENT

Details

Address
LINDSTEDTSVÄGEN 24



About me

I am a Ph.D. student supervised by Patric Jensfelt and André Pereira. My doctoral studies are supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP).

I work on projects focused on 3D computer vision for robot perception and dynamic environment understanding. I also create VR/AR applications that improve human interaction with indoor and outdoor robots.

I am more than happy to supervise master's students who are interested in working with me on one of my projects.

Join our workshop on Virtual, Augmented and Mixed Reality for Human-Robot Interaction (VAM-HRI) at ACM/IEEE HRI.

Learn more and submit your contribution through our website:

https://vam-hri.github.io/

Master Thesis and Research Projects (DD2414/DD2411):

I feel extremely grateful for the opportunity to work and collaborate with my master's students. Feel free to reach out and join our group as a research project student (DD2414/DD2411) or a master's thesis student.

Master's students I currently supervise:

1. Viktor Kårefjärd (karef@kth.se). Now working at: Volvo Autonomous Solutions

Thesis: Multi-Modal Fusion for 3D Object Detection

2. Mattias Hansson (matthans@kth.se). Now working at: Husqvarna Group

Thesis: Unsupervised Domain Adaptation for Generalized 3D Object Detection in Mobile Robotics

3. Isak Gamnes Sneltvedt (igsn@kth.se). Now working at: Element Logic

Thesis: Using deep learning and RGBD images to represent and interact with virtual environment

Potential master's thesis projects:

1. Understand, create, and interact with robots through an AR/VR environment.

Augmented and virtual reality (AR/VR) are becoming increasingly present in our daily lives. These technologies allow users to interact and immerse themselves in the same environment regardless of the physical distance between them (e.g., different countries). A simple scenario could be a person or a robot equipped with vision sensor(s) moving around location A, and multiple people in location B wearing AR/VR headsets. We want people in location B to feel as if they were in location A themselves. However, many challenges have to be solved at the same time in order to create and visualize appealing real-world environments. All objects have to be detected, tracked, and have their pose correctly estimated. Additionally, we cannot simply share the whole collected point cloud in real time due to bandwidth limitations (see the downsampling sketch after the list below). There are different challenges you as a student could focus on:

  1. How to detect and change the style of objects in AR. Focusing on detecting objects, segmenting their parts (e.g., chair → seat, back support, legs), and changing their style (e.g., color) [1].
  2. Efficiently re-create and visualize objects in a VR/AR environment. Focusing on 3D object detection and 9DoF pose estimation of detected objects [2,3,4].
  3. Using multiple agents to collaboratively create a VR/AR environment. Track objects and share detections between agents to handle, e.g., occluded objects. Potentially, working with a robot on collaborative environment creation.
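
As a concrete flavor of the bandwidth problem mentioned above, here is a minimal sketch of voxel downsampling with the Open3D library, one common way to shrink a point cloud before streaming it. The file names and the 5 cm voxel size are illustrative assumptions, not part of any specific project setup.

```python
# Minimal sketch: shrink a point cloud before streaming it to AR/VR clients.
# Assumes the Open3D library; file names and voxel size are hypothetical.
import open3d as o3d

# Load a point cloud captured at location A.
pcd = o3d.io.read_point_cloud("scan_location_a.pcd")
print(f"raw points: {len(pcd.points)}")

# Voxel downsampling keeps one representative point per 5 cm voxel,
# trading geometric detail for a much smaller payload to transmit.
small = pcd.voxel_down_sample(voxel_size=0.05)
print(f"downsampled points: {len(small.points)}")

# In a real system this reduced cloud would be serialized and streamed
# to the headsets at location B instead of written to disk.
o3d.io.write_point_cloud("scan_location_a_small.pcd", small)
```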

In this research project, you will try to tackle these challenges. You will start with a literature review to better understand existing algorithms and methods. Next, you will spend a couple of weeks trying them on open-source datasets. By the end of period three, you will understand how these methods work, their limitations, and what can be improved.


In period four, you will either focus on applying these techniques in the real world or try to improve the performance (accuracy, runtime, etc.) of existing methods. By the end of the course, you will have broadened your knowledge of computer vision, robot perception, and their application to VR/AR technologies.

Skills required:

  •  Programming experience (Python, C++)
  •  Experience with the Robot Operating System (ROS)
  •  Experience with/interest in robot perception and deep learning
  •  Bonus: experience with/knowledge of Unity/C#

2. Improve the perception abilities of a last-mile delivery robot (joint project between KTH and TU Hamburg).

Mobile robots are starting to play a crucial role in our daily lives. Nevertheless, while there are plenty of datasets out there for autonomous cars, there are few city-oriented datasets for mobile robots. In your research, you will try to tackle various problems regarding data collection and sharing, such as online auto-anonymization of the data (a minimal sketch follows below). Additionally, we plan to create a collection of ground-truth (GT) datasets, mapping multiple campuses from different areas of the world. Such data can later be used for benchmarking, SLAM challenges, and other projects.
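
As a minimal sketch of what online auto-anonymization could look like, the snippet below blurs detected faces in camera frames using OpenCV's bundled Haar cascade detector. This is an illustrative assumption about the pipeline, not the project's actual method; a production system would likely use a stronger detector and also handle license plates.

```python
# Minimal sketch: blur faces in a camera frame before the data is shared.
# Assumes OpenCV; a real pipeline would use a stronger (deep) detector.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def anonymize(frame):
    """Return the BGR frame with every detected face Gaussian-blurred."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```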

Moreover, we are planning to address the related problem of teleoperation. You will use the data obtained by the robot and create a real-time framework for approximating and visualizing the environment, so the operator can see exactly what the robot understands about its surroundings (a minimal visualization sketch follows below). This will be achieved by building on, or using, a deep-learning-driven 3D tracking and segmentation algorithm.
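
To give a flavor of the visualization side, here is a minimal sketch that publishes detected objects as semi-transparent boxes for RViz, assuming ROS 1 (rospy). The topic name and the (x, y, z, size) detection format are hypothetical placeholders; in the real framework, detections would come from the 3D tracking and segmentation module.

```python
# Minimal sketch: show the robot's detections to a teleoperator in RViz.
# Assumes ROS 1 (rospy); topic name and detection format are hypothetical.
import rospy
from visualization_msgs.msg import Marker, MarkerArray

def to_marker(det, marker_id):
    """Convert one (x, y, z, size) detection into a cube marker."""
    m = Marker()
    m.header.frame_id = "map"
    m.header.stamp = rospy.Time.now()
    m.id = marker_id
    m.type = Marker.CUBE
    m.action = Marker.ADD
    m.pose.position.x, m.pose.position.y, m.pose.position.z = det[:3]
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = m.scale.z = det[3]
    m.color.r, m.color.a = 1.0, 0.5  # semi-transparent red boxes
    return m

rospy.init_node("detection_visualizer")
pub = rospy.Publisher("/robot/detections", MarkerArray, queue_size=1)
rospy.sleep(1.0)  # give subscribers (e.g., RViz) time to connect
# Placeholder detections; the real ones would come from the 3D tracker.
detections = [(1.0, 2.0, 0.5, 0.8), (3.5, -1.0, 0.4, 0.6)]
pub.publish(MarkerArray(markers=[to_marker(d, i) for i, d in enumerate(detections)]))
```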

Presentation about the project

More about the project (link to the news article)

References:

[1] Mo, Kaichun, et al. "PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[2] Avetisyan, Armen, et al. "Scan2CAD: Learning CAD model alignment in RGB-D scans." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[3] Avetisyan, Armen, Angela Dai, and Matthias Nießner. "End-to-end CAD model retrieval and 9DoF alignment in 3D scans." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.
[4] Maninis, Kevis-Kokitsi, et al. "Vid2CAD: CAD model alignment using multi-view constraints from videos." IEEE Transactions on Pattern Analysis and Machine Intelligence (2022).


Courses

Degree Project in Computer Science and Engineering, specializing in Interactive Media Technology, Second Cycle (DA232X), teacher | Course web

Engineering project in Robotics, Perception and Learning (DD2414), teacher | Course web

Introduction to Robotics (DD2410), assistant | Course web

Machine Learning (DD2421), assistant | Course web

Project Course in Robotics and Autonomous Systems (DD2419), assistant | Course web