
Master Thesis proposals

Evaluation of graph representations for cloth deformable objects in a contrastive learning framework

Deformable objects such as clothes are difficult for robots to manipulate due to their infinite degrees of freedom, but geometric representations can be used to focus on features relevant to manipulation, such as shape and displacement.

Representations obtained from landmarks have shown success in manipulation tasks like folding or flattening clothes [1]. However, these representations often require human annotations and generalize poorly to new classes of objects. In contrast, other geometric representations such as graphs and skeletons can be extracted in an unsupervised way using computer vision techniques [2]. Graphs and meshes are also the most common representations in computer graphics and simulation for modeling deformable objects and fluids [3, 4].
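As a rough illustration of what such an unsupervised extraction step might produce, the toy sketch below builds a graph from a set of 2-D skeleton pixels, of the kind a thinning algorithm like [2] could output. The function names (`skeleton_to_graph`, `classify_nodes`) and the Y-shaped toy skeleton are hypothetical choices for this example, not part of the project specification.

```python
# Sketch: turning skeleton pixels into a graph (assumed toy input,
# not a real cloth image). Nodes are pixels; edges use 8-connectivity.
from itertools import product

def skeleton_to_graph(pixels):
    """Map a set of (row, col) skeleton pixels to an adjacency dict."""
    pixels = set(pixels)
    graph = {p: [] for p in pixels}
    for (r, c) in pixels:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) == (0, 0):
                continue  # skip the pixel itself
            q = (r + dr, c + dc)
            if q in pixels:
                graph[(r, c)].append(q)
    return graph

def classify_nodes(graph):
    """Endpoints have degree 1, junctions degree >= 3."""
    endpoints = [p for p, nbrs in graph.items() if len(nbrs) == 1]
    junctions = [p for p, nbrs in graph.items() if len(nbrs) >= 3]
    return endpoints, junctions

# Toy Y-shaped skeleton: two arms meeting at (2, 2), stem below.
skeleton = [(0, 0), (1, 1), (0, 4), (1, 3), (2, 2), (3, 2)]
g = skeleton_to_graph(skeleton)
ends, joints = classify_nodes(g)  # three endpoints, one junction
```

The endpoints and junctions found this way are natural candidates for graph nodes in a downstream representation, with skeleton branches as edges.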

The goal of this project is to assess which types of skeleton and graph representations are best suited for downstream tasks such as classification or manipulation. The student will create different pipelines to extract skeleton and graph representations from images of cloth and test them on a downstream classification task using a graph contrastive learning framework [5]. We will use datasets of cloth items such as DeepFashion and DeepFashion2 [6, 7]. The final step will be to exploit these representations in robotic manipulation tasks like folding and flattening cloth objects.
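To give a flavor of the contrastive objective involved, the sketch below implements a simplified NT-Xent loss over two embedded "views" of each graph in plain NumPy. This is only an illustrative simplification, not the actual multi-view method of [5]; the embeddings here are assumed to be L2-normalized vectors produced by some graph encoder.

```python
# Sketch of a simplified NT-Xent contrastive loss (assumption: rows of
# z1 and z2 are L2-normalized embeddings of two views of the same graph).
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    n = z1.shape[0]
    z = np.vstack([z1, z2])                 # (2n, d): all views stacked
    sim = z @ z.T / tau                     # temperature-scaled cosine sims
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    # The positive partner of anchor i is i+n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Aligned views (identical embeddings) should score a lower loss than
# views paired with the wrong graph.
loss_aligned = nt_xent(np.eye(4), np.eye(4))
loss_shuffled = nt_xent(np.eye(4), np.roll(np.eye(4), 1, axis=0))
```

Minimizing such a loss pulls the two views of each graph together in embedding space while pushing different graphs apart, which is the mechanism the classification experiments would rely on.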

Required qualifications: proficiency in Machine Learning and Data Science. Applicants are expected to have passed KTH courses such as Machine Learning, Project Course in Data Science, Probabilistic Graphical Models, or equivalent. Confidence in Python and graph algorithms is a merit.

Contact persons:

Marco Moletta (KTH): 

Alexander Kravchenko (KTH):

Michael Welle (KTH): 

References:

[1] T. Ziegler, J. Butepage, M. C. Welle, A. Varava, T. Novkovic and D. Kragic, "Fashion Landmark Detection and Category Classification for Robotics," 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2020, pp. 81-88, doi: 10.1109/ICARSC49921.2020.9096071.

[2] Ta-Chih Lee, Rangasami L. Kashyap, and Chong-Nam Chu. 1994. Building skeleton models via 3-D medial surface/axis thinning algorithms. CVGIP: Graph. Models Image Process. 56, 6 (Nov. 1994), 462–478. DOI:https://doi.org/10.1006/cgip.1994.1042

[3] Sanchez-Gonzalez, A., Godwin, J., Pfaff, T., Ying, R., Leskovec, J. & Battaglia, P. (2020). "Learning to Simulate Complex Physics with Graph Networks", Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8459-8468. Available from https://proceedings.mlr.press/v119/sanchez-gonzalez20a.html.

[4] Weng, Paus, Varava, Yin, Asfour and Kragic, “Graph-based Task-specific Prediction Models for Interactions between Deformable and Rigid Objects”, CoRR, 2021, https://arxiv.org/abs/2103.02932.

[5] Hassani, Kaveh and Amir Hosein Khas Ahmadi. "Contrastive Multi-View Representation Learning on Graphs." arXiv abs/2006.05582 (2020).

[6] Liu, Ziwei and Luo, Ping and Qiu, Shi and Wang, Xiaogang and Tang, Xiaoou, “DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. https://liuziwei7.github.io/projects/DeepFashion.html

[7] Ge, Zhang, Wu, Wang, Tang and Luo, "DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images", CVPR, 2019. https://github.com/switchablenorms/DeepFashion2
