Authors: Chen, T.-I.; Wang, J.-W.; Hsu, Winston
Date available: 2022-04-25
Date issued: 2021
ISSN: 2153-0858
Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85124373186&doi=10.1109%2fIROS51168.2021.9635829&partnerID=40&md5=62153e0faac4b80dee460e8aa1423052
Repository URL: https://scholars.lib.ntu.edu.tw/handle/123456789/607482

Abstract: Object detection plays a central role in visual systems by identifying instances for downstream algorithms. In industrial scenarios, however, even a slight change to a manufacturing system can require costly data re-collection and human annotation to re-train models. Existing solutions such as semi-supervised and few-shot methods either rely on numerous human annotations or suffer from low performance. In this work, we explore a novel object detector based on interactive perception (ODIP), which can be adapted to novel domains in an automated manner. By interacting with a grasping system, ODIP accumulates visual observations of novel objects, learning to identify previously unseen instances without human-annotated data. Extensive experiments show that ODIP outperforms both a generic object detector and a state-of-the-art few-shot object detector fine-tuned in the traditional manner. A demo video is provided to further illustrate the idea [1]. © 2021 IEEE.

Keywords: Computer vision; Manufacture; Object recognition; Automatic adaptation; Downstream; Human annotations; Industrial scenarios; Object detectors; Object detection; Performance; Semi-supervised; Train model; Visual systems
SDGs: SDG 9
Title: ODIP: Towards Automatic Adaptation for Object Detection by Interactive Perception
Type: conference paper
DOI: 10.1109/IROS51168.2021.9635829
Scopus EID: 2-s2.0-85124373186