TY - JOUR
T1 - Image Processing for Autonomous Positioning of Eye Surgery Robot in Micro-Cannulation
AU - Tayama, Takashi
AU - Kurose, Yusuke
AU - Nitta, Tatsuya
AU - Harada, Kanako
AU - Someya, Yusei
AU - Omata, Seiji
AU - Arai, Fumihito
AU - Araki, Fumiyuki
AU - Totsuka, Kiyoto
AU - Ueta, Takashi
AU - Noda, Yasuo
AU - Takao, Muneyuki
AU - Aihara, Makoto
AU - Sugita, Naohiko
AU - Mitsuishi, Mamoru
N1 - Publisher Copyright:
© 2016 The Authors. Published by Elsevier B.V.
PY - 2017
Y1 - 2017
N2 - Vitreoretinal surgery tasks are difficult even for expert surgeons. Therefore, an eye-surgery robot has been developed to assist surgeons in performing such difficult tasks accurately and safely. In this paper, a method for the autonomous positioning of a micropipette mounted on an eye-surgery robot is proposed; specifically, the shadow of the micropipette is used for positioning in the depth direction. First, several microscopic images of the micropipette and its shadow are obtained, and the images are manually segmented into three regions, namely, the micropipette, its shadow, and the eye ground. Next, each pixel of the segmented regions is labeled, and the labeled images are used as ground-truth data. Subsequently, a Gaussian mixture model (GMM) is trained by the eye-surgery robot system on the sets of microscope images and their corresponding ground-truth data, using HSV color information as feature values. The trained GMM is then used to estimate the regions of the micropipette and its shadow in a real-time microscope image, as well as their tip positions, which are used for autonomous robotic position control. After planar positioning is performed using visual servoing, the micropipette is moved toward the eye ground until the distance between the tip of the micropipette and that of its shadow is equal to or less than a predefined threshold. Thus, the robot can accurately approach the eye ground and stop safely before contact. An autonomous positioning task was performed ten times in a simulated eye-surgery setup, and the robot stopped at an average height of 1.37 mm above a predefined target when the threshold was 1.4 mm. Further improvement in the estimation accuracy of the image processing would enhance the positioning accuracy and safety.
AB - Vitreoretinal surgery tasks are difficult even for expert surgeons. Therefore, an eye-surgery robot has been developed to assist surgeons in performing such difficult tasks accurately and safely. In this paper, a method for the autonomous positioning of a micropipette mounted on an eye-surgery robot is proposed; specifically, the shadow of the micropipette is used for positioning in the depth direction. First, several microscopic images of the micropipette and its shadow are obtained, and the images are manually segmented into three regions, namely, the micropipette, its shadow, and the eye ground. Next, each pixel of the segmented regions is labeled, and the labeled images are used as ground-truth data. Subsequently, a Gaussian mixture model (GMM) is trained by the eye-surgery robot system on the sets of microscope images and their corresponding ground-truth data, using HSV color information as feature values. The trained GMM is then used to estimate the regions of the micropipette and its shadow in a real-time microscope image, as well as their tip positions, which are used for autonomous robotic position control. After planar positioning is performed using visual servoing, the micropipette is moved toward the eye ground until the distance between the tip of the micropipette and that of its shadow is equal to or less than a predefined threshold. Thus, the robot can accurately approach the eye ground and stop safely before contact. An autonomous positioning task was performed ten times in a simulated eye-surgery setup, and the robot stopped at an average height of 1.37 mm above a predefined target when the threshold was 1.4 mm. Further improvement in the estimation accuracy of the image processing would enhance the positioning accuracy and safety.
KW - Eye Surgery
KW - Image Processing
KW - Robotics
UR - http://www.scopus.com/inward/record.url?scp=85029710674&partnerID=8YFLogxK
U2 - 10.1016/j.procir.2017.04.036
DO - 10.1016/j.procir.2017.04.036
M3 - Conference article
AN - SCOPUS:85029710674
SN - 2212-8271
VL - 65
SP - 105
EP - 109
JO - Procedia CIRP
JF - Procedia CIRP
T2 - 3rd CIRP Conference on BioManufacturing 2017
Y2 - 11 July 2017 through 14 July 2017
ER -