Image Processing for Autonomous Positioning of Eye Surgery Robot in Micro-Cannulation

Takashi Tayama, Yusuke Kurose, Tatsuya Nitta, Kanako Harada*, Yusei Someya, Seiji Omata, Fumihito Arai, Fumiyuki Araki, Kiyoto Totsuka, Takashi Ueta, Yasuo Noda, Muneyuki Takao, Makoto Aihara, Naohiko Sugita, Mamoru Mitsuishi

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

7 Scopus citations

Abstract

Vitreoretinal surgery tasks are difficult even for expert surgeons. Therefore, an eye-surgery robot has been developed to assist surgeons in performing such difficult tasks accurately and safely. In this paper, autonomous positioning of a micropipette mounted on an eye-surgery robot is proposed; specifically, the shadow of the micropipette is used for positioning in the depth direction. First, several microscopic images of the micropipette and its shadow are obtained, and the images are manually segmented into three regions, namely, the micropipette, its shadow, and the eye ground. Next, each pixel of the segmented regions is labeled, and the labeled images are used as ground-truth data. Subsequently, a Gaussian mixture model (GMM) is trained by the eye-surgery robot system on pairs of microscope images and their corresponding ground-truth labels, using HSV color information as feature values. The trained GMM is then used to estimate the regions of the micropipette and its shadow in a real-time microscope image, as well as their tip positions, which are used for autonomous robotic position control. After planar positioning is performed using visual servoing, the micropipette is moved toward the eye ground until the distance between the tip of the micropipette and the tip of its shadow is equal to or less than a predefined threshold. Thus, the robot can accurately approach the eye ground and safely stop before contact. An autonomous positioning task was performed ten times in a simulated eye-surgery setup, and the robot stopped at an average height of 1.37 mm above a predefined target when the threshold was 1.4 mm. Further enhancement of the estimation accuracy in the image processing would improve the positioning accuracy and safety.
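The pipeline described in the abstract can be sketched in two parts: a supervised color-based classifier that assigns each HSV pixel to one of the three regions, and a stopping rule that compares the pipette-tip-to-shadow-tip distance against the threshold. The sketch below is a simplified stand-in for the paper's method, not its implementation: it fits a single diagonal Gaussian per class (rather than a full mixture), and all names, sample values, and the pixel-to-millimeter scale are illustrative assumptions.

```python
import math

# The paper segments microscope images into three regions; these
# class names are hypothetical labels used only in this sketch.
CLASSES = ("pipette", "shadow", "eyeground")

def fit_gaussian(samples):
    """Fit a diagonal Gaussian (per-channel mean/variance) to HSV samples."""
    n = len(samples)
    dims = len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dims)]
    variances = [
        max(sum((s[d] - means[d]) ** 2 for s in samples) / n, 1e-6)
        for d in range(dims)
    ]
    return means, variances

def log_likelihood(pixel, model):
    """Log-density of an HSV pixel under a diagonal Gaussian."""
    means, variances = model
    ll = 0.0
    for x, m, v in zip(pixel, means, variances):
        ll += -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
    return ll

def train(labelled_pixels):
    """labelled_pixels maps class name -> list of HSV tuples taken from
    the manually segmented ground-truth images."""
    return {c: fit_gaussian(px) for c, px in labelled_pixels.items()}

def classify(pixel, models):
    """Assign an HSV pixel to the most likely region."""
    return max(models, key=lambda c: log_likelihood(pixel, models[c]))

def should_stop(tip_px, shadow_tip_px, threshold_mm, mm_per_px):
    """Stop the descent once the image-plane distance between the
    micropipette tip and its shadow's tip (a proxy for height above
    the eye ground) falls to the predefined threshold."""
    d_px = math.dist(tip_px, shadow_tip_px)
    return d_px * mm_per_px <= threshold_mm
```

In the paper, the distance threshold of 1.4 mm produced an average stopping height of 1.37 mm, so the `should_stop` check above corresponds to the final approach phase after visual servoing has completed the planar positioning.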

Original language: English
Pages (from-to): 105-109
Number of pages: 5
Journal: Procedia CIRP
Volume: 65
DOIs
State: Published - 2017
Event: 3rd CIRP Conference on BioManufacturing 2017 - Chicago, United States
Duration: 2017/07/11 - 2017/07/14

Keywords

  • Eye Surgery
  • Image Processing
  • Robotics

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Industrial and Manufacturing Engineering

