
FY2018 International Conference Participation Report by an International Exchange Grant Recipient

Recipient / Participant
PATHAK Sarthak Mahesh
(Department of Precision Engineering, Graduate School of Engineering, The University of Tokyo)
Conference
International Workshop on Advanced Image Technology (IWAIT 2019)
Dates
January 6–9, 2019
Venue
Singapore

1. Overview of the International Conference

Photo: The conference venue

The International Workshop on Advanced Image Technology 2019 (IWAIT 2019) was the 22nd edition of the conference. It was also the second time it was held together with the International Forum on Medical Imaging in Asia 2019 (IFMIA 2019). Research in image processing and computer vision usually takes place in two separate fields: one for robotics and engineering-related applications, and another for medical imaging. Since the conferences for the two fields are usually separate, it is uncommon to see medical image processing papers at engineering conferences, or engineering papers at medical conferences. Hence, it was a refreshing change to see both being held together. There are many areas in which researchers from the two fields can learn from each other. For example, the human body is a dynamic environment, a problem also often encountered in robotics, and how to conduct image processing in such challenging environments is a very interesting and difficult task.

Both conferences were held at the NTU Executive Center of Nanyang Technological University. The plenary talk for IWAIT was given by Prof. Hajime Nagahara of Osaka University, whose main area of research is coded computational photography. He gave several examples of how coded apertures can lead to better performance during image capture: they can avoid issues such as motion blur, recover 3D information, and even enable higher frame rates. Moreover, all of this can be done without adding any delay on the processing or software side.
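As a rough illustration of one of these ideas, the short sketch below (my own toy example, not code from the talk) shows why a coded "flutter shutter" exposure helps with motion blur: blur is a convolution with the shutter pattern, and a conventional box-shaped exposure has exact spectral nulls that destroy image frequencies, while a pseudo-random on/off code typically keeps its spectrum bounded away from zero, so deconvolution stays well-posed. The code length and pattern here are arbitrary assumptions; real systems use carefully optimized codes.

```python
import numpy as np

# Illustrative sketch (not from the plenary talk): compare the frequency
# response of a conventional box exposure against a coded exposure.
# Deblurring divides by this response, so near-zero values mean lost detail.
rng = np.random.default_rng(0)
n = 32                                        # exposure split into 32 time slices
box = np.ones(n)                              # conventional shutter: open throughout
code = rng.integers(0, 2, n).astype(float)    # flutter shutter: pseudo-random on/off

N = 256  # zero-padded FFT length
for name, kernel in [("box blur", box), ("coded exposure", code)]:
    spectrum = np.abs(np.fft.rfft(kernel, N))
    print(f"{name:15s} min |H(f)| = {spectrum.min():.4f}")

# The box kernel has exact spectral nulls (min ~ 0), so those frequencies
# are unrecoverable; the random code generically has no exact nulls.
```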

2. Research Topic and Discussion

The research I presented was about a support system for infrastructure inspection. Tall bridges support large volumes of traffic, which induce cyclic loads on them; a single crack or defect could widen and cause catastrophic failure. Hence, they require periodic, close-up inspection. Current inspection practices are quite tedious, as they involve cranes and large hydraulic arms that move people close to the surface. To solve this issue, many inspection methods based on UAVs have been proposed. Equipped with a high-resolution camera and/or other sensors, a UAV can fly close to the structure and map the surface data, digitizing it for easier, offline inspection. For this purpose, the 3-dimensional position and orientation of the UAV relative to the physical structure must be estimated in order to map the collected data. GPS is insufficient to provide an accurate estimate of 3D position and orientation, so camera-based methods are preferred. There are many approaches that can perform 3D mapping and localization using perspective cameras. However, a normal perspective camera cannot view more than a tiny section of the structure, and distinguishing features would easily flow out of view.

In this research, we proposed a novel distortion-resistant visual odometry technique using a spherical camera, in order to provide localization for a UAV-based bridge inspection support system. We consider the distortion of the pixels during the calculation of the two-frame essential matrix via feature-point correspondences. Then, we triangulate 3D points and use them for 3D registration of subsequent frames in the sequence via a modified spherical error function. Through experiments conducted on a real bridge pillar, we demonstrated that the proposed approach greatly increases the accuracy of localization, resulting in an 8.6 times lower localization error.
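To make the first stage of such a pipeline concrete, here is a minimal sketch under my own assumptions (it is not the code used in the paper): equirectangular pixels from the spherical camera are converted to unit bearing vectors, so the epipolar constraint can be solved directly on the sphere rather than through a pinhole projection. The function names and the longitude/latitude parameterization are illustrative.

```python
import numpy as np

def pixel_to_bearing(u, v, width, height):
    """Map an equirectangular pixel to a unit bearing vector on the
    sphere (illustrative longitude/latitude convention)."""
    lon = (u / width) * 2.0 * np.pi - np.pi      # longitude in [-pi, pi]
    lat = np.pi / 2.0 - (v / height) * np.pi     # latitude in [pi/2, -pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def essential_from_bearings(x1, x2):
    """Linear 8-point estimate of E satisfying x2_i^T E x1_i = 0, where
    x1, x2 are (N, 3) arrays of unit bearing vectors. Working on bearings
    instead of pinhole image points is what lets heavily distorted
    spherical pixels be used directly."""
    A = np.stack([np.kron(b2, b1) for b1, b2 in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null vector of A, reshaped row-major
    # Project onto the essential manifold: singular values (s, s, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

Given at least eight matched feature points, the recovered E can be decomposed into a relative rotation and translation, and the triangulated 3D points then seed the registration stage described above.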

3. Outcomes of Attending the International Conference
(Communication, International Exchange, Impressions)

Photo: Giving the presentation

I presented my research in a 10-minute oral presentation, with approximately 5 minutes allotted for questions and answers. The main questions concerned how the performance of the localization system was evaluated; I explained that it was done using optical markers placed at fixed positions and visible from the camera. Another important point of discussion was whether the performance is sufficient for inspection. The answer really depends on the sensor used for inspection, its field of view, and the distance from the bridge. Our method could potentially use higher-resolution images, a larger number of feature points, and so on, to provide higher accuracy.

The other conference presentations were very interesting. One that stood out was a paper titled "FOE-based Regularization for Optical Flow Estimation from an In-vehicle Event Camera", from Keio University. It uses a method very similar to my own research (based on optical flow) with an event camera, a special type of camera that can track changes in its field of view at a very high rate.

I would like to express my gratitude to the Marubun Foundation for providing me with the assistance to attend this wonderful event, present my research, and broaden my knowledge.

