By Dr. Jiungyao York Huang
Since Microsoft released the video Envisioning the Future with Windows Mixed Reality in 2016, portraying a future world built around HoloLens, mixed reality has become a hot research topic in the AR/VR community. However, even the latest HoloLens 2 has a horizontal FoV of only 43° and a vertical FoV of 29°, making it very hard for the user to become fully immersed in the mixed reality world shown in the video. The fundamental limitation is that HoloLens uses optical see-through display technology.
This talk focuses on our experience in creating a fully immersive mixed reality application with a deep image matting model and a video see-through headset. With the growing maturity of deep convolutional neural network technology in both hardware and software, we believe that a video see-through headset combined with a deep image matting model is a promising solution for true mixed reality applications.
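The abstract does not detail the pipeline, but the core idea behind matting-based video see-through compositing can be sketched as follows: a matting model predicts a per-pixel alpha matte for the camera frame, and the frame is then blended with the rendered virtual scene using the standard compositing equation C = αF + (1 − α)B. The function and variable names below are illustrative, not from the talk.

```python
import numpy as np

def composite(camera_frame, virtual_scene, alpha):
    """Blend the real-world camera foreground over the virtual background.

    camera_frame, virtual_scene: float arrays of shape (H, W, 3) in [0, 1]
    alpha: float array of shape (H, W) in [0, 1], e.g. predicted by a
           deep image matting model (1 = real foreground, 0 = virtual scene)
    """
    a = alpha[..., None]  # broadcast the matte over the color channels
    return a * camera_frame + (1.0 - a) * virtual_scene

# Toy usage with random data standing in for real frames and a real matte
h, w = 4, 4
cam = np.random.rand(h, w, 3)
vr = np.random.rand(h, w, 3)
alpha = np.zeros((h, w))
alpha[:, :2] = 1.0  # pretend the left half of the frame is real foreground
out = composite(cam, vr, alpha)
```

In a headset, this blend would run per frame on the matte produced by the network, so matting speed and temporal stability matter as much as matte quality.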
Dr. Jiungyao York Huang
Technical deputy division director
Jiungyao Huang received his MS degree in Computer Science in 1988 from Tsing-Hua University, Taiwan, and his BS degree in 1983 from the Department of Applied Mathematics at Chung-Hsing University, Taiwan. He received his PhD degree in Electrical and Computer Engineering from the University of Massachusetts Amherst, USA, in 1993. Currently, he is a technical deputy division director in the Division of Embedded System & SoC Technology in the Information and Communications Research Laboratories of ITRI, Taiwan. He retired from National Taipei University, Taiwan, in 2017. Before that, he taught at National Chung Cheng University, Taiwan, from 2003 to 2006, and Tamkang University, Taiwan, from 1993 to 2003. He has worked on AR/VR-related research for over 20 years and in recent years has been highly interested in applying deep convolutional neural network models to AR/VR. His research interests also include pervasive computing, context awareness, wearable computing, and neural networks.