In this paper, we present the AI Goggles system, which can instantly describe objects and scenes in the real world and retrieve visual memories about them using keywords entered by the user. It is a stand-alone wearable system that runs on a small mobile computer. The system can also quickly learn unknown objects and scenes through teaching, labeling and retrieving them on site without losing recognition ability for previously learned ones. As the core algorithm of the system, we propose and implement a new method for multi-label annotation and retrieval of unconstrained real-world images. Our method outperforms the current state-of-the-art method in both accuracy and computation speed on a standard benchmark dataset. This is a major contribution to the development of visual and memory-assistive human-machine interfaces.
Published in: 2009 Canadian Conference on Computer and Robot Vision
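The abstract describes three capabilities: annotating images with labels, retrieving past images by keyword, and learning new labels on site without forgetting earlier ones. The sketch below is a hypothetical toy illustration of that interface, not the paper's actual algorithm: images are stored as (labels, feature-vector) entries, retrieval is a keyword lookup, and annotation of a new image uses the labels of its nearest stored neighbors by cosine similarity. The class name `VisualMemory` and all methods are assumptions made for illustration.

```python
from collections import defaultdict

class VisualMemory:
    """Toy sketch of a teach / annotate / retrieve memory (not the paper's method)."""

    def __init__(self):
        self.entries = []              # list of (image_id, label set, feature vector)
        self.index = defaultdict(set)  # keyword -> set of image ids

    def teach(self, image_id, labels, features):
        """Store a newly taught image; earlier entries are untouched,
        so previously learned labels keep working."""
        self.entries.append((image_id, set(labels), list(features)))
        for label in labels:
            self.index[label].add(image_id)

    def retrieve(self, keyword):
        """Return ids of stored images annotated with the keyword."""
        return sorted(self.index.get(keyword, set()))

    def annotate(self, features, k=3):
        """Label a new image with the union of labels of its k most
        similar stored images (cosine similarity)."""
        def cos(a, b):
            num = sum(x * y for x, y in zip(a, b))
            na = sum(x * x for x in a) ** 0.5
            nb = sum(y * y for y in b) ** 0.5
            return num / (na * nb) if na and nb else 0.0

        ranked = sorted(self.entries,
                        key=lambda e: cos(features, e[2]),
                        reverse=True)
        labels = set()
        for _, entry_labels, _ in ranked[:k]:
            labels |= entry_labels
        return sorted(labels)


# Usage: teach two images, then retrieve by keyword and annotate a new one.
mem = VisualMemory()
mem.teach("img1", ["cup"], [1.0, 0.0])
mem.teach("img2", ["book"], [0.0, 1.0])
```

Here incremental learning is trivially interference-free because each taught image is an independent entry; the actual system would need this property from its learned model rather than from a lookup table.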
Embedding AI into AR goggles
Current consumer-focused AR glasses are largely confined to entertainment, such as watching movies and playing video games. Yet the potential of overlaying computer-vision output directly onto our field of view is too promising to ignore.
Our team is working towards embedding AI capabilities into AR glasses for consumers.