Intelligent agents gather information and perceive the semantics of their environments before taking on given tasks. The agents store the collected information in the form of environment models that compactly represent their surroundings. Without an efficient and effective environment model, however, agents can perform only limited tasks. Thus, such an environment model plays a crucial role in the autonomy of intelligent agents. We identify the following characteristics of a versatile environment model: accuracy, applicability, usability, and scalability. Although a number of researchers have attempted to develop models that represent environments precisely to a certain degree, these models lack broad applicability, intuitive usability, and satisfactory scalability. To tackle these limitations, we propose the 3-D scene graph as an environment model, along with a framework for its construction. The concise and widely used graph structure readily guarantees both the usability and the scalability of the 3-D scene graph. We demonstrate its accuracy and applicability by deploying the 3-D scene graph in practical applications. Moreover, we verify the performance of the proposed 3-D scene graph and construction framework through a series of comprehensive experiments under various conditions.
Text entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the focus of text-entry research has shifted from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to the lack of tactile feedback and degrade the usability of mobile devices by occupying a large portion of the screen. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices, and the DND, empowered by a deep neural architecture, allows users to start typing from any position on the touch screen at any angle. To the best of our knowledge, this work is the first to explore the eyes-free ten-finger typing scenario of I-Keyboard, which requires neither a calibration step nor a predefined typing region. To train the DND, we collected the largest user dataset of its kind in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND through a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95% and 4.06% improvements in typing speed (45.57 WPM) and accuracy (95.84%), respectively, over the baseline.
The automated home, referred to as a Smart Home, is expected to offer fully customized services to its residents, reducing the amount of household labor and thus improving human welfare. Service robots and the Internet of Things (IoT) play key roles in the development of the Smart Home. Service provision with these two main components in a Smart Home environment requires: 1) learning and reasoning algorithms and 2) the integration of the robot and IoT systems. Conventional computational-intelligence-based learning and reasoning algorithms fail to manage dynamic changes in Smart Home data, and simple integrations fail to fully exploit the synergy between the two systems. To tackle these limitations, we propose: 1) a stabilized memory network with a feedback mechanism that can learn user behaviors incrementally and 2) a robot-IoT service provision framework for a Smart Home that employs the proposed memory architecture as a learning and reasoning module and exploits the synergy between the robot and IoT systems. We conduct a set of comprehensive experiments under various conditions to verify the performance of the proposed memory architecture and service provision framework, and we analyze the results.