Development of AI technology that infers the furniture assembly process by understanding unstructured multimodal information in assembly manuals
Development of algorithms to generate and validate task plans for assembling furniture in real and virtual environments
Acceleration of AI-robot convergence research by adopting an open software development process and the MIT License
Automatic extraction of required images and text from assembly manual images
Inference of relationships between assembly processes and parts by understanding the images and text in the assembly manual
Recognition of the component type and its region of interest in images from the simulation, real, and manual environments
Comparison of similarity among the simulation, real, and manual environments to analyze their relationships and validate assembly completion in real time
Task plan generation by recognizing the combining regions and combining relationships corresponding to each component
Real-time task validation for failure response strategies
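The real-time validation item above can be sketched as a check of observed assembly events against the planned sequence. This is a minimal illustration only; the data model (ordered steps of part pairs and connectors) and all part names are assumptions, not the project's actual representation.

```python
# Minimal sketch of real-time task-plan validation (hypothetical data model):
# a plan is an ordered list of assembly steps, and observed events are
# checked against the expected step to trigger a failure-response strategy.

def validate_assembly(plan, observed):
    """Compare observed assembly events to the planned sequence.

    Returns ('ok', n) if all observed events match the plan so far,
    or ('failure', i) with the index of the first mismatching step.
    """
    for i, event in enumerate(observed):
        if i >= len(plan) or event != plan[i]:
            return ("failure", i)
    return ("ok", len(observed))

plan = [("leg_1", "frame", "bolt_M6"),
        ("leg_2", "frame", "bolt_M6"),
        ("top", "frame", "screw_M4")]

print(validate_assembly(plan, plan[:2]))  # partial progress matches the plan
print(validate_assembly(plan, [plan[0], ("top", "frame", "screw_M4")]))  # step skipped
```

In a real system the observed events would come from the component and combining-region recognizers above rather than being given directly.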
Selection of 50 kinds of food suitable for an intelligent meal assistant robot
Development of a dataset for the food detection system
Development of a deep neural network for a Korean meal assistant robot
Advancement of the food type recognition algorithm
Development of a food type recognition system for an intelligent meal history management service
Development of an intelligent meal history management service
Construction of a DB server for image analysis
Construction of a CAD image DB of feature shapes
Construction of a DB covering various arrangements of components
Construction of a deep learning system for shape recognition in CAD images
Deep learning model development based on the region proposal method (R-CNN) and single-shot detector methods (YOLO, SSD)
Recognition of the ROI of the feature shape
Estimation of the type, location, and direction of the feature shape
Estimation of the location of the feature shape
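The ROI recognition and location estimation items above are typically evaluated with intersection-over-union (IoU), the standard overlap metric for both region-proposal (R-CNN) and single-shot (YOLO, SSD) detectors. A minimal sketch, with boxes as (x1, y1, x2, y2) pixel coordinates:

```python
# Intersection-over-union (IoU) between two axis-aligned boxes,
# the usual criterion for matching a detected ROI to ground truth.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175
```

A detection is usually counted as correct when its IoU with the ground-truth feature-shape ROI exceeds a threshold such as 0.5.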
Development of a speech detection dataset for online and mobile phone games
Validation of voice activity detection in a real-time game playing environment
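Voice activity detection assigns a speech/non-speech decision to each short audio frame. The sketch below uses a simple short-time-energy threshold purely to illustrate that frame-level structure; a game-audio VAD would use a learned model, and the frame length and threshold here are illustrative assumptions.

```python
import math

# Frame-level VAD sketch: short-time energy per fixed-length frame,
# thresholded into a speech / non-speech decision.

def vad(samples, frame_len=160, threshold=0.01):
    """Return one boolean per frame: True if speech-like energy."""
    decisions = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        decisions.append(energy > threshold)
    return decisions

silence = [0.0] * 160
tone = [0.5 * math.sin(2 * math.pi * 440 * t / 16000) for t in range(160)]
print(vad(silence + tone))  # [False, True]
```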
We are developing a Sim-to-Real transfer algorithm that enables direct application of a deep reinforcement learning model trained in a simulated environment to a real-world robot arm.
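One common ingredient of sim-to-real transfer is domain randomization: physics parameters are resampled each training episode so the policy does not overfit one simulator configuration. The parameter names and ranges below are illustrative assumptions, not the project's actual settings.

```python
import random

# Domain randomization sketch for sim-to-real transfer: each episode
# samples new simulator physics so the learned policy is robust to the
# mismatch between simulation and the real robot arm.
# Parameter names and ranges are illustrative assumptions.

def randomize_sim_params(rng):
    return {
        "joint_friction": rng.uniform(0.5, 1.5),    # relative scale
        "link_mass_scale": rng.uniform(0.8, 1.2),   # relative scale
        "sensor_noise_std": rng.uniform(0.0, 0.02),  # additive noise
    }

rng = random.Random(0)
params = randomize_sim_params(rng)  # resample once per training episode
print(sorted(params))
```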
We are developing a deep learning algorithm for automated sleep stage scoring and diagnosis of sleep apnea syndrome.
Using a single-channel EEG signal, our deep learning model, IITNet, automates sleep stage scoring by extracting and analyzing features at the sub-epoch level with state-of-the-art performance.
We are developing a non-contact respiratory measurement algorithm that predicts respiratory signals from UWB signals in real time via a convolutional neural network.
We are building a sleep management system that automatically detects sleep apnea by integrating the deep learning models for sleep staging and respiratory monitoring.
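The sub-epoch idea behind IITNet-style scoring can be sketched as splitting each 30-second EEG epoch into fixed-length windows, each of which would be fed to a CNN feature extractor before sequence modelling. The window length, hop, and sampling rate below are illustrative assumptions, not IITNet's actual configuration.

```python
# Sub-epoch splitting sketch: one scored epoch becomes a sequence of
# fixed-length windows for per-window CNN feature extraction.

def split_subepochs(epoch, sub_len, hop):
    """Split one epoch (list of samples) into fixed-length sub-epochs."""
    return [epoch[i:i + sub_len]
            for i in range(0, len(epoch) - sub_len + 1, hop)]

fs = 100                       # sampling rate in Hz (assumed)
epoch = list(range(30 * fs))   # one 30-second EEG epoch
subs = split_subepochs(epoch, sub_len=5 * fs, hop=5 * fs)
print(len(subs), len(subs[0]))  # 6 500
```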
As the elderly population grows, the shortage of elderly care labor is becoming a more pressing issue. An AI-equipped robot can help address this problem.
Using a deep visual tracker, the robot follows the elderly user; then, using a deep CNN and an LSTM, it identifies the user and recognizes medicine-taking behavior.
From a database of the user's normal behavior, we train a deep autoencoder that reproduces inputs seen during training; its reconstruction error lets us distinguish normal behavior from abnormal behavior.
Diseases such as dementia and stroke can be judged from an elderly person's walking motion. We therefore collect skeleton data with a depth camera and develop an anomaly judgement algorithm based on a deep autoencoder.
We also develop server software that collects and manages the database in real time; this software produces the datasets used for deep learning.
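The autoencoder-based anomaly judgement above boils down to a thresholded reconstruction-error rule. In this sketch the deep autoencoder is replaced by a trivial mean reconstructor so the decision logic stays self-contained; the feature vectors and threshold choice are illustrative assumptions.

```python
# Anomaly judgement sketch: score each input by reconstruction error and
# flag it when the error exceeds a threshold fit on normal behavior.
# The deep autoencoder is mocked by a mean reconstructor for brevity.

def fit_mean_reconstructor(normal_data):
    dim = len(normal_data[0])
    mean = [sum(x[d] for x in normal_data) / len(normal_data)
            for d in range(dim)]
    return lambda x: mean  # "reconstruct" every input as the normal mean

def reconstruction_error(x, reconstruct):
    r = reconstruct(x)
    return sum((a - b) ** 2 for a, b in zip(x, r)) / len(x)

normal = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]]  # assumed normal-behavior features
reconstruct = fit_mean_reconstructor(normal)
threshold = max(reconstruction_error(x, reconstruct) for x in normal)

walk = [1.1, 2.1]   # near normal behavior
fall = [5.0, -3.0]  # abnormal motion
print(reconstruction_error(walk, reconstruct) <= threshold)  # True
print(reconstruction_error(fall, reconstruct) > threshold)   # True
```

A trained autoencoder plays the same role as `fit_mean_reconstructor` here: it reconstructs normal skeleton sequences well and abnormal ones poorly, so the error threshold separates the two.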