Research Areas
We study artificial intelligence systems that exploit multimodal data and benefit users through various forms of human-computer interaction. Descriptions of our research areas are given below.
Active Funding List
AI-based control and user-assistance algorithms for rehabilitation robots, joint research under the Industry-Academia Cooperation Complex Development Program with HEXAR Humancare Co., Ltd., Hanyang University ERICA Industry-Academia Cooperation Complex
Modernization of design-automation software and algorithm improvements, industry-academia joint research with Sehong Co., Ltd.
AI algorithms and human-computer interaction systems for modeling, reproducing, and teaching sensorimotor skills, Outstanding Young Researcher Program and First Innovation Research Lab in science and engineering, National Research Foundation of Korea
Development of user-context-based visuo-haptic interaction authoring technology for immersive cultural content experiences, Culture, Sports, and Tourism Leading Technology R&D Program, Korea Creative Content Agency
Mood vectoring techniques in multisensory human-computer interaction, research support grant for new faculty in science and engineering, Hanyang University ERICA
Hanyang University ERICA Medical Artificial Intelligence Convergence Education and Research Group (participating), BK21 FOUR Program, Ministry of Science and ICT
Human Data and Behavior Modelling
This line of research aims to understand human behavior, and how humans think, feel, and respond, by learning from and interpreting data generated by humans. All kinds of data can be used in these studies: usage logs, sensor data, measurements, and even physiological signals. The AI models established in these studies can be directly utilized for intelligent interactions, including inference and user-assistance features, drawing on the background data and knowledge we survey and gather.
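To make this concrete, here is a minimal sketch of inferring a user state from sensor features with a standard classifier. The data, the six features, and the two-class "stressed vs. relaxed" label are synthetic placeholders for illustration, not our actual pipeline.

    # Minimal sketch: inferring a user state label from sensor features.
    # The data here is synthetic; in practice the features would come from
    # usage logs, wearable sensors, or physiological measurements.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # 500 samples x 6 features (e.g., heart rate, skin conductance, motion energy, ...)
    X = rng.normal(size=(500, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical "stressed vs. relaxed" label

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))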
Multimedia AI and Interaction Systems
We are interested in various multimodal/multimedia interaction scenarios in which AI is involved and benefits the users. One immediate benefit of using multimodal data is sensory and data redundancy, which strengthens the plausibility and power of AI results, including inference.
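As a toy illustration of how redundancy helps, consider late fusion: each modality produces class probabilities, and the averaged result can outvote a single noisy modality. The models and numbers below are hypothetical placeholders.

    # Minimal sketch: late fusion of per-modality predictions. Each modality
    # yields a class-probability vector; averaging them lets redundant
    # modalities compensate for a noisy one. All numbers are placeholders.
    import numpy as np

    p_audio = np.array([0.55, 0.45])   # prediction from an audio model
    p_video = np.array([0.80, 0.20])   # prediction from a visual model
    p_touch = np.array([0.60, 0.40])   # prediction from a haptic/IMU model

    p_fused = np.mean([p_audio, p_video, p_touch], axis=0)
    print("fused class probabilities:", p_fused)
    print("fused decision:", int(np.argmax(p_fused)))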
Our AI-Driven Automatic Content Generation study, for example, learns from and analyzes audio and visual data to generate immersive multimedia experiences at home through automatically created 4D effects, applicable to current streaming services.
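A minimal sketch of one ingredient of such a system follows: mapping the short-time RMS energy of an audio track to a normalized vibration intensity for a 4D motion/vibration effect. The synthetic signal and frame size are placeholders for a real soundtrack analysis.

    # Minimal sketch: deriving a 4D vibration cue from audio by mapping the
    # short-time RMS energy envelope to motor intensity. A synthetic 1 s
    # signal stands in for the actual soundtrack.
    import numpy as np

    sr = 22050                                  # sample rate (Hz)
    t = np.linspace(0, 1.0, sr, endpoint=False)
    audio = np.sin(2 * np.pi * 60 * t) * np.exp(-3 * t)  # decaying low rumble

    frame = 1024
    n_frames = len(audio) // frame
    rms = np.array([np.sqrt(np.mean(audio[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    intensity = rms / rms.max()                 # normalize to [0, 1] motor command
    print("per-frame vibration intensity:", np.round(intensity[:8], 2))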
Affective Computing & Interaction Systems
We are also working on affective computing and interaction systems. Utilizing various data from sensors, user behaviors, and prior knowledge of human emotion, we develop artificial intelligence that is aware of users' emotions and interacts with users to vector and regulate their mood. Naturally, contextual data, including the user's current mental and emotional state, are often inferred or extracted in such scenarios.
We expect applications in medical settings, but not only there. Combined with our multimodal AI interaction systems, we aim to develop a full-stack system that benefits users' mental health.
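As a simple way to picture mood vectoring, the sketch below represents mood as a point in a two-dimensional valence-arousal space and iteratively nudges it toward a target state. The current mood, target, and step size are hypothetical; in our scenarios the current mood would be inferred from sensors and user behavior.

    # Minimal sketch of "mood vectoring" in a valence-arousal space.
    # The estimated mood, target mood, and step size are all placeholders.
    import numpy as np

    mood = np.array([-0.6, 0.8])    # (valence, arousal): tense, negative state
    target = np.array([0.4, 0.2])   # desired calm, mildly positive state

    for step in range(5):
        direction = target - mood
        mood = mood + 0.3 * direction          # nudge via interaction/content choices
        print(f"step {step}: valence={mood[0]:+.2f}, arousal={mood[1]:+.2f}")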
Human-Centered AI and Assistive Systems
Pursuing a “warmhearted” and human-centric AI technology, we are working on AI that helps users. Examples include human-computer and human-robot collaborative work schemes and AI-mediated motor learning, in which AI teaches sensorimotor skills to users. AI can also mediate human-human or human-robot communication in a smooth manner.
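The sketch below illustrates the basic loop behind error-based motor-skill feedback under simplified assumptions: a reference trajectory is compared with the learner's movement, and the deviation is scaled into a corrective cue. The trajectories and gain are placeholders, not a specific training protocol of ours.

    # Minimal sketch: error-based corrective feedback for motor-skill training.
    # A reference trajectory and the learner's trajectory are compared, and
    # the deviation is turned into a (hypothetical) haptic correction cue.
    import numpy as np

    t = np.linspace(0, 1, 50)
    reference = np.sin(2 * np.pi * t)          # expert/teacher trajectory
    learner = np.sin(2 * np.pi * t + 0.3)      # learner lags in phase

    error = reference - learner
    gain = 0.8                                 # feedback gain (placeholder)
    correction = gain * error                  # cue pushing learner toward reference
    print("max deviation:", float(np.max(np.abs(error))))
    print("first corrective cues:", np.round(correction[:5], 3))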
These ideas can be applied to assistive technology. Examples include expanding current translation algorithms with audio-haptic explanations of web images for blind and low-vision users, and translating music into haptics for hearing-impaired users.
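For instance, a very simplified music-to-haptics pipeline might low-pass filter the waveform, keeping the bass a skin-mounted actuator can render, and resample its envelope to the actuator's update rate. Everything below (signal, filter, rates) is a placeholder sketch rather than our deployed method.

    # Minimal sketch: translating music into a vibrotactile pattern by
    # low-pass filtering the waveform and resampling its envelope to the
    # actuator's update rate. The signal and parameters are placeholders.
    import numpy as np

    sr = 22050
    t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
    music = np.sin(2 * np.pi * 80 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

    kernel = np.ones(64) / 64                  # crude low-pass: moving average
    bass = np.convolve(music, kernel, mode="same")

    actuator_rate = 100                        # haptic updates per second
    hop = sr // actuator_rate
    envelope = np.abs(bass[::hop])             # coarse amplitude envelope
    drive = envelope / envelope.max()          # normalized actuator commands
    print(f"{len(drive)} haptic frames, peak drive {drive.max():.2f}")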