AI Researcher & Deep Learning Engineer

Daehee Lee (이대희)

Specializing in Lifelong Learning Agents and Reinforcement Learning

I'm a Ph.D. student at Sungkyunkwan University, specializing in reinforcement learning, embodied AI agents, and lifelong learning systems. I recently completed a visiting research appointment at Carnegie Mellon University.

  • Lifelong Learning Agents
  • Ph.D. Student @ SKKU
  • AI/ML Researcher
  • Email dulgi7245 [at] skku [dot] edu
  • Location Suwon, South Korea

Research Interests

  • Lifelong Learning Agents
  • Continual Learning
  • Reinforcement Learning

Primary Focus

  • Embodied AI Agents

Secondary Areas

  • Multi-modal Learning
  • Imitation Learning
  • Meta-Learning

Applications

  • Robotics
  • Skill Transfer
  • Lifelong Learning Systems

About Me

As a researcher at the intersection of artificial intelligence and robotics, I focus on developing intelligent agents that can continuously learn and adapt in dynamic environments. My work spans reinforcement learning, multi-modal intelligence, and embodied AI systems. During a recent visiting scholar appointment at CMU, I led the AIER project, designing conversational robot systems for edge computing.

Education & Experience

Academic and research milestones across Sungkyunkwan University and CMU.

Now
Combined M.S./Ph.D. Program

Sungkyunkwan University

In progress

Computer Science and Engineering · Suwon, South Korea

2022.09 — Present

Expected Graduation · Feb 2028

2025
Visiting Scholar

Carnegie Mellon University

Completed

Software and Societal Systems Department · Pittsburgh, PA, USA

2024.09 — 2025.02

2022
Bachelor of Engineering

Sungkyunkwan University

Completed

Computer Science and Engineering · Suwon, South Korea

2019.02 — 2022.08

Publications

Selected peer-reviewed work and preprints.

Learning to Interact in World Latent for Team Coordination

Dongsu Lee, Daehee Lee, Yaru Niu, Honguk Woo, Amy Zhang, Ding Zhao

arXiv 2025 (preprint)

2025

We introduce interactive world latent (IWoL), a representation learning framework designed to improve team coordination in multi-agent reinforcement learning (MARL). By directly modeling communication protocols, IWoL captures both inter-agent relations and task-specific world information, enabling decentralized execution with implicit coordination while avoiding the shortcomings of explicit message passing. Evaluations across diverse MARL benchmarks show that IWoL consistently enhances team performance and can be integrated with existing MARL algorithms for further gains.

  • Multi-agent Reinforcement Learning
  • Team Coordination
  • Representation Learning

Policy Compatible Skill Incremental Learning via Lazy Learning Interface

Daehee Lee, Dongsu Lee, TaeYoon Kwack, Wonje Choi, Honguk Woo

NeurIPS 2025 (Spotlight Top 3.2%)

2025

Skill Incremental Learning (SIL) is the process by which an embodied agent expands and refines its skill set over time while maintaining policy compatibility. We introduce SIL-C, a framework that preserves compatibility between incrementally learned skills and downstream policies through a bilateral lazy learning-based mapping that aligns subtask and skill spaces. This enables complex tasks to benefit from improved skills without retraining existing policies. Across diverse SIL scenarios, SIL-C sustains compatibility and efficiency throughout the learning process.

  • Skill Incremental Learning
  • Continual Learning
  • Policy Compatibility
  • Lazy Learning

NeSyC: A Neuro-symbolic Continual Learner For Complex Embodied Tasks In Open Domains

Wonje Choi, Jiwon Park, Seungwon Ahn, Daehee Lee, Honguk Woo

ICLR 2025

2025

We present NeSyC, a neuro-symbolic continual learner that emulates the hypothetico-deductive model by continually formulating and validating knowledge from limited experiences through the combined use of LLMs. NeSyC enables embodied agents to tackle complex tasks more effectively in open-domain environments.

  • Neuro-symbolic Learning
  • Continual Learning
  • Embodied AI
  • Open-domain Tasks

Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation

Daehee Lee, Minjong Yoo, Woo Kyung Kim, Wonje Choi, Honguk Woo

NeurIPS 2024

2024

Continual Imitation Learning (CiL) involves extracting and accumulating task knowledge from demonstrations across multiple stages and tasks to achieve a multi-task policy. We present IsCiL, an adapter-based CiL framework that addresses the limitation of knowledge sharing by incrementally learning shareable skills from different demonstrations, thus enabling sample-efficient task adaptation.

  • Continual Learning
  • Imitation Learning
  • Task Adaptation
  • Reinforcement Learning

One-shot imitation in a non-stationary environment via multi-modal skill

Sangwoo Shin, Daehee Lee, Minjong Yoo, Woo Kyung Kim, Honguk Woo

ICML 2023

2023

We propose a multimodal semantic skill learning framework for one-shot imitation learning in non-stationary environments. The approach enables agents to adapt quickly to new tasks through multi-modal skill representation and transfer.

  • One-shot Learning
  • Imitation Learning
  • Multi-modal Learning
  • Non-stationary Environments
  • Skill Transfer