Daehee Lee (이대희)

Portrait of Daehee Lee

I'm a Ph.D. student at Sungkyunkwan University (SKKU), specializing in reinforcement learning, embodied AI agents, and lifelong learning systems. Working at the intersection of artificial intelligence and robotics, I develop intelligent agents that can continuously learn and adapt in dynamic environments. My work spans reinforcement learning, multi-modal intelligence, and embodied AI systems. I recently completed a visiting scholar program at Carnegie Mellon University, where I led the AIER project designing conversational robot systems for edge computing.

  • Lifelong Learning Agents
  • Ph.D. Student @ SKKU

Publications

2025

Multi-agent Coordination via Flow Matching

Dongsu Lee, Daehee Lee, Amy Zhang

arXiv 2025 (preprint)

We introduce MAC-Flow, a framework for multi-agent coordination that combines flow-based learning of joint behaviors with efficient one-step decentralized policies, achieving faster inference than diffusion-based approaches while maintaining comparable performance.

  • Multi-agent Coordination
  • Flow Matching
  • Decentralized Policies
Oral

Unifying Agent Interaction and World Information for Multi-agent Coordination

Dongsu Lee, Daehee Lee, Yaru Niu, Honguk Woo, Amy Zhang, Ding Zhao

NeurIPS 2025 ARLET workshop (Oral)

We introduce the interactive world latent (IWoL), a representation learning framework designed to improve team coordination in multi-agent reinforcement learning (MARL). By directly modeling communication protocols, IWoL captures both inter-agent relations and task-specific world information, enabling decentralized execution with implicit coordination while avoiding the shortcomings of explicit message passing. Evaluations across diverse MARL benchmarks show that IWoL consistently enhances team performance and can be integrated with existing MARL algorithms for further gains.

  • Multi-agent Reinforcement Learning
  • Team Coordination
  • Representation Learning
Spotlight

Policy Compatible Skill Incremental Learning via Lazy Learning Interface

Daehee Lee, Dongsu Lee, TaeYoon Kwack, Wonje Choi, Honguk Woo

NeurIPS 2025 (Spotlight Top 3.2%)

Skill Incremental Learning (SIL) is the process by which an embodied agent expands and refines its skill set over time. We introduce SIL-C, a framework that preserves compatibility between incrementally learned skills and downstream policies through a bilateral lazy learning-based mapping that aligns the subtask and skill spaces. This enables complex tasks to benefit from improved skills without retraining existing policies. Across diverse SIL scenarios, SIL-C sustains both compatibility and efficiency throughout the learning process.

  • Skill Incremental Learning
  • Continual Learning
  • Policy Compatibility
  • Lazy Learning

NeSyC: A Neuro-symbolic Continual Learner For Complex Embodied Tasks In Open Domains

Wonje Choi, Jiwon Park, Seungwon Ahn, Daehee Lee, Honguk Woo

ICLR 2025

We present NeSyC, a neuro-symbolic continual learner that emulates the hypothetico-deductive model: leveraging LLMs, it continually formulates and validates knowledge from limited experiences. NeSyC enables embodied agents to tackle complex tasks more effectively in open-domain environments.

  • Neuro-symbolic Learning
  • Continual Learning
  • Embodied AI
  • Open-domain Tasks

2024

Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation

Daehee Lee, Minjong Yoo, Woo Kyung Kim, Wonje Choi, Honguk Woo

NeurIPS 2024

Continual Imitation Learning (CiL) involves extracting and accumulating task knowledge from demonstrations across multiple stages and tasks to obtain a multi-task policy. We present IsCiL, an adapter-based CiL framework that addresses limited knowledge sharing across tasks by incrementally learning shareable skills from different demonstrations, thus enabling sample-efficient task adaptation.

  • Continual Learning
  • Imitation Learning
  • Task Adaptation
  • Reinforcement Learning

2023

One-shot Imitation in a Non-stationary Environment via Multi-modal Skill

Sangwoo Shin, Daehee Lee, Minjong Yoo, Woo Kyung Kim, Honguk Woo

ICML 2023

We propose a multi-modal semantic skill learning framework for one-shot imitation learning in non-stationary environments. The approach enables agents to adapt quickly to new tasks through multi-modal skill representation and skill transfer.

  • One-shot Learning
  • Imitation Learning
  • Multi-modal Learning
  • Non-stationary Environments
  • Skill Transfer