Kwanggyoon Edward Seo

Hello World! I'm a Research Scientist at Flawless AI developing the next generation of facial animation. I received my Ph.D. from the Graduate School of Culture Technology (GSCT) at KAIST, where I was advised by Prof. Junyong Noh.

My research lies at the intersection of deep learning, computer vision, and computer graphics. Specifically, I am interested in generative AI, with a focus on animating 3D humans as well as manipulating images and videos.

Email  /  CV  /  Google Scholar  /  Github  /  LinkedIn

profile photo
Selected Research
A Deep Learning-based Virtual Oculoplastic Surgery Simulator
Seonghyeon Kim, Changwook Seo, Kwanggyoon Seo, Seung Han Song, Junyong Noh
SIGGRAPH 2025; TOG 2025

A novel deep learning-based virtual oculoplastic surgery simulation system that aims to improve the accuracy and quality of simulations by considering the anatomical structure and characteristics of the eye.

Neural Face Skinning for Mesh-agnostic Facial Expression Cloning
Sihun Cha, Serin Yoon, Kwanggyoon Seo, Junyong Noh
Eurographics 2025; CGF 2025
paper / page / video / code

A method that enables direct retargeting between two facial meshes with different shapes and mesh structures.

Mesh-Agnostic Audio-Driven 3D Facial Animation
Kwanggyoon Seo*, Sihun Cha*, Hyeonho Na, Inyup Lee, Junyong Noh
In submission
paper / video

An end-to-end approach to animating a 3D face mesh of arbitrary shape and triangulation from input speech audio.

Emotion Manipulation for Talking-head Videos
Kwanggyoon Seo, Rene Jotham Culaway, Byeong-Uk Lee, Junyong Noh
In submission
paper / video

A latent-based landmark detection and latent manipulation module that edits the emotion of a portrait video while faithfully preserving the original lip synchronization and lip contact.

Audio-Driven Emotional Talking-Head Generation
Kwanggyoon Seo, Kwan Yun, Sunjin Jung, Junyong Noh
In submission
paper / video

Uses a generative prior for identity-agnostic, audio-driven talking-head generation with emotion manipulation, trained on an audio-visual dataset of a single identity.

Speed-Aware Audio-Driven Speech Animation using Adaptive Windows
Sunjin Jung, Yeongho Seol, Kwanggyoon Seo, Hyeonho Na, Seonghyeon Kim, Vanessa Tan, Junyong Noh
SIGGRAPH Asia 2024; TOG 2024
paper / video

A novel method that can generate realistic speech animations of a 3D face from audio using multiple adaptive windows.

LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example
Soyeon Yoon*, Kwan Yun*, Kwanggyoon Seo, Sihun Cha, JungEun Yoo, Junyong Noh
CVPR 2024
paper / page / code

Generates a stylized 3D face model from a single paired mesh example by fine-tuning a pre-trained surface deformation network.

StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN
Jongwoo Choi, Kwanggyoon Seo, Amirsaman Ashtari, Junyong Noh
CVPR 2024
paper / page / code

Splats deep features generated by a pre-trained StyleGAN to create landscape cinemagraphs.

StyleSketch: Stylized Sketch Extraction via Generative Prior with Limited Data
Kwan Yun*, Kwanggyoon Seo*, Changwook Seo*, Soyeon Yoon, Seongcheol Kim, Soohyun Ji, Amirsaman Ashtari, Junyong Noh
Eurographics 2024; CGF 2024
paper / page / code / data

Trains a sketch generator on deep features of a pre-trained StyleGAN to produce high-quality sketch images from limited data.

Generating Texture for 3D Human Avatar from a Single Image using Sampling and Refinement Networks
Sihun Cha, Kwanggyoon Seo, Amirsaman Ashtari, Junyong Noh
Eurographics 2023; CGF 2023
paper / page / code

Generates and completes the RGB texture of a 3D human avatar from a single image by sampling from the visible region and refining the result.

StylePortraitVideo: Editing Portrait Videos with Expression Optimization
Kwanggyoon Seo, Seoung Wug Oh, Jingwan (Cynthia) Lu, Joon-Young Lee, Seonghyeon Kim, Junyong Noh
Pacific Graphics 2022; CGF 2022
paper / page / code

A method for editing portrait videos with a pre-trained StyleGAN through video adaptation and expression dynamics optimization.

Deep Learning-Based Unsupervised Human Facial Retargeting
Seonghyeon Kim, Sunjin Jung, Kwanggyoon Seo, Roger Blanco i Ribera, Junyong Noh
Pacific Graphics 2021; CGF 2021
paper / video

A novel unsupervised learning method by reformulating the retargeting of 3D facial blendshape-based animation in the image domain.

Virtual Camera Layout Generation using a Reference Video
JungEun Yoo*, Kwanggyoon Seo*, Sanghun Park, Jaedong Kim, Dawon Lee, Junyong Noh
CHI 2021
paper / video

A method that generates a virtual camera layout for both human and stylized characters in a 3D animation scene by following the cinematic intent of a reference video.

Neural Crossbreed: Neural Based Image Metamorphosis
Sanghun Park, Kwanggyoon Seo, Junyong Noh
SIGGRAPH Asia 2020; TOG 2020
paper / page / video / code

A feed-forward neural network that learns semantic changes of input images in a latent space to create morphing effects by distilling information from a pre-trained GAN.

Research Experience
Flawless AI
Research Scientist
Jun.2024-Current

Visual Media Lab
Research Assistant
Jan.2017-Mar.2024

Adobe Research
Research Intern
Jun.2022-Aug.2022
Mar.2021-Jun.2021

NAVER Clova, NAVER Corp.
Research Intern
Dec.2019-Jun.2020

The source code of this website is from Jon Barron.

(last update: April 27, 2025)