Kwanggyoon Edward Seo

Hello World! I am a PhD candidate at GSCT, KAIST, advised by Prof. Junyong Noh. During my PhD, I was fortunate to work at Adobe Research with wonderful mentors Seoung Wug Oh, Joon-Young Lee, and Jingwan (Cynthia) Lu. I also spent time at Naver Clova working with Suntae Kim and Soonmin Bae.

My research lies at the intersection of deep learning, computer vision, and computer graphics. Specifically, I am interested in generative AI, with a focus on manipulating images and videos and animating 3D humans.

Email  /  CV  /  Google Scholar  /  Github  /  LinkedIn

profile photo
Selected Research
Mesh-Agnostic Audio-Driven 3D Facial Animation
Kwanggyoon Seo*, Sihun Cha*, Hyeonho Na, Inyup Lee, Junyong Noh
In submission
paper / video

An end-to-end approach to animating a 3D face mesh of arbitrary shape and triangulation from input speech audio.

Emotion Manipulation for Talking-head Videos
Kwanggyoon Seo, Rene Jotham Culaway, Byeong-Uk Lee, Junyong Noh
In submission
paper / video

A latent-based landmark detection and latent manipulation module that edits the emotion of a portrait video while faithfully preserving the original lip synchronization and lip contact.

Audio-Driven Emotional Talking-Head Generation
Kwanggyoon Seo, Kwan Yun, Sunjin Jung, Junyong Noh
In submission
paper / video

A generative prior enables identity-agnostic audio-driven talking-head generation with emotion manipulation, trained on an audio-visual dataset of a single identity.

Speed-Aware Audio-Driven Speech Animation using Adaptive Windows
Sunjin Jung, Yeongho Seol, Kwanggyoon Seo, Hyeonho Na, Seonghyeon Kim, Vanessa Tan, Junyong Noh
TOG 2024
paper / video

A novel method that can generate realistic speech animations of a 3D face from audio using multiple adaptive windows.

LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example
Soyeon Yoon*, Kwan Yun*, Kwanggyoon Seo, Sihun Cha, JungEun Yoo, Junyong Noh
CVPR 2024
paper / page / code

Generates a stylized 3D face model from a single paired example mesh by fine-tuning a pre-trained surface deformation network.

StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN
Jongwoo Choi, Kwanggyoon Seo, Amirsaman Ashtari, Junyong Noh
CVPR 2024
paper / page / code

Splatting the deep features generated by a pre-trained StyleGAN for cinemagraph generation.

StyleSketch: Stylized Sketch Extraction via Generative Prior with Limited Data
Kwan Yun*, Kwanggyoon Seo*, Changwook Seo*, Soyeon Yoon, Seongcheol Kim, Soohyun Ji, Amirsaman Ashtari, Junyong Noh
Eurographics 2024; CGF 2024
paper / page / code / data

Trains a sketch generator on deep features generated by a pre-trained StyleGAN to produce high-quality sketch images with limited data.

Generating Texture for 3D Human Avatar from a Single Image using Sampling and Refinement Networks
Sihun Cha, Kwanggyoon Seo, Amirsaman Ashtari, Junyong Noh
Eurographics 2023; CGF 2023
paper / page / code

Generates and completes the RGB texture of a 3D human from a single image using a sampling and refinement process based on the visible region.

StylePortraitVideo: Editing Portrait Videos with Expression Optimization
Kwanggyoon Seo, Seoung Wug Oh, Jingwan (Cynthia) Lu, Joon-Young Lee, Seonghyeon Kim, Junyong Noh
Pacific Graphics 2022; CGF 2022
paper / page / code

A method to edit portrait videos with a pre-trained StyleGAN using video adaptation and expression dynamics optimization.

Deep Learning-Based Unsupervised Human Facial Retargeting
Seonghyeon Kim, Sunjin Jung, Kwanggyoon Seo, Roger Blanco i Ribera, Junyong Noh
Pacific Graphics 2021; CGF 2021
paper / video

A novel unsupervised learning method that reformulates the retargeting of blendshape-based 3D facial animation in the image domain.

Virtual Camera Layout Generation using a Reference Video
JungEun Yoo*, Kwanggyoon Seo*, Sanghun Park, Jaedong Kim, Dawon Lee, Junyong Noh
CHI 2021
paper / video

A method that generates a virtual camera layout for both human and stylized characters in a 3D animation scene by following the cinematic intention of a reference video.

Neural Crossbreed: Neural Based Image Metamorphosis
Sanghun Park, Kwanggyoon Seo, Junyong Noh
SIGGRAPH Asia 2020; TOG 2020
paper / page / video / code

A feed-forward neural network that learns semantic changes of input images in a latent space to create a morphing effect by distilling information from a pre-trained GAN.

Research Experience
Visual Media Lab
Research Assistant
Jan.2017-Mar.2024

Adobe Research
Research Intern
Jun.2022-Aug.2022
Mar.2021-Jun.2021
NAVER Corp.
Research Intern
Dec.2019-Jun.2020
Project
3D Cinemagraph for AR Contents Creation
Jun.2020-Dec.2022

Develop user-friendly content production technology that enables general users to easily transform a single image into immersive AR content in which the background and characters of the image move and interact with real-world objects.

project page
Development of Camera Work Tracking Technology for Animation Production using Artificial Intelligence
May.2018-Dec.2019

Analyze the cinematographic properties of a reference video clip using neural networks and replicate the cinematic intention of the reference video in the 3D animation.

project page

The source code of this website is from Jon Barron.

(last update: May 25, 2024)