Kwanggyoon Edward Seo

Hello World! I am a PhD candidate at GCST, KAIST, advised by Prof. Junyong Noh. During my PhD, I was fortunate to work at Adobe Research with wonderful mentors Seoung Wug Oh, Joon-Young Lee, and Jingwan (Cynthia) Lu. I also spent time at Naver Clova working with Suntae Kim and Soonmin Bae.

My research lies at the intersection of deep learning, computer vision, and computer graphics. Specifically, I am interested in generative AI, focusing on synthesizing and manipulating images, video, and 3D humans.

Email  /  CV  /  Google Scholar  /  Github  /  LinkedIn

Selected Research
Emotion Manipulation for Talking-head Videos
Kwanggyoon Seo, Rene Jotham Culaway, Byeong-Uk Lee, Junyong Noh
In submission

Proposes latent-based landmark detection to edit the emotion of a portrait video while faithfully preserving the original lip synchronization and lip contact.

Audio-Driven Emotional Talking-Head Generation
Kwanggyoon Seo, Kwan Yun, Sunjin Jung, Junyong Noh
In submission

Uses a generative prior for identity-agnostic, audio-driven talking-head generation with emotion manipulation, trained on a single-identity audio-visual dataset.

LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example
Soyeon Yoon*, Kwan Yun*, Kwanggyoon Seo, Sihun Cha, JungEun Yoo, Junyong Noh
CVPR 2024

Generates a stylized 3D face model from a single example mesh pair by fine-tuning a pre-trained surface deformation network.

Landscape Cinemagraph Generation
Jongwoo Choi, Kwanggyoon Seo, Amirsaman Ashtari, Junyong Noh
CVPR 2024

Generates landscape cinemagraphs by splatting deep features produced by a pre-trained StyleGAN.

StyleSketch: Stylized Sketch Extraction via Generative Prior with Limited Data
Kwan Yun*, Kwanggyoon Seo*, Changwook Seo*, Soyeon Yoon, Seongcheol Kim, Soohyun Ji, Amirsaman Ashtari, Junyong Noh
Eurographics 2024; CGF 2024
paper / page / code / data

Trains a sketch generator on deep features produced by a pre-trained StyleGAN to generate high-quality sketch images from limited data.

StylePortraitVideo: Editing Portrait Videos with Expression Optimization
Kwanggyoon Seo, Seoung Wug Oh, Jingwan (Cynthia) Lu, Joon-Young Lee, Seonghyeon Kim, Junyong Noh
Pacific Graphics 2022; CGF 2022
paper / page / code

A method to edit portrait videos with a pre-trained StyleGAN using video adaptation and expression dynamics optimization.

Deep Learning-Based Unsupervised Human Facial Retargeting
Seonghyeon Kim, Sunjin Jung, Kwanggyoon Seo, Roger Blanco i Ribera, Junyong Noh
Pacific Graphics 2021; CGF 2021
paper

A novel unsupervised learning method that reformulates the retargeting of blendshape-based 3D facial animation in the image domain.

Virtual Camera Layout Generation using a Reference Video
JungEun Yoo*, Kwanggyoon Seo*, Sanghun Park, Jaedong Kim, Dawon Lee, Junyong Noh
CHI 2021
paper / video

A method that generates a virtual camera layout for both human and stylized characters in a 3D animation scene, following the cinematic intention of a reference video.

Neural Crossbreed: Neural Based Image Metamorphosis
Sanghun Park, Kwanggyoon Seo, Junyong Noh
SIGGRAPH Asia 2020; ToG 2020
paper / page / video / code

A feed-forward neural network that learns semantic changes of input images in a latent space to create morphing effects by distilling knowledge from a pre-trained GAN.

Research Experience
Visual Media Lab
Research Assistant
Dec. 2016-present

Adobe Research
Research Intern
Mar.2021-Jun.2021; Jun.2022-Aug.2022
NAVER Corp.
Research Intern
Dec.2019-Jun.2020
Project
3D Cinemagraph for AR Contents Creation
Jun. 2020-Dec. 2022

Developed user-friendly content-production technology that enables general users to easily transform a single image into immersive AR content in which the background and characters within the image move and interact with real-world objects.

project page
Development of Camera Work Tracking Technology for Animation Production using Artificial Intelligence
May 2018-Dec. 2019

Analyzed the cinematographic properties of a reference video clip using neural networks and replicated the cinematic intention of the reference video in a 3D animation.

project page

The source code of this website is from Jon Barron.

(last update: Dec-14 2023)