CVPR 2017

Deep Photo Style Transfer

Fujun Luan¹   Sylvain Paris²   Eli Shechtman²   Kavita Bala¹

¹Cornell University   ²Adobe

Given an input image (first column) and a reference image (second column), our approach transfers the style of the reference image onto the input image while preserving photorealism (third column).
[ Paper ]
[ Supplementary ]
[ ArXiv ]
[ Code & Data ]
[ BibTex ]
[ Press ]
[ Two Minute Papers ]
Publication
Fujun Luan, Sylvain Paris, Eli Shechtman, and Kavita Bala, Deep Photo Style Transfer, to appear in CVPR 2017.
Paper: High-resolution PDF (59 MB) | Low-resolution PDF (6.6 MB)
Abstract
This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.
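To make the abstract's key idea concrete, below is a minimal sketch of such a locally affine energy term, assuming the matting Laplacian M of the input photo has already been built as a SciPy sparse matrix (e.g., via the closed-form matting construction of Levin et al.). The function names (photorealism_energy, photorealism_grad) are ours for illustration only; the authors' released implementation, linked below, is the reference.

# Illustrative sketch (not the released implementation): the photorealism
# energy penalizes outputs that are NOT locally affine functions of the
# input photo's colors. M is an (H*W x H*W) matting Laplacian built from
# the INPUT image; the quadratic form vanishes exactly when the output's
# colors are locally affine in the input's colors.
import numpy as np
import scipy.sparse as sp

def photorealism_energy(output: np.ndarray, M: sp.spmatrix) -> float:
    """Quadratic form sum_c V_c^T M V_c over the three color channels.

    output : (H, W, 3) stylized image, values in [0, 1].
    M      : sparse matting Laplacian of the input photo (assumed given).
    """
    energy = 0.0
    for c in range(3):
        v = output[..., c].reshape(-1)  # flatten channel c to a vector V_c
        energy += float(v @ (M @ v))    # V_c^T M V_c
    return energy

def photorealism_grad(output: np.ndarray, M: sp.spmatrix) -> np.ndarray:
    """Gradient of the quadratic form: since M is symmetric,
    d/dV_c (V_c^T M V_c) = 2 M V_c for each channel c."""
    h, w, _ = output.shape
    grad = np.empty_like(output)
    for c in range(3):
        v = output[..., c].reshape(-1)
        grad[..., c] = (2.0 * (M @ v)).reshape(h, w)
    return grad

Because the term is a quadratic form with a fixed, symmetric M, both the energy and its gradient are cheap to evaluate once M is built, so the constraint drops into gradient-based style transfer alongside the usual multi-layer content and style losses.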
Code & Data
Our code and data are available on GitHub.
Supplemental Materials
We provide additional results and discussion in the supplemental document.
Press
[The Verge]
[PetaPixel]
[DPreview]
[Slash Gear]
[AppleInsider]
[Digital Trends]
[BGR]
[Lifeboat]
[Engadget]
[New Atlas]
[TNW]
[TechSpot]
[Ubergizmo]
[Softpedia]
Acknowledgement
We thank Leon Gatys, Frédo Durand, and Aaron Hertzmann, as well as the anonymous reviewers, for their valuable discussions. We thank Fuzhang Wu for generating results using "Content-based colour transfer". This research is supported by a Google Faculty Research Award and NSF awards IIS 1617861 and 1513967.