Artistic Style Transfer for Images and Videos


360 Video of Doi Suthep with Style Transfer


Our goal for this work was to generate aesthetically pleasing, computer-generated art by combining our computational photography skills with deep convolutional neural nets. We implement image and video style transfer using deep neural networks, optimizing both for aesthetic quality and for fast training and transfer runtimes. We first implement single-image style transfer, then add optical flow warping for temporally stable video style transfer, and finally implement a feed-forward neural net that pre-trains styles to speed up the transfer process. Our approach combines two existing style transfer implementations, one for speed and one for video stabilization, in a novel way to generate aesthetic, temporally consistent videos. This webpage serves as a gallery of our favourite video and image results; for technical and implementation details, please read our report.
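To give a feel for the core idea, here is a minimal NumPy sketch of the two losses that drive style transfer: a content loss comparing feature maps directly, and a style loss comparing Gram matrices (channel correlations) of feature maps. The feature shapes here are hypothetical placeholders; in practice the features come from a pre-trained convolutional network such as VGG, and the details of our actual pipeline are in the report.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map.

    Correlations between feature channels capture style (texture,
    color) independent of spatial layout.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    """Mean squared difference between the two Gram matrices."""
    g_gen, g_style = gram_matrix(gen_feats), gram_matrix(style_feats)
    return float(np.mean((g_gen - g_style) ** 2))

def content_loss(gen_feats, content_feats):
    """Mean squared difference between raw feature maps."""
    return float(np.mean((gen_feats - content_feats) ** 2))
```

The stylized image is the one that minimizes a weighted sum of these two losses, which is why the result keeps the content image's layout while adopting the style image's textures.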


Part 1: Still Images

Here's a brief example of how we combine a content image with a style image to produce a stylized image. What do you get when you throw a lion into the ocean?

A Lion
The Great Wave off Kanagawa
Lion King of the Ocean

Some of our favourite results (CONTENT IMAGE + STYLE IMAGE ==> STYLIZED IMAGE):

A Bear
A Painting
Go Bears!

A Building
Murakami Art
Murakami Building

A picture taken at Lake Tahoe
The Great Wave off Kanagawa
The Great Wave off Tahoe

Afremov Painting
Yak in a lake

Miscellaneous Stylized Images:

Monet Building
Oil Painting of Zebras
Zebras Murakami

What do these works of art have in common?


They all look good on lions!

Afremov Lion
Alvin Lion
Art Lion
Doodle Lion
Kandinsky Lion
Mamafaka Lion
Murakami Lion
Oxane Lion
Portrait Lion

Part 2: Videos

We then moved on from stylizing still images to generating stylized videos! For details on how this was done, including how we stabilize adjacent frames using optical flow, please read our report (linked at the top of this page).
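The intuition behind the optical-flow stabilization can be sketched in a few lines: the previous stylized frame is warped along the estimated flow so it lines up with the current frame, which lets the transfer stay consistent instead of flickering. This is only an illustrative NumPy sketch with nearest-neighbor sampling; the actual flow estimation, occlusion handling, and blending we use are described in the report.

```python
import numpy as np

def warp_with_flow(prev_frame, flow):
    """Warp the previous (stylized) frame toward the current frame.

    prev_frame: (H, W, 3) image array.
    flow: (H, W, 2) dense flow field of (dx, dy) pixel displacements
          from the previous frame to the current one.
    Uses nearest-neighbor backward sampling to keep the sketch simple.
    """
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # For each target pixel, look up where it came from in prev_frame.
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]
```

With zero flow this returns the previous frame unchanged; with real flow it provides a temporally aligned initialization or consistency target for the next stylized frame.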

Here are some examples of styles transferred onto a short clip filmed from the back of a Songthaew in the Thai countryside:

Style used: Art

Style used: Wave

Style used: Murakami

Here are some examples of styles transferred onto short clips of drone footage over UC Berkeley:

Style used: Alvin

Style used: Wave

Style used: Art

Style used: Alvin

Style used: Portrait

Part 3: 360 Videos

As seen in the style-transferred video of Doi Suthep (a Thai temple) at the very top of this page, our trained neural net is 360-video compatible! Here are some more results:

Thai Temple Style Transferred: "Alvin"

Forest Waterfall Style Transferred: "Art"

Original 360 Video of us

360 Video of us, Style Transfer: "Alvin"

Final Takeaways

Ultimately, this project was extremely difficult and time-consuming, and training neural nets takes forever. That said, we both had a great time and learned a lot on both the technical and artistic sides. We plan to continue developing this project: improving our neural nets for video transfer, making our code better suited to 360 videos (e.g. removing seams, training on higher-resolution images), and applying our work to other cool projects, such as VR and real-time applications.