USING TENSORFLOW TO CAPTURE
AN ARTIST'S BRUSHSTROKE


 


Using Beats and Elasticsearch to Model Artistic Style

I have been making painting robots for about ten years now. These robots use a variety of AI and Machine Learning approaches to paint original compositions with an artist’s brush on stretched canvas. The paintings we create together have gotten a lot of attention recently, including awards and recognition from Google, TEDx, Microsoft, The Barbican, SXSL at The White House, robotart.org, and NPR. You can see more exhibitions and press write-ups at www.cloudpainter.com/press, though the quickest way to get up to speed is to check out this video.

In spite of our team's recent successes, however, there is a major systematic problem with the artwork we are creating. While the images are aesthetically pleasing, the individual brushstrokes themselves lack artistic style. They are noticeably mechanical and repetitive.

For the next iteration of our painting robot, we have found a way to teach it style with TensorFlow. This project will put sensors on a paintbrush and record detailed metrics of the brush's movement as an artist uses it to create a painting. We will then use TensorFlow to train a model on the recorded brushstroke metrics. Building upon Google Brain's recent success at creating artistic pastiches, a robotic arm will then use the brushstroke model to apply paint with a technique similar to the style of the human artist.

In short, we will attempt to use Deep Learning to teach our painting robots how to wield a brush like an artist. 
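
To make the training step concrete, below is a minimal sketch of how recorded brush metrics might be modeled with TensorFlow. The window length, the seven-value sensor format, and the randomly generated placeholder data are all assumptions for illustration; the real pipeline will be shaped by whatever the instrumented brush actually records.

    # Minimal sketch: learn a mapping from a window of brush sensor readings
    # to the next brush movement. All shapes and data here are placeholders.
    import numpy as np
    import tensorflow as tf

    TIMESTEPS = 64   # sensor samples per stroke window (assumption)
    FEATURES = 7     # e.g. ax, ay, az, gx, gy, gz, pressure (assumption)

    # Stand-ins for real recordings from the instrumented brush.
    stroke_windows = np.random.randn(1000, TIMESTEPS, FEATURES).astype("float32")
    next_motion = np.random.randn(1000, 3).astype("float32")  # dx, dy, pressure

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(128, input_shape=(TIMESTEPS, FEATURES)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(3),  # predicted brush displacement and pressure
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(stroke_windows, next_motion, epochs=10, batch_size=32,
              validation_split=0.1)

Once trained on real strokes, a model along these lines could be sampled to generate new strokes in the learned style, which is what the robotic arm would then execute.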
 

A Deep Learning Approach

This project will rely heavily on recent state-of-the-art research by Google Brain and their ongoing success at creating pastiches, in which Google’s deep learning algorithms “paint” photographs in the style of famous artists. While we are sure you are familiar with this work, here are examples taken from their blog of several photos painted in the styles of several artists.

Images taken from https://research.googleblog.com/2016/10/supercharging-style-transfer.html

The results are amazing, but when you look more closely you see a systematic problem similar to the one our robots have.


While Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur of the Google Brain Team have done an amazing job of transferring the color and texture of the original paintings, their method falls short of capturing the brushstrokes. I offer this critique despite being deeply indebted to their work and a big fan of what they have accomplished. Brad Pitt's face does not have the swooping strokes of the face in Munch's The Scream. The Golden Gate Bridge is rendered in fine detail, when in the painting it is composed of long, stark strokes. What these pastiches do, while amazing, does not capture Munch's brushstroke. This is a problem because The Scream is ultimately a painting in which the style of the brushstroke is a major contributor to the aesthetic effect.

 

Capturing the Human Element

 

A couple of years ago I first realized that I had the data required to capture and model artistic brushstrokes. It wasn't until I started experimenting with deep learning, however, that I realized just how good this data was and how well it could be used to learn artistic style. The problem is that all of my previous data comes from hundreds of crowdsourced paintings made with a combination of AI-generated strokes and humans swiping their fingers on a touchscreen. While I already have hundreds of paintings' worth of this data, this new project will add accelerometers to actual artists' brushes to capture even more detailed information about the subtle movements of an artist's brush.
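
As a rough illustration of the capture step, here is a sketch of what logging those readings might look like, assuming a hypothetical Arduino-style IMU attached to the brush handle that streams comma-separated samples over USB serial. The port name, sample format, and output file are all assumptions, not a description of our actual hardware.

    # Sketch: log brush motion samples to a JSON-lines file for later training.
    # Assumes a device at /dev/ttyUSB0 emitting "t,ax,ay,az,gx,gy,gz" lines.
    import json
    import serial  # pyserial

    port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

    with open("brushstroke_samples.jsonl", "a") as log:
        while True:  # stop with Ctrl+C
            line = port.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                t, ax, ay, az, gx, gy, gz = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed samples
            log.write(json.dumps({"t": t,
                                  "accel": [ax, ay, az],
                                  "gyro": [gx, gy, gz]}) + "\n")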


 

Furthermore, as I played and experimented with the data, I also realized that I could record metrics efficiently enough to emulate the style of the artist-generated strokes in real time. Imagine an interactive exhibit where anyone can attempt to paint a portrait alongside a pair of robotic arms. As a human operator paints a subject, the robotic arms attempt to imitate the style and paint their own version of the artwork. It would be interesting to see the interaction.
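
Here is a sketch of how that real-time imitation loop might work, assuming the stroke model described earlier and a placeholder send_to_arm() helper standing in for the robotic arm's actual controller. It is illustrative only, not our production control code.

    # Sketch: feed the latest window of brush sensor readings to the trained
    # model and forward its predicted move to the robotic arm.
    from collections import deque
    import numpy as np
    import tensorflow as tf

    WINDOW = 64    # must match the model's training window (assumption)
    model = tf.keras.models.load_model("brushstroke_model.h5")  # hypothetical file
    buffer = deque(maxlen=WINDOW)

    def send_to_arm(dx, dy, pressure):
        """Placeholder: translate a predicted move into robot arm commands."""
        print(f"move dx={dx:.3f} dy={dy:.3f} pressure={pressure:.3f}")

    def on_new_sample(sample):
        """Called with each new 7-value sensor reading from the human's brush."""
        buffer.append(sample)
        if len(buffer) == WINDOW:
            window = np.asarray(buffer, dtype="float32")[np.newaxis, ...]
            dx, dy, pressure = model.predict(window, verbose=0)[0]
            send_to_arm(dx, dy, pressure)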

 


 

I have no idea where this project will lead or ultimately end. I gave a TEDx Talk on Artificial Creativity six months ago and already feel the talk is obsolete. By the time we get this project up and running in the next couple of months, there are sure to be a couple of new AI developments that I will want to explore with it. While uncertainty around the final exhibition remains, I do know that it will be compelling and will attempt to incorporate many of the latest advances in artificial intelligence and deep learning.

 

Thanks for taking the time to consider our application to attend the first TensorFlow Dev Summit. We are excited to meet other deep learning enthusiasts, share ideas, and see where the state of the art stands.

Pindar Van Arman