Teaching a Robot Artistic Style


 


Using Beats and Elasticsearch to model Artistic Style

I have been making painting robots for about ten years now. These robots use a variety of AI and Machine Learning approaches to paint original compositions with an artist's brush on stretched canvas. The paintings we create together have gotten a lot of attention recently, including awards and recognition from Google, TEDx, Microsoft, The Barbican, SXSL at The White House, robotart.org, and NPR. You can see more exhibitions and press write-ups at www.cloudpainter.com/press, though the quickest way to get up to speed is to check out this video.

In spite of all our recent success, however, there is a major systematic problem with the artwork we are creating. While the images are aesthetically pleasing, the individual brushstrokes themselves lack artistic style: they are noticeably mechanical and repetitive.

For the next iteration of my painting robot, I have found a way to teach it style. This project will put sensors on a paintbrush and record detailed metrics of the paintbrush's movement as an artist uses it to create a painting. I will then run deep learning on the recorded metrics to model the style of the brushstrokes. A robotic arm will then use the model to create pastiches: new original compositions in the style of the human artist.
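To make the data pipeline concrete, here is a minimal sketch of how a single reading from the instrumented brush might be indexed into Elasticsearch for later analysis. The index name, field layout, and client usage are assumptions for illustration (in practice a Beat would ship the readings); it is a sketch, not the final design.

```python
# Minimal sketch, assuming a local Elasticsearch cluster, an index named
# "brushstrokes", and the 8.x Python client. A Beat would normally ship these
# documents; the client call below just illustrates the shape of the data.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

def index_brush_sample(session_id, accel, pressure):
    """Store one accelerometer/pressure reading from the brush-mounted sensors."""
    doc = {
        "session_id": session_id,                      # one painting session
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "accel_x": accel[0],                           # m/s^2 from the IMU
        "accel_y": accel[1],
        "accel_z": accel[2],
        "pressure": pressure,                          # bristle pressure, if measured
    }
    es.index(index="brushstrokes", document=doc)

index_brush_sample("demo-session", accel=(0.12, -0.03, 9.79), pressure=0.4)
```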

Using Deep Learning, my painting robots will attempt to imitate the technique of a human artist. 
 

A Deep Learning Approach

This project will rely heavily on recent state-of-the-art research by Google Brain and their ongoing success at creating pastiches, in which Google's deep learning algorithms "paint" photographs in the style of famous artists. If you have not seen one of Google's pastiches, here is an example taken from their blog of several photos painted in the styles of several artists.

Taken from https://research.googleblog.com/2016/10/supercharging-style-transfer.html


The results are amazing, but when you look more closely you see a systematic problem that is similar to the problem our robots have. 


While Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur of the Google Brain Team have done an amazing job of transferring the color and texture of the original painting, they have not really captured the style of the brushstrokes. I offer this critique despite being deeply indebted to their work and a big fan of what they are accomplishing. Brad Pitt's face does not have the swooping strokes of the face in Munch's The Scream. The Golden Gate Bridge is overly detailed, when in the painting it is rendered in long, stark strokes. What these pastiches accomplish, while amazing, does not capture Munch's brushwork. This is a problem because The Scream is ultimately a painting where the style of the brushstroke is a major contributor to the aesthetic effect.
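To make the critique concrete: this family of style-transfer methods optimizes a loss over Gram matrices of convolutional feature maps, which captures aggregate color and texture statistics but encodes nothing about how a brush actually moved. The sketch below shows roughly what such a style loss compares (plain NumPy, with random arrays standing in for CNN feature maps).

```python
# Rough sketch of the texture statistic behind neural style transfer.
# Real feature maps would come from a pretrained CNN; random arrays stand in here.
import numpy as np

def gram_matrix(features):
    """features: (H, W, C) feature map -> (C, C) matrix of channel correlations."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def style_loss(generated_features, style_features):
    """Mean squared difference between Gram matrices, as in Gatys-style transfer."""
    diff = gram_matrix(generated_features) - gram_matrix(style_features)
    return float(np.mean(diff ** 2))

# The loss compares aggregate texture and color statistics; nothing in it
# represents the order or dynamics of individual strokes.
print(style_loss(np.random.rand(32, 32, 64), np.random.rand(32, 32, 64)))
```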

 

Capturing the Human Element

 

A couple of years ago I first realized that I had the data required to capture and model artistic brushstrokes. It wasn't until I started experimenting with deep learning, however, that I realized just how good this data was and how well it could be used to learn artistic style. The limitation is that all of my previous data comes from hundreds of crowdsourced paintings made with a combination of AI-generated strokes and humans swiping their fingers on a touchscreen. This new project will add accelerometers to actual artists' brushes to capture even more detailed information.
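One plausible way to model the recorded strokes, sketched below, is to treat each stroke as a time series of sensor readings and train a recurrent network to predict the next reading. The window length, feature count, and architecture here are assumptions for illustration, not the final design.

```python
# Sketch of a sequence model over brushstroke sensor data (assumed shapes).
import numpy as np
import tensorflow as tf

WINDOW = 50    # timesteps of sensor history fed to the model (assumed)
FEATURES = 4   # e.g. accel_x, accel_y, accel_z, pressure (assumed)

# Predict the next sensor reading from the previous WINDOW readings.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(FEATURES),
])
model.compile(optimizer="adam", loss="mse")

# Placeholder training data; real windows would be cut from recorded strokes.
x = np.random.rand(256, WINDOW, FEATURES).astype("float32")
y = np.random.rand(256, FEATURES).astype("float32")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```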


 

Furthermore, as I played and experimented with the data, I also realized that I could record metrics efficiently enough to emulate the style of the artist-generated strokes in real time. Imagine an interactive exhibit where anyone can attempt to paint a portrait alongside a pair of robotic arms. As a human operator attempts to paint a subject, the robotic arms would imitate the style and paint their own version of the artwork. It would be interesting to see the interaction.
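As a rough sketch of that real-time loop, assuming the index and sequence model from the earlier sketches, the 8.x Elasticsearch Python client, and a hypothetical send_to_arm() interface to the robot:

```python
# Sketch of the real-time imitation loop. The query shape, window size, and
# send_to_arm() robot interface are all assumptions for illustration.
import time
import numpy as np

def latest_window(es, window=50):
    """Fetch the most recent brush readings, oldest first (assumed index layout)."""
    resp = es.search(
        index="brushstrokes",
        size=window,
        sort=[{"timestamp": {"order": "desc"}}],
        query={"match_all": {}},
    )
    hits = list(reversed(resp["hits"]["hits"]))
    return np.array(
        [[h["_source"]["accel_x"], h["_source"]["accel_y"],
          h["_source"]["accel_z"], h["_source"]["pressure"]] for h in hits],
        dtype="float32",
    )

def imitation_loop(es, model, send_to_arm, window=50, period=0.1):
    """Every `period` seconds, predict the next stroke reading and hand it to the arm."""
    while True:
        recent = latest_window(es, window)
        if len(recent) == window:
            prediction = model.predict(recent[np.newaxis, ...], verbose=0)[0]
            send_to_arm(prediction)  # hypothetical robot-arm interface
        time.sleep(period)
```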

 


 

I have no idea where this project will lead or ultimately end. I gave a TEDx Talk on Artificial Creativity six months ago and already feel that the talk is obsolete. By the time we get this project up and running in the next couple of months, there are sure to be a couple of new AI developments that I will want to explore with it. While uncertainty around the final exhibition remains, I do know that it will be compelling and incorporate many of the latest advances in artificial intelligence and deep learning. Furthermore, whenever the robot is operating in public, it draws a crowd.

I am a local artist offering this idea as a pitch for a Tysons Corner exhibition. Ideally, I imagine it would work really well in partnership with either the Microsoft or the Apple store. If either is interested, I could set up in or outside the store. I am also open to incorporating their touchscreen products; my robots can be controlled by any touchscreen as part of the interactive exhibit I am imagining.

Thanks for taking the time to explore this idea.  It will be ready in about three months and I would love to coordinate an exhibition of this concept with Tysons and its partners.

Thanks,

Pindar Van Arman