Using Intel's Curie Module to Capture the Artistry Behind Brushstrokes

Using Beats and Elasticsearch to Model Artistic Style

I have been making painting robots for about ten years now. These robots use a variety of AI and machine learning approaches to paint original compositions with an artist's brush on stretched canvas. The paintings we create together have gotten a lot of attention recently, including awards and recognition from Google, TEDx, Microsoft, The Barbican, SXSL at The White House, robotart.org, and NPR. You can see more exhibitions and press write-ups at www.cloudpainter.com/press, though the quickest way to get up to speed is to check out this video.

In spite of all our recent success, however, there is a major systematic problem with the artwork we are creating. While the images are aesthetically pleasing, the individual brushstrokes themselves lack artistic style. They are noticeably mechanical and repetitive.

For the next iteration of my painting robot, I have found a way to teach it style. It will record detailed metrics of a paintbrush as an artist uses it to create a painting, then use deep learning algorithms to model the style of those brushstrokes. A robotic arm will then use the model to create pastiches: new original compositions in the style of the artist.

To get into technical details, this will be done by embedding Intel's Curie module into a set of paintbrushes. These will be capable of recording detailed information about each brushstroke's movement in time and space. Multiple Intel RealSense cameras will also be set up facing a blank canvas. An artist will then be invited to paint an image on the canvas. Accelerometers on the Curie module will detect and record all of the brush's movements, and the RealSense cameras will detect and record all of the marks made on the canvas. All of this data will be forwarded by Beats into an Elasticsearch database. When the artist completes the painting, TensorFlow will be used to process the detailed brushstroke data into a model. This brushstroke model will then be used to direct a robot arm as it attempts to create its own original paintings. The subject of the robotic arm's painting could be anything, though ideally it would be from an original image taken by one of the RealSense cameras. Perhaps a portrait of a face that it detects?
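To make the pipeline concrete, here is a minimal sketch of what a single brushstroke sample might look like once it lands in Elasticsearch. The index name, field names, and units are illustrative assumptions rather than a final schema, and I am assuming the official elasticsearch-py client; in practice the Curie would stream readings over Bluetooth LE to a host machine and a Beat running there would do the actual forwarding.

```python
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

# Connect to a local Elasticsearch node (the address is an assumption).
es = Elasticsearch("http://localhost:9200")

# One illustrative sensor sample; field names and units are placeholders.
sample = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "brush_id": "curie-brush-01",                  # which instrumented brush sent it
    "accel": {"x": 0.12, "y": -0.87, "z": 9.63},   # m/s^2 from the Curie accelerometer
    "gyro": {"x": 1.4, "y": 0.2, "z": -0.6},       # deg/s, how the brush is turning
    "stroke_id": 42,                               # bumped each time the brush touches canvas
}

# Index the document so it can later be pulled out for training.
es.index(index="brushstrokes", document=sample)
```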

Hopefully the robot will be able to learn to paint in the artist's style.

Deep Learning Approach

This project will rely heavily on recent state-of-the-art research by Google Brain and their ongoing success at creating pastiches, in which Google's deep learning algorithms "paint" photographs in the style of famous artists. If you have not seen one of Google's pastiches, here is an example taken from their blog of several photos painted in the style of several artists.

Taken from https://research.googleblog.com/2016/10/supercharging-style-transfer.html

The results are amazing, but when you look more closely you see a systematic problem similar to the one our robots have.

While Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur of the Google Brain Team have done an amazing job of transferring the color and texture of the original painting, they did not really capture the style of the brushstrokes. I offer this critique despite being deeply indebted to their work and a big fan of what they are accomplishing. Brad Pitt's face does not have the swooping strokes of the face in Munch's The Scream. The Golden Gate Bridge is rendered in fine detail, when in the painting it is composed of long, stark strokes. What these pastiches achieve, while amazing, does not capture Munch's brushstroke. This is a problem because The Scream is ultimately a painting where the style of the brushstroke is a major contributor to the aesthetic effect.
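The critique has a technical root. These pastiches are descendants of neural style transfer, which matches summary statistics of CNN feature maps, most famously Gram matrices, between the generated image and the style image. A Gram matrix captures which colors and textures co-occur, but it says nothing about how a stroke unfolds over time. Here is a rough sketch of that statistic, with toy random arrays standing in for real CNN features:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """features: (height, width, channels) feature map from some CNN layer."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # every spatial position becomes a row
    return flat.T @ flat / (h * w)      # channel-by-channel correlations

def style_loss(generated: np.ndarray, style: np.ndarray) -> float:
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(generated) - gram_matrix(style)
    return float(np.mean(diff ** 2))

# Toy usage with random "feature maps":
rng = np.random.default_rng(0)
print(style_loss(rng.standard_normal((32, 32, 64)),
                 rng.standard_normal((32, 32, 64))))
```

Because the spatial and temporal character of each stroke is averaged away before the loss is ever computed, the motion of the brush is invisible to the optimization. That is exactly the gap the instrumented brushes are meant to fill.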

Capturing the Human Element

A couple of years ago I first realized that I had the data required to capture and model artistic brushstrokes. It wasn't until I started experimenting with deep learning, however, that I realized just how good this data was and how well it could be used to learn artistic style. The limitation is that all my previous data comes from hundreds of crowdsourced paintings made with a combination of AI-generated strokes and humans swiping their fingers on a touchscreen. This new project will add accelerometers to actual artists' brushes to capture even more detailed information.
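For the modeling step, one plausible starting point, and it is only an assumption at this stage, is a small recurrent network that reads a window of sensor samples and predicts the next increment of brush motion. A sketch in TensorFlow's Keras API:

```python
import tensorflow as tf

SEQ_LEN = 50      # timesteps of sensor history fed to the model
N_FEATURES = 6    # e.g. ax, ay, az, gx, gy, gz per timestep
N_OUTPUTS = 3     # e.g. predicted dx, dy, and pressure for the next step

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(128),                 # summarizes the recent stroke history
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_OUTPUTS),          # linear output for regression
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```

Trained across many strokes, a network like this would encode the rhythm and dynamics of the artist's hand, which is precisely what is missing from both my earlier paintings and Google's pastiches.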

Furthermore, as I played and experimented with the data, I also realized that I could record metrics efficiently enough to emulate the style of the artist-generated strokes in real time. Imagine an interactive exhibit where anyone can attempt to paint a portrait alongside a pair of robotic arms. As the human painter works on a subject, the robotic arms attempt to imitate the style and paint their own versions of the artwork. It would be interesting to see the interaction.
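A rough sketch of what that real-time loop might look like, again assuming the hypothetical "brushstrokes" index from earlier, the elasticsearch-py client, and a placeholder interface to the arm:

```python
import time

import numpy as np
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
SEQ_LEN = 50  # must match the window the model was trained on

def latest_window() -> np.ndarray:
    """Fetch the newest SEQ_LEN samples, oldest first, as a (SEQ_LEN, 6) array."""
    hits = es.search(
        index="brushstrokes",
        size=SEQ_LEN,
        sort=[{"@timestamp": "desc"}],
    )["hits"]["hits"]
    rows = [[h["_source"]["accel"][k] for k in "xyz"]
            + [h["_source"]["gyro"][k] for k in "xyz"]
            for h in reversed(hits)]
    return np.asarray(rows, dtype=np.float32)

def send_to_arm(command: np.ndarray) -> None:
    print("arm command:", command)  # placeholder for the real arm interface

def imitation_loop(model, poll_hz: float = 10.0) -> None:
    """model: the trained stroke network from the earlier sketch."""
    while True:
        window = latest_window()
        if len(window) == SEQ_LEN:
            command = model.predict(window[np.newaxis, ...], verbose=0)[0]
            send_to_arm(command)
        time.sleep(1.0 / poll_hz)
```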

I have no idea where this project will lead or end. I gave a TEDx Talk on Artificial Creativity six months ago and already feel the talk is obsolete. By the time we get this project up and running in the next couple of months, there are sure to be a couple of new AI developments that I will want to explore.

I am providing this idea as a pitch to see if Intel is interested in sponsoring this project by supplying the hardware required to make it happen. My artwork gets a lot of media attention and is sure to get more in the coming year as I attempt to win the second annual $100,000 Robot Art Challenge. Last year I took 2nd place, though I am hopeful this new innovation has what it takes to reach the top spot. In exchange for sponsorship, I will promote any hardware I am fortunate enough to be furnished with, and I will be open to appearing at trade shows and events with the final painting robot.

Pindar Van Arman