Painting Robot Duel
250 Word Pitch
I have been building painting robots for more than a decade. My earliest robots were only capable of simple artistic tasks like connecting dots. My most recent robots, however, leverage Deep Learning, Feedback Loops, and a variety of other AI algorithms to paint with increasing artistic autonomy.
For this interactive exhibition, two of my customized painting robots will compete against each other in a portrait painting competition. Both robotic arms will be equipped with a camera, palette, and paint brushes. Each robot will begin with a blank canvas. Sharing each easel with its canvas will be a different famous painting in the public domain. A chair will be in view of both robots.
When the chair is unoccupied, the robots will look around the room until a person is detected. The robots will then motion that person towards the chair.
As soon as someone sits, both robots will immediately focus on them and begin a portrait duel. Each robot will attempt to paint their portrait in the artistic style of the famous painting on their easel. As they paint, they will be looking back and forth between the easel and sitter to emphasize that it is a live portrait.
Once both robots finish, the sitter will declare the winner by selecting their favorite of the two portraits. This portrait will then be given to the participant, and the losing portrait will be hung on the wall.
The duel will repeat every 15-30 minutes, though the famous artworks being imitated will change each time.
Beyond 250 Words...
I have been collaborating with my robots to make portraits for over twelve years. Database records indicate that we have probably painted over a thousand portraits, though I don't have an exact count. While the style and process behind each creation has varied, they were all painted with a brush on canvas. Some of these portraits, most approximately 14"x18", can be seen above.
I am a painter, but my art is exploring creativity with code.
I have always wondered what exactly creativity is. In my attempt to better understand it, I have been dissecting my own artistic process and trying to teach it to robots. This exploration has revealed a spectrum of creative processes from simple to complex. Early on, I was able to codify the easier tasks, such as teaching my robots to copy my paintings. Then I was able to capture many of the mid-level creative decisions that went into my portraiture. More recently, I have been using deep learning to codify higher-level creativity such as stylistic abstraction.
While my robots may never be able to make truly original creative decisions, what they are capable of reveals the point where computational creativity ends and human creativity begins. It was not until I began exploring this threshold that I began to understand my own creativity and what it revealed about me as an artist.
This exploration has been recognized by multiple artistic and technical organizations over the years, including NPR, Google, The Barbican, Microsoft, Elastic, and NVIDIA. I recently gave a TEDx Talk on my art in Washington DC and was just named Top Technical Contributor to the international Robot Art 2017 competition. Exhibition history, awards, and several press write-ups can also be seen at http://www.cloudpainter.com/press/.
Pindar Van Arman
This installation will debut a pair of my seventh-generation custom-designed painting robots. While I am keeping the details behind these robots confidential until I find a venue to debut them, I can say that they are far more sophisticated than the sixth-generation robots used in the recent Robot Art 2017 competition. This is significant because my work with my sixth-gen robots was awarded the top technical prize of that international competition. While this installation would be compelling even if I were to use my sixth-gen system, I will be debuting what I believe will be two of the most advanced creative machines in the world.
Without going into specifics, it is possible to describe some general aspects of the robots. They are approximately four feet tall. The end effector of each arm is equipped with multiple paint brushes that load paint from a palette and apply it to the canvas. Each is equipped with a stereo camera that utilizes OpenCV for quick detection of figures and faces in the exhibition space. The arms are also equipped with embedded NVIDIA GPUs to execute the processor-intensive calculations required by deep learning.
Even though the robots have multiple safety measures and do not have the power to do serious physical harm, the chair where portrait subjects sit will be well beyond the physical reach of the robots. This is to guarantee the safety of the audience. It would also be possible to erect a physical barrier between robots and audience if required.
The Creative Software
The software behind my robots uses a variety of artificial intelligence and machine learning techniques, including Feedback Loops, OpenCV, Style Transfer, and GANs. A brief overview of my art with the robots can be seen in the August 3rd, 2017 HBO Vice interview below, as well as in my TEDx Talk from last year. Details regarding some of the more recent innovations can be seen further below.
While the process by which my robots create paintings is complex and iterative, it usually begins with a photo shoot. The robots' cameras look out into the space around them and seek out a portrait subject. Once one is found, multiple photos are taken and a favorite is selected to paint. Both of the dueling robotic arms will take advantage of these algorithms to interact with the people in the exhibition space and encourage them to sit for a portrait.
Once a favorite photo is selected, a more complex series of Deep Learning, Artificial Intelligence, and Feedback Loop processes begins. A simplified version of the robot's artistic process can be seen in the graphic below.
We start with the robot's favorite image from the photo shoot and a stylistic reference. In this example, the robot selected the portrait of my son seen in the top image on the far left. I then supplied it with a painting my son made, seen in the middle image on the far left. The robot then uses Deep Learning, specifically Style Transfer, to find the stylistic patterns in the painting and apply them to the photo. This abstracted image becomes the goal that the robot then attempts to paint with a brush on canvas. My robots then reference a database of previously created paintings for direction on how to complete each brushstroke. In this example, the robot referenced how it was instructed to paint a Picasso reproduction in the past. The new painting was then painted with brushstrokes modelled on those used in the reproduction. With each brushstroke, the robot's cameras utilize Feedback Loops to track progress and make adjustments as needed until the painting is complete. The final painting can be seen on the far right.
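The feedback loop at the end of this process can be sketched in miniature: photograph the canvas, compare it to the goal image, and paint the region that differs most, repeating until the painting converges. The toy below works on grayscale arrays and paints flat patches rather than real brushstrokes; it is a simplified illustration of the idea, not the robots' actual stroke planner.

```python
import numpy as np

def paint_with_feedback(target, strokes=200, patch=8):
    """Greedy feedback loop: repeatedly 'photograph' the canvas, find the
    patch that differs most from the target image, and paint that patch."""
    canvas = np.full_like(target, 255.0)  # start from a blank white canvas
    h, w = target.shape
    for _ in range(strokes):
        err = np.abs(target - canvas)
        # Find the patch with the largest accumulated error.
        best, best_xy = -1.0, (0, 0)
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                e = err[y:y + patch, x:x + patch].sum()
                if e > best:
                    best, best_xy = e, (y, x)
        y, x = best_xy
        # 'Brushstroke': fill the patch with the target's mean tone there.
        canvas[y:y + patch, x:x + patch] = target[y:y + patch, x:x + patch].mean()
    return canvas
```

On the real robots, the "photograph" step is a camera shot of the physical canvas, and each correction is a planned brushstroke rather than a patch fill, but the compare-and-adjust cycle is the same.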
The process behind another painting that uses the same painting as a source image can be seen below.
The portrait at the top right began with the same painting by my son and multiple photographs of Elle Reeve. The robot selected its favorite photograph and then cropped it into a balanced composition. The robot's algorithms then used Style Transfer to find the visual patterns in the painting by my son and apply them to the photo. Thousands of brushstrokes and many iterative steps later, the robots completed a stylized abstract portrait of Elle Reeve. Below her completed portrait you can also see the portrait of my son, painted by a similar process.
In addition to the debut of my seventh-generation robots, there is an interesting creative software enhancement that I am calling Contextual Style Transfer. Contextual Style Transfer is inspired by the work of Yamaguchi, Kato, Fukusato, and Morishima of Waseda University, which recognized the need for regional application of the Style Transfer algorithm. Style Transfer itself is the groundbreaking work by Gatys, Ecker, and Bethge of Bethge Labs that uses deep learning to find visual patterns in a source image and apply them to a content image. This allows images to automatically be "repainted" in the style of famous artworks. In the case of my son's portrait, it enabled the robot to reimagine the photo in the style of one of his paintings. As is, Style Transfer is one of the key algorithms that will enable the dueling robots to paint a portrait in the style of the famous painting behind their canvas, whatever that painting is, even if it is changed between each duel.
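At the heart of the Gatys et al. algorithm is the Gram matrix: the correlations between a neural network's feature channels, which capture a painting's "style" independently of its content. The sketch below shows just that core computation in NumPy; a real implementation extracts the feature maps from a pretrained network such as VGG and optimizes the generated image against this loss with a deep learning framework. The function names here are my own for illustration.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map: the
    channel-to-channel correlations that encode 'style' in Gatys et al."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(style_feats, generated_feats):
    """Mean squared difference between Gram matrices; Style Transfer
    minimizes this (together with a content loss) over the generated image."""
    g_style = gram_matrix(style_feats)
    g_gen = gram_matrix(generated_feats)
    return float(np.mean((g_style - g_gen) ** 2))
```

Minimizing this loss pushes the generated image's textures and color relationships toward the style painting's, while a separate content loss keeps the sitter recognizable.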
There is a shortcoming to Style Transfer, however: the algorithm is unaware of context. Take, for example, the following photo of my son in San Francisco. When traditional Style Transfer is applied and the photo is reimagined by Deep Learning in the style of The Scream, the style is applied to all areas indiscriminately. While the color and texture are preserved, they are applied in a haphazard manner, distorting much of the image.
Compare the traditional application of Style Transfer above with the example of Contextual Style Transfer below. By identifying areas of matching context in the source and content image, then applying Style Transfer regionally, as suggested by the work out of Waseda University, the effect is much more meaningful.
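The regional idea can be sketched as a compositing step: restyle the photo once per semantic region, then take, at every pixel, the version whose style matches that pixel's label. The sketch below assumes the stylized variants and a per-pixel label map already exist (for instance, from a segmentation model or a vision API); it illustrates the compositing, not the full Waseda approach.

```python
import numpy as np

def contextual_composite(stylized_by_label, labels):
    """Compose a final image by taking, at each pixel, the stylized
    version matching that pixel's semantic label (e.g. sky vs. figure).

    stylized_by_label: dict mapping label -> (h, w, 3) image, each the
        whole photo restyled with the style appropriate to that label.
    labels: (h, w) integer array of semantic labels per pixel.
    """
    h, w = labels.shape
    out = np.zeros((h, w, 3), dtype=float)
    for label, image in stylized_by_label.items():
        mask = labels == label
        out[mask] = image[mask]  # copy only this region's pixels
    return out
```

In The Scream example, this would let the swirling sky texture land only on the sky and a figure-appropriate treatment land only on my son, instead of both being smeared everywhere.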
My robots have contextual data for multiple famous artworks. Furthermore, new vision APIs, such as the Google Vision API, have become adept at using deep learning to identify and label context within images. By combining my artistic databases, emerging vision APIs, and Style Transfer, my robots will be able to apply Contextual Style Transfer to create paintings in the proposed Robot Portrait Duel.
I hope you find the collaborative work that I have been doing with my painting robots over the years interesting and consider hosting them for a portrait duel.
Pindar Van Arman