cloudpainter

Augmenting the Creativity of A Child with A.I.

The details behind my most recent painting are complex, but the impact is simple.

gallery05_image.jpg

My robots had a photoshoot with my youngest child, then painted a portrait of her in the style of one of her own paintings. Straightforward A.I. by today’s standards, but who really cares how simple the process behind something is as long as the final result is emotionally relevant?

With this painting there were giggles as she painted alongside my robot, and amazement as she watched the final piece develop over time, so it was a success.

The following image shows the inputs: my robot’s favorite photo from the shoot (top left) and a painting made by her (top middle). The A.I. then used a CNN style transfer to reimagine her face in the style of her painting (top right). As the robot worked on painting this image with feedback loops, she painted along on a touchscreen, giving the robot direction on how to create the strokes (bottom left). The robot then used her collaborative input, a variety of generative A.I., Deep Learning, and Feedback Loops to finish the painting one brushstroke at a time (bottom right).

ai_portrait.jpg

In essence, the robot was using a brush to remove the difference between an image that was dynamically changing in its memory and what it saw emerging on the canvas. A timelapse of the painting as it was being created is below…

corinne_anim_fast.gif
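The feedback loop described above can be sketched in a few lines. This is a hypothetical simplification, not the actual cloudpainter code: images are reduced to flat grayscale grids, and each "brushstroke" nudges the worst-matching spot on the canvas toward the target image in memory.

```python
# Minimal sketch of a painting feedback loop (hypothetical simplification).
# The robot repeatedly finds where the canvas differs most from the target
# image in its memory and paints a stroke to reduce that difference.

def paint_with_feedback(target, canvas, stroke_strength=0.5, steps=200):
    """Greedily reduce the target/canvas difference one 'stroke' at a time."""
    for _ in range(steps):
        # Find the pixel where the canvas is furthest from the target.
        diffs = [abs(t - c) for t, c in zip(target, canvas)]
        worst = max(range(len(diffs)), key=diffs.__getitem__)
        if diffs[worst] < 1e-3:
            break  # close enough everywhere: stop painting
        # A 'brushstroke' moves the canvas part of the way toward the target.
        canvas[worst] += stroke_strength * (target[worst] - canvas[worst])
    return canvas
```

Because the target in memory can keep changing between strokes, a loop like this never commits to a fixed plan; it just keeps closing the gap between what it wants and what it sees.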

Pindar

Emerging Faces - A Collaboration with 3D (aka Robert Del Naja of Massive Attack)

I have been working on and off for the past several months with Bristol-based artist 3D (aka Robert Del Naja, the founder of Massive Attack). We have been experimenting with applying GANs, CNNs, and many of my own artificial intelligence algorithms to his artwork.  I have long been working at encapsulating my own artistic process in code.  3D and I are now exploring whether we can capture parts of his artistic process.

It all started simply enough with looking at the patterns behind his images. We started creating mash-ups by using CNNs and Style Transfer to combine the textures and colors of his paintings with one another.  It was interesting to see what worked and what didn't, and to figure out which parts of each painting's imagery became dominant as they were combined.

3d_cn-incest.jpg

As cool as these looked, we were both left underwhelmed by the symbolic and emotional aspects of the mash-ups. We felt the art needed to be meaningful.  All that was really being combined was color and texture, not symbolism or context. So we thought about it some more, and 3D came up with the idea of using the CNNs to paint portraits of historical figures who made significant contributions to printmaking.  A couple of people came to mind as we bounced ideas back and forth before 3D suggested Martin Luther. At first I thought he was talking about Martin Luther King Jr., which left me confused. But when I realized he was talking about the author of The 95 Theses, it made more sense. Not sure if 3D realized I was confused, but I think I played it off well and he didn't suspect anything. We tried applying CNNs to Martin Luther's famous portrait and got the following results.

luther_cnns_rob.jpg

It was nothing all that great, but I made a couple of paintings from it to test things.  I also had my robots paint a couple of other new media figures, like Mark Zuckerberg.

zuckerberg.jpg

Things still were not gelling though. Good paintings, but nothing great. Then 3D and I decided to try some different approaches. 

I showed him some GANs where I was working on making my robots imagine faces. I showed him how a really neat part of the GAN occurred right at the beginning, when faces emerge from nothing.  I also showed him a 5x5 grid of faces that I have come to recognize as a common visualization when implementing GANs in tutorials.  We got to talking about how, as a polyptych, it recalled a common Warhol trope, except that something was different.  Warhol was all about mass-produced art and how cool repeated images looked next to one another.  But these images were even cooler, because they were a new kind of mass production: imagery mass produced by neural networks, where each image was unique.

widegans.jpg

I started having my GANs generate tens of thousands of faces.  But I didn't want the faces in too much detail.  I liked how they looked before they resolved into clear images.  It reminded me of how my own imagination works when I try to picture things in my mind: foggy and nondescript.  From there I tested several of 3D's paintings to see which would best render the imagined faces.

cnngan_facialcontext.jpg
gan_3d_models.jpg


3D's Beirut (Column 2) was the most interesting, so I chose it and fed it and the GANs into the process that I have been developing over the past fifteen years. A simplified outline of the artificially creative process can be seen in the graphic below.
 

My robots would begin by having the GAN imagine faces. I ran the Viola-Jones face detection algorithm on the GAN images until it detected a face. At that point, right when the general outlines of faces emerged, I stopped the GAN.  Then I applied a CNN Style Transfer to the nondescript faces to render them in the style of 3D's Beirut. Then my robots started painting. The brushstroke geometry was taken from my historic database, which contains the strokes of thousands of paintings, including Picassos, Van Goghs, and my own work.  Feedback loops refined the image as the robot painted the faces on 11"x14" canvases.  All told, dozens of AI algorithms, multiple deep learning neural networks, and feedback loops at all levels started pumping out face after face after face.
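The control flow above can be sketched as a simple loop. This is a hedged sketch: the real system used a trained GAN, OpenCV's Viola-Jones cascade, and a CNN style transfer, but here each stage is passed in as a stand-in function so only the loop structure is shown.

```python
# Sketch of the Emerging Faces control flow (stand-in functions, not the
# actual networks): advance the GAN until a face is first detected, stop
# at that nondescript stage, then render the frame via style transfer.

def run_emerging_face(gan_step, detect_face, style_transfer, max_steps=1000):
    """Stop the GAN the moment a face emerges, then stylize that frame."""
    image = None
    for step in range(max_steps):
        image = gan_step(step)      # GAN produces a rough image each step
        if detect_face(image):      # Viola-Jones stand-in: has a face emerged?
            break                   # freeze the face while it is still foggy
    return style_transfer(image)    # render it in the chosen painting's style
```

Stopping at the first detection, rather than letting the GAN converge, is what keeps the faces in that half-imagined state.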

Thirty-two original faces later, it arrived at the following polyptych, which I am calling the First Sparks of Artificial Creativity.  The series itself is something I have begun to refer to as Emerging Faces. I have already made an additional eighteen based on a style transfer of my own paintings, and plan to make many more.

ghostfaces_shadow.jpg

facesbigif.gif
beirutcomparison.jpg

Above is the piece in its entirety, as well as an animation of the robot working on an additional face at an installation in Berlin.  You can also see a comparison of 3D's Beirut to some of the faces.  An interesting aspect of the artwork is that despite how transformative the faces are from the original painting, the artistic DNA of the original is maintained in those seemingly random red highlights.

It has been a fascinating collaboration to date. I am looking forward to working with 3D to further develop many of the ideas we have discussed. Though this explanation may appear to express a lot of artificial creativity, it only engages with his art on a very shallow level.  We are always talking and wondering about how much deeper we can actually go.

Pindar

Are My Robots Finally Creative?

After twelve years of trying to teach my robots to be more and more creative, I think I have reached a milestone. While I remain the artist of course, my robots no longer need any input from me to create unique original portraits. 

I will be releasing a short video with details shortly, but as can be seen in the slide above from a recent presentation, my robots can "imagine" faces with GANs, "imagine" a style with CANs, then paint the imagined face in the imagined style using CNNs, all the while evaluating their own work and progress with Feedback Loops. Furthermore, the Feedback Loops can use more CNNs to understand context from the historic database, find images in their own work, and adjust the painting on both a micro and macro level.
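The stages above chain together with no human input between them. The sketch below is only an illustration of that composition; the stage names and signatures are assumptions, with each standing in for the real network (GAN, CAN, CNN, and the feedback-loop painter).

```python
# Sketch of the autonomous portrait pipeline as a plain function chain.
# Each stage is a stand-in for the real network; no human input is
# needed between the stages.

def autonomous_portrait(imagine_face, imagine_style, stylize, paint):
    """Compose the four stages into one unattended run."""
    face = imagine_face()         # GAN: "imagine" a face
    style = imagine_style()       # CAN: "imagine" a style
    plan = stylize(face, style)   # CNN style transfer: the face in that style
    return paint(plan)            # feedback loops execute the brushstrokes
```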

This process is so similar to how I paint portraiture that I am beginning to question whether there is any real difference between human and computational creativity. Is it art? No. But it is creative.

 

HBO Vice Piece on CloudPainter - The da Vinci Coder

Typically the puns applied to artistic robots make me cringe, but I actually liked HBO Vice's name for their segment on CloudPainter. They called me The Da Vinci Coder.

I spent the day with them a couple of weeks ago and really enjoyed their treatment of what I am trying to do with my art.  I am not sure how you can access HBO Vice without HBO, but if you can, it is a good description of the state of the art in artificial creativity.  If you can't, here are some stills from the episode and a brief description...

Hunter and I working on setting up a painting...

Screen Shot 2017-08-03 at 8.23.17 PM.png

One of my robots working on a portrait...

Elle asking some questions...

Cool shot of my paint covered hands...

One of my robots working on a portrait of Elle...

... and me walking Elle through some of the many algorithms, both borrowed and invented, that I use to get from a photograph of her to a finished stylized portrait below.

Robot Art 2017 - Top Technical Contributor

CloudPainter used deep learning, various open source AI, and some of our own custom algorithms to create 12 paintings for the 2017 Robot Art Contest. The robot and its software were awarded the Top Technical Contribution Award, while the artwork it produced received 3rd place in the aesthetic competition.  You can see the other winners and competitors at www.robotart.org.

Below are some of the portraits we submitted.  

Portrait of Hank

Portrait of Corinne

Portrait of Hunter

We chose to go an abstract route in this year's competition by concentrating on computational abstraction.  But not random abstraction. Each image began with a photoshoot, from which CloudPainter's algorithms would pick a favorite photo, create a balanced composition, and use Deep Learning to apply purposeful abstraction. The abstraction was based on an attempt to learn from the abstraction of existing pieces of art, whether a famous piece or a painting by one of my children.

A full description of all the individual steps can be seen in the following video.

 

 

NVIDIA GTC 2017 Features CloudPainter's Deep Learning Portrait Algorithms

CloudPainter was recently featured in NVIDIA's GTC 2017 Keynote. As deep learning finds its way into more and more applications, this video highlights some of the more interesting ones. Our ten seconds comes around 100 seconds in, but I suggest watching the whole thing to see where the current state of the art in artificial intelligence stands.

Elastic{ON} 17

Just finished a busy week at Elastic{ON} 17, where we had a great demo of our latest painting robot. One of the best things to come of these exhibitions is the interaction with the audience. We get a better sense of what works as part of the exhibit, as well as what doesn't.

Our whole exhibition had two parts.  The first was a live interactive demo where one of our robots tracked a live Elastic index of conference attendees' wireless connections and painted them in real time. The second was an exhibition of the cloudpainter project, where Hunter and I are trying to teach robots to be creative.

A wall was set up at the conference where we hung 30 canvases. Every 20-30 minutes, a 7Bot robotic arm painted dots on a black canvas. The locations of the dots were taken from the geolocations of 37 wireless access points within the building.
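Turning building positions into dot positions is essentially a rescaling problem. The sketch below is a hypothetical illustration of that mapping (the actual pipeline read positions from a live Elastic index; the function and parameter names here are assumptions): it linearly rescales (x, y) positions in the building into coordinates on a canvas.

```python
# Hypothetical sketch: linearly rescale access-point (x, y) positions
# from building coordinates into canvas coordinates for the dot painter.

def to_canvas(points, canvas_w=20.0, canvas_h=30.0):
    """Map (x, y) positions onto a canvas_w x canvas_h canvas."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)

    def scale(v, lo, hi, size):
        # Degenerate case: all points share a coordinate -> center them.
        return size * (v - lo) / (hi - lo) if hi > lo else size / 2

    return [(scale(x, min_x, max_x, canvas_w),
             scale(y, min_y, max_y, canvas_h)) for x, y in points]
```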

There are lots of ways to measure the success of an exhibit like this. The main reason we think it got across to people, though, was the sheer number of pictures and posts to social media. There was a constant stream of interested attendees and questions.

Also, the exhibition's sponsors and conference organizers appeared to be pleased with the final results, as well as all the attention the project was getting. By the end of two days, approximately 6,000 dots had been painted on the 30 canvases.

A personal highlight for me was the fact that Hunter was able to join me in San Francisco. We had lots of fun at the conference and were super excited to be brought on stage during the conference's closing Q&A with the Elastic founders.

I will leave you with a pic of Hunter signing canvases for some of our Elastic colleagues.

Work Continues Mapping the Brushstrokes of Famous Masterpieces

Once I created a brushstroke map of Edvard Munch's The Scream, I thought it would be cool to have brushstroke mappings for more iconic artworks. So I googled "famous paintings" and was presented with a rather long list. Interestingly, The Scream was in the top three, along with da Vinci's Mona Lisa and Van Gogh's Starry Night. Well, why not do the top three?  So work has begun on creating a stroke map for the Mona Lisa.  In the following image, the AI has taken care of laying down an underpainting, or what would have been called a cartoon in da Vinci's time.

 

I am now going in by hand and finger-swiping my best guess as to how da Vinci would have applied his brushstrokes.  I will post the final results, as well as provide access to the Elasticsearch database with all the strokes, as soon as it is finished. My hope is that the brushstroke mappings can be used to better understand these artists, and how artists create art in general.
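For a sense of what one entry in a stroke map might look like, here is a hedged sketch of a single brushstroke record. The field names are assumptions for illustration, not the actual schema of the Elasticsearch index mentioned above; each finger-swiped path becomes one document.

```python
# Hypothetical sketch of one brushstroke document in a stroke map
# (field names are assumptions, not the actual index schema).

import math

def make_stroke_doc(painting, path, color):
    """Build one brushstroke record from a swiped path of (x, y) points."""
    # Total stroke length: sum of distances between consecutive points.
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    return {
        "painting": painting,        # e.g. "Mona Lisa"
        "path": path,                # ordered (x, y) control points
        "color": color,              # color sampled from the source image
        "length": round(length, 2),  # stroke length in canvas units
    }
```

Structured records like this are what would make the database queryable, e.g. comparing typical stroke lengths across artists.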

Some Final Thoughts on bitPaintr

Hi again,  

It's been a year since this project successfully launched. As such, here is a recap of how the project went, insight into what I have learned about my own art, and a preview of where I am taking things next. This might be a long post, so sit tight.

Some quick practical matters first though. For backers still awaiting your 14"x18" portraits, they should appear in the image and time lapse below. If there has been a mix-up and your portrait somehow got overlooked, just send me a message and I will straighten it out.  Also look for other backer rewards, such as postcards and line art portraits, in the coming weeks.

A Year of bitPaintr

I can start by saying that I did not imagine the bitPaintr project doing as well as it did. And I have no problem thanking all the original backers once again, even though you are probably tired of hearing it. As a direct result of your support, so many good things happened for me over the past year. I could tell you about all of them, but that would make this post too long and too boring, so I will just concentrate on the two most significant things that came out of this campaign.

The first is that I finally found my audience. Slowly at first, then more rapidly once the NPR piece aired, people started hearing about and reacting to my art. And the more people heard about it, the more media would cover it, and then even more people would hear about it. And while not completely viral, it did snowball, and I found myself in dozens of news articles, feature stories, and video pieces. Here is a list of some of my favorites. This time last year I was struggling to find an audience and would have settled for any venue to showcase my art. Today, I am able to pick and choose from multiple opportunities.

The second most significant part of all this is that I found my voice. I am not sure I fully understood my own art before, at least not as much as I do now. I had the opportunity to speak to, hang out with, and get feedback from you all, other artists, critics, and various members of the artificial intelligence community. All this interaction has led me to realize that the paintings my robots produce are just artifacts of my artistic process. I once focused on making these artifacts as beautiful as possible, and while that is still important to me, I have come to realize that the paintings are the most boring part of this whole thing.

The interaction, artificial creativity, processes, and time lapse videos are where all the action is. In the past year I have learned that my art is an interactive performance piece that explores creativity and asks the sometimes trite questions of "What is Art?" and "What makes me, Pindar, an Artist?" - or anyone an artist. This is usually a cliche theme, and as such a difficult topic to address without coming off as pretentious. But I think the way my robots address it is novel and interesting. Well, at least I hope so.

Next Steps

As I close up bitPaintr, I am looking forward to the next robot project, called cloudPainter. I will begin by telling you the coolest part about the project, which is that I have a new partner: my son Hunter. He is helping me focus on new angles that I had not considered before. Furthermore, our weekend forays into Machine Learning, 3D printing, and experimental AI concepts have really rejuvenated my energy. Already his enthusiasm, input, and assistance have resulted in multiple hardware upgrades. While the machine in the following photo may look like your average run-of-the-mill painting robot, it has two major hardware upgrades that we have been working on.

The first can be seen in the bottom left-hand corner of the robot. It is the completely custom 3D-printed NeuralJet Painthead. Hunter, Dante, and I have been designing and building this device for the last 4 months. It holds and operates five airbrushes and four paintbrushes for maximum painting carnage. The second major hardware improvement can be seen near the top of the canvas. You will notice not one, but two fully articulated 7Bot robotic arms. So while the NeuralJet will be used for the brute application of paint and expressive marks, the two 7Bot arms will handle the more delicate details. Furthermore, each robotic arm will have a camera for looking out into its environment and tracking its own progress on the paintings.

Our software is currently receiving a similar overhaul. I would go into detail, but Hunter and I are still not sure where it's going. We are taking all of the artificial creativity concepts that have gotten us this far and adding to them. While bitPaintr was a remarkably independent artist, it did have multiple limitations. In this next iteration we are going to see how many of those limitations we can remove. We are not positive what exactly that will look like, but we have given ourselves a year to figure it out.

If you would like to continue following our progress, check out our blog at cloudpainter.com. Things are just getting started on our sixth painting robot and we are pretty excited about it.


Thanks for everything,

Pindar Van Arman