
New Art Algorithm Discovered by autonymo.us

Have started a new project called autonymo.us where I have let one of my painting robots go off on its own. It experiments and tries new things, sometimes completely abstract, other times using a simple algorithm to complete a painting. Sometimes it uses different combinations of the couple dozen algorithms I have written for it.

Most of the results are horrendous. But sometimes it comes up with something really beautiful. I am putting the beautiful ones up at the website autonymo.us. But I also thought I would share new algorithm discoveries here.

So the second algorithm we discovered looks really good on smiling faces. And it is really simple.

Step 1: Detect that the subject has a big smile. Seriously, because this doesn't look good otherwise.

Step 2: Isolate background and separate it from the faces.

Step 3: Quickly cover background with a mixture of teal & white paint.

Step 4: Use k-means clustering to organize pixels with respect to their r, g, b values and x, y coordinates (see the sketch after these steps).

Step 5: Paint light parts of painting in varying shades of pyrole orange.

Step 6: Paint dark parts of painting in crimsons, grays, and black.
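
For anyone curious about Step 4, here is a minimal Python sketch of the clustering with scikit-learn. The scaling between color and position is an assumption for illustration; the weights my robot actually uses vary from painting to painting.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

# Load the portrait and build one feature row per pixel: (r, g, b, x, y).
img = np.asarray(Image.open("portrait.jpg").convert("RGB"), dtype=float)
h, w, _ = img.shape
ys, xs = np.mgrid[0:h, 0:w]
features = np.column_stack([
    img.reshape(-1, 3),             # r, g, b values
    xs.reshape(-1, 1) * 255.0 / w,  # x, scaled to roughly match color range
    ys.reshape(-1, 1) * 255.0 / h,  # y, scaled to roughly match color range
])

# Cluster on color AND position so each cluster is a spatially coherent,
# similarly colored region that can be painted as one area.
labels = KMeans(n_clusters=12, n_init=4).fit_predict(features)
regions = labels.reshape(h, w)
```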

A simple and fun algorithm to paint smiling portraits with. Here are a couple…

autonymo_us_003b.jpg
autonymo_us_004.jpg

The First Sparks of Artificial Creativity

My robots paint with dozens of AI algorithms all constantly fighting for control. I imagine that our own brains are similar and often think of Minsky's Society of Mind, where he theorizes that our brains are not one mind, but many, all working with, for, and against each other. This has always been an interesting concept and model for creativity for me. Much of my art is about trying to create this mishmash of creative capsules all fighting against one another for control of an artificially creative process.

Some of my robots' creative capsules are traditional AI. They use k-means clustering for palette reduction, Viola-Jones for facial recognition, and Hough lines to help plan stroke paths, among many others. On top of that there are some algorithms that I have written myself to do things like try to measure beauty and create unique compositions. But the really interesting stuff that I am working with uses neural networks. And the more I use neural networks, the more I see parallels between how these artificial neurons generate images and how my own imagination does.
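
To make those capsules a little more concrete, here is roughly what two of them look like in Python with OpenCV. This is a simplified illustration using OpenCV's stock models, not my production code.

```python
import numpy as np
import cv2

img = cv2.imread("subject.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Viola-Jones face detection using OpenCV's bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Hough lines over an edge map suggest candidate brush stroke paths.
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=10)
```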

Recently I have seen an interesting similarity between how a specific type of neural network called a Generative Adversarial Network (GAN) imagines unique faces and how my own mind does. Working and experimenting with it, I am coming closer and closer to thinking that this algorithm might just be a part of the initial phases of imagination, the first sparks of creativity. Full disclosure before I go on: I say this as an artist exploring artificial creativity. So please regard any parallels I find as an artist's take on the subject. What exactly is happening in our minds falls under the expertise of a neuroscientist, and modeling what is happening falls in the realm of computational neuroscience, both of which I dabble in but am by no means an expert in.

Now that I have made clear my level of expertise (or lack thereof), there is an interesting thought experiment I have come up with that helps illustrate the similarities I am seeing between how we imagine faces and how GANs do. For this thought experiment I am going to ask you to imagine a familiar face, then I am going to ask you to get creative and imagine an unfamiliar face. I will then show you how GANs "imagine" faces. You will then be able to compare what went on in your own head with what went on in the artificial neural network and decide for yourself if there are any similarities.


Simple Mental Task - Imagine a Face

So the first simple mental task is to imagine the face of a loved one. Clear your mind and imagine a blank black space. Now pull an image of your loved one out of the darkness until you can picture them in your mind's eye. Take a mental snapshot.


Creative Mental Task - Imagine an Unfamiliar Face

The second task is to do the exact same thing, but by imagining someone you have never seen before.  This is the creative twist. I want you to try to imagine a face you have never seen.  Once again begin by clearing your mind until there is nothing.  Then out of the darkness try to pull up an image of someone you have never seen before. Take a second mental snapshot.

This may have seemed harder, but we do it all the time, like when we imagine what the characters of a novel might look like, or when we imagine the face of someone we talk to on the phone but have yet to meet. We are somehow generating these images in our mind, though it is not clear how because it happens so fast.
 

How Neural Nets Imagine Unfamiliar Faces

So now that you have tried to imagine an unfamiliar face, it is neat to see how neural networks try to do this. One of the most interesting methods involves the GANs I have been telling you about. GANs are actually two neural nets competing against one another, in this case to create images of unique faces from nothing. But before I can explain how two neural nets can imagine a face, I probably have to give a quick primer on what exactly a neural net is.

The simplest way to think about an artificial neural network is to compare it to our brain activity.  The following images show actual footage of live neuronal activity in our brain (left) compared to numbers cascading through an artificial neural network (right).

Live Neuronal Activity - courtesy of Michelle Kuykendal & Gareth Guvanasen

Artificial Neural Network

Our brains are a collection of tens of billions of neurons with trillions of synapses. The firing of the neurons seen in the image on the left, and the cascading of electrical impulses between them, is basically responsible for everything we experience, every pattern we notice, and every prediction our brain makes.

The small artificial neural network shown on the right is a mathematical model of this brain activity. To be clear, it is not a model of all brain activity, which is the realm of computational neuroscience and much more complex, but it is a simple model of at least one type of brain activity. This artificial neural network in particular is a small collection of 50 artificial neurons with 268 artificial synapses, where each artificial neuron is a mathematical function and each artificial synapse is a weighted value. These neural nets simulate neuronal activity by sending numbers through the matrix of connections, converting one set of numbers to another. These numbers cascade through the artificial neural net similarly to how electrical impulses cascade through our minds. In the animation on the right, instead of showing the numbers cascading, I have shown the nodes and edges lighting up, and when the numbers are represented like this, one can see the similarities between live neuronal activity and artificial neural networks.
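
If you want to see just how simple the underlying math is, here is a toy forward pass in Python. Every "synapse" is a weight, and every "neuron" applies a simple function to the weighted sum arriving at it.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny net: 5 inputs -> 8 hidden neurons -> 2 outputs.
# Each weight matrix holds the "synapses" between two layers.
W1 = rng.normal(size=(5, 8))
W2 = rng.normal(size=(8, 2))

def layer(x, W):
    # Weighted sums through the synapses, then a neuron function (tanh).
    return np.tanh(x @ W)

x = rng.normal(size=5)         # numbers enter on one side...
out = layer(layer(x, W1), W2)  # ...and cascade out the other
```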

While it may seem abstract to think how this could work, the following graphic shows one of its popular applications. In this convolutional neural network an image is converted into pixel values; these numbers then enter the artificial neural network on one side, go through a lot of linear algebra, and eventually come out the other side as a classification. In this example, an image of a bridge is identified as a bridge with 65% certainty.

cnnexample.jpg
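
To show how short that pixels-in, classification-out step can be in practice, here is a sketch using a stock ImageNet model in TensorFlow. This is a stand-in for the network in the graphic, not the exact one pictured.

```python
import numpy as np
import tensorflow as tf

# A CNN pretrained on ImageNet stands in for the network in the graphic.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

img = tf.keras.utils.load_img("bridge.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    np.expand_dims(tf.keras.utils.img_to_array(img), 0))

# Pixel values in, a lot of linear algebra, class probabilities out.
preds = model.predict(x)
_, name, score = tf.keras.applications.mobilenet_v2.decode_predictions(
    preds, top=1)[0][0]
print(f"{name}: {score:.0%}")  # e.g. "suspension_bridge: 65%"
```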

With this quick neural network primer out of the way, it is now interesting to go into more detail about a face-creating Generative Adversarial Network, which is two opposing neural nets. When these neural nets are configured just right, they can be pretty creative. Furthermore, closely examining how they work, I can't help but wonder if some structure similar to them is in our minds at the very beginning of when we try to imagine unfamiliar faces.

So here is how these adversarial neural nets fight against each other to generate faces from nothing.

The first of the two neural nets is called a Discriminator. It has been shown thousands of faces and understands the patterns found in typical faces. This neural net would master the first simple mental task I gave you. Just as you could pull the face of a loved one into your imagination, this neural net knows what thousands of faces look like. Perhaps more importantly, however, when shown a new image, it can tell you whether or not that image is a face. This is the Discriminator's most important task in a GAN. It can discriminate between images of faces and images that are not faces, and also give some hints as to why it made that determination.

The second neural net in a GAN is called a Generator. And while the Discriminator knows what thousands of faces look like, the Generator is dumb as a bag of hammers. It doesn't know anything. It begins as a matrix of completely random numbers.

So here they are at the very beginning of the process ready to start imagining faces.

blog01.jpg

The first thing that happens is the Generator guesses at what a face looks like and asks the Discriminator if it thinks the image is a face or not. But remember, the Generator is completely naive and filled with random weights, so it begins by creating an image of random junk.

blog02.jpg

When determining whether or not this is a face, the Discriminator is obviously not fooled. The image looks nothing like a face. So it tells the Generator that the image looks nothing like a face, but at the same time gives some hints about how to make its next attempt a little more facelike. This is one of the really important steps. Beyond just telling the Generator that it failed to make a face, the Discriminator is also telling it what parts of the image worked, and the Generator takes this input and changes itself before making the next attempt.
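
For the programmers reading along, here is the heart of that back-and-forth sketched in PyTorch. The "hints" are literally gradients flowing back from the Discriminator's verdict into the Generator's weights. This is a minimal sketch of one training step, not production GAN code.

```python
import torch
import torch.nn as nn

# Toy Generator and Discriminator for 28x28 (784-pixel) face images.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_faces):        # real_faces: (batch, 784)
    batch = real_faces.size(0)
    fake = G(torch.randn(batch, 64))  # the Generator's latest guess

    # The Discriminator learns to call real faces 1 and the guesses 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real_faces), torch.ones(batch, 1)) +
              bce(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # The "hints": gradients from the Discriminator's rejection flow back
    # into the Generator, nudging its weights toward more facelike output.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```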

The Generator adjusts the weights of its neural network, and 120 tries, rejections, and hints from the Discriminator later, it is still producing just static, but better static...

blogslides03.jpg

But then at attempt 200, ghost images start to emerge out of the darkness...

blogslides04.jpg

and with each guess by the Generator, the images get more and more facelike...

at attempt 400,

600,

1500,

and 4000.

After 4,000 attempts, rejections, and corrections from the Discriminator, the Generator actually gets pretty good at making some convincing faces. Here is an animation going from the very first attempt to the 4,000th iteration in 10 seconds. Keep in mind that the Generator has never seen or been shown a face.

ganfaces_2_square.gif

So How Does this Compare to How We Imagined Faces?

Early on we did the thought experiment, and I told you that there would be similarities between how this GAN imagined faces and how you did. Well, hopefully the above animation is not how you imagined an unfamiliar face. If it was, well, you are probably a robot. Humans don't think like this, at least I don't.

But let's slow things down and look at what happened with the early guessing (between the Generator's 180th and 400th attempts).

faces_animation_180_to_450.gif

This animation starts with darkness as nondescript faces slowly bubble out of nothing. They merge into one another, never taking on a full identity.

I am not saying that this was the entirety of my creative process. Nor am I saying this is how the human brain generates images, though I am curious what a neuroscientist would think about this. But when I tried to imagine an unfamiliar face, I cleared my mind and an image appeared from nothing. Even though it happens fast and I cannot figure out the mechanisms doing it, it has to start forming from something. This leads me to wonder if a GAN or some similar structure in my mind began by comparing random thoughts in one part of my mind to my memory of how all my friends look in another part. I wonder if from this comparison my brain was able to bring an image out of nothing and into a vague blurry fog, just like in this animation.

I think this is the third time that I am making the disclaimer that I am not a neuroscientist and do not know what exactly is happening in my mind. I wonder if any neuroscientist does, actually. But I do know that our brains, like my painting robots, have many different ways of performing tasks and being creative. GANs are by no means the only way, or even the most important part of artificial creativity, but looking at them as an artist, they are a convincing model for how imagination might be getting its first sparks of inspiration. This model applies to all manner of creative tasks beyond painting. It might be how we first start imagining a new tune, or even come up with a new poem. We start with base knowledge, try to come up with random creative thoughts, compare those to our base knowledge, and adjust as needed over and over again. Isn't this creativity? And if this is creativity, GANs are an interesting model of the very first steps.

I will leave you here with a series of GAN inspired paintings where my robots have painted the ghostlike faces just as they were emerging from the darkness...

allfaces.gif
The First Sparks of Artificial Creativity, 110"x42", Acrylic on Canvas, Pindar Van Arman w/ CloudPainter


Pindar  

Are My Robots Finally Creative?

After twelve years of trying to teach my robots to be more and more creative, I think I have reached a milestone. While I remain the artist of course, my robots no longer need any input from me to create unique original portraits. 

I will be releasing a short video with details shortly, but as can be seen in the slide above from a recent presentation, my robots can "imagine" faces with GANs, "imagine" a style with CANs (Creative Adversarial Networks), then paint the imagined face in the imagined style using CNNs, all the while evaluating their own work and progress with feedback loops. Furthermore, the feedback loops can use more CNNs to understand context from a historic database, as well as find images in the work itself, and adjust the painting on both a micro and macro level.
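
In rough pseudocode, the pipeline looks something like this. Every name below is a placeholder I made up to illustrate the flow, not an actual module name in CloudPainter's code.

```python
# Illustrative pseudocode; all helpers here are hypothetical placeholders.
face = gan.imagine_face()               # GAN dreams a face up from noise
style = can.imagine_style()             # CAN proposes a novel style
plan = cnn_style_transfer(face, style)  # CNN renders the face in the style

while not painting_complete():
    robot.paint(next_stroke(plan))
    photo = camera.capture()            # feedback loop: look at the canvas
    plan = adjust_plan(plan, photo)     # compare progress and correct course
```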

This process is so similar to how I paint portraiture, that I am beginning to question if there is any real difference between human and computational creativity. Is it art? No. But it is creative.

 

Artobotics - Robotic Portraits

While computational creativity and deep learning have become a focus of many of my robotic paintings, sometimes I just like to make something I am calling artobotic paintings, or artobotics.

With these paintings I have one of my robots paint relatively quick portraits, and not just one, but dozens of them. The following is a large-scale portrait of a family that was painted by one of my robots over the course of a week.


Robot Art 2017 - Top Technical Contributor

CloudPainter used deep learning, various open source AI, and some of our own custom algorithms to create 12 paintings for the 2017 Robot Art Contest. The robot and its software were awarded the Top Technical Contribution Award, while the artwork it produced received 3rd place in the aesthetic competition. You can see the other winners and competitors at www.robotart.org.

Below are some of the portraits we submitted.  

Portrait of Hank

Portrait of Corinne

Portrait of Hunter

We chose to take an abstract route in this year's competition by concentrating on computational abstraction. But not random abstraction. Each image began with a photoshoot, after which CloudPainter's algorithms would pick a favorite photo, create a balanced composition from it, and use deep learning to apply purposeful abstraction. The abstraction was not random, but based on an attempt to learn from the abstraction of existing pieces of art, whether from a famous piece or from a painting by one of my children.

Full description of all the individual steps can be seen in the following video.

 

 

Our First Truly Abstract Painting

Have had lots of success with Style Transfer recently.  With the addition of Style Transfer to some of our other artificially creative algorithms, I am wondering if cloudpainter has finally produced something that I feel comfortable calling a true abstract painting.  It is a portrait of Hunter.

In one sense, abstract art is easy for a computer. A program can just generate random marks and call the finished product abstract. But that's not really an abstraction of an actual image, it's just the random generation of shapes and colors. I am after true abstraction, and with Style Transfer, this might just be possible.

More details to come as we refine the process, but in short, the image above was created from the three source images seen in the top row below: a photo of Hunter, a painting by him, and Picasso's Les Demoiselles d'Avignon.

Style Transfer was applied to the photo of Hunter to produce the first image in the second row. The algorithm tried to paint the photo in the style of Hunter's painting. The second image in the second row is a reproduction of Picasso's painting made and recorded by one of my robots using many of its traditional algorithms, along with brush strokes by me.

The final painting, in the bottom row, was created by cloudpainter's attempt to paint the Style Transfer image with the brush strokes and colors of the Picasso reproduction.

transferWithArrows.jpg

While this may appear to be just another pre-determined algorithm that lacks true creativity, the creation of paintings by human artists follows a remarkably similar process. They draw upon multiple sources of inspiration to create new imagery.

The further along we get with our painting robot, the less sure I am whether we are less creative than we think, or computers are more creative than we imagined.

Hunter's Portrait

Inspired by our trip to the National Portrait Gallery, we started thinking to ourselves: what's so impressive about making our robots paint like a famous artist? Sure, famous artists are inspirational and a lot can be learned from them, but when you think about it, people are more interested in the art of their loved ones.

So this morning, Hunter and I decided to do quick portraits of each other and then run the portraits through deep neural nets to see how well they applied to a photo we took of each other. As soon as we started, Corinne joined in, so here is the obligatory photo of her helping out.

Also in the above photo you can see my abstract portrait in progress.

Below you can see the finished paintings and how they were applied to the photos we took. If you have been following this blog recently, you will know that the images along the top are the source images from which style is taken and applied to the photos on the left. This is all being done via Style Transfer and TensorFlow. I should also note that the painting on the left is mine, while Hunter's is on the right.

The most interesting thing about all this is that the creative agents remain Hunter and me, but still, something is going on here. For example, even though we were the creative agents, we drew some of our stylistic inspiration from other artists' paintings that we saw at the National Portrait Gallery yesterday. Couldn't a robot do something similar?

More work to be done.

Inspiration from the National Portrait Gallery

One of the best things about Washington D.C. is its public art museums. There are about a dozen or so world class galleries where you are allowed to take photos and use the work in your own art, because after all, we the people own the paintings. Excited by the possibilities of deep learning and how well style transfer was working, the kids and I went to the National Portrait Gallery for some inspiration.

One of the first things that occurred to us was a little Inception-like: what would happen if we applied style transfer to a portrait using itself as the source image? It didn't turn out that well, but here are a couple of those anyway.

While this idea was a dead end, the next idea that came to us was a little more promising. Looking at the successes and failures of the style transfers we had already performed, we started noticing that when the context and composition of the paintings matched, the algorithm was a lot more successful artistically. This is of course obvious in hindsight, but we are still working to understand what is happening in the deep neural networks, and anything that can reveal anything about that is interesting to us.

So the idea we had, which was fun to test out, was to try to apply the style of a painting to a photo that matched the painting's composition. We selected two famous paintings from the National Portrait Gallery to attempt this, de Kooning's JFK and Degas's Portrait of Miss Cassatt. We used JFK on a photo of Dante with a tie on. We also had my mother pose as best she could to resemble how Cassatt was seated in her portrait. We then let the deep neural net do its work. The following are the results. Photos courtesy of the National Portrait Gallery.

jfk_orig.jpg

Farideh likes how her portrait came out, as do we, but it's interesting that this only goes to further demonstrate that there is so much more to a painting than just its style, texture, and color. So what did we learn? Well, we knew it already, but we need to figure out how to deal with texture and context better.

Applying Style Transfer to Portraits

Hunter and I have been focusing on reverse engineering the three most famous paintings according to Google, as well as a hand-selected piece from the National Gallery. These artworks are the Mona Lisa, The Starry Night, The Scream, and Woman with a Parasol.

We also just recently got Style Transfer working on our own TensorFlow system. So naturally we decided to take a moment to see how a neural net would paint using the four paintings we selected, plus a second work by Van Gogh, his Self-Portrait (1889).
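
If you want to try something similar yourself, the core of a style transfer run can now be just a few lines, for example with the pretrained arbitrary-stylization model on TensorFlow Hub. We run our own TensorFlow setup, so treat this as the quickest way to experiment rather than our exact pipeline.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pretrained arbitrary style transfer model from TensorFlow Hub.
stylize = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

def load_image(path):
    img = tf.image.decode_image(tf.io.read_file(path), channels=3)
    return tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]

content = load_image("portrait_photo.jpg")  # the image being repainted
style = tf.image.resize(load_image("starry_night.jpg"), (256, 256))

stylized = stylize(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img("stylized.jpg", stylized[0].numpy())
```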

Below is a grid of the results.  Across the top are the images from which style was transferred, and down the side are the images the styles were applied to. (Once again a special thanks to deepdreamgenerator.com for letting us borrow some of their processing power to get all these done.)

It is interesting to see where the algorithm did well and where it did little more than transfer the color and texture. A good example of where it did well can be seen in the last column. Notice how the composition of the source style and the portrait it is being applied to line up almost perfectly. As could be expected, this resulted in a good transfer of style.

As far as failures go, it is easy to notice lots of limitations. Foremost, I noticed that the photo being transferred needs to be high quality for the transfer to work well. Another problem is that the algorithm has no idea what it is doing with regards to composition. For example, in The Scream style transfers, it paints a sunset across just about everyone's forehead.

We are still in the process of creating a step-by-step animation that will show one of the portraits having the style applied to it. It will be a little while though, because I am running it on a computer that can only generate one frame every 30 minutes. This is super processor intensive stuff.

While the processor is working on that, we are going to go see if we can't find a way to improve upon this algorithm.


TEDx Talk

So the TEDx Talk went great. Below is a picture taken during my talk by the very first backer of this project, Jessie.

 

Oh yeah, a couple other local backers also showed up for the talk, so big thanks to them! And a big thanks to all of you, because I am pretty certain I wouldn't have gotten this far without the success of this Kickstarter and all the press it has gotten. Things have snowballed since this all started, and it is pretty much thanks to your backing.

The TEDx Talk is still a little surreal. I will send you all a link to the video as soon as it's public. I haven't seen it yet, but I didn't trip or mumble, so I think it went well.

Am continuing down the list of paintings I owe to backers. I have contacted you if you are in the queue for the next couple weeks. As always, if you need a portrait rushed for a special event, or just because, contact me and I will bump you to the front of the queue.

Thanks for making all this possible,

Another Cool Video Feature and Schedule for Next Paintings

First off, I wanted to share this cool feature by America's Greatest Makers. It includes footage of my kids, so it's my favorite feature yet. I also like it because it clearly explains a lot about what I am doing in 2 quick minutes.

https://www.americasgreatestmakers.com/video/bitpaintr/

Schedule-wise, here is the list of people slated for portraits in the next two weeks. I have also contacted you via private email to work out the details. The list is...

Nick, Michelle, Brian, Dave, Chris, and William.

Thanks again, and as always, feel free to contact me to expedite your painting. I am working down the list and bumping priority for anyone that needs something for a birthday or special event.

Just Got First Couple of Press Shout Outs

Exciting Day.  

Just got a couple new backers and some mentions in the press.  Welcome all.

One of the press write-ups was by PJ Pangburn for Vice's Creators Project, and it's one of my favorite write-ups ever. Pangburn makes me sound so cool. I didn't even realize my project was as cool as the article made it sound. A link is right here:

 

This Robot Wants to Paint Your Portrait

http://thecreatorsproject.vice.com/en_uk/blog/this-robot-wants-to-paint-your-portrait

Also landed 3 interviews in the next week. All this as I am only starting the press part of my campaign.

So thanks to everyone that has supported me to date. And remember that if you email or send me the photo that you want made into a portrait, I can start it now. Even better, I can have it ready to show as a sample during my upcoming interviews. I don't want to jinx any of them, but one is by a cool, big international media company that you are probably a fan of. I actually had a hard time believing they were interested, but I think they are, well, at least interested enough to come ask me more about it. As soon as I have more details on each of the upcoming 3 interviews, I will let you know. In the meantime, get your portraits in so I can be seen working on them during the interviews.

Thanks again for everything and continue spreading the word, I am depending on it.

Pindar