3D

Emerging Faces - A Collaboration with 3D (aka Robert Del Naja of Massive Attack)

I have been working on and off for the past several months with Bristol-based artist 3D (aka Robert Del Naja, founder of Massive Attack). We have been experimenting with applying GANs, CNNs, and many of my own artificial intelligence algorithms to his artwork. I have long been working on encapsulating my own artistic process in code. 3D and I are now exploring whether we can capture parts of his artistic process as well.

It all started simply enough with looking at the patterns behind his images. We created mash-ups using CNNs and style transfer to combine the textures and colors of his paintings with one another. It was interesting to see what worked and what didn't, and to figure out which parts of each painting's imagery became dominant as they were combined.

3d_cn-incest.jpg
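For anyone curious what these mash-ups look like in code, below is a minimal sketch of the kind of CNN style transfer involved, in the spirit of Gatys et al., assuming PyTorch and torchvision. The filenames, layer choices, loss weights, and iteration count are placeholder assumptions for illustration, not the exact setup we used.

```python
# Rough sketch of a style transfer mash-up (after Gatys et al.), assuming
# PyTorch + torchvision. "painting_a.jpg" / "painting_b.jpg" are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from torchvision.utils import save_image
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=512):
    tf = transforms.Compose([transforms.Resize(size),
                             transforms.CenterCrop(size),
                             transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load("painting_a.jpg")   # supplies the overall composition
style = load("painting_b.jpg")     # supplies the color and texture

# Frozen pretrained VGG19 as the feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
vgg = vgg.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1 -> texture and color
CONTENT_LAYER = 21                  # conv4_2 -> composition and structure

def features(x):
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(f):                         # correlation of feature channels
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

with torch.no_grad():
    style_grams = [gram(f) for f in features(style)[0]]
    content_target = features(content)[1]

# Start from the content painting and optimize its pixels directly.
image = content.clone().requires_grad_(True)
opt = torch.optim.Adam([image], lr=0.02)

for step in range(300):
    opt.zero_grad()
    style_feats, content_feat = features(image)
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(style_feats, style_grams))
    content_loss = F.mse_loss(content_feat, content_target)
    (1e6 * style_loss + content_loss).backward()
    opt.step()

save_image(image.detach().clamp(0, 1), "mashup.png")
```

Which layers you pull features from, and how heavily you weight the style loss, largely decides which painting's imagery ends up dominating the mash-up.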

As cool as these looked, we were both left underwhelmed by the symbolic and emotional aspects of the mash-ups. We felt the art needed to be meaningful. All that was really being combined was color and texture, not symbolism or context. So we thought about it some more, and 3D came up with the idea of using the CNNs to paint portraits of historical figures who made significant contributions to printmaking. A couple of people came to mind as we bounced ideas back and forth before 3D suggested Martin Luther. At first I thought he was talking about Martin Luther King Jr., which left me confused. But when I realized he was talking about the author of The 95 Theses, it made more sense. I'm not sure if 3D realized I was confused, but I think I played it off well and he didn't suspect anything. We tried applying CNNs to Martin Luther's famous portrait and got the following results.

luther_cnns_rob.jpg

The results were nothing all that great, but I made a couple of paintings from them to test things. I also had my robots paint a couple of other new media figures, like Mark Zuckerberg.

zuckerberg.jpg

Things still were not gelling, though. Good paintings, but nothing great. So 3D and I decided to try some different approaches.

I showed him some GANs I was working on that made my robots imagine faces. I showed him how a really neat part of the GAN occurs right at the beginning, when faces emerge from nothing. I also showed him a 5x5 grid of faces, which I have come to recognize as a common visualization in GAN tutorials. We got to talking about how, as a polyptych, it recalled a common Warhol trope, except that there was something different. Warhol was all about mass-produced art and how cool repeated images looked next to one another. But these images were even cooler, because they were a new kind of mass production: mass-produced imagery made by neural networks, where each image was unique.

widegans.jpg
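For readers who want a concrete picture of that grid, here is a rough sketch of how the 5x5 visualization is typically produced: sample 25 random latent vectors and tile the generator's outputs. The tiny DCGAN-style generator below is a hypothetical stand-in, not the network behind the actual pieces, and in practice you would load trained weights rather than use it fresh.

```python
# Sketch of the 5x5 grid-of-faces visualization common in GAN tutorials.
# The small DCGAN-style generator is a placeholder for illustration only.
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

LATENT_DIM = 100

class Generator(nn.Module):
    """Latent vector -> 64x64 grayscale face (DCGAN-style upsampling)."""
    def __init__(self, z_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(-1, z.size(1), 1, 1))

generator = Generator().eval()          # load trained weights here in practice

with torch.no_grad():
    z = torch.randn(25, LATENT_DIM)     # 25 unique latent codes
    faces = (generator(z) + 1) / 2      # map Tanh output from [-1, 1] to [0, 1]

# Tile the 25 faces into the familiar 5x5, polyptych-like grid.
fig, axes = plt.subplots(5, 5, figsize=(8, 8))
for ax, face in zip(axes.flat, faces):
    ax.imshow(face.squeeze(0), cmap="gray", vmin=0, vmax=1)
    ax.axis("off")
fig.savefig("face_grid.png", bbox_inches="tight")
```

Because every latent code is drawn at random, every cell in the grid is a one-off, which is what makes the repetition feel different from Warhol's.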

I started having my GANs generate tens of thousands of faces. But I didn't want the faces in too much detail. I liked how they looked before they resolved into clear images. It reminded me of how my own imagination works when I try to picture things in my mind: foggy and nondescript. From there I tested several of 3D's paintings to see which would best render the imagined faces.

cnngan_facialcontext.jpg
gan_3d_models.jpg


3D's Beirut (Column 2) was the most interesting, so I chose it and fed it, along with the GANs, into the process I have been developing over the past fifteen years. A simplified outline of the artificially creative process it became can be seen in the graphic below.
 

The process began with the GAN imagining faces. I ran the Viola-Jones face detection algorithm on the GAN images until it detected a face. At that point, right when the general outlines of a face emerged, I stopped the GAN. I then applied a CNN style transfer to the nondescript faces to render them in the style of 3D's Beirut. Then my robots started painting. The brushstroke geometry was drawn from my historical database containing the strokes of thousands of paintings, including Picassos, Van Goghs, and my own work. Feedback loops refined the image as the robots tried to paint the faces on 11"x14" canvases. All told, dozens of AI algorithms, multiple deep learning neural networks, and feedback loops at all levels started pumping out face after face after face.
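To make the face-detection gate a bit more concrete, here is one plausible way to wire it up, using OpenCV's stock Haar cascade implementation of Viola-Jones: keep checking the generator's outputs and stop the moment the detector reports a face. The generator interface, latent size, and detector thresholds are assumptions for illustration; the style transfer and painting stages that follow in the real pipeline are not shown.

```python
# Sketch of a Viola-Jones "emergence" gate on GAN output, using OpenCV's
# stock Haar cascade. Generator interface and thresholds are placeholders.
import cv2
import torch

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def to_uint8_gray(img_tensor):
    """(1, H, W) tensor in [-1, 1] -> uint8 grayscale array for OpenCV."""
    img = ((img_tensor.squeeze(0) + 1) / 2 * 255).clamp(0, 255)
    return img.byte().cpu().numpy()

def first_emerging_face(generator, latent_dim=100, max_tries=10_000):
    """Sample from a (still nondescript) generator until Viola-Jones
    detects a face, then return that image for the downstream steps."""
    for _ in range(max_tries):
        z = torch.randn(1, latent_dim)
        with torch.no_grad():
            gray = to_uint8_gray(generator(z)[0])
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
        if len(faces) > 0:              # the moment a face "emerges", stop
            return gray, faces          # next: style transfer, then painting
    return None, []
```

The point of the gate is that the output is accepted as soon as it is face-like enough for a detector to recognize, while it is still foggy enough to leave room for the style transfer and the brushwork.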

Thirty-two original faces later, it arrived at the following polyptych, which I am calling First Sparks of Artificial Creativity. The series itself is something I have begun to refer to as Emerging Faces. I have already made an additional eighteen faces based on a style transfer of my own paintings, and I plan to make many more.

ghostfaces_shadow.jpg

facesbigif.gif
beirutcomparison.jpg

Above is the piece in its entirety, as well as an animation of it working on an additional face at an installation in Berlin. You can also see a comparison of 3D's Beirut to some of the faces. An interesting aspect of the artwork is that, despite how transformative the faces are relative to the original painting, the artistic DNA of the original is maintained in those seemingly random red highlights.

It has been a fascinating collaboration to date. I am looking forward to working with 3D to further develop many of the ideas we have discussed. Though this explanation may appear to express a lot of artificial creativity, it only engages with his art on a very shallow level. We are always talking and wondering about how much deeper we can actually go.

Pindar