“Inceptionism” might sound like the title of a summer blockbuster, but it’s actually the latest name Google has given to its Artificial Neural Networks’ ability to see and understand images, with results that range from pleasantly psychedelic to downright terrifying.

Artificial Neural Networks are central to Google’s ongoing research into image understanding and the ability of computers to learn from experience. Modeled loosely on how our own nervous systems and brains work, these networks consist of interconnected sets of “neurons” that determine the “correct” output for any input they receive.

Google is trying to teach computers to look at images and independently determine just what each image is “about” – just like humans do. This might sound like something from Terminator Genisys, but it’s happening today. And what’s more, we can see the very first examples of how computers actually see those images.

Here are three different Artificial Neural Networks, each trained to understand a different type of image. The first one understands towers and pagodas, the second various buildings, and the third birds and insects. Based on its prior knowledge, each has attempted to translate the image it sees into something that makes more sense:

Dream examples

The networks are doing what Google calls “inceptionism”: left on their own, they see whatever they best understand based on the type of images they have learned. Through the eyes of the Artificial Neural Networks, every object takes on a new form until it more closely resembles something the network knows.
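The feedback loop behind this effect can be sketched in miniature: instead of asking the network what it sees, you nudge the input image itself so that whatever the network already detects gets stronger. The toy below is a minimal numpy sketch of that idea, assuming a single random “feature detector” layer as a stand-in for a trained network – the names and sizes are purely illustrative, not Google’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "layer": a random linear feature detector standing in for a trained
# network. 8 features over a 16-pixel "image" (sizes are arbitrary).
weights = rng.normal(size=(8, 16))

def activations(image):
    return weights @ image

def amplify(image, steps=100, lr=0.01):
    """Nudge the image so the features the layer already detects get stronger.

    This is the core of the "inceptionism" feedback loop: gradient ascent on
    the layer's activations with respect to the input pixels, so the picture
    drifts toward whatever the network thinks it sees.
    """
    img = image.copy()
    for _ in range(steps):
        act = activations(img)
        # Gradient of the sum of squared activations w.r.t. the image:
        # d/d(img) ||W img||^2 = 2 * W^T (W img)
        grad = 2 * weights.T @ act
        img += lr * grad / (np.abs(grad).max() + 1e-8)  # normalized step
    return img

image = rng.normal(size=16)
dreamed = amplify(image)
# The "dreamed" image excites the detector more strongly than the original.
print(np.sum(activations(dreamed) ** 2) > np.sum(activations(image) ** 2))
```

On a real network the same loop runs over a deep convolutional model rather than one linear layer, which is why the amplified patterns look like eyes, dogs, and pagodas rather than noise.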

Google’s Artificial Neural Networks consist of 10 to 30 stacked layers of artificial neurons. Each layer looks at the input it is given and detects different aspects of it, and all the information it finds is passed on to the next layer, until the final layer formulates the final image:

Artificial Neural Network generated image

Above you can see what happened when a neural network that understands insects is given an image of a half-eaten doughnut. The result looks like an alien life form, or a creature from your nightmares.
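The layered hand-off described above – each layer detecting features and passing its findings to the next – boils down to a simple data flow. Here is a hypothetical miniature in numpy; Google’s real networks use 10 to 30 much larger convolutional layers, so the layer sizes and random weights below are stand-ins chosen only to show the structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical stack of layers: each transforms its input and hands the
# result to the next. Sizes are illustrative (16-pixel "image" in, 4 final
# features out), not anything from Google's actual models.
layer_sizes = [16, 12, 8, 4]
layers = [rng.normal(size=(n_out, n_in))
          for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

def forward(image):
    x = image
    for w in layers:
        # Each layer detects features in what the previous layer produced;
        # the ReLU keeps only the features that actually "fired".
        x = np.maximum(0, w @ x)
    return x  # the final layer's output: the network's "interpretation"

out = forward(rng.normal(size=16))
print(out.shape)  # (4,)
```

When a network like this is trained on insects and then handed a half-eaten doughnut, the only “interpretation” it can produce is in terms of the insect features its layers know – hence the nightmare creature above.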

While the artificial intelligence potential of this research is important, a lot of people have started experimenting with this new tool. Google has shared their code with the public, which has enabled people to explore the artistic nature of the Artificial Neural Networks. When video is fed to the networks to be “translated”, the results can be weird to say the least:

Artificial Neural Network video example

We are living in a time of rapid advancements in artificial intelligence. But while the neural networks are intended for image recognition, they’ve already become a fantastic tool for digital artists.

You can see the live analysis of a simple static image by a neural network in the video below. See for yourself how some random pixels can take on a whole new form in the eyes of an artificial mind:

Inside an artificial brain from Johan Nordberg on Vimeo.

