Google Brain is a research project aiming to create a computer that's capable of thinking for itself. Needless to say, the implications of this are huge.
There’s More to Google Than a Search Engine
Google Brain is Google’s “deep learning research project”. Beginning life in 2011, it’s a collaboration between Google’s own Jeff Dean and Greg Corrado, and Stanford University professor Andrew Ng.
The end goal appears to be to create a computer that’s capable of thinking for itself.
In 2012, Google Brain trained a cluster of 16,000 computers to mimic certain aspects of human brain activity. The network taught itself to recognise a cat after processing 10 million digital images taken from YouTube videos.
But that was four years ago. Things move quickly in the digital world.
Now Google is teaching its computers to make art, from scratch.
Google Brain – State of the Art
A message on Google Brain’s official site reads “even if you don’t know us, you’ll know our work.”
Indeed, you might be familiar with DeepDream. While training neural networks to understand what an image is “about”, the team inadvertently created some stunning psychedelic visuals:
It’s weird, yes, and somewhat disturbing. But there is a clear benefit to teaching computers to recognise images.
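The idea behind DeepDream can be caricatured in a few lines. This is a toy sketch, not Google’s actual code: instead of adjusting a network to match an image, you adjust the image itself to amplify whatever a learned “feature detector” already responds to, which is what produces those hallucinatory visuals.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=64)      # stand-in for an image's pixel values
detector = rng.normal(size=64)   # stand-in for one learned feature in a network

def activation(img):
    """How strongly the feature detector 'fires' on this image."""
    return float(detector @ img)

before = activation(image)

# Gradient ascent on the *image*: the gradient of (detector @ image)
# with respect to the image is just the detector itself, so each step
# nudges the image towards whatever the detector looks for.
for _ in range(100):
    image += 0.1 * detector

after = activation(image)
assert after > before  # the image now excites the feature far more strongly
```

Real DeepDream does this with deep convolutional networks and whole layers of features rather than a single linear detector, but the amplification loop is the same principle.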
At the moment, you have to use an alt tag to give Google an idea of what your image is “about”. But if computers can be trained to understand and process images at face value, this may no longer be necessary.
Computers are also being taught to understand words. Word Embeddings is an ongoing Google Brain project that aims to teach computers the semantic properties of words and how terms relate to one another.
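The core idea of word embeddings is easy to picture. In the hypothetical sketch below the vectors are invented by hand for illustration; real embeddings are learned automatically from huge text corpora. Each word becomes a point in space, and words with related meanings end up close together.

```python
import numpy as np

# Toy, hand-made "embeddings" (real ones are learned, not written by hand)
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.9, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def similarity(a, b):
    """Cosine similarity: 1.0 means the words point the same way in space."""
    va, vb = embeddings[a], embeddings[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# In this toy space, "king" sits much nearer "queen" than "apple"
assert similarity("king", "queen") > similarity("king", "apple")
```

A search engine with vectors like these doesn’t need your page to repeat an exact keyword: it can tell that nearby terms are talking about the same thing.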
So in short, as computers evolve, so too will your SEO strategy. The days of strategic keyword use may be numbered. Soon Google may be able to look at your page, read your content and see your images, and use this information to develop a clear idea of what your page is about.
This deep learning technology can already be found in certain Google products. It’s there in the Android operating system’s speech recognition software, in the Google+ photo search feature, and in the video recommendations you receive on YouTube.
A lot of the work Google Brain produces has obvious applications in the world of search. But much of its work seems to exist solely to advance the field of artificial intelligence. And it’s here that things get really interesting.
Computer Art – Magenta
Douglas Eck is a Google researcher who applied machine learning techniques to Google Play Music. Thanks to Eck’s work, the service can monitor your listening habits and use that information to recommend new music.
Eck is now leading a project called Magenta. Due to launch on June 1, it is built on TensorFlow, Google’s open-source machine learning platform, and aims to develop algorithms that can generate music and art.
Eck describes his aims for the project:

“I’m primarily looking at how to use so-called ‘generative’ machine learning models to create engaging media. Additionally, I’m working on how to bring other aspects of the creative process into play. For example, art and music is not just about generating new pieces. It’s also about drawing one’s attention, being surprising, telling an interesting story, knowing what’s interesting in a scene, and so on.”

He has asked similar questions of machine learning before:

“What exactly does a good music performer add to what is already in the score? I treated this as a machine learning question: hypothetically, if we showed a piano-playing robot a huge collection of Chopin performances – from the best in the world all the way down to that of a struggling teenage pianist – could it learn to play well by analyzing all of these examples? If so, what’s the right way to perform that analysis? In the end I learned a lot about the complexity and beauty of human music performance, and how performance relates to and extends composition.”
With DeepDream, the visual art was a by-product of the research. Now, the art is the end goal.
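To give a flavour of what a “generative” model for music does, here is a deliberately simple sketch: a Markov chain that composes a melody by repeatedly picking a plausible next note. This is an illustration of the general idea only; Magenta’s actual models are neural networks built on TensorFlow, not hand-written tables like this one.

```python
import random

# Hand-written transition table: which notes may follow which (an assumption
# made up for this sketch, not learned from any real music)
transitions = {
    "C": ["D", "E", "G"],
    "D": ["E", "C"],
    "E": ["G", "D"],
    "G": ["C", "E"],
}

def generate_melody(start="C", length=8, seed=42):
    """Compose a melody by walking the transition table note by note."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

melody = generate_melody()
assert len(melody) == 8
assert all(note in transitions for note in melody)
```

A real generative model learns its transition structure (and far richer context) from thousands of pieces of training music, but the generation step is the same in spirit: sample the next event, then the next, until you have a new piece.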
The results of this ambitious project are yet to be seen. But Eck recently spoke at Moogfest, a music and technology festival, where he mentioned a possible Magenta app that would let users experience the music and visuals the project creates. Will they like what they see and hear? Will it have inherent artistic value beyond the novelty?
I can’t wait to find out!