Everyone here has probably seen a crime show or movie in which, to catch the killer, investigators pull up a blurry, barely recognizable photo of a suspect and then "zoom in and enhance" it with technology until the person can be clearly identified.

Sometimes we find ourselves wishing things worked that way in real life. Well, the great news is, it's not impossible anymore. Thanks to Google Brain, an AI project by the tech giant, an impressive breakthrough has been made that can sharpen low-resolution images.

What is Google Brain?

Google Brain is a deep learning research project. Its work spans computer vision problems such as object classification and generating natural-language captions for images, as well as other areas such as natural language processing. Google Brain also helps improve many of Google's products, including search, voice search, and image search, and it is currently used in Android's speech recognition system and YouTube's video recommendations.

Google Brain is based entirely on artificial intelligence, so it keeps learning from its users and improving itself in order to give them a better experience.


While Google Brain has already proven impressive in many ways, the breakthrough of turning poorly pixelated images into recognizable ones is finally being achieved – and it's just a start.

The scientists working on the project made this possible with two neural networks working together on the same task. The first is called the "conditioning" network. It takes candidate high-resolution images, resizes them down to an 8×8 format, and compares each one against the original 8×8 input. By shrinking the candidates to the same size, the system can try to match the color of each pixel.
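The pixel-matching idea can be illustrated with a short sketch: shrink an image to 8×8 by averaging blocks of pixels, then measure how closely its colors match a target 8×8 image. This is a toy illustration of the comparison step, not Google's actual network; the function names and the mean-squared-error metric are assumptions for demonstration.

```python
import numpy as np

def downsample(img: np.ndarray, size: int = 8) -> np.ndarray:
    """Average-pool an H x W x 3 image down to size x size (H and W must be divisible by size)."""
    h, w, c = img.shape
    return img.reshape(size, h // size, size, w // size, c).mean(axis=(1, 3))

def pixel_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared colour difference between two images of the same shape."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

# Toy example: shrink a 32x32 candidate and compare it with an 8x8 target.
rng = np.random.default_rng(0)
candidate = rng.integers(0, 256, size=(32, 32, 3))
target = downsample(candidate)  # the 8x8 version of the candidate itself

# A candidate whose 8x8 version matches the target exactly scores zero.
print(pixel_distance(downsample(candidate), target))  # 0.0
```

A lower distance means the candidate is a more plausible high-resolution version of the 8×8 input.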

The other network is known as the "prior" network. It studies specific sets of pictures, such as faces of celebrities or different rooms and places, and tries to identify patterns such as the placement of facial features. It essentially adds plausible detail to the existing image.
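The prior network generates detail one pixel at a time, conditioning each new pixel on the pixels produced so far. Below is a minimal sketch of that sampling loop with a hypothetical stand-in rule in place of a trained model; `toy_prior` and its favour-the-running-mean behavior are inventions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_prior(prev_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a learned prior: returns a probability distribution over
    256 intensity values given the pixels generated so far. This toy rule just
    favours values near the running mean; a real network would be trained."""
    mean = prev_pixels.mean() if prev_pixels.size else 128.0
    logits = -np.abs(np.arange(256) - mean) / 16.0
    p = np.exp(logits)
    return p / p.sum()

def sample_image(size: int = 8) -> np.ndarray:
    """Generate a grayscale image one pixel at a time, in raster order,
    conditioning each pixel on everything generated before it."""
    img = np.zeros(size * size)
    for i in range(size * size):
        probs = toy_prior(img[:i])
        img[i] = rng.choice(256, p=probs)
    return img.reshape(size, size)

print(sample_image().shape)  # (8, 8)
```

The key idea is the loop structure: every pixel's distribution depends on all previously generated pixels, which is how learned patterns (like where facial features sit) can shape the output.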


The outputs from both networks are then combined to form an impressive final image. Below are a few examples:

The 8×8 images are the inputs given to Google Brain, the 32×32 samples are the results it generated, and the ground truth is the original high-resolution image the 8×8 input was made from.
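One simple way to picture how the two networks' per-pixel predictions could be merged is to sum their scores (logits) for each possible pixel value and normalize the result into a probability distribution. This is a hedged sketch of that fusion idea, with random numbers standing in for real network outputs.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Turn raw scores into a probability distribution."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical per-pixel scores over 256 intensity values from each network.
rng = np.random.default_rng(2)
conditioning_logits = rng.normal(size=256)  # from the 8x8-matching network
prior_logits = rng.normal(size=256)         # from the detail-adding network

# Combine by summing scores, then normalise into probabilities.
combined = softmax(conditioning_logits + prior_logits)
value = int(np.argmax(combined))  # most likely intensity for this pixel

print(round(float(combined.sum()), 6))  # 1.0
```

Summing scores lets both networks "vote": a pixel value is likely in the final image only if it both matches the 8×8 input's colors and fits the learned patterns.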

It is clearly evident from the pictures that Google Brain has done an incredible job of cleaning up the pixels and producing better results. You can find out more about the project HERE.