Google clearly wants to take Android to the next level in almost every respect. With the release of the Pixel, it is evident that the company wants to position itself as a manufacturer rather than just a supporter of OEMs. Now Google wants to raise the bar for camera quality on Android devices. To that end, it has come up with a new method that employs AI (artificial intelligence) to fix photos even before they are taken.
Google, in cooperation with scientists at MIT, is using machine learning algorithms to improve images in your smartphone viewfinder in real time. These adjustments are made for each photo individually, rather than as blanket auto-adjustments that act the same way across every photo or scene (which many software applications already offer). To achieve this, the team trained neural networks on 5,000 images that had been retouched by five different photographers. This gave the AI a formula to work from when retouching photos, resulting in more pertinent adjustments to images.
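To make the idea concrete, here is a deliberately simplified sketch of learning an enhancement from before/after pairs. This is not Google's or MIT's actual neural network; it just illustrates the general principle of fitting an adjustment (here, a global tone curve) to examples of how an image was retouched, so the learned adjustment can then be applied to new images. The data, function names, and polynomial model are all illustrative assumptions.

```python
import numpy as np

# Illustrative toy example, NOT the actual Google/MIT system: learn a
# global tone curve from a "before/after" image pair, analogous to how
# the real system learns from 5,000 photographer-retouched examples.
# Here the "model" is a cubic polynomial per image, fit by least squares.

rng = np.random.default_rng(0)

def fit_tone_curve(before, after, degree=3):
    """Fit polynomial coefficients mapping original pixel values
    to retouched pixel values (the learned 'formula')."""
    return np.polyfit(before.ravel(), after.ravel(), degree)

def apply_tone_curve(image, coeffs):
    """Apply the learned curve and clamp to the valid [0, 1] range."""
    return np.clip(np.polyval(coeffs, image), 0.0, 1.0)

# Simulated training pair: a "photographer" brightens the shadows
# (a gamma adjustment of 0.7) on a random grayscale image.
before = rng.random((64, 64))
after = before ** 0.7

coeffs = fit_tone_curve(before, after)
enhanced = apply_tone_curve(before, coeffs)

# The learned curve reproduces the retouch closely on this image.
print(float(np.abs(enhanced - after).mean()))
```

The real system is far more sophisticated (it makes content-dependent, local adjustments rather than a single global curve), but the workflow is the same shape: learn the mapping from unedited to edited examples, then apply it to new frames.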
Reportedly, the software used to achieve this can run in real time on smartphones with little latency or battery consumption, two factors that would previously have held back the implementation of such processing on phones.
This tech stands to offer a seriously worthwhile benefit to smartphone owners.
Every camera has its limitations, and photos can almost always be improved with a few tweaks. This software has the potential to ensure a baseline of quality is achieved, automatically, every time.