Nov 16, 2016
Pulling up a low-quality image and telling the computer to “enhance” the resolution has long been the stuff of TV fantasy. But, thanks to machine learning, we are actually getting much better at zooming into a photo without losing picture quality. This week, Google unveiled prototype software that does exactly this, called RAISR, or Rapid and Accurate Image Super-Resolution.
In essence, RAISR is similar to current methods of upsampling — the process of turning a small image into a larger one by inserting new pixels into it. But while traditional upsampling methods make images bigger by filling in new pixel values using fixed rules, RAISR adapts its filtering to the type of image it's looking at. The software pays particular attention to what are called “edge features” — i.e., parts of an image where the brightness or color changes sharply, which usually indicate the edge of an object. This adaptive upsampling means the resulting zoomed images are less blurry.
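To make the distinction concrete, here is a minimal sketch of the idea in Python. This is not Google's RAISR code: the function names, the bilinear baseline, the gradient-based edge test, and the unsharp-mask correction are all illustrative assumptions. RAISR itself learns its adaptive filters from example image pairs, whereas this toy version simply sharpens pixels that sit on strong edges after a fixed-rule upsample.

```python
# Illustrative sketch only -- not Google's actual RAISR implementation.
# It contrasts fixed-rule upsampling (bilinear everywhere) with a toy
# "adaptive" step that sharpens only pixels lying on strong edges,
# found via local brightness gradients.
import numpy as np
from scipy import ndimage

def upsample_fixed(img, factor=2):
    """Traditional upsampling: the same interpolation rule at every pixel."""
    return ndimage.zoom(img, factor, order=1)  # bilinear interpolation

def upsample_adaptive(img, factor=2, edge_thresh=0.1):
    """Toy adaptive upsampling: upsample, then sharpen only edge pixels."""
    big = ndimage.zoom(img, factor, order=1)
    # Estimate edge strength from local brightness gradients.
    gy, gx = np.gradient(big)
    edge_strength = np.hypot(gx, gy)
    # Unsharp-mask style correction, applied only where edges are strong.
    blurred = ndimage.gaussian_filter(big, sigma=1.0)
    detail = big - blurred
    mask = edge_strength > edge_thresh
    out = big.copy()
    out[mask] += detail[mask]
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    low_res = rng.random((32, 32))           # stand-in for a small grayscale image in [0, 1]
    print(upsample_fixed(low_res).shape)     # (64, 64)
    print(upsample_adaptive(low_res).shape)  # (64, 64)
```

The point of the contrast is that the fixed method treats every pixel identically, while the adaptive one changes its behavior based on what the local image content looks like, which is why the RAISR results keep edges crisp where plain interpolation smears them.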
In the composite image from Google below, the top section is the original, low-res picture, and the bottom is the RAISR-enhanced version. (In the image at the top of this article the original is on the left, and the RAISR version is on the right).
Now compare that to the composite image below, which shows the low-res image on the left and the traditionally upsampled version on the right. The resulting image is less pixelated, but its edges look blurred and out of focus:
Google isn’t the only company working on this tech, and earlier this year, Twitter bought a startup named Magic Pony that does the same sort of smart upsampling, but with video — a much harder task considering how many frames need to be quickly processed. In the future, it seems low-resolution imagery is going to become much less common, with machine learning deployed to fill in the gaps.