Artificial intelligence is learning to see in the dark

Left, a photo brightened with traditional photo editing software. Right, the same image brightened with deep learning.
Image: Intel/UIUC

Cameras—especially phone cameras—are terrible at taking pictures in the dark. The tiny image sensors in most modern cameras can only absorb a small amount of light, which often results in dark, grainy images.

To try to solve this problem without inventing a new image sensor, researchers at Intel and the University of Illinois Urbana-Champaign taught an artificial intelligence algorithm how to take the data from darker images and reconstruct them so that they’re brighter and clearer, according to research published this month and to be presented in June at an industry conference.

To train the algorithm, the researchers showed it two versions of more than 5,000 images taken in low-light scenarios: one set deliberately underexposed, and one set taken with a longer exposure time, meaning the sensor is given more time to collect light and properly expose the image. (To do that, you need to hold the camera extremely still for a few seconds or more, which is why it’s not practical in most picture-taking scenarios.)
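The paper itself uses a deep neural network on raw sensor data, but the supervised setup described above can be sketched with a toy stand-in: pair each dark capture with a long-exposure capture of the same scene, then fit a model that maps one to the other. Here the "model" is just a single least-squares gain, and all scenes, exposures, and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_pair(scene):
    """Simulate a (dark, bright) capture pair of the same scene.
    The dark frame gets ~2% of the exposure; both get sensor noise."""
    dark = scene * 0.02 + rng.normal(0, 0.002, scene.shape)   # short exposure
    bright = scene + rng.normal(0, 0.002, scene.shape)        # long exposure
    return dark, bright

# Build a small paired dataset of random "scenes".
scenes = [rng.uniform(0.2, 0.8, (8, 8)) for _ in range(100)]
pairs = [make_pair(s) for s in scenes]

# Toy stand-in for the network: one scalar gain, fit by least squares
# so that bright ≈ gain * dark across all training pixels.
xs = np.concatenate([d.ravel() for d, _ in pairs])
ys = np.concatenate([b.ravel() for _, b in pairs])
gain = (xs @ ys) / (xs @ xs)
```

The fitted gain lands near the true exposure ratio (about 50x here); the real system learns a far richer pixel-to-pixel mapping, but the training signal — dark input, well-exposed target — is the same.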

The Intel and UIUC team claims the algorithm can now brighten low-light images by the equivalent of up to 300 times the original exposure, without the noise and discoloration that programs like Photoshop might introduce, and without having to take two separate images.

While the team did build a custom algorithm to do the task, the most innovative aspect of the work is the dataset they created. In the paper, the researchers write that no public dataset of low-light images at different exposures existed. Chen Chen, a co-author on the paper who worked on the project as part of an internship at Intel, says they first tried to avoid taking thousands of original images by printing out pictures of objects and then photographing the printouts in low-light and well-lit scenarios. But in the end, that synthetic data didn’t produce good results, Chen says.

So, Chen spent two months collecting images of low-light outdoor scenarios, and a week collecting images of low-light indoor scenarios. He took photos with two kinds of consumer cameras that use different image processing methods, to ensure the algorithm wouldn’t learn to work on only one camera manufacturer’s technology.

But even though the data was generated using high-resolution digital cameras, the team found that the algorithm also improved underexposed images from an iPhone 6S—a sign that the low-light capabilities of our smartphones might be only a software update away. To make that process even faster, the team has posted the code and dataset online on GitHub.