In the age of the internet, file size is key: Compressed documents, images, and videos take up less space than uncompressed ones, and can also be transmitted more quickly. That’s good news for server space and hard drive capacity, not to mention the storage on your average smartphone.
Twitter has long been interested in compression—more than 300 million tweets are sent each day, many of them with attachments. In June, Twitter acquired Magic Pony Technology, an artificial-intelligence startup focused on making pictures look better at smaller file sizes, to jumpstart its machine-learning team. That team, now known as Twitter Cortex, earlier this month announced one of its first major successes: a machine-learning algorithm that can compress a photo more efficiently than JPEG2000 (an industry standard more modern but less common than just JPEG). Put plainly, that means sharper pictures that take up less space.
“All the videos we watch on Netflix or YouTube or elsewhere are all being compressed by algorithms that were hand-designed by someone using relatively old techniques,” says Rob Bishop, a product lead at Twitter Cortex. “Because it’s humans putting these [algorithms] together, there’s a limit to how complex they can design them and still understand them.”
In the simplest sense, the algorithm looks at examples of high-quality images and creates rules for recreating an image using less information. This is called “end-to-end” learning—the algorithm builds its own methods of solving a problem in order to achieve a set result. The algorithm might pull patterns out of the data that it can express more simply, or use an inhuman quantity of steps to address complex representations of color and shape. In similar work published this summer, Google also used neural networks, the building blocks of modern AI, to beat standard JPEG compression. Twitter’s new algorithm improves upon JPEG2000, an even more sophisticated standard than the baseline JPEG that Google beat.
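The core idea—learning compression rules from example images instead of hand-designing them—can be sketched in miniature. This is not Twitter’s or Google’s actual system (both use deep neural networks); it is a toy linear “autoencoder” built with SVD on synthetic data, just to show how a compact representation learned from examples can reconstruct an image from far fewer numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake training set: 200 flattened 16x16 "images" that secretly share
# 8 underlying patterns, plus a little pixel noise.
base = rng.normal(size=(8, 256))
images = rng.normal(size=(200, 8)) @ base + 0.05 * rng.normal(size=(200, 256))

# Learn a k-dimensional code from the data itself -- no hand-designed
# transform like JPEG's DCT, just structure found in the examples.
k = 8
mean = images.mean(axis=0)
_, _, vt = np.linalg.svd(images - mean, full_matrices=False)
basis = vt[:k]  # the learned "compression rules"

def compress(img):
    return (img - mean) @ basis.T   # 256 numbers -> k numbers

def decompress(code):
    return code @ basis + mean      # approximate reconstruction

img = images[0]
code = compress(img)
recon = decompress(code)
err = np.linalg.norm(img - recon) / np.linalg.norm(img)
print(f"stored {code.size} numbers instead of {img.size}; relative error {err:.3f}")
```

Because the toy images really do lie near an 8-dimensional subspace, eight learned numbers reconstruct each 256-pixel image almost perfectly; real codecs face far messier data, which is where deep networks come in.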
But obstacles remain. One problem vexing the Cortex team is how computers define “high-quality,” or rather how computers quantify what humans perceive as high-quality. After all, the AI has to know what it’s trying to emulate to do the best possible job.
“There is no consensus in the field for which metric best represents human perception,” the Google researchers wrote in their paper. Google chose a mathematical approach to measuring the structural similarity of files, while Twitter opted to use real people. For its latest efforts, the Cortex team showed 24 laypeople examples of well-compressed and poorly compressed pictures, and then asked them to judge 273 compressed images on a scale of 1 to 5. On average, files compressed by Twitter’s algorithm tested better than the standard compression and Google’s AI approach.
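A rating study like the one described above is typically summarized as a mean opinion score: average each codec’s 1-to-5 ratings and compare. The sketch below uses made-up numbers (not Twitter’s data) and hypothetical codec names, purely to show the arithmetic.

```python
import statistics

# Hypothetical ratings: each rater scores each codec's output from 1 to 5.
# The values are invented for illustration only.
ratings = {
    "learned_codec": [4, 5, 4, 4, 5, 3, 4],
    "jpeg2000":      [3, 4, 3, 3, 4, 3, 3],
}

# Mean opinion score: the per-codec average rating.
mos = {codec: statistics.mean(scores) for codec, scores in ratings.items()}
best = max(mos, key=mos.get)
print({codec: round(score, 2) for codec, score in mos.items()}, "->", best)
```

The simplicity is the point: unlike a structural-similarity formula, the “metric” here is just people’s averaged judgments, which sidesteps the question of which mathematical measure matches human perception.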
“You can show a sixth-grader two images and they can immediately pick the one which is the best quality,” Bishop says. “Some things that are easy for humans to understand are hard for computers to understand.”