Forget machine learning. Google now wants to crack machine unlearning

Training algorithms to forget what they've learned is coming, in the interest of data privacy rights

Photo: Andrew Kelly (Reuters)

Google has announced a machine “unlearning” competition, geared toward removing sensitive data from AI systems to make them compliant with global data regulation standards. The contest, which is open to anyone, will run from mid-July to mid-September.

Machine learning, a major subset of artificial intelligence, tackles complex problems, whether by creating new content, predicting outcomes, or answering complicated queries, all based on the data it is trained on. With machine unlearning, Google aims to introduce a kind of selective amnesia into its AI algorithms: removing every trace of a particular data set from a trained system without hurting its overall performance.
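To make the idea concrete, here is a minimal, illustrative sketch of the simplest (and most expensive) form of unlearning: retraining the model from scratch with the forgotten examples excluded. The data, the least-squares model, and the deletion indices are all hypothetical; research like Google's competition targets methods far cheaper than full retraining.

```python
import numpy as np

# Hypothetical training data: 200 examples, 3 features, a known linear signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# Train: an ordinary least-squares fit on all the data.
w_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# A deletion request arrives for some users' rows (indices are made up).
forget = np.array([3, 17, 42])
keep = np.setdiff1d(np.arange(len(X)), forget)

# Naive "exact" unlearning: retrain from scratch on only the retained data.
# The result is a model that provably never saw the forgotten examples.
w_unlearned, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
```

With enough retained data, the unlearned model's weights stay close to the original's, which is the "without affecting performance" goal; the open research question is achieving the same guarantee without paying for a full retrain.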


Machine learning contains loopholes for data privacy breaches

Although machine learning is essential in this digital age, it carries serious privacy risks: cybercriminals can misuse personal data to bully and blackmail users, poison training data, lock people out of online services, trick facial recognition systems, and create deepfakes.


Google believes that training algorithms to forget data they've already been trained on would give people more control over sensitive information. It would make it easier, for instance, for the company to serve users who exercise the right to be forgotten.

Google is partly responding (pdf) to regulation, as data regulators have the power to compel companies to destroy unlawfully obtained data. Under Europe's General Data Protection Regulation (GDPR), individuals can demand that a business delete the personal data they shared with it.

Machine unlearning would make it possible for someone to have their data removed from a trained algorithm, ensuring that no one else continues to profit from it and reducing their exposure to AI-driven harms.