Jamie Dimon warned of economic threats as his bank posted its biggest annual profit ever
Subtitles
  • Off
  • English

The many ways AI has been used to spill secrets — and still can be

The many ways AI has been used to spill secrets — and still can be

From stolen trade secrets to broken guardrails, the AI industry is struggling when it comes to keeping secrets

We may earn a commission from links on this page.
Start Slideshow
Start Slideshow
ChatGPT displayed on a screen
Photo: Leon Neal (Getty Images)

As AI continues its ascent to world domination — albeit with some high-profile Chatbot stumbles along the way — it hasn’t had the best reputation for being secure. It has even been caught in the middle of the U.S. trade war with China.

Advertisement

From stolen trade secrets to broken guard rails, check out the slideshow above for some of the ways the emerging AI industry is struggling when it comes to keeping and spilling secrets.

Advertisement
Previous Slide
Next Slide

A former Google engineer was charged with stealing trade secrets for China

A former Google engineer was charged with stealing trade secrets for China

Google sign on a building
Photo: Michael M. Santiago (Getty Images)

Linwei Ding, a Chinese citizen who worked as a software engineer at Google, was charged on March 6 with stealing AI trade secrets from Google while secretly working with two China-based companies, the Justice Department said. Ding was arrested on four counts of federal trade secret theft, each carrying a sentence of up to 10 years in prison.

Advertisement

“Today’s charges are the latest illustration of the lengths affiliates of companies based in the People’s Republic of China are willing to go to steal American innovation,” FBI Director Christopher Wray said in a statement. “The theft of innovative technology and trade secrets from American companies can cost jobs and have devastating economic and national security consequences.”

Ding allegedly stole more than 500 confidential files of AI trade secrets from Google “while covertly working for China-based companies seeking an edge in the AI technology race,” the Justice Department said. The technology Ding allegedly stole involved “the building blocks of Google’s advanced supercomputing data centers, which are designed to support machine learning workloads used to train and host large AI models.”

Advertisement
Previous Slide
Next Slide

Researchers tricked Nvidia’s AI into leaking data

Researchers tricked Nvidia’s AI into leaking data

Nvidia sign on its headquarters
Photo: Justin Sullivan (Getty Images)

Researchers manipulated a feature in chipmaker Nvidia’s AI software to easily break through safety guardrails in place. The researchers at Robust Intelligence in San Francisco were able to overcome the system’s restrictions in hours by replicating Nvidia’s data sets, The Financial Times reported in 2023.

Advertisement

In one example, the researchers instructed the AI model to switch the letter “I” with “J,” which led the model to release personally identifiable information from its database. The researchers were also able to get the model to discuss subjects it was designed not to.

Advertisement
Previous Slide
Next Slide

OpenAI’s custom chatbots spilled details of the instructions they were created with

OpenAI’s custom chatbots spilled details of the instructions they were created with

chatgpt displayed on a screen
Photo: Leon Neal (Getty Images)

After OpenAI launched its GPT marketplace allowing people to build their own chatbots for personal and professional use, security researchers and technologists were able to get the custom chatbots to dish about the instructions they were created with, Wired reported in late 2023. The experts were also able to find and download files used to create the chatbots, prompting warnings that personal and proprietary information attached to the custom chatbots can be at risk.

Advertisement

“The privacy concerns of file leakage should be taken seriously,” Jiahao Yu, a computer science researcher at Northwestern University, told Wired. “Even if they do not contain sensitive information, they may contain some knowledge that the designer does not want to share with others, and [that serves] as the core part of the custom GPT.”

Yu and other researchers tested more than 200 custom GPTs, and said the success rate for file leakages was 100%, while the team was able to access the prompts used to create the chatbots 97% of the time.

Advertisement
Previous Slide
Next Slide

Google warned its employees away from its own chatbot

Google warned its employees away from its own chatbot

 Google Gemini AI interface is seen on an iPhone browser
Illustration: Jaap Arriens/NurPhoto (Getty Images)

While getting caught up in a chatbot development race (one it’s losing) against OpenAI and Microsoft, Google’s parent company Alphabet reportedly told employees to be wary of chatbots — even the company’s own Bard chatbot, which has since been rebranded as Gemini. Employees were told in 2023 not to put confidential information into chatbots out of fear it could be leaked.

Advertisement

Apple also warned its employees about using ChatGPT and GitHub Copilot, which is owned by Microsoft, out of fear of leaking confidential company information as Apple works on its own competitor.

Advertisement
Previous Slide
Next Slide

Samsung employees gave ChatGPT confidential data

Samsung employees gave ChatGPT confidential data

samsung logo on a glass door with people behind it
Photo: Chung Sung-Jun (Getty Images)

Samsung employees reportedly shared sensitive company data with ChatGPT at least three separate times. In one instance, a Samsung employee copied source code from a faulty semiconductor database into the chatbot to ask it for help. In another case, a Samsung employee input confidential code to find a fix for broken equipment. The third incident involved an employee submitting their entire meeting into ChatGPT to ask it to create meeting minutes.

Advertisement
Previous Slide
Next Slide

AI girlfriend chatbots aren’t as good as real ones at keeping secrets

AI girlfriend chatbots aren’t as good as real ones at keeping secrets

user interacts with Replika chatbot
Photo: Luka, Inc./Handout (Reuters)

*Privacy Not Included, a consumer guide from the Mozilla Foundation, reviewed 11 chatbots marketed as romantic companions, and found all of them failed its privacy policy checklist, “putting them on par with the worst categories of products we have ever reviewed for privacy.” The group found the chatbots didn’t have clear user privacy policies, didn’t include information on how they worked, and said in the Terms and Conditions that the companies behind the chatbots were not responsible for what could happen when using their services.

Advertisement

“To be perfectly blunt, AI girlfriends are not your friends,” Misha Rykov, a researcher at *Privacy Not Included, said in a statement. “Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”

Advertisement