Microsoft's 'corporate culture' deprioritized security before China's 'preventable' hack, DHS says

The U.S. Cyber Safety Review Board found that Microsoft could have stopped Chinese state actors from hacking government emails in 2023

Microsoft sign shown on top of the Microsoft Theatre. Photo: Mike Blake/File Photo (Reuters)

Microsoft could have prevented Chinese state actors from hacking U.S. government emails last year, a new federal government report has found, calling the incident a “cascade of security failures.”

The report from the U.S. Cyber Safety Review Board (CSRB) found that Chinese hackers, known as Storm-0558, compromised the Microsoft Exchange Online emails of 22 organizations and more than 500 people around the world, including senior U.S. government officials working on national security matters. Commerce Secretary Gina Raimondo and R. Nicholas Burns, the American ambassador to China, were among the U.S. government officials who were hacked.


The report, released late Tuesday by the U.S. Department of Homeland Security (DHS), found that the hack was “preventable” and that a series of operational and strategic decisions collectively led to “a corporate culture that deprioritized enterprise security investments and rigorous risk management.”


The hackers got into the accounts using a stolen Microsoft account signing key, which let them forge the authentication tokens used to sign in to remote systems and thereby access Outlook on the web and Outlook.com, according to the report. “A single key’s reach can be enormous, and in this case the stolen key had extraordinary power,” the report said. Beyond the key itself, “another flaw” in the company’s authentication system allowed the hackers “to gain full access to essentially any Exchange Online account anywhere in the world.”


Microsoft maintains it doesn’t know how or when the hackers obtained the key. In a blog post updated last month, Microsoft said its “leading hypothesis remains that operational errors resulted in key material leaving the secure token signing environment that was subsequently accessed in a debugging environment via a compromised engineering account.” It previously said in September that Storm-0558 may have accessed the key from a crash dump in 2021, but that it had not found a crash dump containing the key material. The CSRB said in its report that Microsoft’s original blog post was “inaccurate,” and that it wasn’t updated until March 12, “as the Board was concluding its review and only after the Board’s repeated questioning about Microsoft’s plans to issue a correction.”

The CSRB concluded that the company’s “security culture was inadequate and requires an overhaul, particularly in light of the company’s centrality in the technology ecosystem and the level of trust customers place in the company to protect their data and operations.”


“While no organization is immune to cyberattack from well-resourced adversaries, we have mobilized our engineering teams to identify and mitigate legacy infrastructure, improve processes, and enforce security benchmarks. Our security engineers continue to harden all our systems against attack and implement even more robust sensors and logs to help us detect and repel the cyber-armies of our adversaries. We will also review the final report for additional recommendations,” a spokesperson for Microsoft told Quartz.

Microsoft launched its AI cybersecurity tool, Microsoft Copilot for Security, on Monday, calling it the industry’s “first generative AI solution” for security and IT professionals. Copilot for Security is trained on “large-scale data and threat intelligence,” including the more than 78 trillion security signals the company processes daily. Security analysts reported working 22% faster with Copilot for Security, and 7% reported their work being more accurate when using the tool, according to an economic study commissioned by the company.


Vasu Jakkal, corporate vice president of security, compliance, identity, and management at Microsoft, told Quartz that cyberattackers are using large language models (LLMs) to become more productive, including for reconnaissance to find security vulnerabilities and to improve password cracking. In February, Microsoft and OpenAI said they had found and shut down OpenAI accounts belonging to “five state-affiliated malicious actors” that were using AI tools, including ChatGPT, to carry out cyberattacks.