When Russian troops began their invasion of Ukraine on Feb. 24, it focused the world’s attention on what role cybersecurity would play in the war. Nearly four months later, a storyline of cyber conflict is beginning to emerge, and it may affect the security industry for years to come.
The Russia-Ukraine war was a prime topic of discussion among security executives, analysts and government officials who gathered in San Francisco for the annual RSA Conference. Although much of the speculation when the war began focused on Russia’s expected cyberattacks, a perhaps more surprising narrative has centered around Ukraine’s resilience.
“Russia has been trying over and over again and they’ve been failing,” Mikko Hypponen, chief research officer at WithSecure Corp., said in a private briefing during the RSA event. “Ukraine has been building their defense capability for the last eight years.”
Rise of malware wipers
Despite Ukraine’s ability to contain the damage caused by Russia’s cyberattacks, the ongoing war has provided a glimpse into the superpower’s game plan. Several different forms of destructive wiper malware have been deployed against Ukrainian organizations since the start of the war. Unlike ransomware, which merely encrypts files that can potentially be recovered, wiper malware permanently damages or erases critical data.
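The distinction matters because it changes what a victim can do after an attack. A toy Python sketch illustrates it (the function names are hypothetical, XOR stands in for real encryption, and nothing here resembles actual malware):

```python
import os

def simulate_ransomware(data: bytes, key: bytes) -> bytes:
    # Reversible XOR "encryption": applying the same key again restores
    # the original, which is why ransomware victims can sometimes recover.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def simulate_wiper(data: bytes) -> bytes:
    # Destructive overwrite: every byte is replaced with zeros,
    # so there is no key, and nothing to negotiate for.
    return bytes(len(data))

original = b"critical records"
key = os.urandom(8)

encrypted = simulate_ransomware(original, key)
restored = simulate_ransomware(encrypted, key)  # same operation reverses it
assert restored == original

wiped = simulate_wiper(original)
assert wiped != original  # the original content is unrecoverable
```

The sketch shows why wipers are treated as a wartime weapon rather than a profit tool: with ransomware the attacker has an incentive to keep data recoverable, while a wiper's only goal is destruction.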
“We’ve seen seven different wipers in use there, which is a lot,” Kevin Mandia, chief executive officer of Mandiant Inc., said during an RSA session on Wednesday. “The most we ever saw in a year was two or three. These are wipers that evade endpoint detection, and they are specially crafted to do so.”
A key element of Russia’s approach to the conflict has been influence operations, or IO. Mandiant has been tracking Secondary Infektion, a Russia-based information operation that uses fake accounts and forged documents to sow disinformation.
Ukraine has been effective in countering false narratives about the war, according to security researchers. But Russia’s campaign extends beyond the war zone. Chinese and Iranian-linked groups have joined forces with Russia to advance anti-Western narratives through fake Twitter and Facebook accounts.
“Influence operations are not really panning out the way we would have expected on the battlefield,” said Sandra Joyce, executive vice president and head of global intelligence at Mandiant. “Where other influence operations might be working is in the rest of the world. Most people in the world are living in countries that are either neutral to Russia’s operations or actually support them.”
Criminal use of zero-days
While the Russia-Ukraine war has received a great deal of attention within the cybersecurity community, there have been a number of developments on other fronts that have drawn scrutiny. Foremost among these has been a noticeable increase in zero-day exploits, attacks that take advantage of vulnerabilities unknown to the software vendor and therefore still unpatched.
Three new zero-day exploits have surfaced in the past week. A new zero-day vulnerability discovered in Atlassian Confluence could open servers to full system takeover, according to researchers. Microsoft Corp. is currently dealing with two recent zero-days that exploit support tools in Windows.
The growth of cloud hosting, mobile platforms and “internet of things” technologies is viewed as a contributing factor to the increase in zero-day exploits. Even more troubling is a shift the cybersecurity community has observed in zero-day use, from nation-states to criminal organizations.
“In 2019, we saw 32 zero-days, and in 2021, we’ve seen over 70,” said Mandia. “If we saw a zero-day in use, usually there was a modern nation behind it done for espionage. Forty percent of zero-days are now used by criminal actors. There’s enough money in cybercrime now, they are buying zero-days.”
An increasingly dangerous threat landscape has led to heightened interest among U.S. government agencies in working more closely with the cybersecurity community. Seven of the speakers in RSA keynote sessions this week alone came from the Department of Defense, the National Security Agency and the Cybersecurity and Infrastructure Security Agency, or CISA.
Presentations by government officials and company executives demonstrated increased collaboration between the public and private sectors. However, tensions remain, as exemplified by comments from Sudhakar Ramakrishna, chief executive officer of SolarWinds Inc., whose company was at the center of a major software supply chain breach less than two years ago.
Ramakrishna expressed concern about the treatment of authentication security provider Okta Inc., which received criticism when it delayed disclosure of a data breach by the hacking group Lapsus$ earlier this year.
“It is sometimes very confusing to me as to whether our own government is an adversary or a partner,” Ramakrishna said during an RSA panel session on Wednesday. “Oftentimes there is victim shaming. Okta got berated for being late in terms of disclosure.”
Also on the panel with the SolarWinds chief executive was Jen Easterly, director of CISA, who was confirmed by the Senate to lead the agency last year and, like Ramakrishna, came from the private sector.
“I am certainly very sympathetic to that, coming from Morgan Stanley,” Easterly responded. “We are not focused on naming or shaming or blaming or stabbing the wounded. We’re very sensitive to those concerns.”
Potential for AI hacks
If a war in Eastern Europe, increased threats from criminal gangs and public-private sector friction aren’t enough to worry about in cyberspace, one prominent security researcher is sounding the alarm about potential adverse consequences for society from the use of artificial intelligence.
Bruce Schneier, lecturer at the Harvard Kennedy School of Government, provided a preview at RSA of his forthcoming book on the capability of AI to hack human systems.
“AI will hack humanity unlike anything that’s come before,” Schneier said. “The hacks don’t even require major breakthroughs in AI. AI systems will hack other AI systems and humans will just be collateral damage.”
At the heart of Schneier’s concern is what has become known as the “black box” problem. Humans may be the creators of AI technology, but they have no way of knowing exactly how it ultimately makes decisions.
That has not stopped AI systems from penetrating deep into the societal fabric, Schneier noted. AI currently helps determine sentencing decisions, screens job candidates and serves as a gatekeeper for granting loans. As AI technology gets smarter, it will develop hacks on its own and propagate them at scale, according to Schneier.
“AIs will inadvertently hack systems in ways that we won’t anticipate all of the time,” Schneier said. “Any good AIs will naturally find hacks. Once AI systems start discovering hacks, they will move at a scale we are not prepared for.”