The path to Friday’s global outbreak of ransom-demanding software (“ransomware”) that crippled hospitals in Britain — forcing the rerouting of ambulances, delays in surgeries and the shutdown of diagnostic equipment — started, as it often does, with a defect in software: a bug. This is perhaps the first salvo of a global crisis that has been brewing for decades. Fixing this is possible, but it will be expensive and require a complete overhaul of how technology companies, governments and institutions operate and handle software. The alternative should be unthinkable.
Just this March, Microsoft released a patch to fix vulnerabilities in its operating systems, which run on about 80 percent of desktop computers globally. Shortly after that, a group called “Shadow Brokers” released hacking tools that took advantage of vulnerabilities that had already been fixed in these patches.
It seemed that Shadow Brokers had acquired tools the National Security Agency had used to break into computers. Realizing these tools were stolen, the N.S.A. had warned affected companies like Microsoft and Cisco so they could fix the vulnerabilities. Users were protected if they had applied the patches that were released, but with a catch: If an institution still used an older Microsoft operating system, it did not receive this patch unless it paid for an expensive “custom” support agreement.
The cash-strapped National Health Service in Britain, which provides health care to more than 50 million people, and whose hospitals still use Windows XP widely, was not among those that signed up to purchase the custom support from Microsoft.
They were out in the cold.
On May 12, a massive ransomware attack using one of those vulnerabilities hit hospitals in Britain, telecommunication companies in Spain, FedEx in the United States, the Russian Interior Ministry and many other institutions around the world. They had either not applied these patches to systems where they were available free, or had not paid the extra money for patches for their older systems.
Computer after computer froze, their files inaccessible, with an ominous onscreen message asking for about $300 worth of “bitcoin” — a cryptocurrency that allows for hard-to-trace transfers of money. Ambulances headed for children’s hospitals were diverted. Doctors were unable to check on patients’ allergies or see what drugs they were taking. Labs, X-ray machines and other diagnostic equipment, and the information they held, became inaccessible. Surgeries were postponed. There was economic damage, too. Renault, the European automaker, had to halt production.
The attack was halted by a stroke of luck: the ransomware had a kill switch that a British employee at a cybersecurity firm managed to activate. Shortly after, Microsoft finally released, free of charge, the patch it had been withholding from users who had not signed up for expensive custom support agreements.
But the crisis is far from over. This particular vulnerability still lives in unpatched systems, and the next one may not have a convenient kill switch.
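The logic of such a domain-based kill switch is strikingly simple, and a rough sketch helps show why registering one domain name could stop the outbreak. This is an illustration only, not the malware’s actual code; the domain name below is invented (the real ransomware hard-coded a specific gibberish domain):

```python
import socket

# Invented placeholder; the actual malware used a hard-coded gibberish domain.
KILL_SWITCH_DOMAIN = "some-unregistered-gibberish.invalid"

def kill_switch_active(domain: str = KILL_SWITCH_DOMAIN) -> bool:
    """Return True if the domain resolves, i.e. someone has registered it.

    The malware performed a check like this before doing damage: if the
    hard-coded domain suddenly resolved, it halted. By registering the
    domain, a researcher made this check succeed everywhere at once.
    """
    try:
        socket.gethostbyname(domain)
        return True   # domain resolves -> kill switch tripped -> malware halts
    except socket.gaierror:
        return False  # domain does not resolve -> malware would proceed
```

The elegance and the fragility are the same thing: one DNS lookup gated the whole attack, which is why the author warns that the next variant may simply omit the check.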
While it is inevitable that software will have bugs, there are ways to make operating systems much more secure — but that costs real money. Although this particular bug affected both new and old versions of Microsoft’s operating systems, the older ones like XP have more critical vulnerabilities. This is partly because our understanding of how to make secure software has advanced over the years, and partly because of the incentives in the software business. Since most software is sold with an “as is” license, meaning the company is not legally liable for any issues with it even on day one, it has not made much sense to spend the extra money and time required to make software more secure. Indeed, for many years, Facebook’s mantra for its programmers was “move fast and break things.”
This isn’t all Microsoft’s fault, though. Its newer operating systems, like Windows 10, are much more secure. There are many more players and dimensions to this ticking bomb.
During this latest ransomware crisis, it became clear there were many institutions that could have patched or upgraded their systems, but they had not. This isn’t just because their information technology departments are incompetent (though there are surely cases of that, too). Upgrades come with many downsides that make people reluctant to install them.
For example, the more secure Windows 10 comes with so many privacy concerns that the Electronic Frontier Foundation issued numerous alerts about it, and the European Union is still investigating it. My current Windows 10 machine is more secure, but it shows me advertisements on the login screen. (Is Microsoft also profiling me to target those advertisements? A fair question in this environment.)
Further, upgrades almost always bring unwanted features. When I was finally forced to upgrade my Outlook mail program, it took me months to get used to the new color scheme and spacing somebody in Seattle had decided was the new look. There was no option to keep things as they were. Users hate this, and are often rightfully reluctant to upgrade. But they are often unaware that these unwanted features come bundled with a security update.
As an added complication, the ways companies communicate about upgrades and unilaterally change the user interface make people vulnerable to phishing, since one is never sure what is a real login or upgrade message and what is a bogus one, linking to a fake website trying to steal a login.
The problem is even worse for institutions like hospitals, which run a lot of software provided by a variety of different vendors, often embedded in expensive medical equipment. For them, upgrading the operating system (a cost in itself) may also mean purchasing millions of dollars’ worth of new software. Much of this software also comes with problems, and the “no liability” policy means that vendors can just sell the product, take the money and run. Sometimes, medical equipment is certified as it is, and an upgrade raises re-certification questions. The machines can, and should, last for decades; expecting institutions to junk everything every 10 years because the software has expired is not a workable solution. Upgrades can also introduce new bugs. How do you test new software when the upgrade can potentially freeze your M.R.I.? Last year, a software update “bricked” Tesla cars: they could not be driven until another update fixed the problem. Many large institutions are thus wary of upgrades.
The next crisis facing us is the so-called “internet of things”: devices like baby monitors, refrigerators and lighting now come with networked software. Many such devices are terribly insecure and, worse, don’t even have a mechanism for receiving updates. In the current regulatory environment, the people who write the insecure software and the companies who sold the “things” bear no liability.
If I have painted a bleak picture, it is because things are bleak. Our software evolves by layering new systems on old, and that means we have constructed entire cities upon crumbling swamps. And we live on the fault lines where more earthquakes are inevitable. All the key actors have to work together, and fast.
First, companies like Microsoft should discard the idea that they can abandon people using older software. The money they made from these customers hasn’t expired; neither has their responsibility to fix defects. Besides, Microsoft is sitting on a cash hoard estimated at more than $100 billion (the result of how little tax modern corporations pay and how profitable it is to sell a dominant operating system under monopolistic dynamics with no liability for defects).
At a minimum, Microsoft clearly should have provided the critical update in March to all its users, not just those paying extra. Indeed, “pay extra money to us or we will withhold critical security updates” can be seen as its own form of ransomware. In its defense, Microsoft probably could point out that its operating systems have come a long way in security since Windows XP, and it has spent a lot of money updating old software, even above industry norms. However, industry norms are lousy to horrible, and it is reasonable to expect a company with a dominant market position, that made so much money selling software that runs critical infrastructure, to do more.
Microsoft should spend more of that $100 billion to help institutions and users upgrade to newer software, especially those who run essential services on it. This has to be through a system that incentivizes institutions and people to upgrade to more secure systems and does not force choosing between privacy and security. Security updates should only update security, and everything else should be optional and unbundled.
The United States government has resources and institutions to help fix this. The N.S.A.’s charter gives it a dual role: both offensive and defensive. Having the agency disclose software vulnerabilities it finds to companies more quickly may be a good idea, but doing so doesn’t solve this problem, since finding bugs is not limited to the N.S.A. — criminals and other nations can keep finding them. Nor are bugs in limited supply, so we cannot get to the bottom of the problem by fixing them one by one. There are, however, many technical measures that can be taken to build operating systems that are structurally less vulnerable to bugs. In other words, we can’t eliminate bugs, but with careful design, we can make it so that they cannot easily wreak havoc like this. For example, Chromebooks and Apple’s iOS are structurally much more secure because they were designed from the ground up with security in mind, unlike Microsoft’s operating systems.
It is past time that the N.S.A. shifted to a defensive posture and the United States government focused on protecting its citizens and companies from malware, hacking and ransomware — rather than focusing so much on spying. This isn’t just about disclosing vulnerabilities, a hot-button topic that often distracts from deeper issues. It also means helping develop standards for higher security — something an agency devoted to finding weaknesses is very well suited to do — as well as identifying systemic cybersecurity risks and then helping fix them, rather than using them offensively, to spy on others.
There is also the thorny problem of finding money and resources to upgrade critical infrastructure without crippling it. Many institutions see information technology as an afterthought and are slow in upgrading and investing. Governments also do not prioritize software security. This is a sure road to disaster.
As a reminder of what is at stake, ambulances carrying sick children were diverted and heart patients turned away from surgery in Britain by the ransomware attack. Those hospitals may never get their data back. The last big worm like this, Conficker, infected millions of computers in almost 200 countries in 2008. We are much more dependent on software for critical functions today, and there is no guarantee there will be a kill switch next time.
It is time to consider whether the current regulatory setup, which allows all software vendors to externalize the costs of all defects and problems to their customers with zero liability, needs re-examination. It is also past time for the very profitable software industry, the institutions that depend on its products, and the government agencies entrusted with keeping their citizens secure and their infrastructure functioning to step up and act decisively.
Zeynep Tufekci, an associate professor at the School of Information and Library Science at the University of North Carolina, is the author of the forthcoming “Twitter and Tear Gas: The Power and Fragility of Networked Protest” and a contributing opinion writer.