In the 1990s, Marc Andreessen famously joked that Netscape would reduce Windows to a set of poorly debugged device drivers. By the turn of the century, critics were instead arguing that Microsoft itself had reduced its own software to a collection of security holes.
In October 2003, Microsoft CEO Steve Ballmer threw down the gauntlet. Reiterating the company's commitment to the Trustworthy Computing initiative launched the previous year by chairman and chief software architect Bill Gates, Ballmer pledged to continue to enhance security in Windows and other Microsoft software. He outlined a three-pronged approach that included an improved patch deployment process, a global education program, and new technologies to make systems more resistant to attack.
"Our goal is simple: get our customers secure and keep them secure," Ballmer said in a statement.
But has Microsoft lived up to that challenge? One year after Ballmer made his pledge, what's really changed?
Patching the holes
The big news, of course, is Windows XP Service Pack 2 (SP2). The long-awaited upgrade, released in August, weighed in at a whopping 266MB. Its single most salient feature — a firewall that's on by default, blocking all inbound connections — means little to people already protected behind corporate DMZs, NAT routers, and personal firewalls. Even so, the SP2 firewall — and the auto-update procedure that will roll it out to tens of millions of desktops in the coming months — is a watershed event.
Attach an unprotected Windows PC to the internet, and almost before you can blink, it can be recruited into one of the armies of "zombies" that wreak havoc on the internet, launching DoS attacks and other mischief. Last month, InfoWorld reported that one such zombie network — 10,000 PCs strong — was discovered by a Norwegian ISP and then shut down by authorities in Singapore.
We have no way of knowing how many other zombie armies remain at large, but we do know with utter certainty that no internet-attached PC should lack firewall protection.
Controversially, Microsoft chose not to equip the SP2 firewall with an outbound inspection feature (egress filtering) such as those found in personal firewall products from Symantec, Zone Labs and others. This was doubtless a tough call. A common end-user reaction to egress filters, which prompt for approval before granting applications access to outbound ports, is simply to approve everything or to disable the outbound filter. But omission of this feature allowed critics to say, with some justification, that Microsoft had again chosen convenience over security.
Under the hood, SP2 brings other long-overdue improvements. For example, the RPC subsystem no longer accepts anonymous connections.
"That alone would have prevented Blaster," says Michael Howard, senior program manager of the security business and technology unit at Microsoft and co-author of Writing Secure Code (Microsoft Press, 2002).
But on the whole, SP2's security enhancements have drawn mixed reviews. Security expert Bruce Schneier is dismissive. "It's a whole bunch of little things," he says, "but no real change."
On the other hand, Russ Cooper, senior scientist at IT security company Cybertrust (formerly TruSecure) and longtime proprietor of the NTBugtraq mailing list, is delighted to see that the mindset of Microsoft's developers and product managers is finally changing. Cooper believes SP2 represents a milestone and hopes ISVs will also start to choose security over convenience.
Secure by default
Microsoft has been working to improve its products in other ways, as well. Historically, Windows has violated two of the basic tenets of computer security: run as few processes as necessary, and grant them as few privileges as possible. With Windows Server 2003 and IIS 6, however, these commonsense principles finally began to take hold.
For example, with Windows Server 2003, the web server isn't installed by default. And if you install it, the default configuration serves static files only. Support for ASP.Net, CGI, WebDAV, indexing, and internet printing must be turned on individually.
According to Microsoft's Howard, IIS 6 was also rebuilt to honour the principle of least privilege. "We broke the architecture in half," he says. "The part that does administration runs as system because it has to, but the part that does the web processing runs with very low privilege indeed."
The proof's in the pudding, Howard argues, noting that no security bulletins have targeted IIS 6 in the year and a half since its release. A search of the Computer Emergency Response Team (CERT) database confirms that claim. It lists 39 bulletins for IIS 5, zero for IIS 6.
Cybertrust's Cooper accepts that claim but offers a slightly different interpretation: IIS 6 has still had to be patched to fix several problems that also affected IIS 5.
Although it may be true that there have been no major new exploits specific to IIS 6, the problem — as security wonks like to point out — is that you can't prove a negative. The absence of a major terrorist attack on US soil since September 11 doesn't prove that new defences are working. Likewise, the absence of a major new IIS exploit doesn't necessarily validate Microsoft's efforts.
Still, Cooper and Howard agree on a crucial point: Windows' "attack surface" is shrinking. For Microsoft, that's a solid strategy. If applied rigorously, it's bound to help mitigate the risks we all face.
The tendency of Windows users to run with administrative privileges is one area where more rigour is needed. Perry Metzger, managing partner at Metzger, Dowdeswell and (with Schneier and others) a contributor to the Computer and Communication Industry Association's September 2003 "CyberInsecurity" report, recalls an all-too-common scenario.
Metzger wanted his CFO to run Windows with ordinary user privileges in order to minimise risk. Instead, he was forced to grant administrative privileges because an accounting program required it.
On this issue, fingers tend to point in both directions: at ISVs for requiring excess privileges, and at Microsoft for allowing it. Metzger thinks a "soft virtualisation" capability could help break the logjam.
"If an app thinks it must write the registry, provide it with versions of system calls via DLLs so it thinks it can write the registry but really can't," Metzger says.
According to Microsoft's Howard, the Windows Application Compatibility Toolkit takes just such an approach. Administrators can use it to tweak applications that try to write to protected locations — either in the registry or in the filesystem.
"If you mark the app this way," Howard says, "the OS will detect you're trying to write [for example] HKEY_LOCAL_MACHINE, and route to HKEY_CURRENT_USER automatically. Likewise, if you try to write to \Program Files or \System32, it'll route to your profile directory instead."
That's helpful, but if Microsoft is serious about making Windows secure by default, it should take a more hardline stance. In Mac OS X, for example, running with permanent admin privileges isn't even an option. Will Microsoft set a similar example and use its certification leverage to bring ISVs into line?
"We're seriously looking into that," Howard says. That's good to hear; now please make it so.
The .Net connection
The latest push from Redmond is for developers to adopt the .Net Framework in place of traditional Win32 APIs. The .Net technologies could aid security in a variety of ways. For starters, .Net's managed code environment abolishes an entire class of pernicious buffer overflow errors. Moving up the stack, the framework classes encapsulate a set of best practices — such as caching, threading, and authentication — that are otherwise too often reinvented in ways that are incorrect and insecure.
Unfortunately for developers who would like to realise these benefits, the long-awaited mass deployment of the Common Language Runtime and .Net Framework still hasn't happened as of the release of SP2. That won't come, presumably, until Longhorn.
Minus the WinFS (Windows File System) pillar, Longhorn's release date is now slated for 2006. Avalon and Indigo, the remaining pillars, offer only managed APIs. Because Avalon and Indigo have been promised for both Longhorn and XP clients, 2006 is shaping up to be the year in which .Net is finally forced into the mainstream.
But important questions remain unanswered. Part of the original Longhorn vision was something called the NGSCB (Next Generation Secure Computing Base). Originally known as Palladium, the NGSCB was to marry trusted hardware to a trusted piece of the operating system. In the wake of the recent Longhorn shake-up, its role is being reevaluated. Gytis Barzdukas, director of product management at Microsoft's security business and technology unit, offers no additional details but expects Microsoft "will offer guidance by year-end".
Then there's Longhorn's new presentation system, Avalon. It will rely on Microsoft's new ClickOnce technology — part of version 2.0 of the .Net Framework — to achieve on-demand installation of partially trusted applications. This plan creates both high-tech and low-tech challenges. The high-tech challenge is to get developers up to speed with .Net's code-access security model and to ensure that Longhorn apps running with partial trust will be viable. The low-tech challenge is to help users understand how to trust mobile apps.
As the phishing epidemic makes painfully clear, people are easily tricked by such bogus "guarantees" as bank logos embedded in email messages. Although site certificates and signed components have been around for years, the industry has failed, disastrously, to create trustworthy user interfaces. To the extent that Microsoft's Avalon strategy depends on mobile code, it will have to show progress toward a solution.
Not all bugs are shallow
"Given enough eyeballs, all bugs are shallow," open source advocate Eric Raymond famously observed. The dictum is often cited by those who argue that, thanks to widespread scrutiny, open source products can be both more reliable and more secure than their commercial counterparts. There may be some truth to this assertion, but much depends on who is looking at the code, how they are trained, which analytic techniques they apply, and how the process is governed. Stepping back even further, the assumptions wired into the code crucially determine its vulnerability to attack. No major software product will ever be bug-free. A formal and systematic approach to modelling threats and reducing attack surface area is vital to the production of secure software.
Led by Michael Howard, among others, Microsoft has embarked on a serious program of reform and claims it is committed to implementing these best practices. The comprehensive effort begins, Howard says, with mandatory training. Within 60 days of hiring, every developer assigned to a product team is indoctrinated with the principles of what Microsoft calls its Security Development Lifecycle.
"The level of security expertise in the marketplace — in the industry in general — is abysmally low," Howard says. "So we need to bridge that gap."
Independent experts, including Schneier, Metzger, and Cooper, can't accurately gauge the extent or the effect of these reforms. But they all know people who work inside Microsoft, and they all relay anecdotal reports that there has been real change in the right direction. Could Microsoft's military discipline make it a leader rather than a follower in the quest for more secure software?
Of course, even if Microsoft did everything right from now on, the sins of the past will be with us for many years to come — and no one believes that Microsoft is doing everything right. The bottom line? We're in a world of hurt because of Microsoft's past practices. That pain can't and won't go away anytime soon. But some of the right medicines are finally being applied, the results are tangible, and there's a reason for Microsoft to stay the course.
"The problem has always been economic, not technical," Schneier says. "It was never in Microsoft's economic interest to make its stuff secure."
But now, as growing numbers of people abandon Internet Explorer for Mozilla Firefox and as organisations look to Linux and OpenOffice.org as alternatives to Microsoft's OS and productivity suite, the cost of insecure software is starting to be felt in Redmond. If that pressure keeps up, we'll all be safer. Eventually.
Hardware-based security: The bug stops here
By Neil McAllister
Buggy code is a fact of life: That's the message from Microsoft these days. Or rather, the company recognises that as software grows in complexity it becomes increasingly difficult to prevent bugs from sneaking in under the radar.
This should come as no surprise to anyone who's wrestled with Microsoft products in the past few years. What's new is that the software giant is taking significant steps to address the problem. A new feature called Data Execution Prevention (DEP), which shipped with Windows XP SP2, gives users an added layer of security against buffer overruns, one of the most common attack vectors for viruses and worms.
A buffer overrun is an exploit that takes advantage of a bug in which a programmer has failed to implement proper bounds-checking routines for data memory. By forcing too much data into a poorly controlled memory buffer, an attacker can cause its contents to "spill over," overwriting neighboring memory such as a function's stored return address. If the overflowing bytes are crafted to include machine instructions and a pointer back to them, the program can be redirected to execute the attacker's code, to whatever effect the attacker desires.
Modern execution environments, such as the JVM and Microsoft's .Net Framework, include measures to prevent buffer overruns. But many applications still rely on languages such as C, C++, and assembly language, which allow more direct access to memory and are thus more susceptible to the kind of bugs that allow overruns to occur.
DEP doesn't eliminate buffer overruns; only diligence on the part of programmers will do that. But it does reduce the risk that mismanaged data can become a gateway for malicious code.
It works in conjunction with the processor at a very low level, by activating a bit called the NX (No Execute) flag in those areas of memory that are intended for data storage only. The CPU will refuse to execute any instructions from memory marked with the NX flag. Forcing it to do so will cause the program to crash — not an ideal outcome, but one that is certainly better than allowing a virus or worm to execute malicious code.
Unfortunately, support for DEP won't be retroactive for every XP user. Because it relies on hardware features, DEP will only work with the newest generation of CPUs, including Itanium- and Prescott-core CPUs from Intel, AMD's Athlon 64 and Opteron processors, and the new series of Efficeon chips from Transmeta. Future designs from all three chip vendors are expected to follow suit.
DEP is hardly a panacea. Buffer overruns are only one type of exploit, and hardware-based solutions are no substitute for more careful software engineering. Its inclusion in SP2 is a telling sign, however. The market's tolerance for insecure software is waning, and this time Microsoft is listening.
Security supply and demand: an interview with Russ Cooper
By Jon Udell
It's easy to point the finger at Redmond when Microsoft products fall vulnerable to exploits and attacks. But according to Russ Cooper, senior scientist at IT security company Cybertrust, consumers play as much a part as engineers when it comes to building safer systems.
Who is to blame for security flaws in Windows applications: Microsoft or ISVs?
Remember SUV tires? Whose fault was it: the auto manufacturer, or the tire manufacturer, or the consumer? The fact was that consumers wanted vehicles that were top-heavy, and they got more use out of tires that were underinflated. Ultimately the dealers underinflated the tires, and the manufacturer provided tires that might run better underinflated, so everybody's culpable. But it comes down to what consumers want.
Are you saying that SP2 reflects a changed consumer attitude?
No, unfortunately not. It represents a consensus amongst consumers and security professionals about how much restriction we can put on standard functionality without ticking off the people who don't care about security. It's neither as convenient as consumers would like nor as secure as security pros would recommend.
Is Microsoft really changing?
We are seeing a real cultural change, but there's no history to back it up. The SP2 deliverable represents a dramatic shift ... We see [Microsoft] doing things they would never have done before, such as increasing support cost to provide security and recognising that security will affect product sales. Will this shift be permanent, or will they flip back to feature-driven mode? We don't know.
Are you more optimistic lately?
Yeah, definitely. Clearly Microsoft is doing things I have asked them to do for years. I'd be a fool to say I'm not happy about that.
What's the next step?
They have to get more aggressive about end-of-lifing the older systems. Certainly they represent a huge investment by corporations, and lots of business-critical apps require legacy functionality, but Microsoft has to put a stake in the ground. At some point, we have to say we're not going to allow steam engines on the road anymore.
Any final advice?
Get SP2 installed. There's no rush, but do it. Realise, though, that apart from user education — which is valuable — we're not going to see the direct benefits of any of this for years. Also realise that consumers are the ones affecting us the most. Whatever we can do to improve their environment will make our lives simpler because we'll have fewer things attacking us. Talk to your service providers as well, and find out if they're doing anything to prevent you from being attacked by their customers. And if not, ask them why not.