Quality software means more secure software

FRAMINGHAM (03/17/2004) - Gary McGraw, chief technology officer at Cigital Inc. in Dulles, Va., has become a leading voice in software quality and information security. His latest book, published in February, is Exploiting Software: How to Break Code, co-authored with Greg Hoglund. He has a bachelor's degree in philosophy from the University of Virginia and a Ph.D. in cognitive science and computer science from Indiana University. He co-authored Java Security in 1996, Software Fault Injection in 1998, Securing Java in 1999 and Building Secure Software in 2001.

What makes this book different from all the other "black hat" hacker tomes, and how will the reader benefit? The biggest difference is that this book is about software security. Almost every other hacker book is about network security and classic network attacks -- what script kiddies do. That's important, but our book is about breaking software and the vulnerabilities that allow network attacks in the first place. How do you fix software? We're getting the message to the people who build stuff in addition to the people who operate stuff.

How does software quality relate to security? Can software quality locate malware hidden in source code or take commercial software beyond a Common Criteria Level 3 certification? Software security relates entirely and completely to quality. You must think about security, reliability, availability, dependability -- at the beginning, in the design, architecture, test and coding phases, all through the software life cycle. Even people aware of the software security problem have focused on late life-cycle stuff. The earlier you find the software problem, the better. And there are two kinds of software problems. One is bugs, which are implementation problems. The other is software flaws -- architectural problems in the design. People pay too much attention to bugs and not enough to flaws.
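To make the bug/flaw distinction concrete, here is a minimal C sketch; the function names and scenario are invented for illustration, not taken from the book. The bug is one bad line that can be fixed where it sits; the flaw is a trust decision baked into the design.

```c
#include <stdio.h>
#include <string.h>

/* Bug (implementation level): one bad line, fixable in place. */
void log_user(const char *name) {
    char buf[8];
    strcpy(buf, name);            /* no bounds check: the bug */
    printf("login: %s\n", buf);
}

/* Flaw (design level): the server trusts an authorization flag the
 * client supplies. No one-line edit fixes this; the design has to
 * change so the server makes the decision itself. */
void handle_request(int client_says_admin) {
    if (client_says_admin)        /* misplaced trust: the flaw */
        printf("granting admin access\n");
}

int main(void) {
    log_user("bob");              /* safe only while inputs stay short */
    handle_request(1);            /* any client can simply claim admin */
    return 0;
}
```

A code scanner can flag the first; only design-level risk analysis will surface the second.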

Things like the Common Criteria and checklist approaches to software security are a great start, but they do not guarantee security. Certification is about making sure that the claims made in the protection profile are true. If your claims are true, your software probably gets certified. The Common Criteria is a long way from guaranteed security.

Can common "garden variety" hacker exploits be prevented with proper software processes? Common hacker exploits can be fixed with good software process. Things like buffer overflows can be fixed using common code scanning. But we can't solve the more basic software flaw problem with static analysis tools.
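As a rough sketch of the kind of mechanical fix a code scanner points to -- assuming a hypothetical greet() function with an unbounded copy -- the repair is to bound the copy and terminate the buffer explicitly:

```c
#include <stdio.h>
#include <string.h>

void greet(const char *name) {
    char buf[16];
    /* The bounded copy a scanner suggests in place of strcpy;
     * strncpy may leave the buffer unterminated, so terminate it. */
    strncpy(buf, name, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    printf("Hello, %s\n", buf);
}

int main(void) {
    greet("an input long enough to have smashed the stack under strcpy");
    return 0;
}
```

This is exactly the line-level repair tools are good at; a misplaced-trust flaw like the one sketched earlier leaves no such line to point to.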

Operations people don't care about fixing bugs or flaws. If you were a network manager trying to fix broken software, you'd try to protect it with something like an application firewall. On the other hand, a builder will fix the broken stuff by trying to get rid of bugs. Then they will get more sophisticated and go after flaws and try to fix the software life cycle.

What are software certification organizations like the Software Engineering Institute and DOD/NSA doing to tighten quality to improve security and countermeasures? You must have an excellent software process first. Then you must layer software security best practices on top. One security best practice is abuse cases: What happens when somebody does something wrong on purpose? People do use cases for their software, but they rarely do abuse cases. Another security best practice is requirements analysis for security and risk analysis for design. Thinking about risks in the design and architecture is absolutely critical to finding architectural problems before they happen.
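One way to picture an abuse case is as an executable test that attacks on purpose. The sketch below is hypothetical (parse_username() is invented for illustration): the use case feeds the input a friendly user would send, the abuse case feeds deliberately hostile input.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical function under test: accepts at most cap-1 characters. */
int parse_username(const char *in, char *out, size_t cap) {
    size_t n = strlen(in);
    if (n >= cap)
        return -1;                /* reject oversized input */
    memcpy(out, in, n + 1);
    return 0;
}

int main(void) {
    char out[16];

    /* Use case: the input a friendly user sends. */
    assert(parse_username("alice", out, sizeof out) == 0);

    /* Abuse case: somebody doing something wrong on purpose. */
    char hostile[1024];
    memset(hostile, 'A', sizeof hostile - 1);
    hostile[sizeof hostile - 1] = '\0';
    assert(parse_username(hostile, out, sizeof out) == -1);

    return 0;
}
```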

You can use risk analysis to test security functionality. You can test what your software does under attack, which is a critical insight. A lot of vendors sell software security by adding features like magic crypto fairy dust. Those security features are important, but they won't solve the security problem. Our book makes it clear that the way you break a system is not by attacking security features but by figuring out what assumptions a designer or coder has made and making those assumptions go away. You must learn all about attack patterns, not particular bugs. This is the first time I've seen attack patterns written down; we have 48 of them. My hope is that a science of attacks evolves.

Is there anything to fear from hackers using higher-quality software development methods to produce exploits? Exploits don't really need to be high quality. If you write a rootkit, though, it had better be high quality: you can't take control of the operating system and cause it to crash, or you'll get noticed. The people who really need to pay attention to software quality are not the attackers; they're the people writing software. Then there are different types of hackers -- plain hackers, criminals, information warfare professionals.

Can hackers be identified by the design and implementation of their exploits, such as industrial espionage, government-sponsored information warfare, etc.? I'm not an expert here. But there are exploit kits, especially for things like viruses and worms, and you can, in theory, do historical forensics on the origins of exploits.

What are the economics of investing in software quality and security early in the design phase, in terms of things like reduced support costs, security exposure, reduced patches and higher performance? Barry Boehm, a famous software engineering guru at USC, did a well-known study. About half of software problems are caused at the requirements level; about 25% or more are caused at the design level; just a few are caused during coding. You must start thinking about problems in the design. In terms of cost, finding or removing a problem in the requirements phase is cheap. Fixing a problem in the field costs orders of magnitude more.

Patch management is really interesting. Most people don't patch their software. Hackers get the patch from the vendor, compare it to the unpatched software and find the hole that tells them where to attack. Patches are really attack maps. They're used that way every day, and we need to understand that.
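A hypothetical sketch of why a patch doubles as an attack map: if the vendor's fix is the single added length check below, diffing patched against unpatched code points an attacker straight at which field to oversize on machines that haven't applied it. All names here are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_PAYLOAD 512

int copy_payload(uint8_t *dst, const uint8_t *src, uint32_t len) {
    if (len > MAX_PAYLOAD)        /* the patch: this check was absent */
        return -1;
    memcpy(dst, src, len);        /* pre-patch, 'len' was attacker-controlled */
    return 0;
}

int main(void) {
    uint8_t dst[MAX_PAYLOAD];
    uint8_t src[MAX_PAYLOAD] = {0};
    printf("%d\n", copy_payload(dst, src, 9999));  /* now rejected: -1 */
    return 0;
}
```

The day such a patch ships, every unpatched machine is advertising exactly this hole.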

What are the trade-offs in the costs of higher software quality vs. reduced software functionality or weak security? There's a story around the (Windows) XP Service Pack 2 release. Microsoft has done some good security stuff in Service Pack 2, but it may cause some applications to stop working. Those applications were not built with security in mind. The service pack makes people think about security, which is a good thing. There is a trade-off between security and functionality, and most people have favored functionality. The trick is to use risk management to make that trade-off.

In general, what is the state of software quality in the industry today? I think that the software industry is beginning to pay real attention to software security, but they have a long way to go. Microsoft has spent $500 million or more on improving software security, and they've begun to get rid of buffer overflows. Now they need to concentrate on architectural flaws and security engineering.

Microsoft argued in an antitrust lawsuit that it could not unbundle the Internet Explorer Web browser from Windows, saying it was integral to the operating system. Should Microsoft be unbundling software to prevent exploits from affecting other parts of the system? Yes, they should be unbundling, and they know that. Mixing functionality works against separation of concerns and compartmentalization, two important security principles.

Do you ever participate in the Black Hat hacker show in Las Vegas? I do not go, but Greg (Hoglund) goes. Greg brings a hacker mentality to the book. I'm a software scientist, so I bring scientific discipline to the book.

Has a philosophy undergrad degree helped you in your career as a software quality and security guru? Absolutely. Philosophy school taught me to think. Doug Hofstadter, whom I studied under for my Ph.D. at Indiana, won a Pulitzer Prize for Gödel, Escher, Bach: An Eternal Golden Braid, a book about math, art and the music of Bach. That's probably one of the best computer books ever written. When I read it, I decided I wanted to get my Ph.D. with Doug.

How did you end up in security? When I got out of school in 1995 with my Ph.D. in cognitive science, I decided to do a start-up. I joined Jeff Payne's company, Reliable Software Technologies, which changed its name to Cigital in 1999. Java was just getting released in 1995, so I started playing around with it and wrote Java Security with Ed Felten in 1996, the first book on Java security.

Are you a musician? Yes, I play the fiddle, the mandolin and the guitar -- cheesy acoustic hippie music. I have a band with a friend, and we make CDs just for ourselves.

Mark Willoughby, CISSP, is a 20-year IT industry veteran and journalist with degrees in computer science and journalism. For the past seven years, he has tracked security and risk management start-ups and is a managing consultant at MessagingGroup, a Denver-based content development specialist.
