One week I get to see Tim Berners-Lee, the Father of the web, and the next week I get to meet with the “Mother of the internet”. What more could a networking editor ask for?
“Mother of the internet”. That’s kind of a strange marketing sound bite. I cringe when people emphasise my gender, because it’s really a very small part of my life, especially my professional life. Recently a recruiter for a company sent me email saying, “We are particularly interested in you as a female thought leader.” I didn’t reply, because I wasn’t interested in a job, but I fantasised about replying: “Thank you for your interest. Although my credentials as a thought leader are impeccable, I must warn you that I am not that qualified as a female. I can’t walk in heels, I have no clothing sense, and I’m not particularly decorative. What aspects of being female are important for this position?”
What exactly does a distinguished engineer do?
The job is not that well defined, so I get to kind of do whatever I want. I enjoy talking to various groups in and out of Sun to find out what they’re doing, and often I get intrigued by some problem that needs to be solved. Or I meet two groups that ought to know each other and I introduce them. At Sun Labs it’s nice if we do things that make the company money, but it’s also nice if we change the world. Though if all I did was change the world and never made the company money, I assume that would be a bad thing.
What’s your take on the state of networking and security research these days?
The taste of whoever is in the funding agencies tends to cause everyone to look at the same stuff at the same time. Often technologies get hot then go away. There was active networking for a while, which always mystified me and has now died. In security, the money is behind digital rights management, which I think ultimately is a bad thing — not that we need to preserve the right to pirate music, but because the solutions are things that don’t solve the real problems in terms of security. The few dishonest people will always manage to steal things. But most people are basically honest, and are willing to pay if you make it convenient. If there’s a trust relationship there, most people will wind up buying things. I hate to see so much emphasis on digital rights management.
Where should the funding go?
The thing that seems absolutely unsolvable but that we have to solve is the user interface stuff. Everything is so complicated. People tell you to turn off cookies because they are dangerous, but you can’t talk to anything on the web without using them. People build this horribly complicated software, put up all these mysterious pop-up boxes and then blame the users when things don’t go right. I keep hearing people say, like with distributed denial of service, that there are all these grandmothers out there who don’t know how to maintain their systems. Don’t blame the grandmothers; blame the vendors. Liability is one of those things I don’t understand. If somebody makes a toy and some kid manages to stick a piece up his nose and dies from it, that company has to pay millions of dollars because everyone is so sympathetic. But in the software industry, when you install something there is this 9,000-page legalese that basically says: “We have no idea what this thing does, we’re not claiming it does anything, if it remotely does anything useful you should be grateful to us, but you shouldn’t blame us if it doesn’t do what you expect.” And they get away with it!
We could use more standards, such as with document formats. Customers would be better off, but it’s not really in the interest of the vendors to do that. And customers actually don’t want standards; they want the cheapest thing that works.
Even though it’s good for them, how do you convince people to eat broccoli instead of chocolate bars?
Yeah, broccoli can be tough. Even tougher might be getting them to use something called an “ephemeriser”.
What’s this security project of yours all about?
You want to be able to create files that have expiration dates, and to make lots of copies of all of your storage, so that even if your datacentre burns down you can buy a brand new machine, reinstall the file system from scratch, get your backup tapes, and recover all the data that hasn’t expired, while not being able to recover any data that has expired.
You want to be able to do this in a way that is very scalable, without losing performance, and with key managers that manage time-release keys in such a way that you don’t really have to trust them.
We’ve been working on it for a few years and it’s been evolving. Originally the design was that every time you opened a file that had an expiration date you had to go to a key manager, like an external site, and ask it to unlock the file for you. When I tried to sell it to the file system groups, they were unhappy about the overhead every time you opened a file, and about the whole bunch of information you’d have to keep in the header of the file. After that I changed it so that only after a file system recovers from a crash does it have to ask for one decryption from an outside agent; otherwise it works autonomously, so it has no performance problems. In the header of a file all I need is about four bytes for a key ID.
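The idea she describes can be sketched roughly as follows. This is a hypothetical Python illustration, not her actual design: the class names and the `store`/`recover` helpers are invented here, and the XOR routine is an insecure stand-in for a real cipher, kept only to make the key-deletion idea concrete. The point it shows is the one from the interview: the file header carries only a 4-byte key ID, and once the key manager discards an expired key, no copy of the data, backups included, can ever be decrypted.

```python
import secrets
import struct
import time


def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for a real cipher: XOR with a repeating key.
    (Not secure; it only makes the key-deletion idea concrete.)"""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


class KeyManager:
    """Holds time-release keys. Deleting an expired key makes every
    copy of the data it protected (backups included) unrecoverable."""

    def __init__(self) -> None:
        self._keys: dict[int, tuple[bytes, float]] = {}

    def new_key(self, expiry: float) -> int:
        key_id = struct.unpack(">I", secrets.token_bytes(4))[0]
        while key_id in self._keys:  # avoid the rare 4-byte collision
            key_id = struct.unpack(">I", secrets.token_bytes(4))[0]
        self._keys[key_id] = (secrets.token_bytes(32), expiry)
        return key_id

    def expire_now(self) -> None:
        """Throw away every key whose expiry time has passed."""
        now = time.time()
        self._keys = {k: v for k, v in self._keys.items() if v[1] > now}

    def unlock(self, key_id: int) -> bytes:
        return self._keys[key_id][0]  # KeyError once the key is deleted


def store(manager: KeyManager, data: bytes, expiry: float) -> bytes:
    """Encrypt a file; only a 4-byte key ID goes in its header."""
    key_id = manager.new_key(expiry)
    return struct.pack(">I", key_id) + xor_crypt(manager.unlock(key_id), data)


def recover(manager: KeyManager, blob: bytes) -> bytes:
    """E.g. after restoring from backup: one key-manager lookup,
    then everything else happens locally."""
    (key_id,) = struct.unpack(">I", blob[:4])
    return xor_crypt(manager.unlock(key_id), blob[4:])
```

In this sketch, unexpired files survive any number of crashes and restores, while a file whose key has been discarded raises `KeyError` on recovery no matter how many backup copies exist.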
What form might this all take in products?
The intention is that it will get built into file systems, but I’m just in research and who knows when and if things will happen. I’m optimistic that it will and fairly soon.
I’ve read your “Algorhyme” poem about spanning tree [which plays on Joyce Kilmer’s “Trees”]. I’m thinking that coming up with words that rhyme with “ephemeriser” might be tough.
True. The spanning tree one just sort of came out. My son actually set that poem to music and my daughter and I had a chance to perform it at a concert at her office at MIT’s Lincoln Laboratory.
What else is on your plate?
I’ll tell you, but there’s a story behind it. A couple of years back there was this Boston Globe article about a hospital network melting down and in the middle of it was mentioned the spanning tree algorithm. I’m thinking: those are words that don’t belong in a Boston Globe article even if spanning tree was involved.
Eventually, we tracked down the company providing the switches and indeed it was a giant bridged network. Bridging was never intended to do that: it was kind of a hack because people at the time were all confused about what Layer Three was and they thought Ethernet was a competitor to DECnet.
With bridges, we did such a good job and it was so plug-and-play that you didn’t have to think about them, so people are still taking large networks and doing bridges. As it turns out, people kind of believed IP must have been the best protocol ever because it just took over the world (just like English must be the best language ever because it’s going to take over the world, but no, it has nothing to do with how good a language it is). DECnet would have been a much better protocol for the world to have adopted. It had a lot of advantages, like a larger address space (we’re still talking about whether IPv6 will ever happen, and if it does, there’s nothing better about it than what we could have had 15 years ago).
One of the advantages DECnet had was the ability to have a whole campus that was zero-configuration, that all had the same prefix, so you didn’t have to divvy up your address space for every link like IP does. But given that companies didn’t go in that direction, they’re using bridging, which is inherently more fragile, especially when you take that notion and try to make it more responsive by doing all these things that involve lots of configuration. If you get the configuration wrong, things can melt down. You shouldn’t be stressing it really hard. One of the things I’m trying to do now, given that we’re stuck with IP, is come up with something that gives you the advantages of bridging, so it can be all zero-configuration within a campus, all looking like one big prefix, without being confined to just transmitting data along the spanning tree. You’d be able to use shortest paths, and it would be safer if you have temporary loops, so it shouldn’t melt down.
About a year ago we finally got through the politics and got an IETF working group started called TRILL, which stands for “Transparent Interconnection of Lots of Links.” I’d written a paper about this five years ago and have been trying to sell it to the various standards bodies. I’m pretty sure it will get implemented. There are a lot of companies asking the sort of questions that would only be asked if they were planning to.
So TRILL is kind of like a new spanning tree?
You can think of it as a replacement to spanning tree that has the same properties of being zero-configuration, just plug it together and it works and it looks like one big thing but performs better because you have optimal paths. With spanning tree it’s like taking the highway system and saying you don’t need both Routes 128 and 495 [local roads in Massachusetts] just because they both sort of go in the same direction.
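Her highway analogy can be made concrete with a toy topology. This is an illustrative sketch only, not TRILL’s actual mechanism: the four-switch ring and all the helper names are invented here. A spanning tree must block one of the ring’s links (the unused highway), so traffic between two directly connected switches can be forced to take the long way round, while routing over the real topology uses the one-hop link.

```python
from collections import deque

# Hypothetical four-switch campus: a ring A-B-C-D-A.
# A spanning tree must block one link; a routed fabric would not.
graph = {
    "A": ["B", "D"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["A", "C"],
}


def bfs_tree(graph, root):
    """Parent pointers of a spanning tree grown from the root,
    standing in for the bridge spanning tree."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return parent


def tree_hops(parent, a, b):
    """Hop count when traffic must stay on the tree."""
    def to_root(n):
        path = [n]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path
    pa, pb = to_root(a), to_root(b)
    meet = next(n for n in pa if n in pb)  # lowest common ancestor
    return pa.index(meet) + pb.index(meet)


def shortest_hops(graph, a, b):
    """Hop count over the real topology, which is what
    shortest-path forwarding could use."""
    dist = {a: 0}
    queue = deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return dist[node]
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
```

With the tree rooted at A, traffic from D to C travels D–A–B–C (three hops) even though D and C share a direct link, exactly the “you don’t need both highways” situation she describes.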
Jeez, I wasn’t even going to ask you about spanning tree. I figured it was old news.
Something else you would think is old news is my thesis from 1988 on how to design a network with the property that even if some of the routers were really malicious and trying to do bad things (lying about who they were connected to, flooding the network with garbage, etcetera), you could still cope. My thesis sounded really hard and important when I proposed it, but the solution turned out to be embarrassingly simple.
I found out years later that University of Washington networking people were required to read it. The thing was, though, that my proof of concept required a small enough network that all of the routers in it could keep track of all the source-destination pairs talking at the same time. Recently I was discussing my thesis with someone else and we realised it does not extend to larger networks where you need hierarchy. We had to rethink it and do it in a totally different way, which also has implications for congestion control. That’s another paper I’ve been working on recently.
Speaking of schools, I understand you aren’t thrilled about how networking is being taught these days.
I get frustrated. Universities tend to teach it like it’s a trade school, as if the only thing that ever existed is TCP/IP. The attitude seems to be that everything about it is perfect, so you just need to get your students to learn how to use it and write applications to it.
But there are a lot of problems with this field where people just repeat things and nobody questions them anymore, including in textbooks that are used at reputable universities. There’s a lot in there that’s just wrong. Like that ISO failed because it had too many layers. Or that if everything were encoded in XML it would all be interoperable, or that security problems will go away once you have IPv6.
What I’d like to see more of, and what I tried to do in [my book] Interconnections is to get people to think about things conceptually. One problem is that the books out there today only tend to deal with one or two layers and if they do all of networking they tend to only be strong in the areas of the writer’s expertise. I’ve thought of collaborating with others on a book that would look at all of networking.