Computerworld

Google's slack approach to server management

An internally developed package is serving Google's servers well

Maintaining a large number of Linux servers to power its search and web application services is at the heart of Google’s business and, until now, has remained a closely guarded secret.

Speaking at the recent Australian Unix Users Group (AUUG) 2006 conference, Google corporate systems administrator Michael Still lifted the lid on some of the tools Google uses internally to manage clusters of servers.

Rather than relying on standard Linux packages, Google developed its own software, dubbed “Slack”, and released it as an open source project a year ago. Still says his speech at the AUUG conference was the first time the search giant has talked about it publicly.

“Slack is a source deployment system and it’s the way we install applications on servers,” Still says, adding that Slack is built around a centralised configuration repository whose contents are deployed to selected machines via a “pull” model. Each of the “worker” machines asks for its new configuration at regular intervals, or when a manual command is run.
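The pull model Still describes can be sketched with a toy local simulation. The directory paths and role name here are invented for illustration, and a plain copy stands in for the network transfer a real worker would perform.

```shell
#!/bin/sh
# Hedged sketch of the "pull" model: each worker machine fetches its
# roles from a central repository, either on a schedule or when an
# administrator runs a manual command. All paths are hypothetical.
set -e

REPO=/tmp/slack-demo/repo          # stands in for the central config repository
WORKER=/tmp/slack-demo/worker      # stands in for a worker machine's local copy

# Populate a toy repository with one role.
mkdir -p "$REPO/roles/ldap-slave"
echo "slapd config v1" > "$REPO/roles/ldap-slave/slapd.conf"

# The worker "asks for its new configuration" -- here a local copy
# stands in for the pull a real worker would do over the network.
mkdir -p "$WORKER"
cp -a "$REPO/roles/ldap-slave" "$WORKER/"

cat "$WORKER/ldap-slave/slapd.conf"
```

In practice the fetch would run from cron or an equivalent scheduler, which is what makes the regular polling automatic.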

“An application install is called a Slack role, so if you have an LDAP slave, you have an LDAP slave role,” Still says. “You can have more than one role per machine although if the roles are going to tread on each other then your installs will have to handle how to deal with that.”

With Slack, Google system administrators build changes or patches against the source control system for configuration. These changes are checked into the central repository, and then to the “Slackmaster”, which Still says is “nothing special”, just an rsync server.
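The check-in-then-mirror flow can be mimicked locally. The directory names below are assumptions, and the script falls back to a plain copy when rsync is not installed, since Still's point is only that the “Slackmaster” is an ordinary rsync server.

```shell
#!/bin/sh
# Sketch of propagating a checked-in change to the "Slackmaster",
# which the article says is "nothing special", just an rsync server.
# Directory names are hypothetical; a real setup would rsync to a
# remote host rather than a local directory.
set -e

CHECKOUT=/tmp/slackmaster-demo/checkout   # working copy after check-in
MASTER=/tmp/slackmaster-demo/master       # stands in for the Slackmaster

mkdir -p "$CHECKOUT/roles/ldap-slave" "$MASTER"
echo "slapd config v2" > "$CHECKOUT/roles/ldap-slave/slapd.conf"

# Mirror the repository onto the master. Use rsync if present,
# otherwise fall back to a plain copy so the sketch stays runnable.
if command -v rsync >/dev/null 2>&1; then
    rsync -a "$CHECKOUT/" "$MASTER/"
else
    cp -a "$CHECKOUT/." "$MASTER/"
fi

cat "$MASTER/roles/ldap-slave/slapd.conf"
```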

Slack also supports sub-roles for specific parts of an application, and both pre- and post-install scripts.
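A role install with pre- and post-install hooks might look like the following. The layout (a `files/` directory plus `preinstall` and `postinstall` scripts) and every name in it are assumptions made for illustration, not Slack's actual on-disk format.

```shell
#!/bin/sh
# Hedged sketch of a role install with pre- and post-install scripts,
# as the article describes. Order: preinstall hook, copy the role's
# files into place, postinstall hook. All paths are hypothetical.
set -e

ROLE=/tmp/role-demo/roles/ldap-slave
TARGET=/tmp/role-demo/installed
LOG=/tmp/role-demo/order.log

mkdir -p "$ROLE/files" "$TARGET"
: > "$LOG"
echo "slapd config" > "$ROLE/files/slapd.conf"

# Create toy hook scripts that record when they ran.
printf '#!/bin/sh\necho pre >> %s\n'  "$LOG" > "$ROLE/preinstall"
printf '#!/bin/sh\necho post >> %s\n' "$LOG" > "$ROLE/postinstall"
chmod +x "$ROLE/preinstall" "$ROLE/postinstall"

# Run the install in order.
"$ROLE/preinstall"
cp -a "$ROLE/files/." "$TARGET/"
echo files >> "$LOG"
"$ROLE/postinstall"

cat "$LOG"
```

A sub-role for part of an application could, under the same assumptions, simply be another such directory nested beneath the parent role.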

Still says there are alternatives to Slack, the most obvious being operating system packages, but one advantage of Google’s system is there is “no intermediate binary compact form” of the Slack role.

“So it’s reasonably easy to go poke around with just the bit you need without going and rebuilding an entire RPM.”

While there is no concept of rolling back a Slack role, if something is broken “you fix it and redeploy it everywhere.

“If you really regret that a machine is not an LDAP slave for instance, you have a repeatable operating system install [so] rebuild it for whatever it was meant to be.

“We can get a new server up in probably half an hour.”

There is also no logging of what Slack roles were deployed and when, but Still says that will be fixed soon.

Still believes none of what Google does is gospel and is sure there are other “equally valid” ways of keeping systems in check.

“If you’ve only got six machines then the answer might be to spend US$100,000 per machine but if you are going to build apps based on whitebox hardware then you have to assume that hardware is going to fail reasonably regularly,” he says.

The exact number of servers used by Google is kept under wraps, but speculation puts the figure in the tens to hundreds of thousands. “Generally you architect things, even smaller internal corporate apps, so that when things fail the app stays up,” Still says.