Computerworld

Climate change arrives in NZ datacentres

High-density servers are posing challenges for local datacentre managers

Gartner recently predicted that by 2008 up to 50% of datacentres will be unable to meet the increased power consumption and cooling requirements of high-density equipment such as blade servers, but according to operators in New Zealand, the crunch point may already have been reached.

Datacentres are becoming increasingly wary of hosting high-density servers, and an overheating incident at Orcon earlier this year may have been unusual only in that it was reported.

Sean Weekes, general manager of ICONZ, says server overheating in datacentres is “quite common.”

“The basic underlying reason is that many datacentres were built a few years ago with the cooling installed around the periphery. That used to be enough.”

But an unwelcome side effect of Moore’s Law means server heat output is increasing in line with faster processor speeds.

“Every 18 months you get a server that puts out double the amount of heat,” says Weekes. “At the same time you are seeing a greater number of servers and processors in each rack. What used to be in a 2U or 3U box [a standard unit of rack space] will now fit in 1U, which means that a standard 42U server rack which used to hold perhaps ten servers is now packing in 30 or more. It’s a massive compounding of heat output and cooling requirements in a given space.”
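As a back-of-envelope illustration of that compounding, a single rack refresh can multiply the heat load several times over. The per-server wattages below are assumptions chosen purely for the arithmetic, not ICONZ figures:

# A back-of-envelope sketch of the compounding Weekes describes.
# The per-server wattages are illustrative assumptions, not ICONZ figures.
old_servers_per_rack = 10      # 2U-3U boxes in a 42U rack
new_servers_per_rack = 30      # 1U boxes packed into the same rack
watts_per_old_server = 300     # assumed draw of an older, slower server
watts_per_new_server = 600     # assumed draw after one doubling of heat output

old_rack_kw = old_servers_per_rack * watts_per_old_server / 1000   # 3.0 kW
new_rack_kw = new_servers_per_rack * watts_per_new_server / 1000   # 18.0 kW

# Almost all of that electrical load becomes heat the cooling plant must remove.
print(f"Per-rack heat load rises from roughly {old_rack_kw} kW to {new_rack_kw} kW")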

Weekes says that the ideal temperature in a server room is at the lower end of the 17-20 degrees centigrade range, as measured in the middle of the rack. Servers will usually start running into problems at between 28 and 35 degrees — temperatures which Weekes says have been recorded at datacentres in Auckland.

What happens next depends largely on the make and model of the server. More modern or intelligent servers will start sending out warning messages, while older servers will simply shut down with little or no warning.

While it is very unlikely that servers will suffer physical damage as a result of overheating, Weekes says a sudden shutdown could wreak havoc with data.

“It depends on what the server is doing. A proxy server, where the information is likely to be replicated elsewhere, will cause the least problems but if it is a database or SQL server and the data is not replicated, the chances are that you will lose data or, worse, end up with a corrupt database. If your datacentre is overheating then my advice is to get out of there, otherwise your investment could be ruined in a short space of time.”

Brett Herkt, managing director of Maxnet, says his company takes cooling and electricity supply issues very seriously as it is increasingly specialising in high-availability or mission-critical hosting.

Maxnet’s high-availability server room is cooled by two fully redundant, externally situated water coolers which maintain a constant temperature of 20 degrees and 65% humidity.

Maxnet has also spent over $500,000 on power system upgrades over the past year, including fully redundant ‘A’ and ‘B’ power supplies with their own transformers and uninterruptible power supplies.

Herkt admits that Maxnet once suffered an overheating incident — due to a malfunctioning cooling system which started to blow hot air into the servers it was meant to be protecting — but this occurred “four or five years ago” when the company was a relatively small residential ISP.

He says overheating problems can be disruptive because they can develop quite rapidly. “It can take only half an hour to an hour before temperatures start to become unacceptably high. It’s very difficult to get your customers to do a controlled shutdown in that time, and an uncontrolled shutdown is a nightmare.”

Andrew McMillan, a director of Wellington-based developer and hosting company Catalyst, says upgrading the cooling capacity of his company’s server room to keep up with new equipment is a constant challenge. Catalyst has also been forced to upgrade the electricity substation in its building to accommodate increased power consumption. While the company monitors its servers extensively, with alarms and alerts to warn of changes in temperature and humidity, so far customers have not asked to be provided with this information.

McMillan says server virtualisation (running four or five ‘virtual’ servers on one physical machine) could go some way towards alleviating cooling and power consumption problems, but a simpler answer could be to accept lower-performance servers. Catalyst runs a mixture of servers ranging from the latest blade types to older, lower-performance, lower-power models, and McMillan says the latter are adequate for most applications.

“The market has to move towards lower power servers to cut down on the thermal load, and I think we are starting to see elements of that happening.”

Weekes says that technically speaking, it is relatively easy to monitor the temperature of servers remotely, but this is information that datacentre customers will need to ask for.

“Many servers are intrinsically capable of sending IP messages in the case of overheating and it’s easy to put in a sensor to monitor the rack temperature. But this is something you must ask the datacentre to put online. A datacentre is not going to send you an email telling you that the temperature in their server room has reached 26 degrees but so far everything is working okay. They’re going to keep quiet about it the moment it starts to get warm.”
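A minimal sketch of the kind of rack-temperature alerting Weekes is describing might look like the following. The sensor-reading function and the thresholds are hypothetical; a real deployment would poll the probe over whatever interface the datacentre actually exposes, such as SNMP, IPMI or a vendor tool.

import time

WARN_AT = 26.0       # degrees C: already well above the ideal 17-20 range Weekes cites
CRITICAL_AT = 28.0   # degrees C: the point at which he says servers start to struggle

def read_rack_temperature() -> float:
    """Hypothetical placeholder for a mid-rack temperature probe reading."""
    raise NotImplementedError("wire this to the rack's actual sensor")

def watch_rack(poll_seconds: int = 60) -> None:
    # Poll the sensor and raise the alarm before an uncontrolled shutdown becomes likely.
    while True:
        temp = read_rack_temperature()
        if temp >= CRITICAL_AT:
            print(f"CRITICAL: rack at {temp:.1f} degrees - start controlled shutdowns")
        elif temp >= WARN_AT:
            print(f"WARNING: rack at {temp:.1f} degrees - check the cooling plant")
        time.sleep(poll_seconds)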

At the ICONZ datacentre, Weekes ensures that the operating temperature stays within limits by allowing a generous safety margin in the centre’s cooling capacity.

“As a rule of thumb you need to allow for double the cooling capacity required. Last week we installed a 30kW cooling unit and we have ordered a 60kW unit. That will give us about 2.4 times the capacity needed, based on the estimated growth of the datacentre.”
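Working backwards from those figures gives a sense of the margin involved; the estimated load below is inferred from Weekes’ numbers rather than stated directly.

# The cooling ICONZ will have installed, per Weekes.
installed_kw = 30 + 60                    # existing 30kW unit plus the 60kW unit on order
headroom_factor = 2.4                     # the multiple of required capacity he quotes

estimated_load_kw = installed_kw / headroom_factor    # roughly 37.5 kW of projected heat load

# His rule of thumb: provision at least twice the cooling capacity you calculate you need.
rule_of_thumb_kw = 2 * estimated_load_kw              # roughly 75 kW
print(f"Projected load ~{estimated_load_kw:.1f} kW; rule of thumb says install ~{rule_of_thumb_kw:.0f} kW")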

Weekes says the same goes for the related problem of the increased electricity consumption of high-density servers.

“A rack that is fully populated with 1U servers is going to lead to a huge increase in electricity consumption and a lot of companies do not calculate that up front when building a datacentre. For a datacentre that is consuming 250kVA [kilovolt-amperes] you should be specifying for about 400kVA. And if you specced out 400kVA then all of your cabling as well as your backup generators should be up to that standard. You also need to check with your lines company that 400kVA is available as a continuous 24/7 service — not just peaks — otherwise the transformer gets very warm.”
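Expressed as arithmetic, Weekes’ provisioning rule amounts to roughly 60% headroom over current draw; the ratio below simply restates his 250kVA and 400kVA figures.

# Weekes' power-provisioning rule of thumb, restated as a ratio.
consumed_kva = 250                        # current continuous draw of the datacentre
specified_kva = 400                       # what he says should be specified up front

headroom = specified_kva / consumed_kva   # = 1.6, i.e. about 60% headroom

# The same 400kVA figure then sets the sizing for cabling, backup generators
# and the continuous (not peak) supply agreed with the lines company.
print(f"Provisioning headroom: {headroom:.1f}x current consumption")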

Herkt says the increased cooling requirements and higher electricity consumption of blade servers are likely to change the basis on which datacentres charge their clients. Maxnet is already spending $250,000 a year on electricity, and every kilowatt consumed by the servers demands a further 1.2 to 1.3kW to run the cooling equipment, which also takes up a lot of space.

“If every customer was using blade servers we’d be spending literally millions of dollars on cooling equipment and we’d probably have to buy up the units next door to us and demolish them just to make room.”

At present Maxnet includes an electricity consumption allowance of up to 1.5kVA per rack, but Herkt says blade server applications are drawing power well in excess of these limits. For example, one application for a large financial services organisation, which will be running on blade servers across two racks, will consume 8kVA alone. Tellingly, Herkt claims that apart from Maxnet, only one other datacentre in Auckland was prepared to host this application in a blade server environment.
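Putting Herkt’s numbers together shows why blade racks strain the existing charging model. The sketch below simply combines the 1.5kVA allowance, the 8kVA two-rack deployment and the 1.2 to 1.3kW of cooling he says each kilowatt of server load demands, treating kVA and kW as roughly equivalent purely for the sake of comparison.

# Rough comparison of Maxnet's standard allowance with the blade deployment Herkt cites.
# kVA and kW are treated as roughly equivalent here purely for illustration.
allowance_kva_per_rack = 1.5
blade_load_kva = 8.0                  # the financial services application, across two racks
racks = 2

included = allowance_kva_per_rack * racks        # 3.0 kVA covered by the standard allowance
excess = blade_load_kva - included               # 5.0 kVA beyond what the pricing assumes

cooling_overhead = 1.25                          # midpoint of the 1.2-1.3 kW per kW Herkt cites
total_draw_kw = blade_load_kva * (1 + cooling_overhead)   # servers plus the cooling to match

print(f"Blade deployment exceeds the allowance by {excess:.1f} kVA "
      f"and implies around {total_draw_kw:.0f} kW of total draw including cooling")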