Computerworld

Power and Cooling in the Age of the 'Hyperscale' Data Center

While virtualization significantly boosted both data center capacity and compute power, it also raised energy consumption, drove up demand for cooling technology, and added to the workloads of those tasked with administering and maintaining servers, racks, chassis and compute nodes.

As data centers continue to expand and evolve to accommodate ever-increasing workloads and sudden demand from cloud computing, Emerson Network Power claims that its Intelligent Foundation (released last December) gives network administrators a single-source infrastructure management tool aimed directly at these hyperscale deployments.

Shifting the Paradigm

To do so, Quirk says, the outdated "access and control" paradigm must shift to a framework that provides for efficient rack resource management (i.e., power and cooling) and infrastructure management while incorporating tighter security measures.

"The thinking behind Intelligent Foundation is to optimize the efficiency of a hyperscale data center," Quirk says. "Virtualization got us a whole huge chunk of additional capacity and reduced the amount of hardware in a data center, but the next real step is how to right-size the hardware that is there, and optimize the efficiency of all that hardware."

[Related: 6 IT Strategies to Stay Ahead of Data Center Trends]

Intelligent Foundation, Quirk claims, makes it possible for administrators to manage connectivity at the hardware device level, and to write specific policies that manage each layer across the data center.

That includes managing power and data loads across the chassis level, at the rack level and at the individual node level, says Quirk. Doing so can improve efficiency and reduce the compute load on each individual component, which can lead to greater reliability, he says.

"You can have connectivity at the individual device level, and also have the ability to distribute that data across all layers within the data center so you're not always burdening one particular aspect and risking an overload and failure," he says.
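The layered monitoring Quirk describes — tracking draw per node, rolling it up to the chassis, then to the rack, and flagging anything at risk of overload — can be sketched as a simple threshold policy. This is a hypothetical illustration only; the class names, data layout and limits below are invented and do not reflect Intelligent Foundation's actual interfaces:

```python
# Hypothetical sketch of layered power monitoring: node -> chassis -> rack.
# All names and figures are invented to illustrate aggregating readings
# and flagging overloads at each layer; this is not a real product API.
from collections import defaultdict

# (rack, chassis, node) -> power draw in watts (example data)
readings = {
    ("rack-1", "chassis-A", "node-1"): 310,
    ("rack-1", "chassis-A", "node-2"): 290,
    ("rack-1", "chassis-B", "node-1"): 650,
}

LIMITS = {"node": 500, "chassis": 1000, "rack": 5000}  # assumed policy limits

def check_policies(readings, limits):
    """Return (layer, identifier, draw) entries that exceed their limit."""
    chassis_totals = defaultdict(int)
    rack_totals = defaultdict(int)
    alerts = []
    for (rack, chassis, node), watts in readings.items():
        if watts > limits["node"]:
            alerts.append(("node", f"{rack}/{chassis}/{node}", watts))
        chassis_totals[(rack, chassis)] += watts
        rack_totals[rack] += watts
    for (rack, chassis), watts in chassis_totals.items():
        if watts > limits["chassis"]:
            alerts.append(("chassis", f"{rack}/{chassis}", watts))
    for rack, watts in rack_totals.items():
        if watts > limits["rack"]:
            alerts.append(("rack", rack, watts))
    return alerts

print(check_policies(readings, LIMITS))
# flags the 650 W node, which exceeds the assumed 500 W per-node limit
```

Checking every layer against its own limit, rather than only the rack total, is what lets a policy catch a single hot node before the aggregate numbers look alarming.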

Policy-based Management Reaches the C-Suite

Intelligent Foundation, Quirk says, allows administrators to write device-specific policies to control individual elements of the data center hardware, which can improve the distribution of the management responsibilities.

"There are implications for each segment within the data center server deployment," Quirk explains. "For the individual compute nodes, you can determine the appropriate compute loads. At the chassis level, you can gauge how much power each chassis is drawing based on the compute functions that are happening inside, and at the rack level, you can easily monitor what's going on across all the chassis within the racks," Quirk says.

And the implications reach higher than just individual administrators into the C-suite, Quirk says. By enabling better management at the device levels, higher-level management and C-level executives can focus on strategic decisions that affect the business, not on whether or not there's a 'hot spot' in the data center, or on line-item power consumption.

"We can take away a lot of the mundane tasks that used to be done by networking teams, application teams and even by management, and let them focus on thinking strategically and handling major technology and service issues, not on the minutiae and the mundane," he says.

Heightened Security

Because Intelligent Foundation sits behind a firewalled, single-IP access point, Quirk says, security is baked into the solution. Policies can be written at the device level not only to alert administrators when patches and updates are needed, but also to automate the redress of simple issues. This can also be a great selling point for potential customers, he adds.

"When you get right down to it, data centers exist to connect users to applications, and anything that takes away from delivering that service-level agreement (SLA) to your customers is not productive," Quirk says.

"But because Intelligent Foundation does the monitoring and alerting, you can deliver a higher SLA and a better customer experience," Quirk says. "Not only that, but how many data center operators are always on top of patches, BIOS code, firmware updates?" The amount of time and effort to manage and maintain those updates is astronomical, so being able to automate some of those tasks is a huge relief, says Quirk.
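The patch-tracking burden Quirk describes — knowing which BIOS and firmware revisions every device is running — amounts to comparing an inventory against a target catalog. A minimal sketch, with entirely invented device names, version strings and data layout (nothing here reflects Intelligent Foundation's real interfaces):

```python
# Hypothetical sketch of a patch-currency check: compare each device's
# reported versions against a target catalog and report anything stale.
# All names and version numbers are invented for illustration only.
inventory = {
    "node-1": {"bios": "2.1", "firmware": "5.4"},
    "node-2": {"bios": "2.0", "firmware": "5.4"},
}
targets = {"bios": "2.1", "firmware": "5.4"}  # assumed current versions

def stale_components(inventory, targets):
    """Yield (device, component, installed, target) for out-of-date items."""
    for device, versions in sorted(inventory.items()):
        for component, installed in versions.items():
            if installed != targets[component]:
                yield (device, component, installed, targets[component])

for alert in stale_components(inventory, targets):
    print("update needed:", alert)
# reports node-2's BIOS, which lags the assumed 2.1 target
```

Run on a schedule, a check like this is what turns "are we on top of firmware updates?" from a manual audit into an automated alert.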

[Related: Criminals Increasingly Attack Cloud Providers]

Because Intelligent Foundation is policy based, it can also help data centers defend against common intrusion attacks and other attempts to exploit vulnerabilities, Quirk says.

"By setting policies that send alerts when server vulnerabilities are found, for instance, it can be quick and simple to patch those," he says. "And by patching quickly, and then distributing changes across the data center, you're extending the time it takes hackers to find a vulnerability. And the longer it takes them, the more likely they are to be caught. All around -- the reliability, the efficiency, the security -- Intelligent Foundation is a great way to give your customers a better SLA," Quirk says.

Sharon Florentine covers IT careers and data center topics for CIO.com. Follow Sharon on Twitter @MyShar0na. Email her at sflorentine@cio.com. Follow everything from CIO.com on Twitter @CIOonline and on Facebook.

Read more about data center in CIO's Data Center Drilldown.