Unisys NZ provides in-depth look into the Kiwi data centre market

Vendors announcing new data centres in New Zealand usually make a big thing of stressing their Tier 3 rating. But is it important?

Unisys country manager Steve Griffin maintains that design is only a small part of today's overall reliability equation.

In today’s world of 'location-less' data, businesses need a different approach to selecting data centre solutions that takes into account more than just the 'four walls'.

Data centre tiers were historically viewed as industry benchmarks for facility design and theoretical reliability - they provided a consistent measure to help define the capability of a data centre building.

Unfortunately, conceptual design outcomes do not always mirror reality, and in many cases, a better approach is to look at the history of what the data centre has delivered - as they say, history predicts.

This is because the way a site is actually managed, maintained and operated has a greater impact on the availability and reliability achieved than the paper-based, theoretical design parameters suggest.

There are a number of notable challenges associated with following or mandating the theoretical design approach, two of which are:

1) The market often ends up with excess capacity and capability as a number of vendors build capacity to meet potential market demand.

In the New Zealand context, that potential demand is relatively small. The side effect is that the cost of over-supply is ultimately passed on to the customer through inflated pricing, which hinders the expected uptake of capacity.

A subsequent premium price point is maintained, and the cycle continues.

2) Sites take time to mature in their operation and management, and early-life failures of technology and process can and do occur, creating a service delivery risk for customers.

This in turn slows uptake of the available floor-space as the perception of reliability is tarnished - customers typically do not want to be early adopters of the technology and capability used to deliver and support mission-critical systems.

It is understandable why the focus on data centre buildings and associated plant has historically been relevant, particularly at a time when network bandwidth was expensive, slow and not fit for purpose.

Infrastructure technology was very expensive and lacked the capability to deliver highly available systems within a data centre, let alone across diverse ones.

These features of the time more or less mandated a need to build data centres with exceptionally high availability metrics.

The problem is that these thought processes still permeate specifications and procurement decisions today when, for the most part, they are no longer relevant.

If you consider the delivery of services from data centres today, the paradigm has shifted considerably for two fundamental reasons:

1 - Both data centre providers and the Uptime Institute (UTI) have realised that delivering data centre availability is more about the management and operation of the facility:

a. having robust preventative maintenance procedures, including plant lifecycle management

b. maintaining good engineering and implementation practices

c. having good change, problem and capacity management systems and processes in place

d. providing a healthy work environment for staff.

Running data centres with a focus on these areas delivers availability and reliability outcomes not dissimilar to those possible from certified Tier 3 sites (noting, of course, that UTI has removed the expected uptime percentage from a given tier classification).

2 - Exploitation of Moore's law:

a. reliable low-latency, high-bandwidth networks are ubiquitous

b. storage and database technologies are easily capable of delivering reliable data replication

c. network technologies to load balance inside and outside of the data centres are standard features in network designs

d. virtualisation provides levels of transportability through abstraction of compute, storage and network resources.

These capabilities combine to make virtual data centres, delivered across geographically diverse locations, a reality.
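To make that concrete, here is a minimal sketch of how an application might read and write against two replicated sites as though they were one logical data centre. It is illustrative only: the dc-a/dc-b endpoint URLs, the /records path and the use of Python's requests library are assumptions made for the example, not a description of any particular provider's service.

# Illustrative sketch only - the endpoint URLs and paths below are hypothetical.
import requests  # third-party HTTP client, assumed to be installed

SITES = [
    "https://dc-a.example.co.nz",  # hypothetical site A
    "https://dc-b.example.co.nz",  # hypothetical site B
]

def read_record(record_id: str) -> dict:
    """Read from whichever replicated site answers first."""
    last_error = None
    for site in SITES:
        try:
            response = requests.get(f"{site}/records/{record_id}", timeout=2)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as exc:
            last_error = exc  # that site is unavailable, try the next one
    raise RuntimeError("no site available") from last_error

def write_record(record_id: str, payload: dict) -> None:
    """Write to both sites; a real system would lean on storage-layer replication."""
    for site in SITES:
        response = requests.put(f"{site}/records/{record_id}", json=payload, timeout=2)
        response.raise_for_status()

The application code neither knows nor cares which physical building served the request - the availability story sits in the replication and routing, not in the walls of either site.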

Customers are starting to exploit such capabilities to create systems which are nominally agnostic to the location of the data, i.e. they are “location-less”.

By way of example, the traditional perspective of production running in site A and DR/ITSCM being delivered from site B is breaking down - today, customers are running systems which are able to 'flip-flop' between sites without interruption or any change in operational processes or design.
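A rough sketch of that 'flip-flop' behaviour might look like the following - the site names, health-check URLs and routing hook are placeholders, and in practice the switch would typically be handled by a load balancer or DNS weighting rather than a standalone script.

# Illustrative sketch only - names and URLs are placeholders, not real services.
import time
import requests  # third-party HTTP client, assumed to be installed

SITES = {
    "site-a": "https://dc-a.example.co.nz/health",  # hypothetical production site
    "site-b": "https://dc-b.example.co.nz/health",  # hypothetical second site
}

def is_healthy(url: str) -> bool:
    """A site is considered healthy if its health endpoint answers with HTTP 200."""
    try:
        return requests.get(url, timeout=2).status_code == 200
    except requests.RequestException:
        return False

def route_traffic_to(site: str) -> None:
    # Placeholder hook: a real implementation would update load balancer or DNS weights.
    print(f"routing production traffic to {site}")

def monitor(active: str = "site-a", interval_seconds: int = 30) -> None:
    """Keep production on whichever site is healthy, swapping sides transparently."""
    while True:
        if not is_healthy(SITES[active]):
            standby = next(name for name in SITES if name != active)
            if is_healthy(SITES[standby]):
                active = standby
                route_traffic_to(active)
        time.sleep(interval_seconds)

Either site can carry production at any moment, and neither the design nor the day-to-day operational processes change when the workload moves - which is what makes such systems effectively location-less.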
