Computerworld

Grow your data center with colocation

Brian Burch knew the moment had arrived. Two of his data center's key services -- availability and business continuity -- needed fast and dramatic improvement.

Design and location limitations meant that his company's existing data center couldn't be upgraded to deliver the required gains in function and performance.

So Burch, senior worldwide infrastructure director of Kemet, a capacitor manufacturer headquartered in Simpsonville, S.C., decided last year that it was time for his data center to split.

Even in today's challenging economy, enterprises are facing rising internal and external demands for IT services. When an existing data center can no longer shoulder an enterprise's IT burden alone, or when it becomes necessary to establish a secondary site to provide enhanced business continuity or regional network support, an important decision point has been reached.

Colo 101

By one analyst's count, there are more than 400 providers of colocation services -- known as colo for short -- offering a huge range of options and price points.

Colocation is different from traditional hosting, which IT folks may be more familiar with. In a hosting arrangement, the service provider usually owns the hardware, software and other infrastructure that serve up your applications. Providers can specialize in different types of services -- application hosting, website hosting, database hosting and the like.

In contrast, colocation customers own their servers, routers and other hardware and often tend to this gear with their own employees (although customers can pay for "remote hands" services -- having the vendor restart a server, say -- so their IT staffers don't have to travel to the facility just for that).

Some colo providers specialize by going after SMBs, financial services firms or other categories of customers.

There are two general types of colocation providers: wholesale and retail. Wholesale colocation providers deal with large spaces -- a 10,000-square-foot data center, for example. Except for the power and cooling infrastructure, it's essentially empty space. The customer, or tenant, does the work of rolling in the servers and racks, cabling up the gear and making sure it all works.

On the retail side, spaces are usually smaller -- down to individual servers or "cages" -- and there is more setup help available, for a price. In general, says Jeff Paschke, senior analyst at Tier1 Research, expect to pay more for retail colocation than wholesale space.

Also, be on the lookout for the ever-present upsell. Darin Stahl, senior analyst at Info-Tech Research Group, says that many vendors are eschewing "straight" colo and will provide only managed services, in which the vendor services and supports the customer's equipment. The reason: managed services carry a margin of "at least" 25%, Stahl explains.

If you're not ready for that kind of thing, make sure to look for a colo partner that's going to give you what you want -- no more and no less. -- Johanna Ambrosio

For a number of enterprises, the obvious solution is to add another data center, and for many of them that means partnering with a colocation facility. (For a definition of "colocation," see the "Colo 101" sidebar.)

If you're considering this option, it doesn't just pay to do your homework, experts say; it's essential.

"You absolutely need to do the buy-vs.-build analysis," says Jeff Paschke, senior analyst at Tier1 Research. That said, "I am a former enterprise data center manager, and from what I know now, more should be using [colo] than they do," he adds.

The No. 1 reason to consider colocation comes down to financials. "Do you want to go to your board and ask for $50 million in capex [capital expenditures] for another data center?" Paschke asks. "The alternative is to go to a provider and use opex [operating expenses] and not have to spend money upfront," he says.
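Paschke's capex-vs.-opex question can be made concrete with a back-of-the-envelope buy-vs.-build comparison. The sketch below is purely illustrative, not the article's analysis: the $50 million build figure echoes Paschke's example, but the colo lease rate, owned-site operating cost, discount rate and time horizon are all hypothetical assumptions.

```python
# Hypothetical buy-vs.-build comparison on a net-present-value basis.
# All figures are assumptions for illustration, not data from the article.

DISCOUNT_RATE = 0.08           # assumed annual cost of capital
YEARS = 10                     # assumed planning horizon

build_capex = 50_000_000       # upfront build cost, paid in year 0
build_annual_opex = 1_500_000  # assumed ongoing power/staff for an owned site

colo_annual_lease = 400_000 * 12  # assumed monthly colo fee, paid annually

def npv(annual_cost, years, rate, upfront=0.0):
    """Present value of an upfront payment plus a recurring annual cost."""
    return upfront + sum(annual_cost / (1 + rate) ** y
                         for y in range(1, years + 1))

build_total = npv(build_annual_opex, YEARS, DISCOUNT_RATE, upfront=build_capex)
colo_total = npv(colo_annual_lease, YEARS, DISCOUNT_RATE)

print(f"build NPV over {YEARS} years: ${build_total:,.0f}")
print(f"colo NPV over {YEARS} years:  ${colo_total:,.0f}")
```

Under these made-up numbers, leasing comes out well ahead; in practice the answer hinges on utilization, growth projections and the cost of capital, which is why Paschke calls the buy-vs.-build analysis essential.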

Given the massive costs and time demands required to build a traditional data center, "fewer organizations are deciding to build their own satellite data centers," says Lynda Stadtmueller, a data center analyst at technology research company Frost & Sullivan.

Especially among enterprises with latency-sensitive applications that require a local presence, there is a trend toward leasing space from a colo or hosting provider rather than building and managing their own data centers, she explains.

A Frost & Sullivan study conducted a year ago showed that total data center space used by enterprises will increase by almost 15% annually through 2013. Yet the percentage of that space that the enterprises own themselves -- versus leasing from another provider -- will decrease, from 70% to 64%, during that time. "A pretty hefty swing," Stadtmueller says.
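The arithmetic behind that "hefty swing" is worth spelling out: because total space keeps growing, owned space still grows in absolute terms even as its share shrinks -- leased space just grows far faster. A quick sketch, where the three-year horizon and 100-unit baseline are assumptions for illustration:

```python
# Illustrating the Frost & Sullivan projection: total enterprise data
# center space grows ~15% a year while the owned share slips from 70%
# to 64%. The 3-year horizon and 100-unit baseline are assumed here.

YEARS, GROWTH = 3, 0.15

total_start = 100.0
total_end = total_start * (1 + GROWTH) ** YEARS   # ~152 units

owned_start, leased_start = 0.70 * total_start, 0.30 * total_start
owned_end, leased_end = 0.64 * total_end, 0.36 * total_end

print(f"owned:  {owned_start:.0f} -> {owned_end:.1f} units "
      f"({owned_end / owned_start - 1:+.0%})")
print(f"leased: {leased_start:.0f} -> {leased_end:.1f} units "
      f"({leased_end / leased_start - 1:+.0%})")
```

Under these assumptions, owned space still grows roughly 40% in absolute terms, while leased space grows by more than 80% -- the swing Stadtmueller describes.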

Technology research firm Info-Tech Research Group backs that up: Some 64% of organizations engage in some form of data center colocation services, including hosting, but more than 77% of those do not outsource the entire data center, according to a survey of 78 customers conducted in late 2010.

Outer limits

Most organizations begin thinking about adding a data center as soon as their existing facility starts maxing out its physical space and/or support resources, Stadtmueller says. "Once you see you're beginning to run out of space, run out of server capacity, [or] when you're looking to add or upgrade an application, that's when you begin to look outside."

Sometimes the push comes in the form of a business need -- a new direction that requires a lot of extra capacity fast, or enough to push your existing data center past its power limits. Power is the gating factor in many older data centers these days, meaning that enterprises run out of power long before they run out of space.

Livin' la vida colo

When looking for a colocation vendor, customers should search for players that have an established presence in the market, Info-Tech senior analyst Darin Stahl advises. "This designation isn't about capability or quality," he says, "but rather about their influence on the market. If they stomp on the ground, does the earth shake?"

Tier 1 vendors have a broad range of offerings and are the trend-setters, with major data centers that sport all the bells and whistles. A handful of vendors fall into this category, Stahl says, including Savvis, IBM, Hewlett-Packard, Rackspace, Terremark (now owned by Verizon) and a few more. These firms typically have global reach.

The next category -- Tier 2 -- is made up of vendors that have a little less market influence but are still large in their respective areas. Telecom vendors Verizon and Qwest and region-specific players, including Peak10 in the mid-Atlantic, fall into this category, Stahl explains. "These are generally a good ride for most of our midsize customers," he says.

The last tier is "everyone else," including a large number of vendors that don't own their own buildings but are tenants looking to sublet the space out to customers. The problem with this scenario is if a vendor doesn't own the facility, it can't offer a meaningful service-level agreement, Stahl says. "Any SLA you see from them says 'with the exception of anything outside our control' " -- and not much is actually in their control. The only real advantage with this type of vendor is price.

Be careful, though; sometimes the lowest-cost option comes with what Stahl calls "interesting risk." He shares an anecdote about a municipality that went this route and found its outbound email blacklisted. "The vendor managed the entire IP block of addresses for all its tenants, including their email," Stahl explains. So when one of the municipality's virtual neighbors was caught running a spam operation, the entire block of IP addresses -- the municipality's included -- was blacklisted, Stahl says.

Switching colo vendors will cost the municipality $150,000, and it "has to try to invoke breach and all that," he says.

There are different methods for finding a vendor. Large enterprises generally work through their in-house real-estate professionals, says Jeff Paschke, senior analyst at Tier1 Research. For smaller customers, some of the retail vendors have direct sales staffs that you can work with. Also, there are a number of free colo-finding sites on the Web, reachable by typing "colocation price comparison" into your favorite search engine. But Paschke advises caution: "This data isn't 100% reliable; there isn't necessarily any quality control done" on some of these comparison sites.

There is at least one vetted database of colo providers, available from TeleGeography Research; pricing starts at $5,500 for a single user and goes up from there. -- Johanna Ambrosio

For a number of organizations, the idea of building out a second site often arises from a desire to create, enhance or save costs on an enterprise business continuity strategy. "With our new site, we really wanted to improve on the response time from any kind of a failure," Burch says. Kemet was also looking for a way to escape a costly relationship with a disaster recovery (DR) services provider, he adds.

Analysis showed that the new facility would trim recovery time from 72 hours or more to a range of five minutes to 18 hours, depending on the system category. The annualized cost of the new facility would be about the same as continuing the current DR contract.

Given all that evidence, Burch decided to go with colo. And in addition to the DR features, now the company has "a modern test and development environment with a three-year refresh cycle," Burch says. "Basically, we got a new data center with new equipment and communications lines with zero change in budget.

"One month after go-live on the new data center we conducted a test recovery of the systems previously covered under our DR contract," Burch explains. "We recovered all of the target systems in less than 10 hours." He notes that the dramatic improvement over the previous recovery target of 72 hours or more included "normal delays from recovering on new equipment in a new location and using new procedures."

To maximize the new data center's business continuity value, Burch and his team decided to place a significant amount of distance between the new facility and Kemet's headquarters. "We felt like we had to go at least 100 miles away to avoid the types of disasters that lead to electrical substation problems -- large storms, those sorts of things," Burch says. The team ultimately fudged a little bit on its distance mandate and settled on a Columbia, S.C., location, some 90 miles away.

Beyond business continuity, Burch says the new data center was designed to fulfill another key goal: to provide a test and development center that would operate independently of the main facility. "Probably 95% of the hardware that's down there is being used for test and development instances of our applications," Burch says. "In the event of a disaster, it will just automatically convert from that role into running our production systems."

Licking latency

Another motivation for creating a new data center is to boost application responsiveness for regional employees, customers and other end users. Organizations running latency-sensitive network applications -- the kind commonly used to power shopping and travel websites, financial services, videoconferencing and content distribution -- usually like to place their applications as near to end users as possible to improve response times. By splitting a data center into two or more sites, an organization can efficiently serve users distributed across a wide region or even over multiple continents.

Dayton, Ohio-based LexisNexis, known for its legal research and workflow services, decided in 2009 to establish a colo data center in Scottsdale, Ariz., to serve customers more efficiently from a location that's relatively immune from storms, earthquakes and other natural calamities. "We wanted something that was in the western region of the U.S.," says Terry Williams, the company's vice president of managed technology services. "Location was a huge part of our decision." The company already had a data center in Dayton.

Not surprisingly, network availability and performance were essential considerations for LexisNexis as it went about choosing its new data center site. "The key thing for us is network connectivity," Williams says. "That was something that just couldn't be compromised on."

LexisNexis is hardly the only organization seeking to bring data centers closer to end users for better service, says Darin Stahl, senior analyst at Info-Tech Research Group. "There's a definite move toward decentralization and that's helping enterprises that want to open additional data centers for one reason or another," he says.

Williams says that turning to a colocation provider -- Phoenix-based i/o Data Centers, in his case -- didn't require his firm to compromise on any facility services or amenities. "We expected all of the normal things that a high-tier data center would have in terms of backup power, generators and all of those things, as well as network connectivity," he says.

For his part, Burch feels that using a colocation provider -- his firm chose Columbia, S.C.-based Immedion -- allowed a faster, less costly deployment without sacrificing convenience or functionality. "We were able to get everything set up within a two-month period, and that included the building out of office space, even converting some office space into raised-floor data center space, which is pretty amazing."

Yet, finding a suitable colocation provider can be just as challenging as scouting a site for a traditional data center. "We looked at taking a building and converting it ourselves," Williams says. After deciding that overhauling a standalone building wouldn't be cost-effective, LexisNexis started looking for a colocation provider. "I would say that we probably spent six months searching for a site, and we probably looked at no less than 30 different locations and providers -- it was a very extensive search," Williams says.

Space can be at a premium

Then, too, colo space can be tight in some geographies, so expect to pay a premium in those areas. Tier1's Paschke explains that the economic slowdown and resulting credit crunch put the kibosh on a lot of data center capacity build-outs. That slowed down some of the colo vendors, of course, but it also meant that enterprises put their own data center expansion plans on hold. So nowadays, if customers choose to turn to colo vendors, they may find that there isn't quite as much data center space as they need.

This situation is, of course, very dependent on the geographic area involved. A recent Wall Street Journal article, for instance, talked about an oversupply in the New York/New Jersey metropolitan area. In general, though, many analysts point to an undersupply of colo space in key locations.

One reason this matters: Some shops opt to have their second data center near their main facility so they can stay close to their gear. Paschke calls these customers "server huggers" -- people who want to reach out and touch their servers, even though the goal in most data centers is to automate much, if not all, of the systems management. If your main facility is in a high-demand area, it might be difficult to find a nearby colo facility.

More factors to think about when going colo include deciding upfront what you're willing to pay for. Some customers need mega-bandwidth for instant response times and require stringent service-level agreements, and some maintain telecom links to several providers for backup, in case one vendor goes dark. Others aren't so concerned. "Some people don't care; milliseconds don't mean that much to them," says Jonathan Hjembo, senior analyst at TeleGeography Research. "Customers just need a ridiculous amount of different things," and it's that diversity that's pushing the market forward, he adds.

Other considerations include security -- both physical and virtual -- and backup infrastructure, including power, cooling, fire suppression and the like. Customers also need to discuss their future needs with their would-be colo partners, to make sure the vendors will have enough space for the customer's anticipated needs for the next few years. And be sure to do a financial analysis.

Staffing and related issues

Mention "colocation" and a lot of IT staffers will hear "outsourcing" and will naturally fear losing their jobs or influence, analysts say. "People are resistant to change," Tier1's Paschke says.

Figure on your staff needing some time to become comfortable with this notion. Info-Tech's Stahl describes an evolution from using colo for a backup data center to handling more critical, first-tier hardware, storage and applications. "Once that happens, customers start to wonder whether it's the best use of a server admin to go to the colo facility and mess around in the cage for a day." At that point, the company may be ready to consider managed services for some of its IT functions.

LexisNexis' Williams notes that one secondary data center requirement that tends to be overlooked until the very last moment is finding qualified people to staff the facility. Sometimes enterprises opt to use the colo vendor's on-site experts, but other times they simply lease space within the facility and staff it themselves.

"Obviously, you're going to do local hiring," Williams says. But he notes that a remote data center has different staffing needs than a primary site. Since secondary data centers generally don't have as many management and administrative jobs as main sites, hiring needs tend to focus on technical individuals who can easily move between multiple tasks. "You want a small staff that can actually do a number of different things," he advises.

Still, Williams notes that LexisNexis had no shortage of Dayton data center staff members volunteering to transfer to the new location. "If it's in a nice location like Scottsdale, everybody is raising their hand to move out there and provide support," he says.

For most enterprises, adding a colocated data center is usually a significantly easier task than creating a primary site from scratch. In most cases, established platforms and practices can be replicated fairly painlessly at the new location. Kemet used its main data center as a staging area for the new site.

"To ease the transition, we actually built all the new equipment in our primary data center," Burch says. "We synchronized all the data that was going to be replicated at the new site and conducted some tests to make sure everything was going to work the way it was supposed to." The equipment was then transported to the new data center. "We then simply turned it on and just let it catch up on what it had missed in the eight hours it had been in transit," Burch says.

To complete the job, the Kemet team conducted a series of tests to make sure that the new business continuity system would work flawlessly. "Once we had confirmed that, we basically declared it in production and then, a month later, we let our traditional [disaster] recovery contract expire," Burch says.

Other pointers

Careful planning and close attention to detail are vital to a successful deployment, Burch says. "Most of all, look carefully at any contracts that might be involved with the new data center, particularly any disaster recovery or hosting contracts that could be either a positive or a negative in your planning," he advises.

Burch also urges organizations not to neglect their main data center when planning their new facility, particularly if they intend to use the new site in any sort of backup role. "We did our new facility in conjunction with upgrading all of the equipment in our current data center," he says.

Kemet also placed all-new equipment in its remote data center. "That's provided us with a good bit more flexibility as well as horsepower for our test and development environment," Burch says. "The developers are very pleased with that."

LexisNexis' Williams feels that finding a competent and trustworthy colocation partner is essential to the success of a secondary data center, since the provider will be responsible for delivering essential infrastructure services, including power and cooling. "The key thing is to find a partner that can provide what I would consider to be that intimate level of service -- meaning that you feel that you're the only client there."

John Edwards is a technology writer in the Phoenix area. Contact him at jedwards@gojohnedwards.com.

Additional reporting by Johanna Ambrosio, Computerworld's technology editor.