A panel discussion held at MIT this week hailed major advances in networking technology, but warned that the challenges facing the world's information infrastructure are severe.
"We're now starting to talk about not millions any more, when we talk about user devices that are in play, but billions," said Mehra. "Of late, in the industry, we've started talking about how many devices versus how many human beings on the planet."
And it's not just the proliferation of smartphones and tablets contributing to skyrocketing growth in the total number of network-connected devices, according to Mehra. The oft-cited phenomenon of the "Internet of things" - which refers to the growth of network connectivity in objects that weren't previously online - means that there could be as many as 30 billion network devices installed worldwide by 2020.
Even though a huge number of those devices are likely to be connected cars or refrigerators or traffic lights, Akamai's Alexander said that much of the future increase in demand stems from the growing ratio of devices to people.
In 2005, he said, there were a little more than a billion Internet users and 1.5 billion connected devices online. In 2010, those numbers changed to 1.8 billion and 5 billion, respectively, and projections for 2015 indicate that they could increase to 2.9 billion and a whopping 15 billion.
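Taken together, the figures Alexander cited imply a steadily rising number of devices per user. A minimal calculation (using only the numbers quoted above):

```python
# Alexander's figures: (Internet users, connected devices), in billions
figures = {2005: (1.0, 1.5), 2010: (1.8, 5.0), 2015: (2.9, 15.0)}

for year, (users, devices) in sorted(figures.items()):
    print(f"{year}: {devices / users:.1f} devices per user")
# 2005: 1.5 devices per user
# 2010: 2.8 devices per user
# 2015: 5.2 devices per user
```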
"There's an explosion of endpoints going on," Alexander said.
And while a rise in peak connection speeds might give the appearance of supply keeping up with demand, he said, it's important to look closer.
"That would be all well and nice if applications and devices had a concept that they aren't the only application or device at work," Alexander said. "Any time any application fires up, whether it's Netflix, email or a software update, it assumes it's pretty much the only thing that needs resourcing. It asks for as much as it can take, and you end up with network contention."
There are undoubted upsides to this hyper-connected world, and panelist Russell of Veniam Works is part of the phenomenon. Veniam is a vehicular networking startup that plans to bring a type of mesh network to the road, one that can connect and disconnect almost instantaneously, turning traffic into a nest of Wi-Fi hotspots for in-car connectivity and a host of other potential applications.
"We had a research project ... where they monitored bus drivers' vital signs and connected that data back to the network," Russell said. "They had GPS on the bus ... and they could tell where and when the bus drivers were stressed as they were driving."
Other ideas, according to Russell, include things like measuring the carbon footprint of a given stretch of road (by monitoring fuel consumption) and improved navigation and traffic avoidance.
BU professor Crovella, however, said that this hyper-connectivity is problematic, pointing to four central pain points. The first is the Internet protocol itself: the standard that makes a global Internet possible has, as originally conceived, essentially run out of usable IP addresses thanks to rapid growth.
"It was never conceived of that we would have multiple Internet protocol addresses for every single human being on the planet," he said. "And you can trace some of the decisions that have been made along the way as being somewhat suboptimal."
For instance, according to Crovella, MIT itself was given 16 million IP addresses. "They don't need 16 million addresses to run the university," he said. Fundamentally, however, the problem is a simple shortage of possible addresses under the IPv4 standard. The newer IPv6 standard ups the number of possible addresses from a little less than 4.3 billion to 3.4 x 10^38 - more than enough to meet even the wildest growth scenarios - but it's not backward compatible with the earlier system, making the transition a headache.
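The gap Crovella described is simple arithmetic: IPv4 addresses are 32 bits wide, IPv6 addresses are 128 bits, which yields the two figures above directly:

```python
ipv4 = 2 ** 32    # "a little less than 4.3 billion"
ipv6 = 2 ** 128   # roughly 3.4 x 10^38

print(f"IPv4: {ipv4:,}")     # IPv4: 4,294,967,296
print(f"IPv6: {ipv6:.2e}")   # IPv6: 3.40e+38
```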
The second problem, Crovella said, is the Transmission Control Protocol, or TCP. This system, designed to address network congestion problems and improve reliability, has a seemingly minor issue that nonetheless complicates its use with wireless connections, which are increasingly prevalent.
TCP monitors connections for packet loss - when it detects lost packets, it assumes the network is congested and throttles traffic accordingly.
"The problem is, as we've seen, we're moving to a world in which most data is sourced or synched on a wireless network," said Crovella. "And wireless networks have different properties, and they lose packets for different reasons. A wireless network can lose a packet for reasons that have nothing to do with congestion."
What this means is that wireless packet loss caused by, in Crovella's example, a microwave oven turning on could prompt TCP to assume the network is congested and throttle traffic accordingly.
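Crovella's point can be illustrated with a toy model of TCP's additive-increase/multiplicative-decrease behavior (a deliberate simplification, not real TCP): the window grows by one segment per round and is halved on any loss, so even a small rate of non-congestion loss - such as wireless interference - sharply reduces the average window, and with it throughput.

```python
import random

def aimd_avg_window(loss_prob, rounds=1000, seed=42):
    """Toy AIMD sender: +1 segment per round, halve the
    congestion window on any loss -- regardless of why
    the packet was actually lost."""
    random.seed(seed)
    cwnd, total = 1.0, 0.0
    for _ in range(rounds):
        if random.random() < loss_prob:
            cwnd = max(1.0, cwnd / 2)  # loss => assume congestion
        else:
            cwnd += 1                  # additive increase
        total += cwnd
    return total / rounds              # average window size

print(aimd_avg_window(0.0))   # clean link: 501.5
print(aimd_avg_window(0.02))  # 2% random (non-congestion) loss: far lower
```

The model halves the window on every loss because that is the only signal TCP has; it cannot distinguish a congested router from a noisy radio link.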
The third problem is a lack of security at the highest levels of the global Internet. The border gateway protocol that governs traffic between big ISPs has no built-in security, the professor said, a fact that has been exploited in several high-profile incidents, including the 2008 incident in which Pakistan Telecom's attempt to block YouTube knocked the site offline worldwide.
Finally, according to Crovella, there's a shortage of wireless spectrum available for large-scale network projects, which means that existing frequencies may have to be repurposed and new auctions held.
"For example, the white space between television channels is probably going to be used for home networking, and we're going to try and dislodge the frequencies that have been used in the past, but aren't being used anymore," he said.
Email Jon Gold at email@example.com and follow him on Twitter at @NWWJonGold.
Read more about data center in Network World's Data Center section.