It's usually called "backhoe fade". Not this time, though. Late last month, two underwater communications cables were accidentally cut near Alexandria, Egypt. Internet service was disrupted throughout the Middle East and North Africa, and as far away as India. The apparent cause? A simple foul-up: misplaced ship anchors that ripped up the cables in the Mediterranean Sea.
The result? Well, that's the interesting part: Nobody panicked.
Sure, there were problems. Thanks to "boat anchor fade", internet connections and phone service were reportedly slow or flaky in India, Pakistan, Egypt and Persian Gulf countries. One Indian association of ISPs estimated that bandwidth was down by at least 50%, according to Reuters. In Cairo, many business users were completely cut off from the internet, and one stock-market trader told a reporter, "At times, we were trading blind".
But in what's arguably the most trigger-happy part of the world today, no one started shooting. During a month of wild stock-market swings, financial markets didn't collapse.
Local telecommunications companies rerouted around the problems as well as they could. Banks and businesses struggled along. Users complained and made do.
And they'll have to continue making do for up to a week, until the cables between Egypt and Italy can be fully repaired.
Understand, this isn't some backwater where net access doesn't matter. The internet is every bit as critical to businesses across the Middle East as it is in the US. But when the internet went down, everyone just worked around the problems.
Which is pretty much the way we do it here, too.
Sure, we hear users scream when the net disappears or suddenly grinds to a near-standstill. Helpdesk phones ring off the hook. Trouble tickets pile up.
But nobody panics. They'd better not, because fiber-optic cable cuts happen almost constantly along the tens of millions of miles of networks in the US. And most of those cable cuts literally live up to the "backhoe fade" nickname — they happen when cables are dug up by heavy construction equipment.
Some outages are small and annoying. Some are big and disruptive. Either way, we howl at our network providers, point to service-level agreements and figure it's their problem. And for now, it is.
But cables will keep getting cut. And while users today adapt and adjust, tomorrow that may not be so easy.
The more we integrate and automate supply chains, the more we put ourselves at risk by relying on a fragile global network. Human users can figure out what to do when the net goes down. But on their own, servers and applications won't.
We're building systems that rely on an unreliable network. So far, we've gotten away with it. As we peel people out of our processes, that becomes much riskier.
The solution? We can't rid the world of backhoe fade, boat anchor fade and everything else that just might sever a fiber-optic cable.
And we're not going to stop automating our global business processes or pushing highly adaptable, panic-resistant people out of the loop.
That leaves only one option: building systems that can recover gracefully from this kind of network failure on their own. Not by mysteriously grinding to a halt, not by dumping automated transactions into the bit bucket, but by cleanly handling a downed internet.
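The column doesn't prescribe an implementation, but the pattern it describes — queue automated transactions and retry when the network returns, instead of dumping them into the bit bucket — can be sketched in a few lines. This is a minimal illustration only; the class name `OutageTolerantSender`, the in-memory queue (a real system would persist it to disk), and the `OSError` failure model are all assumptions for the example, not any particular product's API:

```python
import time
from collections import deque


class OutageTolerantSender:
    """Hypothetical sketch: buffer outbound transactions and retry with
    exponential backoff rather than discarding them when the net is down."""

    def __init__(self, send_fn, max_backoff=60.0):
        self.send_fn = send_fn        # transport call; raises OSError on failure
        self.pending = deque()        # stand-in for a durable (on-disk) queue
        self.max_backoff = max_backoff

    def submit(self, txn):
        """Accept a transaction even while the network is unreachable."""
        self.pending.append(txn)

    def flush(self, sleep=time.sleep):
        """Try to drain the queue. On failure, back off and keep the data.

        Returns True if everything was sent, False if we gave up this round
        with transactions still safely queued.
        """
        backoff = 1.0
        while self.pending:
            txn = self.pending[0]
            try:
                self.send_fn(txn)
                self.pending.popleft()   # discard only after a confirmed send
                backoff = 1.0
            except OSError:
                sleep(backoff)
                backoff = min(backoff * 2, self.max_backoff)
                return False             # network still down; data preserved
        return True
```

The key design choice is that a transaction leaves the queue only after the send succeeds, so a severed cable delays delivery instead of silently losing orders — exactly the "cleanly handling a downed internet" behavior the column calls for.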
Yes, that will require retrofitting and refactoring. It'll cost more and require more work.
But we've just received another reminder of why we have to do it.
Remember, all it took to cripple the internet in the Middle East was a ship or two anchored in the wrong place.
And that's the kind of foul-up that'll never fade away.