Computerworld

AWS says a typo caused the massive S3 failure this week

The cloud provider is implementing several changes to prevent similar events

Everyone makes mistakes. But at Amazon Web Services, a single incorrectly entered command can lead to a massive outage that cripples popular websites and services.

That's apparently what happened earlier this week, when the AWS Simple Storage Service (S3) in the provider's Northern Virginia region experienced an 11-hour system failure.

Other Amazon services in the US-EAST-1 region that rely on S3, including Elastic Block Store, Lambda, and new instance launches for the Elastic Compute Cloud infrastructure-as-a-service offering, were also affected by the outage.

AWS apologized for the incident in a postmortem released Thursday. The outage affected the likes of Netflix, Reddit, Adobe, and Imgur. More than half of the top 100 online retail sites experienced slower load times during the outage, website monitoring service Apica said.

Here’s what set off the outage, and what Amazon plans to do:

According to Amazon, the S3 billing process was running more slowly than expected, so an authorized S3 employee executed a command that was supposed to "remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process."

One of the command's parameters was entered incorrectly, and a much larger set of servers was removed than intended, including servers supporting a pair of critical S3 subsystems.
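Amazon hasn't published the command itself, but a minimal Python sketch shows how a single mistyped parameter in a hypothetical capacity-removal tool can select far more servers than intended. The fleet, host names, and remove_servers helper below are illustrative assumptions, not AWS's actual tooling.

```python
import fnmatch

# Hypothetical fleet of index-subsystem hosts; the names are invented for illustration.
FLEET = [f"index-{i:03d}.s3.internal" for i in range(1, 201)]

def remove_servers(pattern: str) -> list[str]:
    """Return the hosts a removal command would take out of service."""
    return [host for host in FLEET if fnmatch.fnmatch(host, pattern)]

# Intended: pull the 10 hosts numbered 010-019 out of service.
print(len(remove_servers("index-01*.s3.internal")))  # 10 hosts

# Typo: dropping a single character widens the match to nearly half the fleet.
print(len(remove_servers("index-0*.s3.internal")))   # 99 hosts
```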

The index subsystem “manages the metadata and location information of all S3 objects in the region,” while the placement subsystem “manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate.”

While those subsystems are built to be fault tolerant, so many servers were taken down that both had to be fully restarted.

As it turns out, Amazon hadn't fully restarted those subsystems in its larger regions for several years, and S3 has experienced massive growth in the intervening time. Restarting them took longer than expected, which added to the length of the outage.

In response to this incident, AWS is making several changes to its internal tools and processes. The tool responsible for the outage has been modified to remove capacity more slowly and to block any operation that would take a subsystem below its minimum safe capacity.
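AWS hasn't shared the internals of the revised tool, but the two safeguards it describes, slower removal and a hard floor on remaining capacity, can be sketched in a few lines of Python. The numbers, names, and structure here are assumptions made for illustration only.

```python
import time

MIN_CAPACITY = 150          # assumed safety floor: never leave fewer hosts than this in service
REMOVAL_DELAY_SECONDS = 30  # assumed pacing: take hosts out gradually, not all at once

def remove_capacity(fleet: set[str], to_remove: list[str]) -> None:
    """Take hosts out of service one at a time, refusing to breach the capacity floor."""
    for host in to_remove:
        if len(fleet) - 1 < MIN_CAPACITY:
            raise RuntimeError(
                f"blocked: removing {host} would leave {len(fleet) - 1} hosts, "
                f"below the floor of {MIN_CAPACITY}"
            )
        fleet.discard(host)                # stand-in for the real decommission step
        time.sleep(REMOVAL_DELAY_SECONDS)  # slow removal gives operators time to catch a mistake
```

A guard like this turns the failure mode from February's incident, removing too much capacity in one shot, into an error message rather than an outage.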

AWS is also evaluating its other tools to make sure they have similar safety systems in place.

AWS engineers are also going to start refactoring the S3 index subsystem to speed up restarts and reduce the blast radius of future problems.
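The postmortem doesn't detail how the index subsystem will be broken up, but the general idea behind reducing blast radius is to partition work into independent cells so that a restart or failure touches only a fraction of the keyspace. The cell count and key-routing scheme below are illustrative assumptions, not Amazon's design.

```python
import hashlib

NUM_CELLS = 8  # assumed number of independent index partitions

def cell_for_key(object_key: str) -> int:
    """Map an S3 object key to one of NUM_CELLS independent index cells."""
    digest = hashlib.sha256(object_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_CELLS

# A restart or failure in one cell leaves keys routed to the other cells untouched,
# shrinking the blast radius from the whole region to roughly 1/NUM_CELLS of it.
print(cell_for_key("photos/2017/incident-notes.txt"))
```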

The cloud provider has also changed its Service Health Dashboard administration console to run across multiple regions. AWS employees were unable to update the dashboard during the outage because the console relied on S3 in the affected region.