Nowadays, even casual web users know some of the standard advice for making a website fast: cut the number of requests to the web server, shrink JPEG sizes, employ a content delivery network vendor such as Akamai Technologies or Limelight Networks. The problem, according to Steve Souders, is that steps like these, aimed at optimising the web server, make only a small impact. "We used to tear apart the Apache [web server] code to figure out what Yahoo was doing," said Souders, who was Yahoo's chief performance engineer for several years before moving to Google in the same role.
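As a hypothetical illustration of the first of those tips (this sketch is ours, not Souders'), a simple build step can concatenate several small scripts into one bundle so the browser makes one request instead of three; the file names here are invented:

```typescript
// Illustrative sketch: concatenate several small scripts into a single
// bundle so the page makes one HTTP request instead of three.
// The input file names are hypothetical.
import { readFileSync, writeFileSync } from "node:fs";

const sources = ["menu.js", "tracking.js", "widgets.js"];

const bundle = sources
  .map((file) => `/* ${file} */\n${readFileSync(file, "utf8")}`)
  .join("\n;\n"); // the stray semicolon guards against missing terminators

writeFileSync("bundle.js", bundle);
```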
But after performing a detailed analysis, Souders discovered something startling: only 10% to 20% of the time it took to load a website could be attributed to the web server. The remaining 80% to 90% was spent on the front end, in the user's browser.
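That split is easy to reproduce on any page with the browser's Navigation Timing API; the breakdown below is our own rough sketch, not Souders' methodology:

```typescript
// Rough sketch: divide total page-load time into the share spent
// waiting on the server versus the share spent in the browser.
const nav = performance.getEntriesByType(
  "navigation",
)[0] as PerformanceNavigationTiming;

const total = nav.loadEventEnd - nav.startTime;    // full page load
const backEnd = nav.responseStart - nav.startTime; // waiting on the server
const frontEnd = total - backEnd;                  // parsing, scripts, rendering

console.log(`server share:    ${((backEnd / total) * 100).toFixed(0)}%`);
console.log(`front-end share: ${((frontEnd / total) * 100).toFixed(0)}%`);
```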
Concentrating optimisation effort on the server may have made sense a decade ago, but in today's era of PCs powered by dual- and quad-core CPUs, it doesn't. And the cost of the resulting delays can be high.
Google has found that a 500-millisecond delay results in a 20% decrease in web traffic, while Amazon.com has seen a 100-millisecond delay cut its sales by 1%, Souders said.
One such front-end fix helped a Google site that Souders declined to name speed up its initial page rendering by 60%.
Also, users tend to stay on certain sites, such as their web mail, all day. Those sites re-render constantly throughout the day, and each re-render incurs the delay imposed by over-elaborate CSS files, Souders said.
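To put a number on that cost, a page can time a single style recalculation itself; the snippet below is our illustration (the class and element names are invented), not something from Souders' analysis:

```typescript
// Illustrative sketch: measure how long one re-render takes after a
// style change, as a long-lived page such as web mail does constantly.
// The "unread" class is hypothetical.
function timeRerender(el: HTMLElement): number {
  const start = performance.now();
  el.classList.toggle("unread"); // invalidates computed styles
  void el.offsetHeight;          // forces a synchronous style/layout pass
  return performance.now() - start;
}

const inbox = document.getElementById("inbox"); // hypothetical element
if (inbox) {
  console.log(`re-render took ${timeRerender(inbox).toFixed(1)} ms`);
}
```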
"When I look at it, I feel like the teacher who hands out very severe grades," he said. Search engines with minimal content on the page, such as Google.com and Microsoft's Live.com, are among the rare sites that get an A from Yslow.
There are other tools besides YSlow for diagnosing performance bottlenecks: Microsoft offers the Visual Round Trip Analyzer, while AOL developed a now-open-source tool called PageTest. All of these tools judge website performance against a set of rules, though none matches YSlow's 22 criteria.
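The shared idea behind all of them can be sketched in a few lines. The two rule checks below correspond to real YSlow criteria (far-future caching and avoiding redirects), but the grading code itself is our hypothetical example, not taken from any of these tools:

```typescript
// Hypothetical sketch of a rule-based performance check, in the spirit
// of YSlow: fetch a page (Node 18+ built-in fetch) and grade two rules.
async function gradePage(url: string): Promise<void> {
  const res = await fetch(url);

  // Rule: serve content with far-future caching headers.
  const cacheControl = res.headers.get("cache-control") ?? "";
  const maxAge = /max-age=(\d+)/.exec(cacheControl);
  const farFuture = maxAge !== null && Number(maxAge[1]) >= 86_400;

  // Rule: avoid redirects, which add a round trip before anything loads.
  const noRedirects = !res.redirected;

  console.log(url);
  console.log(`  far-future caching: ${farFuture ? "pass" : "fail"}`);
  console.log(`  avoids redirects:   ${noRedirects ? "pass" : "fail"}`);
}

gradePage("https://example.com").catch(console.error); // hypothetical target
```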