Students and developers at Google have jointly created an open-source tool designed to better predict how real-world website performance would be affected by changes to things like network infrastructure.
Called Monkey, the tool first captures data from actual client sessions, inferring various network and client conditions — what its creators call the "monkey see" portion of its work. It then attempts to emulate those conditions for server tests — a process called "monkey do."
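The two phases can be illustrated with a toy sketch: first estimate a client's network delay from a captured trace, then replay that client's requests against a test server with the inferred delay emulated. Everything here — the trace format, function names, and the stand-in server — is an assumption for illustration, not Monkey's actual interface.

```python
import time

# Hypothetical illustration of the "monkey see, monkey do" idea.
# The (sent, acked) trace format and these helpers are assumptions,
# not part of the real Monkey tool.

def infer_rtt(trace):
    """'Monkey see': estimate round-trip time from (sent, acked) timestamp pairs."""
    samples = [acked - sent for sent, acked in trace]
    return sum(samples) / len(samples)

def replay(requests, rtt, server):
    """'Monkey do': issue the recorded requests, delaying each by the
    inferred RTT to mimic the original client's network conditions."""
    responses = []
    for req in requests:
        time.sleep(rtt)          # emulate the client-side network delay
        responses.append(server(req))
    return responses

# Toy "server" standing in for the system under test.
def server(req):
    return "result for " + req

trace = [(0.00, 0.08), (1.00, 1.12), (2.00, 2.10)]   # (sent, acked) in seconds
rtt = infer_rtt(trace)
print(round(rtt, 2))                                  # average inferred RTT
print(replay(["query1", "query2"], rtt, server))
```

The real tool works at the TCP level and infers far more than average delay (bandwidth, loss, and client behaviour), but the shape is the same: measure conditions from live traces, then reproduce them around controlled server tests.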
Source code for the tool is available here.
Monkey is aimed at helping solve a dilemma of web testing: Trying out network or server changes on even a small portion of actual user traffic is risky, but simulations are often unrealistic because they don't accurately reflect users' network conditions, says Yu-Chung Cheng, a graduate student at the University of California, San Diego, who worked on the project during an internship at Google.
Cheng presented a paper about Monkey yesterday at the Usenix Annual Technical Conference.
Cheng acknowledged that the tool, which is optimised for Google's specific search application, might not be as accurate at predicting server response for other types of applications. In response to audience questions, he said Monkey also doesn't attempt to model how user behavior might change as server response speeds up or slows down (for example, more search requests might come in if server response improves).
"In the end, we believe it is unrealistic to build a generic one-for-all TCP replay tool," the paper, "Monkey See, Monkey Do: A Tool for TCP Tracing and Replaying," concludes. "But it is possible to build replay tool(s) for specific applications."