Imagine it’s Black Friday, when all of a sudden … boom! Your most critical web application goes awry, bringing your e-commerce operation to a screeching halt on the very day flawless performance is needed most. Think it can’t happen? Think again …
Heavy traffic brought on by peak sales periods can strain even the most battle-tested websites and applications. To make matters worse, missteps almost always seem to happen when businesses have the most to gain or lose. A recent survey from my firm, Gomez, found that a third of online shoppers had a bad experience (e.g., slow downloads, frequent user errors and/or transaction problems) on a retail website last holiday season.
Don’t think that holiday shoppers are a forgiving bunch, either. These same online shoppers report little tolerance for poor web performance, even during periods of peak traffic, when they realize your site is likely inundated with visitors. According to the Gomez survey, 67 percent of consumers said they expect websites to work well regardless of how many visitors a site may have at any given time. In addition, 78 percent of respondents indicated they'll readily switch to a competitor’s site if they encounter slowdowns, errors or transaction problems.
Providing exceptional web performance during peak sales periods can be challenging because organizations are expected to ensure scalability across an extremely wide range of application and infrastructure elements, including those that lie beyond companies’ firewalls. Today’s feature-rich websites and applications include components, content and services delivered not just from inside data centers, but from a number of third-party providers. For example, an online storefront may include shopping carts, search engines, user reviews and analytics all from specialized third-party providers.
In addition to third-party services, your applications likely traverse a complex delivery path that may include internet service providers, content delivery networks, desktops, mobile devices and browsers en route to anxious end users around the world.
Traditional, inside-the-firewall testing tools remain popular with QA and testing professionals. Unfortunately, these tools are built around an antiquated philosophy: generating load and measuring performance behind the firewall. This kind of internal testing tells only part of the story, because it identifies only problems rooted in the data center. Your end users don't live in data centers, of course. They’re located around the world, at the end of a long and complex web application delivery chain.
The key to load testing today’s modern websites and applications is to measure from the perspective of end users at the internet’s core and at its edges. Both are meaningful vantage points for understanding the true end user experience, and not just a lesser proxy of that experience. A winning approach is to apply load from the cloud along with load generated from real end users’ desktops and devices around the world.
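To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers) of what comparing measurements across end-user vantage points might look like: response-time samples are grouped by segment, and any segment whose average exceeds a target threshold is flagged for investigation. The segment names, sample values and threshold are illustrative assumptions, not Gomez product behavior.

```python
# Minimal sketch with hypothetical data: aggregating response-time samples
# collected from different end-user vantage points to flag at-risk segments.
from statistics import mean

# Hypothetical samples: seconds to complete a key transaction, keyed by
# end-user segment (region + access type). A real test would gather these
# from cloud-based agents and real desktops and devices.
samples = {
    "us-east/broadband": [1.2, 1.4, 1.3],
    "eu-west/broadband": [2.1, 2.4, 2.2],
    "apac/mobile":       [4.8, 5.1, 4.9],
}

def at_risk_segments(samples, threshold=3.0):
    """Return segments whose average response time exceeds the threshold."""
    return sorted(seg for seg, times in samples.items()
                  if mean(times) > threshold)

print(at_risk_segments(samples))  # only apac/mobile exceeds 3 seconds
```

A data-center-only test would show a single, healthy number; it is only when the samples are broken out by vantage point that the mobile users in this hypothetical scenario stand out.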
With this combined approach, organizations can more accurately identify which end user segments are likely to experience a bottom line-risking performance degradation. With a more appropriate end user-oriented testing approach, IT professionals can more quickly identify, isolate and resolve application issues and provide better value to their organizations. Bring on the crowds!
Matthew Poepsel is vice president of performance strategies at Gomez, the web performance division of Compuware. Reach Matthew at mpoepsel@gomez.com.