Search engine optimization (SEO) is an art as well as a science. As with any scientific discipline, it requires rigor. The results need to be reproducible, and you have to take an experimental approach — so not too many variables are changed at once. Otherwise, you won’t be able to tell which changes were responsible for the results.
You can glean a lot about SEO best practices, latest trends and tactics from SEO blogs, forums and e-books. But it can be hard to separate the wheat from the chaff, to know with any degree of certainty that a claim will hold true. That's where SEO testing comes in: proving what works and what doesn't.
Unlike multivariate testing for optimizing conversion rates, where many experiments can be run in parallel, SEO testing requires a serial approach. Everything must filter through Google before the impact can be gauged. This is made more difficult by the lag between making changes and having the revised pages spidered, and by a further lag while the spidered content makes it into the index and onto the search engine results pages (SERPs). On top of that, the results delivered depend on the search history of the user and the Google data center accessed.
Experimental Approach
An experimental approach to SEO: You have a product page with a particular ranking in Google for a specific search term, and you want to improve the ranking and resultant traffic. Rather than applying a number of different SEO tactics at once, start varying things one at a time.
1. Tweak just the title tag and see what happens.
2. Continue making further revisions to the title tag in multiple iterations until the data shows that the tag truly is optimal.
3. Then move on to the headline, tweaking that and nothing else.
4. Now, watch what happens. Optimize it in multiple iterations.
5. Then move on to the intro copy, then the breadcrumb navigation, and so on.
Testing should be iterative, not just a “one off” in which you give it your best shot and you’re done. If you’re testing title tags, continue trying different things to see what works best. Shorten it; lengthen it; move words around; substitute words with synonyms. If all else fails, you can always put it back to the way it was.
When doing iterative testing, it’s good to do what you can to speed up the spidering and indexation so you don’t have to wait as long between iterations to see the impact.
You can do this by flowing more link equity (PageRank) to the pages you want to test. That means linking to them from higher up in the site tree, e.g., from the homepage. But be sure to give that change time to settle before forming your baseline.
Or, you can use the Google Sitemaps protocol to set a priority for each page; assigning a priority of 1.0 to a page increases the frequency with which that page gets spidered. (Note: Don't make the mistake of setting ALL your pages to 1.0. If you do, none of your pages will be differentiated from each other in priority, and thus none will get preferential treatment from Googlebot.)
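For example, a sitemap fragment that actually differentiates: the page under test gets priority 1.0 while everything else keeps a lower value. The URLs here are placeholders.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Page under test: boosted priority to encourage more frequent spidering -->
  <url>
    <loc>http://www.example.com/kitchen-electrics/</loc>
    <priority>1.0</priority>
  </url>
  <!-- Everything else keeps a lower priority so the boost means something -->
  <url>
    <loc>http://www.example.com/about-us/</loc>
    <priority>0.5</priority>
  </url>
</urlset>
```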
Since geolocation and personalization mean that not everyone is seeing the same search results, you shouldn’t rely on rankings as your only bellwether as to what worked or didn’t work.
Other Useful SEO Metrics
Many other meaningful SEO metrics exist, too, including:
- traffic to the page,
- spider activity,
- search terms driving traffic per page,
- number and percentage of pages yielding search traffic,
- searchers delivered per search term,
- ratio of brand to nonbrand search terms,
- unique pages spidered,
- unique pages indexed,
- ratio of pages spidered to pages indexed,
- and many others.
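Several of these ratios fall out directly from counts you likely already have. A minimal sketch, assuming you can pull page counts from your server logs (spider activity) and analytics (search-referred landing pages); the numbers below are invented for illustration.

```python
def seo_ratios(pages_spidered: int, pages_indexed: int,
               total_pages: int, pages_with_search_traffic: int) -> dict:
    """Compute two of the metrics listed above from simple page counts.

    Inputs would come from your own server logs and analytics;
    the function and its argument names are illustrative.
    """
    return {
        # How much of what Googlebot fetches actually makes the index
        "spidered_to_indexed": pages_indexed / pages_spidered,
        # What share of the site is pulling its weight in natural search
        "pct_pages_yielding_search_traffic":
            100 * pages_with_search_traffic / total_pages,
    }

ratios = seo_ratios(pages_spidered=8000, pages_indexed=6000,
                    total_pages=10000, pages_with_search_traffic=1500)
print(ratios["spidered_to_indexed"])                # 0.75
print(ratios["pct_pages_yielding_search_traffic"])  # 15.0
```

Tracked per test iteration, ratios like these tell you far more than a single rank check would.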
(Some of these off-the-beaten-path metrics are the result of research conducted by Netconcepts for the study Chasing the Long Tail of Natural Search: How to Capture the Unbranded Keyword, published in 2006.)
But just having better metrics isn’t enough. An effective testing regimen also requires a platform conducive to performing rapid-fire iterative tests, where each test can be associated with reporting based on these new metrics. Such a platform especially comes in handy with experiments that are difficult to conduct under normal circumstances.
Testing a category name revision applied site-wide is harder than, say, testing a title tag revision applied to a single page. Specifically, consider a scenario where you’re asked to make a business case for changing the category name “kitchen electrics” to a more search engine optimal “kitchen small appliances.” Conducting the test to quantify the value would require applying the change to every occurrence of “kitchen electrics” across the Web site. A tall order indeed, unless you can conduct the test as a simple search-and-replace operation, which is exactly what could be done by applying it through a proxy server platform.
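The search-and-replace itself is simple once a proxy sits in the middle. Here is a minimal sketch of the rewriting step only, using the category names from the example above; the surrounding proxy plumbing (request forwarding, header handling) is omitted, and the function is hypothetical.

```python
import re

def rewrite_category(html: str,
                     old: str = "kitchen electrics",
                     new: str = "kitchen small appliances") -> str:
    """Replace every occurrence of the old category name in a page body.

    A proxy platform would apply this to each HTML response on its way
    out; this sketch handles only the text substitution, preserving
    simple capitalization patterns.
    """
    def match_case(m: re.Match) -> str:
        src = m.group(0)
        if src.isupper():
            return new.upper()          # "KITCHEN ELECTRICS" -> all caps
        if src[0].isupper():
            return new.title()          # "Kitchen Electrics" -> Title Case
        return new                      # plain lowercase

    return re.sub(re.escape(old), match_case, html, flags=re.IGNORECASE)

page = "<h1>Kitchen Electrics</h1><p>Browse our kitchen electrics range.</p>"
print(rewrite_category(page))
# <h1>Kitchen Small Appliances</h1><p>Browse our kitchen small appliances range.</p>
```

Because the substitution runs at serving time, the underlying e-commerce platform never has to change, which is exactly what makes the test cheap to run and cheap to revert.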
By acting as a middleman between the Web server and the spider, proxy servers can facilitate useful tests that normally would be quite invasive on the e-commerce platform and time-intensive for the IT team to implement.
Comparison Testing
During the proxying process, not only can words be replaced, but also HTML, site navigation, Flash, JavaScript, frames, even HTTP headers — almost anything. It also can give you the ability to do some worthwhile side-by-side comparison tests, a champion/challenger sort of model that compares the proxy site to the native Web site.
Start With a Hypothesis
A sound experiment always starts with a hypothesis. For example, if a page isn't performing well in the engines and it's an important product category, you might hypothesize that the page isn't well-linked from within your site. Or you may hypothesize that the page is targeting unpopular keywords. Or that it simply doesn't have enough copy.
Once you have your hypothesis, you can set up a test to gauge its truth. Try these steps:
1. In the case of the first hypothesis, link to that page from the homepage and measure the impact.
2. Wait at least a few weeks for the impact of the test to be reflected in the rankings.
3. Then if the rankings don’t improve, formulate another hypothesis and conduct another test.
Granted, it can be a slow process if you have to wait a month for the impact of each test to be revealed, but in SEO, patience is a virtue. Happy testing!
Stephan Spencer is president and founder of Netconcepts, a Web design and consulting firm specializing in search-engine-optimal Web sites and applications. Reach him at sspencer@netconcepts.com.