Every website has a purpose, and so should every page on that site. One of the best ways to improve user experience is to conduct A/B tests on your website.
There’s an underlying goal behind the construction and maintenance of every site (and every page on it). Whether the point is to generate leads and sales on an e-commerce site, earn revenue through affiliate ads, or simply increase readership and build an authoritative author profile, virtually every website has some measurable objective (and of course many sites have more than one).
And one of the key factors in a site achieving its aims is User Experience (UX).
User Experience (UX)
UX encompasses all the elements of a website’s layout and design that the user interacts with on the site. Aspects like visual design, interface, and information architecture affect the general aesthetics of the website, navigability between pages, and the overall “findability” of useful information, and thus shape the visitor’s overall experience.

Savvy website operators know that UX is perhaps the primary factor in turning visitors into customers, or steering them towards other desired ends.
But how do you know if your website is optimized for UX? How can you tell if your visitors are getting the experience that directs them towards your overall objectives?
The answer is threefold: 1) you run some tests, 2) you run some more tests, and 3) you test some more.
That’s right: a successful website must be a nimble operation, constantly studying its UX and then making changes based on the findings.
A best practice has emerged among web development and marketing gurus who want to identify how site modifications affect UX, and the ultimate outcomes by which the site’s performance is measured.
A/B Tests
The experimental approach is known as A/B testing, in which two versions of the website are compared, identical except for the one distinct factor being tested.

For instance, you might start with your “normal” website – this is test subject “A” – and then change the background color for test subject “B”.
You then route half of your site’s visitors to landing page A, and the other half to landing page B (there are many useful A/B testing tools that will help you do this).
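If you were wiring up that split yourself rather than leaning on a tool, the bucketing logic is straightforward. Below is a minimal sketch in Python, assuming each visitor carries a stable identifier such as a first-party cookie; the function names and URLs are illustrative, not taken from any particular testing product. Hashing the ID rather than flipping a coin keeps the assignment sticky, so a returning visitor always sees the same variant.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "background-color") -> str:
    """Return 'A' or 'B' for this visitor, consistently across visits."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100            # a number from 0 to 99
    return "A" if bucket < 50 else "B"        # 50/50 split

# Route the request to the matching landing page URL (placeholder paths).
variant = assign_variant("visitor-123")
landing_url = "/landing" if variant == "A" else "/landing-b"
print(variant, landing_url)
```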
Meticulously track the crucial data points for each landing page (your A/B testing tool will do this for you): clicks, time spent on page, and, most importantly, conversions. A conversion could be a visitor who purchases a product from your site, clicks on one of your affiliate links, or signs up for your newsletter or another lead-generating service.
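For a sense of what that tracking boils down to, here is a small, hypothetical sketch of a per-variant tally built from a stream of (variant, event) records. Real tools collect far richer data; the events below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical event stream; a real testing tool records these automatically.
events = [
    ("A", "pageview"), ("A", "conversion"),
    ("B", "pageview"), ("B", "pageview"), ("B", "conversion"),
]

tallies = Counter(events)
for variant in ("A", "B"):
    views = tallies[(variant, "pageview")]
    conversions = tallies[(variant, "conversion")]
    rate = conversions / views if views else 0.0
    print(f"Variant {variant}: {views} views, {conversions} conversions ({rate:.1%})")
```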
Caution!
Most testing tools will help you identify when the difference between the key outcomes on the two pages is statistically significant. This is of vital importance. As with any scientific testing, you have to be careful that you aren’t drawing conclusions from insufficient data or small sample sizes. The only thing worse than making UX decisions without analyzing the data is making a decision based on faulty or incomplete data.

Once you’ve identified a statistically significant “winner” in your A vs. B contest, you implement the winning page across your site.
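If you’re curious what such a significance check looks like under the hood, the sketch below runs a standard two-proportion z-test on conversion counts. In practice your testing tool reports this for you, and the figures here are made up purely for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Return (z statistic, two-sided p-value) for the difference in conversion rates."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)    # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))               # two-sided test
    return z, p_value

# Hypothetical counts: 120 of 2,400 visitors converted on A, 156 of 2,380 on B.
z, p = two_proportion_z_test(120, 2400, 156, 2380)
print(f"z = {z:.2f}, p = {p:.4f}")   # a small p (e.g. below 0.05) suggests a real difference
```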
But you don’t stop there. The most successful A/B testers are constantly running new tests on different landing pages across their sites, checking the effectiveness of page attributes like headlines, calls to action, graphics, fonts, and product descriptions.
You can run multiple A/B tests at one time, but it is crucial to note that you can only run one test per landing page at a time. It is an A vs. B test, not an ABCD test, so you should only have two versions of any one landing page live at once, identical in every way except for the one aspect you are testing. If there are several differences between the A and B pages, even minor ones, you run the risk of causal confusion when the results start coming in; you won’t know for certain which page factor actually caused the difference in outcomes.
A/B tests were once de rigueur for only the big brands and major e-commerce sites; Amazon.com was an early pioneer of the practice and it was critical in the development of that brand into a worldwide retail giant. But these days even the smallest of small businesses are seeing the value in conducting split tests to improve the UX of their sites.
SEO considerations
One final note on A/B tests comes from an SEO perspective. Most A/B testing tools, including the popular Google Website Optimizer, require you to build the landing page variations on different URLs. This can have negative SEO impacts, as search engine crawlers could see the variations as duplicate content and/or get confused about which of the two pages to index.

For that reason, there are some on-site code tactics that mitigate those risks: the test variation should carry the “noindex” meta tag in its header, while the other page (generally the original, de facto page) should get the rel="canonical" reference embedded. This tells the search robots to index only one of the two pages.
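As a rough illustration of how the two URLs might be served with those tags in place, here is a minimal sketch using Flask; the routes, URLs, and markup are placeholders rather than a prescription from any particular testing tool. The variation is kept out of the index with “noindex”, while the original declares itself canonical, so only one URL competes in search results.

```python
from flask import Flask

app = Flask(__name__)

# The original (A) page declares itself canonical.
ORIGINAL_PAGE = """<html><head>
  <link rel="canonical" href="https://example.com/landing">
</head><body>Original landing page (A)</body></html>"""

# The test variation (B) is kept out of the index with a robots noindex tag.
VARIANT_PAGE = """<html><head>
  <meta name="robots" content="noindex">
</head><body>Test landing page (B)</body></html>"""

@app.route("/landing")
def original():
    return ORIGINAL_PAGE

@app.route("/landing-b")
def variant():
    return VARIANT_PAGE
```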
Cody Cahill is a writer and SEO guru for Page One Power, a leading link building company that operates a popular link building blog.