The benefits of 3D and Augmented Reality (AR) are now evident across today’s eCommerce market. Retailers that adopt them see higher conversion rates, revenue per visit, and average order value. Check out our case studies here.
But these metrics alone do not answer the most important questions for our customers:
- Am I seeing a positive return on this investment?
- Is 3D/AR converting non-buyers into buyers?
- Is this a supplemental solution for those buyers already on the purchase path?
The most efficient and effective way to answer these questions is through an A/B test.
A/B testing, also known as split testing, is a user experience methodology in which analysts compare two versions of a web page against each other to determine the better performer. An A/B test can be as simple as an on/off comparison, or it can be expanded to test various deployments and calls to action in order to get optimal results. We commonly run both kinds of tests, and recently completed the world’s largest 3D/AR split test, with over 1 million unique users visiting over 4,400 Vertebrae-enabled SKUs. Here are a few best practices we have identified while split testing:
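An on/off split like the one described above is usually driven by deterministic bucketing: each user is hashed into the same variant on every visit. The sketch below illustrates the idea; the function and variant names are hypothetical, not part of any specific platform.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "ar-viewer-test") -> str:
    """Deterministically bucket a user into a 50/50 on/off split.

    Hashing the experiment name together with the user ID keeps the
    assignment stable across visits and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "3d-ar-on" if int(digest, 16) % 2 == 0 else "3d-ar-off"
```

Because the assignment is a pure function of the user ID, no variant lookup table has to be stored, and the same user always sees the same experience.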
Nothing matters until significance
Significance is the statistical term for having enough data to reduce the likelihood of false positives. This is especially important in the eCommerce world, since so many factors affect a SKU’s daily performance (weekends, promotional sales, marketing/social media campaigns, etc.) and can create ‘noise’ in the test. Decisions or actions should not be taken from the split test data until significance has been reached. The best practice is to decide on a sample size in advance and wait until the experiment is over before you start believing any of the data.
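Deciding on a sample size in advance is a standard power calculation. As a rough sketch (this is a textbook normal-approximation formula for comparing two conversion rates, not a formula from the article), the visitors needed per arm at 95% confidence and 80% power can be estimated like this; the baseline rate and lift values are illustrative assumptions:

```python
import math

def sample_size_per_variant(base_rate: float, lift: float,
                            z_alpha: float = 1.96,   # two-sided 95% confidence
                            z_beta: float = 0.84) -> int:  # 80% power
    """Visitors needed per arm to detect a relative lift in conversion rate,
    using the two-proportion z-test normal approximation."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. detecting a 10% relative lift on a 2% baseline conversion rate:
n = sample_size_per_variant(base_rate=0.02, lift=0.10)
```

Note how the required sample grows quickly as either the baseline rate or the expected lift shrinks, which is why low-traffic SKUs take so long to reach significance.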
Pick winning SKUs
When ramping up an A/B test, it is important to select SKUs with enough traffic and transactions to reach significance as quickly as possible. We’ve seen significance reached in as little as one week with high-volume SKUs, while low-volume SKUs can take months. You also will want to avoid SKUs that are commonly sold out!
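The gap between one week and several months falls directly out of the arithmetic: with a 50/50 split, each arm only sees half of a SKU’s daily visitors. A quick back-of-the-envelope estimate (the sample-size and traffic figures below are illustrative assumptions, not measured data):

```python
import math

def estimated_days(required_per_arm: int, daily_sku_visitors: int) -> int:
    """Days until each arm of a 50/50 split collects the required sample."""
    return math.ceil(required_per_arm / (daily_sku_visitors / 2))

# Assuming ~40,000 visitors are needed per arm:
# a high-volume SKU (12,000 visits/day) finishes in about a week,
# while a long-tail SKU (800 visits/day) takes over three months.
fast = estimated_days(40_000, 12_000)   # → 7
slow = estimated_days(40_000, 800)      # → 100
```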
Don’t overlap your test
It may seem appropriate to run multiple overlapping A/B tests at once in order to increase the velocity of testing. However, overlapping tests can interfere with each other, which can lead to selecting the underperforming variant in both tests. Unless the two tests have a negligible effect on each other, it is best to avoid overlapping.
Separate by device
The user experience is inherently different between desktop and mobile devices. This is especially important when testing the effectiveness of 3D and AR. The best approach is to evenly split desktop and mobile traffic between the test variations and analyze the performance of each segment separately.
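Analyzing each segment separately amounts to grouping outcomes by (device, variant) before computing conversion rates, rather than pooling desktop and mobile together. A minimal sketch, assuming session records of the form (device, variant, converted); the field names and sample data are hypothetical:

```python
from collections import defaultdict

# Hypothetical session log: (device, variant, converted)
sessions = [
    ("mobile",  "3d-ar-on",  True),  ("mobile",  "3d-ar-off", False),
    ("desktop", "3d-ar-on",  False), ("desktop", "3d-ar-off", True),
    ("mobile",  "3d-ar-on",  True),  ("mobile",  "3d-ar-off", True),
]

def conversion_by_segment(sessions):
    """Conversion rate per (device, variant) pair, so each device segment
    can be compared across variants on its own."""
    counts = defaultdict(lambda: [0, 0])  # (device, variant) -> [conversions, visits]
    for device, variant, converted in sessions:
        bucket = counts[(device, variant)]
        bucket[0] += int(converted)
        bucket[1] += 1
    return {key: conv / visits for key, (conv, visits) in counts.items()}
```

Reading mobile and desktop rows separately prevents a strong mobile AR effect from being diluted (or masked) by desktop traffic in the pooled numbers.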
The power of A/B testing lies in the focused understanding it provides of your customers’ preferences and shopping behaviors. Even a small lift in conversion on a product can drive an outsized impact on revenue. Shoot us a message to find out more about our unique Axis platform testing features for your products.