Does A/B Testing Matter for B2B Ecommerce? If So, What Should You Test?

Ecommerce | December 12, 2018

The purpose of A/B testing, or split testing, is to find the best solution; a goal relevant to wholesalers and retailers alike. Yet it seems to be an afterthought for those implementing B2B ecommerce. The problem is that many people believe design doesn’t matter in B2B, so A/B testing doesn’t matter. They’re not far off target. I’d say what is traditionally thought of as “design” often matters less. However, you don’t test a design; you test your customer’s response to the design and, more importantly, to the user experience, which is much broader.

A/B testing is not purely about proving out particular user experiences (UX). You may make business decisions on your assumptions about the mental characteristics and attitudes of your target audience. With a test, you have a chance to validate your hypothesis.

Does a change make a customer or buyer more or less likely to purchase (conversion rate) or spend more money when they buy (average order value)? It can be helpful to tackle user experience and psychological factors by breaking them up into four thematic areas.

Clarity and Ease of Use
Just as you don’t want to waste your customer’s time with a page that takes ten seconds to load, you don’t want to design a page that takes ten seconds to understand or navigate. Test the navigation, structure, and functionality with your target audience to reduce the time it takes buyers to find what they need. You should also test to confirm the clearest and most compelling value propositions so a potential customer wants to take action on your site.

Trust
If users are going to provide their confidential information or commit their financial resources, they need to trust your site. A professional design is a good start, but testing can reveal low-cost UX or content changes that instill confidence and build trust, which in turn leads to higher conversion rates.

Tools and Features
You can test which tools (features) help your customers find the right product, which get in the way, and which send the customer down an unproductive path. Oftentimes, features that are meant to aid users unintentionally create confusion or distract them from converting. Measuring revenue per visitor and tracking engagement on key pages for the original experience versus an alternative presentation lets you understand both the intermediate effects and the bottom-line impact.
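A minimal sketch of the revenue-per-visitor comparison described above, assuming hypothetical event data as (variant, visitor_id, revenue) tuples; in practice these numbers would come from your analytics tool’s export, and the function name is my own:

```python
from collections import defaultdict

def revenue_per_visitor(events):
    """Compute revenue per unique visitor for each experiment variant.

    `events` is an iterable of (variant, visitor_id, revenue) tuples —
    a hypothetical shape for illustration only.
    """
    visitors = defaultdict(set)   # unique visitors per variant
    revenue = defaultdict(float)  # total revenue per variant
    for variant, visitor_id, amount in events:
        visitors[variant].add(visitor_id)
        revenue[variant] += amount
    return {v: revenue[v] / len(visitors[v]) for v in visitors}

# Made-up sample data: two visitors in each experience.
events = [
    ("control", "u1", 120.0), ("control", "u2", 0.0),
    ("variant", "u3", 200.0), ("variant", "u4", 80.0),
]
print(revenue_per_visitor(events))
# → {'control': 60.0, 'variant': 140.0}
```

A higher revenue per visitor in the variant is the “bottom line” signal; per-page engagement metrics fill in the intermediate picture.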

Persuasion
There are a number of must-have persuasive techniques for any ecommerce site, as well as best practices for a number of industries. Certain persuasive techniques are less relevant to a B2B buyer than to a B2C consumer, so this may not be a major area of testing. That said, we have often tested where and how often clients place promotions. If you can find ways to improve urgency, you can find ways to test them.

A bad UX gives buyers a reason to look around elsewhere. If you have enough traffic to reach statistical significance, testing will allow you to explore ways to keep customers moving through your funnel.
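“Enough traffic to reach statistical significance” can be estimated before you start. Here is a hedged, back-of-the-envelope sketch using the standard normal-approximation formula for a two-sided two-proportion test; the function name and defaults are my own, not from any particular testing tool, and a real power calculation should come from your tool or statistician:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, min_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect an absolute
    lift in conversion rate with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p = baseline_rate + min_lift / 2               # midpoint of the two rates
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / min_lift ** 2)

# E.g. a 2% baseline conversion rate, hoping to detect a lift to 2.5%:
n = sample_size_per_variant(0.02, 0.005)  # roughly 14,000 visitors per variant
```

The takeaway: small absolute lifts on low-traffic stores can require tens of thousands of visitors per variant, which is why running too many tests at once on a modest site is risky.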

We’ve covered the “why” of A/B testing, but how do you start?

  1. Determine what resources you have, in both technology and staffing, to run an optimization program. Do you have an optimization testing tool, such as Optimizely or Dynamic Yield? Does this tool integrate with your ecommerce platform? Will you use Google Analytics to track data? And do you have the headcount and expertise to run the program? If you don’t have the resources, consider a partner to accelerate and manage your optimization program.
  2. With the necessary stakeholders, prioritize your primary and secondary KPIs. Your primary KPIs could be to generate more sales, increase conversions, or reduce the bounce rate. Secondary KPIs may be increasing the click-through rate on the product display page or the number of loyalty program registrants.
  3. Once you have your KPIs, you can determine which testing concepts you’d like to apply to the user experience. These testing concepts need to be realistic and prioritized. If your online store doesn’t have a significant amount of traffic and you run too many tests at once, you run the risk of never reaching statistical significance. While our primary focus in this article is A/B testing, there are multiple types of tests you can run, such as multivariate testing.
  4. Once you’ve nailed down your testing concepts, the next step is creating your hypothesis (the “if/then” statement). For example: if you add customer reviews to the product display page, then more shoppers will add the item to the shopping cart. Based on the hypothesis, you will need to create at least two different experiences to test against one another: the control (the current experience) and a variation.
  5. Finally, once your test reaches statistical significance, you can analyze the results and determine next steps or promote the winning experience to 100 percent of customers.
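The analysis in the final step can be sketched with a simple significance check. Below is one common approach, a two-sided two-proportion z-test on conversion counts; testing tools like Optimizely run their own (more sophisticated) statistics for you, so treat this as illustrative, with made-up counts:

```python
from math import sqrt
from statistics import NormalDist

def ab_test_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test p-value, given conversions
    out of total visitors for the control (a) and variation (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: control converts 200 of 10,000 visitors (2.0%),
# the variation converts 260 of 10,000 (2.6%).
p = ab_test_pvalue(200, 10_000, 260, 10_000)
print(f"p-value = {p:.4f}")  # well below 0.05 here, so the variation wins
```

With a p-value under your chosen threshold (0.05 is conventional), you would promote the winning experience to 100 percent of customers; otherwise, keep the control and iterate on the hypothesis.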


Kari Mayhew is the Marketing Manager for iCiDIGITAL, and has contributed engaging and educational ideas to a variety of enterprises, both online and offline, for well over a decade. She can also be found experimenting in the kitchen, running on dirt trails, and cheering for her favorite teams.