What is A/B testing?
We live in an era of data-driven marketing, far from the days when marketers made decisions based on guesswork and intuition and hoped for favorable outcomes. The modern-day marketer has a scientific approach and relies on data. And A/B testing is the best way to remove uncertainty and gut feeling when making marketing or design decisions for websites, ads or other digital campaigns.
Embrace the experimentation mindset. By basing your strategy on data and A/B tests, you’ll be agile, but most importantly, you’ll have guaranteed feedback on what works and what doesn’t. You’ll be better placed to make sound business decisions and to invest time and money in what your visitors actually want.
75% of websites with more than 1 million monthly visitors already run A/B testing programs.
But successful A/B testing requires preparation, education, time and effort to put into practice. You’ll have to create a process, put a framework in place, learn about statistics, set up and learn a new tool and make sure you’re getting accurate results. But the effort and time are worth it, given the potential to achieve your marketing goals.
To help, we’ve brought together in-depth content on A/B testing from the best blogs and experts.
What is A/B testing (or Experimentation/Split testing)?
A/B testing is an online experiment conducted on a website, mobile application or ad to test potential improvements against a control (or original) version. Put simply, it lets you see which variation (version) works better for your audience, based on statistical analysis.
What is split testing?
"Split testing" is often used interchangeably with A/B testing, though it can also refer specifically to split URL testing. In a classic A/B test, both variations live on the same URL. With split URL testing, the changed variation is hosted on a different URL (although this is hidden from your visitors).
What about multivariate testing (MVT)?
Sometimes, you want to test several changes on a page—for example, the banner, header, description and video.
To test all of these elements simultaneously, you use multivariate testing (or MVT). In this case, you have multiple variants generated to try all the different combinations of these changes to determine the best one.
The big downside of multivariate testing is that it requires an enormous amount of traffic to be statistically accurate. Before starting a multivariate testing project, you need to check that your audience is high enough to provide representative results.
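To see why, here is a minimal Python sketch (the four page elements and two variants each are assumptions for illustration): with only two variants per element, four elements already produce 16 combinations, and your traffic is divided between all of them.

```python
from itertools import product

# Hypothetical example: two variants each for the banner, header,
# description and video on the same page.
elements = {
    "banner": ["control", "variant"],
    "header": ["control", "variant"],
    "description": ["control", "variant"],
    "video": ["control", "variant"],
}

# Multivariate testing tries every combination of the changes.
combinations = list(product(*elements.values()))
print(len(combinations))  # 2 * 2 * 2 * 2 = 16 combinations

# Traffic per combination shrinks fast: with 160,000 visitors,
# each combination only receives about 10,000.
visitors = 160_000
print(visitors // len(combinations))  # 10000
```

Each extra element doubles (or worse) the number of combinations, which is why MVT only suits high-traffic pages.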
What is an A/A test?
A/A tests expose two identical versions of an element to two randomly split groups of visitors. If your testing solution is working correctly, the conversion rates of the two groups should be statistically similar; a significant difference signals a problem with your setup.
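A minimal simulation of an A/A test, assuming a 5% conversion rate for both groups: because the two versions are identical, the observed rates should land close together, and a large gap would point to a problem with the traffic split or the tracking.

```python
import random

random.seed(42)

# Hypothetical A/A check: both groups see the identical page, so the
# true conversion rate (assumed to be 5% here) is the same for each.
def simulate_group(n, rate):
    return sum(random.random() < rate for _ in range(n))

n = 20_000
conv_a = simulate_group(n, 0.05)
conv_b = simulate_group(n, 0.05)

rate_a, rate_b = conv_a / n, conv_b / n
# With a correctly working split, the two observed rates should be
# close; a persistent large gap suggests a bug in bucketing or tracking.
print(abs(rate_a - rate_b) < 0.01)
```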
What are the benefits of A/B testing?
Optimize your website continuously to improve the visitor experience and general conversion rate.
Engage visitors around your brand
Give your visitors an exceptional user experience to engage them around your brand and retain them over the long term.
Get to know your visitors better
Analyze how the different elements of your pages impact your visitors’ behavior and learn more about their needs and expectations.
Make decisions based on quantified results
A/B test your hypotheses and reduce all risk factors. Make decisions based on reliable facts and statistics rather than subjective assessment.
Optimize your time and budget
Channel your efforts (and your budget) into what works best for all of your audience, using what you learn from your A/B tests.
With A/B testing you’ll be able to confidently answer these questions:
- Which elements drive sales, conversions or impact user behavior?
- Which steps of your conversion funnels are underperforming?
- Should you implement this new feature or not?
- Should you have long or short forms?
- Which title for your article generates more shares?
How does A/B testing work?
You compare the current version (control) of a page/element against one (or more) variations of it with the changes you want to test. This could be a website page, an element in a page, a CTA, a picture, or bigger changes to the customer journey.
You divide your traffic into equal portions, and visitors are randomly exposed to one variation or the other during the set period when the test is running. The relative performance of each variation (in terms of metrics such as conversions or sales) is then compared and analyzed to determine whether the change(s) are worth implementing.
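The mechanics can be sketched in a few lines of Python (the 5% and 6% conversion rates below are invented for illustration): visitors are split randomly between control and variation, and each group's conversion rate is then compared.

```python
import random

random.seed(1)

# Minimal sketch with assumed conversion rates: the control converts
# at 5% and the variation at 6%. Visitors are split randomly 50/50.
rates = {"control": 0.05, "variation": 0.06}
visits = {"control": 0, "variation": 0}
conversions = {"control": 0, "variation": 0}

for _ in range(50_000):
    group = random.choice(["control", "variation"])  # random assignment
    visits[group] += 1
    if random.random() < rates[group]:               # simulated conversion
        conversions[group] += 1

# Roughly half the visitors land in each group; the observed rates
# approach the true rates as traffic accumulates.
for group in rates:
    print(group, round(conversions[group] / visits[group], 3))
```

In a real test, the observed difference then has to be checked for statistical significance before declaring a winner.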
Dynamic traffic allocation or multi-armed bandit testing
Multi-armed bandit testing (or dynamic traffic allocation) is when the testing algorithm automatically and gradually shifts more of your traffic toward the best-performing variation while the test is still running.
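One common bandit algorithm is epsilon-greedy, sketched below with assumed conversion rates: most traffic goes to the variation with the best observed rate so far, while a small fraction (epsilon) keeps exploring the alternatives.

```python
import random

random.seed(7)

# Epsilon-greedy sketch of a multi-armed bandit. The true conversion
# rates below are assumptions for illustration; the algorithm only
# ever sees the observed rates.
true_rates = {"A": 0.05, "B": 0.08}
shown = {"A": 0, "B": 0}
converted = {"A": 0, "B": 0}
epsilon = 0.1  # fraction of traffic reserved for exploration

def observed_rate(v):
    return converted[v] / shown[v] if shown[v] else 0.0

for _ in range(20_000):
    if random.random() < epsilon:
        choice = random.choice(list(true_rates))     # explore
    else:
        choice = max(true_rates, key=observed_rate)  # exploit the leader
    shown[choice] += 1
    if random.random() < true_rates[choice]:
        converted[choice] += 1

# Over time, most traffic is allocated to the better variation "B".
print(shown)
```

The trade-off versus a fixed 50/50 split: the bandit loses fewer conversions during the test, but its unequal samples make classic significance analysis harder.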
A/B testing statistics and how to understand them
A/B testing is based on statistical methods. While you don’t need to know all the math involved in analyzing your results, having a basic knowledge of statistics will improve your chances of success.
There are two main statistical methods used by A/B testing solutions. One isn’t better than the other, they simply have different uses.
The frequentist method
This approach expresses the reliability of your results as a confidence level: at 95% or above, there is only a 5% probability that the observed difference is due to chance. But this method has a downside: it has a "fixed horizon", meaning the confidence level is only meaningful once the test has run to its predetermined sample size. Checking the results early inflates the risk of false positives.
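A minimal sketch of a frequentist two-proportion z-test in plain Python, with invented numbers (500 conversions out of 10,000 visitors for the control, 560 out of 10,000 for the variation):

```python
import math

# Hypothetical numbers: control = 500 conversions / 10,000 visitors,
# variation = 560 / 10,000.
conv_a, n_a = 500, 10_000
conv_b, n_b = 560, 10_000

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se                               # z statistic

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A 95% confidence level corresponds to p < 0.05.
print(round(z, 2), round(p_value, 3))
```

In this example, despite a 12% relative lift, the p-value of about 0.058 falls just short of the 95% threshold (p < 0.05), so the test would need more traffic before a winner could be called.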
The Bayesian method
This approach provides a probability of each variation winning from the moment the test starts, so there is no need to wait until the end of the test to spot a trend and interpret the data. It has its own challenges: you need to know how to read the estimated confidence interval given during the test. With every additional conversion, confidence in the probability of a reliable winning variant improves.
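With the same invented numbers as above, a Bayesian analysis models each conversion rate as a Beta posterior and estimates the probability that the variation beats the control by Monte Carlo sampling; unlike the frequentist confidence level, this probability can be read at any point during the test.

```python
import random

random.seed(3)

# Same hypothetical numbers: Beta(1, 1) priors updated with the
# observed conversions give a posterior distribution for each
# variation's true conversion rate.
conv_a, n_a = 500, 10_000
conv_b, n_b = 560, 10_000

samples = 50_000
wins = 0
for _ in range(samples):
    rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
    rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
    if rate_b > rate_a:
        wins += 1

# Estimated probability that the variation truly beats the control.
print(round(wins / samples, 2))
```

Here the variation beats the control in roughly 97% of posterior draws, a statement many teams find easier to act on than a p-value.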
A/B testing: full stack or client-side
The best approach to choose will depend on company structure, internal resources, the development life-cycle and the complexity of the experiments, as this blog will explain.
- Client-side experimentation and personalization do not require advanced technical skills, making them well suited to digital marketers. They enable teams to be agile and to run experiments very quickly, avoiding bottlenecks and getting faster test results.
- The server-side approach requires technical resources and more complex development, but it enables more powerful, scalable and flexible experimentation.
Brands should be able to use both approaches, so that all their teams can take part in the optimization process and each project runs under the best possible conditions.
The client-side approach: increased flexibility for marketing teams
In a client-side environment, web pages are modified directly in the visitor's browser. The source code of the original page is sent from the server to the end user's browser, and a script applies the changes on the fly (whether the browser is Chrome, Firefox, Safari, Opera, etc.) to display the modified version of the page.
With client-side testing, you can create and deploy front-end tests and personalizations very quickly, such as changing text and CTA button placement, switching blocks around or adding a pop-in to improve usability.
The server-side approach: maintain control over your experiments
Working server-side means that optimization hypotheses are implemented in the back-end architecture rather than in the visitors' browser, as is the case with the client-side approach. The changes are generated directly on the server before the HTML page is delivered.
With a server-side testing approach, you control all the elements of your tests and experiments from directly within the coding environment. You can therefore run more in-depth tests and personalizations, on the architecture or the running of your website, with more freedom over test design.
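A common server-side pattern is deterministic bucketing, sketched below (the experiment name and 50/50 split are assumptions for illustration): hashing the visitor ID means the same visitor always receives the same variation on every request, with no browser script involved.

```python
import hashlib

# Server-side sketch: visitors are assigned deterministically by
# hashing their ID together with the experiment name, so the same
# visitor always sees the same variation on every request.
def assign_variation(visitor_id, experiment="checkout_flow_v2"):
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # uniform bucket from 0 to 99
    return "control" if bucket < 50 else "variation"

# The assignment is stable: repeated calls return the same group.
print(assign_variation("visitor-123") == assign_variation("visitor-123"))  # True
```

Because the split is a pure function of the visitor ID, no state needs to be stored, and including the experiment name in the hash keeps assignments independent across experiments.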
Hybrid experimentation: bringing together client and server-side testing
Brands can create and run hybrid experiments that bring marketers and developers together, using the same tools and without the need to integrate different products or choose between a client-side or server-side approach.
How to put in place an A/B testing strategy
1. Measure and analyze your website performance to identify what can be optimized
- Before optimizing the visitor experience and redesigning your website, it is essential to identify the weak points and areas to optimize on your pages.
- Every website is different and brands must develop their strategy based on the nature of their audience, their goals and the results obtained after analyzing their website performance.
- To identify friction points on your website, you can use behavioral analysis tools such as click tracking, heatmaps, etc.
2. Formulate your optimization hypotheses
Once you’ve identified the friction points that are stopping your visitors from converting on your website, formulate hypotheses to figure out what experiments to put into place.
- Observation: the sticky bar you installed is only rarely used by website visitors.
- Hypothesis: perhaps the icons are not clear enough; adding information could improve this weak point.
- Planned experiment: adding wording below each icon.
3. Prioritize your A/B tests and establish your roadmap
To implement an effective A/B testing roadmap and obtain convincing results, it is vital to prioritize your actions. With the PIE framework, created by WiderFunnel, you can rank your test ideas according to three criteria rated 1 to 10 to determine where to start:
- Potential: on a scale of 1 to 10, how much do you think this page can be improved?
- Impact: what is the value of the traffic (volume, quality) on this page?
- Ease of implementation: how easy is it to implement the test (10 = very easy, 1 = very difficult)?
By averaging the three scores, you'll know which tests to launch first. (There are, of course, other prioritization frameworks; feel free to use the one you prefer.)
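The scoring can be sketched as follows (the test ideas and their scores are invented for illustration):

```python
# PIE prioritization sketch: each test idea gets three scores from
# 1 to 10, and the average determines its priority in the roadmap.
ideas = {
    "Simplify checkout form": {"potential": 8, "impact": 9, "ease": 5},
    "New homepage banner":    {"potential": 6, "impact": 7, "ease": 8},
    "Rewrite product copy":   {"potential": 7, "impact": 5, "ease": 8},
}

def pie_score(scores):
    return round(sum(scores.values()) / len(scores), 1)

# Highest average score first: that test launches first.
ranked = sorted(ideas, key=lambda k: pie_score(ideas[k]), reverse=True)
for idea in ranked:
    print(idea, pie_score(ideas[idea]))
```

In this invented example, "Simplify checkout form" (7.3) would be tested before "New homepage banner" (7.0) and "Rewrite product copy" (6.7).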
4. Analyze your A/B tests and learn from your results
It’s crucial to analyze and interpret your test results. After all, A/B testing is all about learning and making decisions based on the analysis of your experiments.
For effective analysis of your results:
- Learn to recognize "false positives" (apparent wins caused by chance)
- Establish representative visitor segments
- Don’t test too many variations at the same time
- Don’t give up on a test idea after one failure
What elements should you A/B test on your website?
You can A/B test almost anything on your website, from messaging to design to navigation elements.
A/B testing: best and worst practices
Best practices for successful A/B tests
- Analyze the behavior of your visitors to formulate optimization hypotheses
- Set clear goals and associated KPIs
- Prioritize your tests and establish your roadmap
- Create teams made up of varied profiles and able to grasp the different aspects of an A/B testing project
- Combine full-stack and client-side to involve all the teams in your optimization process
- Communicate the results internally to cultivate a culture of optimization and continuously improve your practices
A/B testing mistakes
- Starting with overly complicated tests and not taking “quick wins” into account
- Not validating your hypotheses with insights on the behavior of your visitors before launching your tests
- Testing without a defined process or plan
- Launching your tests without prioritizing
- Optimizing the wrong KPIs
- Not testing continuously to draw lessons and understand the behavioral changes in your visitors
Adopting A/B testing in your company
A culture of experimentation
It’s crucial to adopt a culture of experimentation in-house if you are to put in place an effective A/B testing strategy on your website. To do this, surround yourself with the right people:
Personalization specialists and project managers have solid digital experience. They have watched the sector evolve and understand its challenges. Their role is to structure the strategy and manage the project by coordinating the different profiles and resources.
The developers and designers are in charge of the operational aspect of the personalization strategy. The developer handles the integration and the technical aspect of the experiments. As for the designer, they must have sound UX knowledge in order to create personalized experiences adapted to visitors’ needs.
The different organizations to lead A/B testing projects
To set up your dedicated experimentation team, you can draw inspiration from the three main types of organization structures that exist in companies today:
- A centralized structure that drives the A/B testing strategy for the entire company and prioritizes experiments according to each team’s needs;
- A decentralized structure with experts in each team to run several projects simultaneously;
- A hybrid structure with an experimentation unit and experts in each team.
The essential features of an A/B testing platform
Setting your goals and your variations, preparing your tests, analyzing your results: what are the essential features of your A/B testing platform?
There are many tools available to get the best out of your optimization practice. Choosing a comprehensive solution that offers customized support is also an important aspect.
Find all the essential features to optimize your A/B testing practice in our checklist.
Increase your knowledge and learn how to implement effective A/B tests, interpret and analyze their results, and build an experimentation roadmap to support your UX and business goals thanks to our online training course. At the end of the course, test your knowledge to earn a certificate.