Even if you have 100% confidence in your product, testing will help strengthen its position on the market. Testing reveals what your product can endure, exposes defects, minimizes risks in further product development, and maximizes the company's profits. Choosing the right testing technique is an essential decision that directly affects the outcome. If your objective is to improve the overall user experience, you should definitely try the A/B testing method.
Why do you need A/B testing
A/B testing, also known as a split test, is an experiment that determines which of two webpage (or UI screen, or feature implementation) variations performs better. Based on the testing results, you can confidently adjust the given page (or screen) so that your users can perform targeted actions more easily and conveniently.
A/B testing is conducted by presenting each proposed version to users at random and analyzing the results. This method demonstrates the actual effect of potential changes, enabling data-driven decisions instead of guesswork. Its undeniable advantages are ease of analysis and quick results.
Almost any content or part of a webpage can be A/B tested: pages themselves as well as their individual elements.
A/B testing can be divided into 7 steps, starting with formulating a hypothesis and ending with its confirmation or rejection.
1. Formulate a hypothesis
First, decide on the assumption to test and define your hypothesis clearly. Build it on web analytics and visitor analysis, and research customer preferences as well. Talk to your customers: ask why they love your product and which problems they think should be solved, and analyze support requests. Find the pages with the lowest conversion and use metrics to trace what people generally do on them. Based on these findings, determine the main pain points and bottlenecks.
One important rule: only one change can be tested in a single experiment. If you have formulated several hypotheses, conduct a separate testing session for each of them.
2. Identify targets
Once you have built the hypothesis, select the success criteria for this specific test. Consider again which problem needs to be solved first. Your criterion could be, for example, the number of purchases, the average order total, the number of submitted applications, or the bounce rate. Identify one main target and focus on it while comparing the two web page versions.
3. Select the test item
The next step is to select the element you will test your hypothesis on.
You can test any element of your page, for example:
Headings and subheadings (length, content, position)
Calls to action (length, content, placement)
Buttons (color, size, location, text)
Images (size, content, placement)
Text on page (length, content)
Forms (placement, size, number of fields)
Prices of goods and other items
Choose the element you would like to test and create a copy of the page, then make the necessary changes to it. Remember that only one change can be made per test.
4. Determine the test sample size
It is important to calculate in advance how many people have to visit the page for the results to be statistically significant and trustworthy. Then decide whether you target only repeat customers/users, or new visitors too. If the participants are only existing users, who are already accustomed to the interface, their reaction to changes might differ completely from that of people seeing the interface for the first time.
Your test sample size depends on the magnitude of the change you plan to make: the larger the expected effect, the fewer people you need to sample. You can use an online calculator, e.g. the Optimizely or AB Tasty sample size calculator (www.optimizely.com; www.abtasty.com), to determine the number of participants.
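If you prefer to estimate the sample size yourself rather than use an online calculator, the standard formula for comparing two conversion rates (a two-proportion z-test under the normal approximation) can be sketched in a few lines. The rates below are hypothetical inputs, and the defaults (5% significance, 80% power) are common conventions, not values prescribed by any particular tool:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, min_detectable_effect,
                            alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect an absolute
    lift of `min_detectable_effect` over `baseline_rate`, using the
    normal approximation for a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Hypothetical example: 5% baseline conversion, detect a +1% absolute lift
print(sample_size_per_variant(0.05, 0.01))
```

Note how the required sample grows as the effect you want to detect shrinks, which is the trade-off mentioned above.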
5. Determine the duration of the experiment
Even if you get the right number of visitors in just a couple of days, the test should still run for at least a week: visitor behavior can change on different days of the week, and it is important to capture that. If your target indicator is a purchase, the recommended testing time is 10-14 days, because people do not make purchase decisions immediately. Once you have determined the duration of the test, stick to your plan and do not end it early. Remember that reliable results take time.
6. Run A/A testing
Now that you are prepared for testing, start with A/A testing. It is conducted by randomly splitting users between two identical pages and tracking the results. If the results match, the results of the A/B test will be reliable too. If the A/A results differ, there is no sense in proceeding to A/B testing, since its results would be just as unreliable.
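One common way to check whether your A/A results "match" is to run the same significance test you would later use for the A/B comparison and confirm that it finds no difference. The sketch below uses a two-proportion z-test with made-up conversion numbers; the function name and figures are illustrative, not from any specific tool:

```python
from math import sqrt
from statistics import NormalDist

def aa_p_value(conversions_a, visits_a, conversions_b, visits_b):
    """Two-sided two-proportion z-test p-value for an A/A check.
    A large p-value (e.g. above 0.05) means the two identical pages
    performed the same, so the traffic split and tracking can be trusted."""
    p_a = conversions_a / visits_a
    p_b = conversions_b / visits_b
    p_pool = (conversions_a + conversions_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical A/A result: 495 vs. 510 conversions out of 10,000 visits each
p = aa_p_value(495, 10_000, 510, 10_000)
print("setup looks fine" if p > 0.05 else "check your traffic split")
```

If the A/A check keeps flagging a difference, look for problems in how visitors are assigned to groups or how conversions are counted before running the real test.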
7. Run A/B testing
Once you've gone through all the previous steps, the fun part can finally begin. To run the test, you might find dedicated services useful, such as Google Experiments, which is free and connects directly to Google Analytics. Then be patient and accurate when analyzing the gathered results. Using the right tools and software is highly recommended for precise results.
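Whatever service you use, the core of "analyzing the results" usually comes down to asking whether the difference in conversion between the two versions is statistically significant. A minimal sketch of that check, a two-proportion z-test with hypothetical numbers (not data from any real experiment), looks like this:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visits_a, conversions_b, visits_b):
    """Compare conversion rates of versions A and B.
    Returns (z, p_value); p_value < 0.05 is the usual significance bar."""
    p_a = conversions_a / visits_a
    p_b = conversions_b / visits_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conversions_a + conversions_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: A converted 500/10,000, B converted 580/10,000
z, p = two_proportion_z_test(500, 10_000, 580, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A significant p-value tells you the difference is unlikely to be chance; it does not tell you why version B won, so interpret it together with your original hypothesis.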
Keep in mind that A/B testing should not be treated as the main way to increase conversion. This testing method only allows you to confirm your hypothesis with data, make an educated decision, and perhaps settle some internal arguments. To address other business issues, a more comprehensive testing approach has to be applied.
Whenever you need to test the elements of your website landing page, give our A/B testing template a try. It provides an example of a product launch project, which you can easily adjust to your needs.
How to use this template?
Create a new project and choose the A/B testing template to start. All Infolio templates include some demo content. Feel free to remove it once you've familiarized yourself with the concept.
The tasks are grouped by testing sessions and evaluation: "Test 1", "Test 2", and "Evaluation". Example tasks provide general guidance; edit them or add new tasks to make the process your own.
Group tasks by Status to see the test schedule ("Week 1", "Week 2", "Week 3", and "Week 4"). In this view you can easily add new statuses to your workflow or rearrange existing ones. Change the status of any task by dragging and dropping it to the corresponding column.
To see how tasks are distributed within your team, group the project by Assignee. In this view you can reassign tasks quickly by dragging and dropping them between columns.
If you need any further help or if you have suggestions about how to improve
this template, don't hesitate to let us know!