Effective communication should ensure that all stakeholders in your company receive the information that is relevant to their needs. After a test, the website manager will want to know the increase in conversion rate, while the CEO will want to be informed of how much extra revenue the experiment has generated. So how can you ensure you are communicating the best possible results?
An experiment will affect many metrics: bounce rate, conversion rate, average order value, revenue and so on. To build positive attitudes toward the CRO program, we would recommend you first define your primary, secondary and monitoring goals.
Set up relevant goals
The primary goal determines whether the test won or not. It is THE most important goal of the experiment and tracks how the change(s) impact visitors’ behaviour. The primary goal needs to be set up close to the change you implement to avoid external factors influencing the results (e.g. if you change the location of the CTA on the main landing page, you should track its CTR rather than the revenue the change generates further down the funnel).
Non-optimised primary goal set-up:
In the set-up above, the test is unlikely to reach statistical significance because too many external factors influence the metric between the change and the goal.
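To make this concrete, here is a minimal sketch of a close-to-the-change primary goal: a dedicated click event fired on the relocated CTA itself, so the goal measures exactly the element that changed. It assumes a page instrumented with gtag.js; the .hero-cta selector and the cta_click event name are hypothetical placeholders.

```typescript
// Minimal sketch: fire a dedicated event when the relocated CTA is clicked,
// so the primary goal measures the element that changed, not downstream revenue.
// Assumes gtag.js is already loaded; ".hero-cta" and "cta_click" are hypothetical.

declare function gtag(command: "event", eventName: string, params?: Record<string, unknown>): void;

const cta = document.querySelector<HTMLElement>(".hero-cta");

cta?.addEventListener("click", () => {
  gtag("event", "cta_click", {
    event_category: "experiment",
    event_label: "landing-page-cta-position",
  });
});
```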
If you want to track important business metrics like conversion rate, average order value or revenue, we would recommend setting them up as secondary goals. They will provide valuable insight during the post-experiment analysis.
Primary and secondary goal set-up:
Based on our experience and previous UX research, we recently tested a hover effect on the category pages of one of our ecommerce clients.
We expected to see a significant uplift in the number of users clicking through to the product pages (primary goal) and a smaller increase in ecommerce conversion rate and revenue (secondary goals) further down the funnel.
The results really surprised us, as we observed the opposite trend: the number of users clicking through to the product pages increased by only 1% in the variation, while the ecommerce conversion rate increased by 5% and revenue by 9%.
Setting up goals throughout the customer journey allowed us to conclude that the hover effect qualified the clicks: it made users focus on the products before clicking, so the clicks that did happen were more likely to convert.
Finally, monitoring goals will help you understand how the change impacts engagement metrics such as bounce rate, average visit duration, pages viewed per visit, scroll depth, etc. You can set up as many monitoring goals as you want. However, we would recommend reporting them using your web analytics tool rather than the A/B testing platform.
Primary, secondary and monitoring goals set-up:
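As an illustration of a monitoring goal, scroll depth could be tracked along the following lines and reported through gtag.js in your analytics tool rather than in the testing platform. The 25/50/75/100 milestones and the scroll_depth event name are hypothetical choices.

```typescript
// Minimal sketch of a scroll-depth monitoring goal, reported via the web
// analytics tool (here gtag.js) rather than the A/B testing platform.
// The milestones and the "scroll_depth" event name are hypothetical.

declare function gtag(command: "event", eventName: string, params?: Record<string, unknown>): void;

const milestones = [25, 50, 75, 100];
const reached = new Set<number>();

window.addEventListener("scroll", () => {
  const scrollable = document.documentElement.scrollHeight - window.innerHeight;
  if (scrollable <= 0) return;
  const percent = (window.scrollY / scrollable) * 100;
  for (const m of milestones) {
    if (percent >= m && !reached.has(m)) {
      reached.add(m); // report each milestone only once per page view
      gtag("event", "scroll_depth", { percent_scrolled: m });
    }
  }
});
```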
Link your A/B test platform with Google Analytics
To simplify the reporting and dig into the data, your A/B testing platform should be linked with Google Analytics. This integration will allow you to connect all your experiments' results to the key metrics you already track on the website.
You will also be able to gather new insights and test ideas using segmentation: create segments by device or browser, by user type (new vs returning, logged-in vs non-logged-in, high spender vs low spender), by location or country, by traffic source, etc.
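Platforms like Optimizely and Google Optimize offer native Google Analytics integrations, but as a rough sketch of what such an integration does under the hood, the snippet below attaches the visitor's experiment variation to Google Analytics as a custom dimension so the results can be segmented against your existing metrics. The dimension index (dimension5) and the getVariation helper are hypothetical stand-ins for whatever your platform actually exposes.

```typescript
// Minimal sketch: attach the experiment variation to Google Analytics hits as
// a custom dimension so results can be segmented against existing metrics.
// "dimension5" and getVariation() are hypothetical; use the dimension index
// configured in GA and the API your testing platform actually exposes.

declare function gtag(command: "set", params: Record<string, unknown>): void;

// Hypothetical helper: reads the assigned variation from a cookie the
// A/B testing platform is assumed to set.
function getVariation(experimentId: string): string | null {
  const match = document.cookie.match(new RegExp(`exp_${experimentId}=([^;]+)`));
  return match ? decodeURIComponent(match[1]) : null;
}

const variation = getVariation("hover-effect-category-pages");
if (variation) {
  gtag("set", { dimension5: `hover-effect:${variation}` });
}
```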
Document the test results
In order to communicate the best possible results to all stakeholders, you should create a standard post-experiment analysis document. The report should be updated after every test and include:
- The data and context that led to the test idea
- The test settings (primary, secondary and monitoring goals, URL targeting, audience, and any additional tracking set up in Optimizely or Google Optimize, Google Analytics or Google Tag Manager)
- A visual comparison of the control and the variation(s)
- The experiment results in the A/B testing platform
- The experiment results in Google Analytics
- The learnings. This is the most important section, as it is the interpretation of the results and the key insights generated from the data. This section should influence the decisions your company makes and generate new questions for future testing.
- And finally, the next steps.
Review the CRO program
Every quarter or every six months, you should review the testing program. Thanks to the information included in the roadmap (e.g. audience, pages, primary and secondary goals, test theme), you should know exactly which area to focus on next (you might have run 10 tests on desktop and only 5 on mobile, or 6 tests focussing on removing distraction and only 3 promoting social proof).
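If the roadmap lives in a structured format (a spreadsheet export, for instance), producing these tallies takes only a few lines. The sketch below is purely illustrative; the RoadmapEntry shape is a hypothetical example.

```typescript
// Minimal sketch: tally roadmap entries by an attribute (device, theme, ...)
// to spot under-tested areas. The RoadmapEntry shape is a hypothetical example.

interface RoadmapEntry {
  name: string;
  device: "desktop" | "mobile";
  theme: string;
}

function tallyBy<K extends keyof RoadmapEntry>(
  entries: RoadmapEntry[],
  key: K,
): Map<RoadmapEntry[K], number> {
  const counts = new Map<RoadmapEntry[K], number>();
  for (const e of entries) {
    counts.set(e[key], (counts.get(e[key]) ?? 0) + 1);
  }
  return counts;
}

const roadmap: RoadmapEntry[] = [
  { name: "CTA position", device: "desktop", theme: "removing distraction" },
  { name: "Reviews badge", device: "mobile", theme: "social proof" },
];

console.log(tallyBy(roadmap, "device")); // Map { "desktop" => 1, "mobile" => 1 }
```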
We would also recommend creating an infographic including the following information for executive stakeholders:
- Number of tests run
- Number of active experiments
- Different audiences used in the program
- Revenue or conversion rate uplift generated by the tests
- Best-performing test
- Top-level summary of learnings
- Next steps
You can find below an example of the infographic we created for our client Rentokil, the leader in the pest control industry.
Sharing the results with the company will improve the visibility of the A/B testing program and promote a data-driven culture. If you have any questions or just want some advice on how best to approach conversion rate optimisation, please get in touch with our experts.