How to Start an A/B Testing Program

Committing to a rigorous A/B testing program yields a lot of value and growth potential. But it’s different from run-of-the-mill product management, and it requires a different mindset and a different setup. You need at least 2 people performing a few distinct functions. Then there’s a formula to follow for developing and prioritizing a backlog.

Elements needed:

  1. Product manager: Creates goals, collects data to review funnel health, generates hypotheses, sets up metric tracking requirements, prioritizes tests, and writes tickets. May also split traffic and set up the test variations through testing software (alternatively handled by the developer; see the bucketing sketch below).

  2. Designer: Creates any new designs needed. This is generally only necessary when there’s a sharp departure from the current design; color and text changes, and even moving elements around, can mostly be accomplished by non-designers.

  3. Developer: Takes requirements and designs (and usually caffeine) and turns them into a testable experience. May split traffic and set up the test variations if the PM does not. Implements tracking requirements. Launches the test.

  4. QA: Tests the control and all variations for usability and functionality on every device and platform. Verifies that metric tracking works.

Your mileage may vary on who you need and what functions they serve; many of these functions can be handled by the same person, depending on the size and specialization of your team. A PM can double as a designer (depending on the PM’s skills and the complexity of the design changes). A PM can do QA; so can a developer. The minimum is 2 people: always have someone independent QA the work. If a PM can accomplish functions 1-3, someone else does 4. A PM can do 1, 2, and 4 while a developer does 3. You get the picture.
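If your team ends up splitting traffic itself rather than relying on testing software, the usual technique is deterministic bucketing: hash a stable user ID together with an experiment name so each user always lands in the same variation. Here’s a minimal sketch in Python; the experiment name and user ID are hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "variation")) -> str:
    """Deterministically bucket a user into a test variation.

    Hashing user_id together with the experiment name means each user
    always sees the same variation, and different experiments bucket
    users independently of one another.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical usage: the same user gets the same variation every session.
print(assign_variant("user-12345", "cta_color_test"))
```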


Developing Your Program

Honestly, the first step is a doozy, and it’s where A/B testing differs most dramatically from ordinary product management.

Set an Intention

Hint: this often means you intentionally set aside your ego. The biggest impediment to real, meteoric success in A/B testing is someone on the team, usually in a position of authority, who doesn’t believe the test results, or who doesn’t care what the data says and wants the look, feel, and function to align with their vision. If that’s the case, A/B testing will do you no good. So the first step in any A/B testing program is for everyone to hold hands, kumbaya, and agree to follow the data wherever it leads, because you want results, not a certain look.

    1. Hard Truth #1: You are not the user. It does not matter one iota what you personally find appealing or useful; it only matters whether the plurality of your users find it appealing and useful. If anyone needs inspiration to adopt this mindset: this is what Amazon does. Want to be anywhere close to Amazon’s success? Follow the data.


Set some goals. Once everyone is on the same page, pick a single goal to achieve. More clicks? Sales? Leads? Email subscriptions? Time on site? Pages per session? Return visitors? Likes? Whatever it is, know what it is and how to measure it from the most accurate data source available.

    1. Hard Truth #2: You only get to choose 1 metric to optimize at a time. You can retarget to new and different goals once this one is achieved. Do not, under any circumstances, choose 2 goals. If you do, we will send 3 ghosts to show you the error of your ways.


    1. Choosing 1 goal does not mean you put on blinders regarding the impact of your testing on other important metrics. The impact IS important, but that’s part of the evaluation stage. Know what all the critical metrics are and measure them, but only optimize for one at a time (a sketch of this follows below).
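One way to make the single-metric rule concrete is to write the test plan down as data: one primary metric you optimize, plus the guardrail metrics you monitor for collateral damage. A minimal sketch; every metric and experiment name here is a placeholder:

```python
# One primary metric to optimize; guardrails are watched, never optimized.
experiment_plan = {
    "name": "checkout_cta_test",                # hypothetical experiment
    "primary_metric": "purchase_conversion_rate",
    "guardrail_metrics": [                      # evaluated after the test
        "average_order_value",
        "pages_per_session",
        "return_visitor_rate",
    ],
}

# Enforce Hard Truth #2 in code: exactly one goal at a time.
assert isinstance(experiment_plan["primary_metric"], str)
```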

Funnel review/collect data. Where in the process are you losing people, and how can you stop losing them? That’s the essence of this step. First identify the where (in what step, on what page, etc.), and then develop some ideas about how to entice more users to complete the action. (See the last section for the top 3 test changes to achieve results, and the funnel math sketch after this list.) You lose most users in one of these places:

    1. Committing to the action: “cart”/action abandonment is real. Analytics data will show you where it happens, and surveys can help you understand why your users chose not to commit.

    2. Finding what they need (browsing/search): Do you have a lot of browsing activity but not a lot of your goal action? That often means users are searching but not finding.

    3. Having enough information to make a choice (Is there pricing? Is the process transparent? Is the incentive enough for them to commit?)

    4. Connecting with them so they come back later (Is there an incentive to provide contact information so you can help them continue toward their goal even if they don’t commit in this session? Is there an opportunity to win them back as a return visitor?)
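Finding the “where” can be simple arithmetic on your analytics export. A minimal sketch, assuming you can pull unique-user counts for each funnel step (all step names and numbers below are made up):

```python
# Unique users reaching each funnel step (hypothetical analytics export).
funnel = [
    ("landing", 10_000),
    ("search_results", 6_200),
    ("product_page", 3_100),
    ("cart", 900),
    ("purchase", 410),
]

# Step-to-step conversion shows where you are losing the most people.
for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    rate = next_users / users
    print(f"{step} -> {next_step}: {rate:.1%} continue "
          f"({users - next_users:,} lost)")
```

In this made-up funnel, the product_page-to-cart step loses the biggest share of users, so that is where the first hypotheses would aim.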

Generate some hypotheses and test ideas. This is the fun part: brainstorming! Based on what you see in the funnel health data, what are some things you can play with to improve your chosen metric? Be bold and take calculated risks here. This is where you may need to recommit to your intention to set aside ego and ignore what “should” be true or what you want to be true. Some examples (full of Hard Truths):

    1. “But I like this messaging.” Personal bias, my favorite. Back to Hard Truth #1: You are not the user. Maybe there is more compelling messaging. Test it.

    2. “But I don’t like orange CTA buttons.” More personal bias! (See how much we’re growing today?) People click warm-colored CTA buttons (particularly yellow and orange, YMMV) more than others, regardless of the color palette used. Users be users. Accept that and test it.

    3. “But I really like the carousel like it is.” Mmmm, a pattern is emerging. You see why it was so important to set aside what we want and like and align ourselves with the scientific method? Your design choices, image choices, color palette, narrative structure, funnel path, search method, checkout pattern, etc. may be confusing or annoying people; maybe the timing or functionality is off, or maybe the content is not helping your users. Test other options and functionality settings.

    4. “But I spent so much money developing this page.” Oh, hi, Sunk Cost Fallacy, I didn’t see you standing there. Hard Truth #3: There’s always room for improvement, and suggesting so doesn’t mean anyone is calling the baby ugly. You spent money on it? Great! Consider that the first draft; testing will refine it. You don’t have to throw the baby out with the bath water.

Prioritize. Choose the lowest-cost, highest-potential-reward tests and do those first. This can be napkin math (see the scoring sketch below); it doesn’t have to be researched and annotated with a full 5-page bibliography of references and sources about why it’s a good idea. If researching the possible outcome takes longer than running the test, it’s way too much. Just test it. Even tests where the variations break even teach you something, so the time isn’t wasted.

    1. It’s a good idea to prioritize tests on the area of your funnel or experience where you have a hunch that users are exiting the most. 
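A common napkin-math version of prioritization is ICE scoring: rate each idea’s impact, confidence, and ease on a 1-10 scale, multiply, and sort. A sketch with hypothetical backlog items:

```python
# Napkin-math prioritization: ICE score = impact * confidence * ease.
backlog = [
    {"idea": "Shorten checkout form",    "impact": 8, "confidence": 6, "ease": 7},
    {"idea": "Orange CTA button",        "impact": 5, "confidence": 7, "ease": 9},
    {"idea": "Remove homepage carousel", "impact": 7, "confidence": 5, "ease": 8},
]

ice = lambda item: item["impact"] * item["confidence"] * item["ease"]
for item in sorted(backlog, key=ice, reverse=True):
    print(f"{ice(item):4d}  {item['idea']}")
```

In this example the checkout-form test scores highest, so it runs first.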

Develop your first test and go! 
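And when the results come in, “follow the data” means checking whether the difference between control and variation is larger than chance would explain. Here is a minimal two-proportion z-test using only the standard library; the counts below are hypothetical:

```python
import math

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function, doubled for a two-sided test.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical results: control converted 410/10,000; variation 495/10,000.
p = two_proportion_p_value(410, 10_000, 495, 10_000)
print(f"p-value: {p:.4f}")  # under the conventional 0.05 bar = significant
```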

Generally, the most effective test types to run:

  1. Simplify your funnel by removing complexity.

  2. Try different versions of messaging and/or CTA text.

  3. Remove, move, or optimize interruptive design elements that don’t immediately funnel the user where you want them to go (carousels, videos, etc.).

We’ll go into those in more detail in another post. But for now, happy testing!
