10 steps to get your fake door testing right

Fake door tests are a quick and straightforward way to gather valuable consumer insights for making data-driven strategic product decisions. At their core, you need three things: a set of ads, a landing page and an idea of what you want to test.

However, to get the most out of these tests, there are certain steps you should follow to make sure your tests have:

  • A defined insight they are trying to gather
  • A purpose to what is being tested
  • An understanding of how each insight gathered is relevant to the product

In this blog, we will go through the process we use with our corporate & enterprise clients to ensure they are getting the most relevant and reliable insights to inform their product decisions.

1. The test's mission

The first piece in the landing page test journey is understanding the problem you are trying to solve. This will be the foundation and north star for everything you test, so getting it right is essential.

Luckily, our lovely team have created an equation to get you started and into the right frame of mind when thinking about your problem:

‘We are trying to decide X. To help us make this decision, we are going to test Y and analyse the data points Z.’ 

Let’s break this down.

X - This is going to be the main reason you are testing. For example:

‘We are trying to decide if we should launch a blue version of our product.’

This is the problem you are trying to solve, or the question you are trying to answer, and the reason why answering it matters. It clarifies the goal and helps you direct all the testing around it. What it shouldn't do is influence any of the results.

There is an adage: don't get attached to the idea, get attached to the problem. In other words, the outcome of the landing page tests should leave you with an answer to this starting question.

Y - This is where you are going to decide what areas you are going to test. For example:

‘We are going to test 3 different colours of our product against one another’

This helps you restrict what you are going to test to the main points that help answer your X. If you want to see if launching a blue version of your product makes sense, there is no point in testing for pricing sensitivity.

By restricting your tests, you make them more focused and, therefore, the insights much more reliable. We will talk more about this, and why it matters, in the design stage.

Z - This is where you are defining your areas of success, so to speak. It’s the data that is going to have the most impact on your decision making. For example:

‘Analyse the data point of double opt-in conversions’

There are, of course, many different data points you can measure against, each with its own impact on your final decision. If you were testing different target audiences, you might want to focus more on the cost-per-conversion rather than the actual conversions as you can then forecast ‘if we spend € on this audience, we will return €€€’.
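The cost-per-conversion arithmetic described above is simple to sketch out. Here is a minimal, hypothetical example in Python; the spend figures, conversion counts and function names are all invented for illustration:

```python
# Illustrative sketch: comparing two hypothetical audiences by
# cost-per-conversion, then forecasting return on a larger budget.

def cost_per_conversion(spend: float, conversions: int) -> float:
    """Average ad spend needed to produce one conversion."""
    if conversions == 0:
        return float("inf")
    return spend / conversions

def forecast_conversions(budget: float, cpc: float) -> float:
    """Rough forecast: 'if we spend X on this audience, we get Y conversions'."""
    return budget / cpc

# Two hypothetical audience tests with the same spend
audience_a = cost_per_conversion(spend=500.0, conversions=40)  # 12.50 per conversion
audience_b = cost_per_conversion(spend=500.0, conversions=25)  # 20.00 per conversion

# With a 2,000 budget, audience A is forecast to deliver more conversions
print(forecast_conversions(2000.0, audience_a))  # 160.0
print(forecast_conversions(2000.0, audience_b))  # 100.0
```

The same pattern works for cost-per-lead or cost-per-click; the point is to pick the one metric your decision hinges on before the test runs.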

What we are doing here is defining the metric that really matters for this test and that will tell us whether the test succeeded.

To put this example together, with a little refining, you would be looking at a final test statement of:

‘We are trying to decide if we should launch a blue version of our product. To help us make this decision, we are going to test 3 different colours of our product against one another and identify the most attractive ones by comparing conversion rates on the button click “buy now”’

2. Hypothesis

Now you have your test's mission, you need to define what you believe to be true based on your previous market research.

Where the mission is the overall idea of what you are testing and why, the hypothesis is you putting down what you think will happen. This is an important step: without it, you can't easily reference back to previous research, and your test won't have any defined measure of success.

An example of a hypothesis could be:

‘From our consumer insights research, we believe that launching a blue version of our product will increase sales by 25%’

As you can see, this statement supports our test's mission, but it is framed with real results and theories behind it. It is the defining question we are trying to answer.

Now you are ready to start setting up your landing page tests.

3. Design your landing page test parameters

We know what we want to test and how we will measure success, so we are in the final stages of planning our landing page test.

This is where we nail down what exactly we are going to test. Using the example above, we would be running a feature test with 3 different colour variations.

Based on that, we need to plan for 3 variants of our landing page and adverts.

That's all you need to do at this stage: make sure you know how many variants you will need to create.

Top tip:
We advise running a minimum of 2 variants and a maximum of 6. Running multiple variants generates the most interesting insights through comparison between them, but more than 6 becomes chaotic and makes it hard to pull out the true data story. If you feel you need more than 6, first ask yourself whether you are really testing just one aspect of the product. If you still believe you need more, we suggest running 6 to start, then taking the 2 top-performing variants and running them against the next selection.
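This run-six-keep-two tournament can be sketched in a few lines of Python. The variant names and conversion rates below are invented purely for illustration:

```python
# Hypothetical sketch of the tournament approach: run up to six variants,
# keep the two best performers, then test them against the next selection.

def top_performers(results: dict, keep: int = 2) -> list:
    """Return the `keep` variants with the highest conversion rate."""
    return sorted(results, key=results.get, reverse=True)[:keep]

# Invented round-one data: six colour variants and their conversion rates
round_one = {
    "blue": 0.042, "red": 0.031, "green": 0.028,
    "black": 0.039, "white": 0.025, "yellow": 0.022,
}

finalists = top_performers(round_one)
print(finalists)  # ['blue', 'black']
# Round two: run the finalists against the next selection of variants
```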

4. Design your landing pages

Now is the time to start getting into the planning and design of your landing page and its variants.

If you’ve ever done A/B testing on a product or website before, you will know the golden rule for landing page test design:

Minimum changes for maximum attribution.

What we mean when we say this is that the more you change across the landing pages, the less attributable the data is. 

Imagine the example we’ve been using of the blue product and you launch these 2 landing pages:

They are so different that you could not reliably say the blue headphones performed better because they are blue. There are too many changes between the landing pages to make an accurate decision; too many things influence the user.

Instead, you should look at creating landing pages that look more like this:

You can see here that the only change between these landing pages is the colour of the headphones, everything else stays the same. This means we can easily attribute the delta in performance we’re measuring between the variants to the headphone colour rather than any other design feature on the landing page.

This is just the header; many other sections are needed to create a convincing landing page for your product tests. Here’s an example of how we lay out landing pages for tests to achieve optimal performance.

Now, this example tested only one element, the colour of the headphones, as is best practice. What if you wanted to test pricing or the value proposition? For that, you would run multiple separate sets of tests. This way, you keep the variables down to get accurate results while retaining the flexibility to test multiple aspects of the product.

If you are struggling with where to start designing your landing pages and setting up your research, our Research Consulting team offers testing services to help you strategise the test, conduct it and analyse the results, to help you get the insights you need.

5. Design ads

Similar to the landing pages, you will want to create a series of ads to connect to those pages. These can be as simple as one master ad design with a colour version for each landing page, as in our example, or you could test multiple advert designs for each page.

We tend to suggest keeping to a maximum of 2 master ad designs. It's the same principle as with the landing pages: you don't want too many variables, but ultimately you want to drive as much traffic to these landing pages as possible to get the quantity of results you are looking for.

With that in mind, a couple of ad designs can help.

Designing the perfect Facebook ad creative is a whole post in itself, so we won't go into it here; keep it simple and to the point. Even just replicating the header in an ad format will be fine. The point of landing page tests is that they are quick to set up; we don't want to get stuck creating the best-performing ads.

6. Create your audience

In the ad manager of your choice (we suggest Facebook Ads), you will need to create the audience you want to test.

With landing page tests, you can run audience-based tests: set up multiple audience variations within Facebook Ads and assign each one to its own landing page to measure which performs best.

In most cases, however, you will only want to set up one audience.

Having run many landing page tests, we have found that there are a few key indicators to a good audience.

  • An audience of at least 50,000 is the minimum to get significant results
  • 2 to 3 million is a good audience size for the best results
  • Include as many interests relevant to your market as possible
  • Exclude interests to make sure you aren't getting outliers

7. Create your follow-up content

You've now got the main parts of your landing page test flow done. All that's left are the smaller items that turn a lone landing page into a complete customer journey, one that is ultimately transparent with consumers about taking part in a test.

This is what comes after someone has clicked 'add to cart' on your product, and you can use it to track the funnel further and get even more detailed insights on purchase intent.

Note that, before collecting any email address or other personal data from consumers, you need to inform them on your page that your product is not available (e.g. in a pop-up shown after clicking "add to cart" and before requesting any personal data, such as an email for your newsletter list). It's crucial to make sure they are aware of this before they submit any personal data. We have an additional piece that explains how to tell consumers your product is not available. You can find it here.

8. Connecting it all up

We've got all the pieces to run our landing page tests, but nothing is connected. At the moment, you have a solo landing page, ads and confirmation pages that don't link to each other and won't make it easy to extract data.

Shameless pitch time:

Here's the thing: you can link all of these together with a series of Facebook ads, Unbounce, Mailchimp, Google Analytics and some coding to make it all work together. The problem is that it takes a lot of time, and that's not the point of these tests; they are supposed to be quick.

This is where Horizon comes in: it's an all-in-one landing page test tool that makes this whole process easy. You can do everything from designing a landing page to setting up your ads, confirmation pages and emails within the tool. It makes things simpler.

Pitch ended. 

9. Run the landing page tests

You’ve made it! You’ve got everything set up and ready to go.

Now all you need to do is launch it.

We recommend at least a week of testing before drawing any conclusions. This gives you enough data for the results to be meaningful.
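If you want a rough sanity check that a week of traffic is enough to separate two variants, a simple two-proportion z-test can help. This is an illustrative sketch, not part of the workflow above, and all the numbers in it are hypothetical:

```python
# Illustrative sketch: a two-proportion z-test to check whether the
# difference between two variants' conversion rates is likely to be real.
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical week of results: blue converted 120/2000, red 90/2000
z = two_proportion_z(120, 2000, 90, 2000)
# |z| > 1.96 roughly corresponds to 95% confidence that the rates differ
print(abs(z) > 1.96)  # prints True
```

If the z-score is still below the threshold after a week, that is a hint to keep the test running rather than call a winner early.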

Be aware of seasonality when running your test: don't attempt to sell roses the day before Valentine's Day.

10. Analyse your consumer insights

The tests are over, you’ve got some raw data now that needs to be analysed.

This is where we now take it all the way back to the start - our mission and hypothesis.

You can take the data you have collected over the past week or so and dig into the areas you identified as relevant to answering your test mission and hypothesis.

The problem we face here is that telling the story from the data can be hard. Imagine you have just completed your tests and are faced with a series of conversion rates through the funnel, cost-per-clicks, cost-per-leads, how many people double opted in, and more. Seeing what matters most isn't always so clear.

That’s where benchmarking comes into play. This can be done over time through running a lot of tests and averaging out the data to get an idea of what is ‘good’ and what is ‘bad’, but what if there was an easier way?
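If you do build your own benchmarks by averaging past tests, the idea can be as simple as this sketch (all rates below are invented example data):

```python
# Hypothetical do-it-yourself benchmarking: average a metric across past
# tests to decide whether a new result counts as 'good' or 'bad'.
past_conversion_rates = [0.021, 0.034, 0.028, 0.045, 0.019, 0.038]  # invented
benchmark = sum(past_conversion_rates) / len(past_conversion_rates)

def grade(rate: float, benchmark: float) -> str:
    """Label a new test's conversion rate against the historical average."""
    return "above benchmark" if rate > benchmark else "below benchmark"

print(grade(0.042, benchmark))  # prints "above benchmark"
```

The catch, as noted above, is that a meaningful benchmark needs a lot of past tests behind it.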

We have created our Customer Demand Score system to benchmark your data against all the other tests Horizon has ever run, which is a lot. This speeds up the process and gives you a super-fast overview of what has and hasn't worked.

Written by
Steven Titchener
An experienced growth marketer now helping Horizon and its customers create successful products. Always looking to expand his ideas and explore unique and interesting takes on the world of marketing and product development.
LinkedIn Profile Link

More insights from Horizon