How to Design (GREAT) Experiments: A Comprehensive Guide
Bombastic Beats launched a new music app that processes speech patterns and noise levels at a party and adjusts the music accordingly. “A personal DJ at every party,” proclaims the Bombastic Beats advertisement. After a few months of sales, Tammy Techno, the CEO, has a great idea to boost sales with a promotion. She’ll throw in an offer that gives buyers of Bombastic Beats a limited-time opportunity to buy a disco ball at the very discounted rate of $10 (as opposed to the going market rate of $60).
Much to her dismay, sales are cut in half after the ‘disco ball’ offer goes into effect. Tammy is shocked. “What gives? Don’t people like good deals?” Was her intuition wrong?
Behavioral economics is built on numerous examples of how intuitions can lead us terribly astray. Take availability bias, for example. Which state has the highest number of tornadoes? “Kansas,” says anyone who remembers the horrors of Dorothy and Toto getting blown away in a twister. Nope. Texas has almost twice as many tornadoes as Kansas.
In much the same way, intuitions can lead marketers and product designers down many blind alleys. A set of studies by Itamar Simonson and colleagues showed that products paired with unneeded features or promotions have lower sales than products without them. That is, everyone loves a good deal, but only when it’s relevant to their preferences. Otherwise, a good deal can backfire, just like Tammy’s promotion.
Admittedly, the role of intuition in marketing and product design is a tricky one. On one hand, it has led to numerous innovations, but on the other hand it has led to countless flops. So how do we harness our intuition effectively? 10 out of 10 behavioral economists agree (social proof) that experimenting is the way to go.
“I know, I know,” you say, “I’m a huge proponent of A/B testing.” But good experimental design extends far beyond having at least two conditions. And it takes practice to get right.
In our experience working with numerous companies, big and small, we’ve found a few key mistakes that people often make when designing experiments.
Some common ways companies get testing wrong
1. Pilots. Take one new concept and do a limited rollout to test how people like it.
Experimental Error: Pilots are not experiments. Companies often roll out a new product or feature to a select group of people, measure adoption and ask users how much they like it. However, these responses are not compared to a measure of how much people like their current way of doing things. People may say they like a concept, but they might actually like the product even more without the new concept. Pilots easily give false positives.
A control condition, where everything is the same except for the new concept being tested, is essential.
2. Sequential Testing. Trying something and seeing if it works better than what it replaced.
Experimental Error: Sequential tests are not true tests. Numerous things can change between the time you test one idea and the time you test the next. For example, one test run on a Wednesday and another run on a Saturday might be influenced by differences in the types of users who use the product on weekdays versus weekends.
Everything about the two conditions of a test must be the same, with the exception of the thing you’re testing.
3. Redesigns. Full redesign tested against the old design.
Experimental Error: You change the picture on the landing page, a few features, and the price of your product, test the redesign against the old version, and it works better. The problem is you don’t know why. Was it the picture, the features, or the price? And if it works worse than expected, you can’t tell which change is to blame, or whether changing only one of those things would have led to even better outcomes. You won’t know.
Each condition of a test must vary ONLY ONE thing, so that you can draw an inference about what caused the outcome of your test.
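Putting the three points together, here is a minimal sketch in Python of an A/B test that avoids all of these errors: a control condition runs alongside the variant, users are randomly assigned so both conditions run at the same time, and only one thing differs between them. All names, the 50/50 split, and the conversion counts are illustrative assumptions, not a prescription.

```python
import random
from statistics import NormalDist

def assign_condition(user_id, experiment="disco-ball-promo"):
    """Deterministically randomize each user into 'control' or 'variant'
    (50/50), so both conditions run concurrently and a given user always
    sees the same condition."""
    rng = random.Random(f"{experiment}:{user_id}")
    return "variant" if rng.random() < 0.5 else "control"

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates
    (control: conv_a of n_a users; variant: conv_b of n_b users)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Made-up outcome counts for illustration: did the ONE change (say, the
# promo) move the conversion rate, relative to the unchanged control?
z, p = two_proportion_z_test(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note that the random assignment is what a pilot lacks: without the control group, you would have no baseline to compare the variant’s conversion rate against, and without concurrent assignment you would be back to sequential testing.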
Many of these errors are obvious once pointed out. However, our experience with many product and marketing teams shows that they are also extremely frequent.
This practical guide to experimental design should be used as a checklist before launching experiments.
We urge you to focus on a behavior you’re trying to influence with your product, and work through it using our guide to developing hypotheses and designing experiments. As you do, remember that the mark of a good experiment is not whether your prediction is correct; the mark of a good experiment is the ability to produce sustainable, trustworthy learnings that the company can use to drive business strategy.
Learn more about what to test and the key behavioral principles: Hacking Human Nature: A practical guide to changing human behavior.