Why I'll NEVER accept a revenue sharing deal for conversion rate optimisation work

Last month, one of my split tests resulted in about a £500,000 saving. If I were getting a modest 5% revenue share, that would mean £25,000 for one month’s work.

Sounds pretty good, right? And there are even bigger results coming soon.

But I'll never do revenue sharing. Here's why:


Reason 1: The Inherent Imprecision of Statistics

The exact size of a win you get from a test is statistically unreliable. Even when you're 99.9% certain that it's a win, you still can't be sure exactly how big a win. 

The results of a split test, showing the statistical distribution of probable results. The bars at the bottom are the results you'd see in a split testing tool like Visual Website Optimizer.

In the diagram above, we've got just enough data to be 95% sure that the challenger beat the control. In the test, the control converted at 10% and the challenger at 15% (A).

That makes it a 50% win, right?

Nope. Lots of people miss those crucial little error bars. In fact, all we can really say is that we're 95% confident the challenger is beating the control by somewhere between 4.9% (B) at the low end of the interval and 120% (C) at the high end.

If each percentage point were worth £100 and we got 10% of the win, we'd be looking at a revenue share of somewhere between £49 and £1,200.

I'd like my pay cheque to be determined with a little more precision than that.
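To make the fuzziness concrete, here's a minimal sketch of that kind of calculation in Python. The visitor counts (400 per variation), the interval method (a normal approximation on the log of the rate ratio) and the £100-per-point, 10%-share payout are all assumptions for illustration, so the numbers land close to, but not exactly on, the figures in the diagram.

```python
from math import sqrt, log, exp

# Hypothetical raw numbers: roughly 400 visitors per variation.
n_control, conversions_control = 400, 40        # 10% conversion rate
n_challenger, conversions_challenger = 400, 60  # 15% conversion rate

p1 = conversions_control / n_control
p2 = conversions_challenger / n_challenger

# 95% confidence interval for the relative lift (p2/p1 - 1), using a
# normal approximation on the log of the rate ratio (delta method).
se = sqrt((1 - p1) / (n_control * p1) + (1 - p2) / (n_challenger * p2))
log_ratio = log(p2 / p1)
lift_low = exp(log_ratio - 1.96 * se) - 1
lift_high = exp(log_ratio + 1.96 * se) - 1

print(f"Observed lift: {p2 / p1 - 1:.0%}")                         # 50%
print(f"95% interval for the lift: {lift_low:.1%} to {lift_high:.1%}")

# The article's hypothetical payout: each percentage point of lift worth
# £100, and a 10% share of the win.
value_per_point, share = 100, 0.10
print(f"Revenue share range: £{lift_low * 100 * value_per_point * share:,.0f}"
      f" to £{lift_high * 100 * value_per_point * share:,.0f}")
```

The point isn't the exact figures; it's that a test which is clearly a win can still leave an order-of-magnitude spread on the size of that win, and therefore on any fee tied to it.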

In theory, we could run the test for longer. The more data we collect, the smaller the error bars get. Eventually, we'll get a more precise estimate.
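And here's a sketch of how slowly those error bars shrink, reusing the same hypothetical 10% vs 15% result and the same approximation as above. The interval width falls roughly with the square root of the traffic, so halving the uncertainty costs about four times the visitors.

```python
from math import sqrt, log, exp

def lift_interval(n_per_arm, p1=0.10, p2=0.15):
    """95% interval for the relative lift, log-ratio normal approximation."""
    se = sqrt((1 - p1) / (n_per_arm * p1) + (1 - p2) / (n_per_arm * p2))
    log_ratio = log(p2 / p1)
    return exp(log_ratio - 1.96 * se) - 1, exp(log_ratio + 1.96 * se) - 1

# The same 10% vs 15% result at increasingly (and expensively) large samples.
for n in (400, 1600, 6400, 25600):
    low, high = lift_interval(n)
    print(f"{n:>6} visitors per arm: lift somewhere between {low:.0%} and {high:.0%}")
```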

But let's get real for a moment. This is business, not a controlled academic study. It would be insane to keep running a test for a second longer than is necessary for statistical significance. To do so would be throwing away money because:

  1. you're still showing a page that has been demonstrated to perform worse; and
  2. you're blocked from running the next test and getting additional learning and wins. 

Call me crazy, but I'm not going to ask my clients to sacrifice a bunch of revenue just so that I can tell them my fee with a bit more precision.

 

Reason 2: The Attribution Problem

To get your revenue share from a win, something critical has to happen. The client has to agree that what you've done, and only what you've done, is responsible for the increase in conversion.

That's really hard to make happen, especially when agreeing means they have to give you a big chunk of their profits.

Robert Ringer touches on why this happens in his fascinating book "To Be or Not to Be Intimidated". He posits three types of people in business:

  1. The people who tell you they're out to take all your chips and then do indeed take all your chips.
  2. The people who pretend they're not out to take all your chips but secretly they actually are. Then they do.
  3. The people who really mean to give you your share of the chips, but by the end of the process circumstances mean that they take all your chips anyway, even though they feel genuinely terrible about it.

Behavioural economics gives us some clues as to the motivations behind clients trying to take all our chips. 

Loss aversion

It's much more painful to lose $1,000 than it is pleasurable to gain $1,000. This doesn't make logical sense, but it's how people work. When you pay out a revenue share, you suffer what behavioural economist Dan Ariely calls "the pain of paying". Even though the profits are brand new, it quickly feels like you already own them. And suddenly you have to give up some of what you own to some upstart CRO... Ouch. People avoid this pain whenever possible.

Sunk cost fallacy

On the other hand, paying a fixed retainer feels like an investment. It's a sunk cost that can be mentally partitioned off from the profits. You've already agreed to suffer a controlled, expected pain of paying in order to get the results you want. Then as long as you're getting results, the pain of paying is still worth it, and the size of the pain doesn't suddenly increase beyond what you agreed.

 

So your clients are driven by two powerful psychological forces against paying out your "fair" revenue share. Given that, they will always be able to come up with reasons why it wasn't really your test that caused the uplift, why the test wasn't really your idea, and why the results can't really be relied on.

That's the attribution problem in a nutshell: correctly identifying what led to any win is difficult, imprecise and contentious. 

 

But can't I protect myself somehow and legally guarantee that I get my revenue share?

This may be possible. But there are too many potential loopholes for me to ever feel confident about it.

And that's why I don't do it. 

What do I recommend instead? A simple retainer with a guarantee. Like David Ogilvy did when he was getting started: "We'll smash your control – or we'll give you your money back AND pay your costs."

And for taking on the risk like that, it's only reasonable to charge a premium price.

So, what do you think? Do you do revenue share? Have you been stung? If it worked, how did you get it to work? Do share your experiences in the comments...