Ever notice how data sometimes does funny things?
(This is a true story)
The other day, I was looking over the results of some email marketing campaigns I was optimizing and firing off for a client.
I was ELATED that one blast netted me a 100% open rate.
With a modest 3.7% CTR.
Not bad at all.
To clarify, by optimizations here I mean the usual best-practice stuff:
- Shortened the subject line
- Added an emoji
- Updated the main hook in the email body
And so forth. Pretty basic stuff.
In other words, I was expecting to see the results improve, but not to THAT level.
But! Lest I get too far ahead of myself, there was a caveat I noticed.
The sample size was pretty small: 50 recipients.
I’m too in love with data to pretend 100% opens on 50 emails was statistically significant.
After all, who knows how the 51st person in a hypothetical list would act?
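In fact, you can put a rough number on that uncertainty. Here's a quick back-of-the-envelope sketch in Python; the numbers are the ones from that first blast, and the `wilson_ci` helper (my name for it, nothing special) is just the standard Wilson score interval for a proportion:

```python
# Rough 95% confidence intervals for the first blast's numbers
# (100% open rate, 3.7% CTR, 50 recipients).
import math

def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a proportion observed on n trials."""
    denom = 1 + z**2 / n
    center = p_hat + z**2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (center - margin) / denom, (center + margin) / denom

for label, p_hat, n in [("open rate", 1.00, 50), ("CTR", 0.037, 50)]:
    lo, hi = wilson_ci(p_hat, n)
    print(f"{label}: observed {p_hat:.1%}, 95% CI roughly {lo:.1%} to {hi:.1%}")
```

The open-rate interval comes out fairly tight (around 93% to 100%), but the CTR interval is huge: that 3.7% could plausibly be anywhere from about 1% to 13%.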
So, I sent another blast to the next slice of my larger list.
The results came back: 94.6% open rate with a 2.4% CTR.
Not bad. The open rate was similar (just shy of perfect), and this slice was a bit bigger: 84 emails.
So, if the subject line affects open rates, this subject line is a home run through and through, right?
Right?
However, something stood out in these results, and you may have spotted it already.
My CTR for the second blast dropped noticeably.
From 3.7% to 2.4%.
Naturally, to see if this was an anomaly, I fired off the blast to a few more slices of the list.
The results were not what you would expect based on the first two blasts.
On even larger slices, the open rate fell to 57.9% while the CTR climbed to 4%.
In other words, the open rate dropped by nearly 40%, while the CTR rose by two-thirds.
What gives?
Now, seeing things like this might make you scratch your head and wonder whether the data is acting a little random.
But the important thing to remember is that data is noisy when you're looking at a small sample and trying to optimize based on what you see in your little slice.
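To see just how noisy, here's a small illustrative simulation; the 50% "true" open rate and everything else in it are hypothetical, purely to show how much individual slices can swing:

```python
# Hypothetical: if a list's "true" open rate were 50%, what open rates
# would individual 84-recipient slices report?
import random

TRUE_OPEN_RATE = 0.50
SLICE_SIZE = 84

slice_rates = [
    sum(random.random() < TRUE_OPEN_RATE for _ in range(SLICE_SIZE)) / SLICE_SIZE
    for _ in range(10)
]
print(" ".join(f"{r:.0%}" for r in slice_rates))
```

Run it a few times: slices drawn from the exact same list routinely land ten points above or below the true rate, with nothing changed but the luck of the draw.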
For instance, if I had only seen my early slice's 94.6% open rate with 2.4% CTR, I would have assumed my subject line was perfect and my offer inside was weak.
However, when I fired off the same email to the rest of the list (850 recipients), my results were:
48.5% open rate with a 5.2% CTR.
In other words, my open rate was average, but my CTR was above average.
Meaning, if I had based my decision on my small data slice, I would have learned the WRONG lesson and changed my CTA, likely harming my CTR.
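You can sanity-check that with the same Wilson-interval idea from earlier (helper repeated here so the snippet runs on its own):

```python
# Same Wilson-interval check, applied to the two CTR readings:
# 2.4% on the 84-email slice vs. 5.2% on the full 850-email list.
import math

def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    denom = 1 + z**2 / n
    center = p_hat + z**2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (center - margin) / denom, (center + margin) / denom

for label, p_hat, n in [("84-email slice", 0.024, 84),
                        ("full 850-email list", 0.052, 850)]:
    lo, hi = wilson_ci(p_hat, n)
    print(f"{label}: CTR {p_hat:.1%}, 95% CI roughly {lo:.1%} to {hi:.1%}")
```

The slice's interval (roughly 0.7% to 8.3%) completely swallows the full list's (roughly 3.9% to 6.9%). That 2.4% reading never ruled out the 5.2% the full list actually delivered, so it was no grounds for rewriting the CTA.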
The short of it is that small samples of data never tell the whole story.
Nor do several small samples necessarily add up to the big picture.
This example was about email, but the same lesson applies elsewhere.
For instance, if you're trying to make sense of conversion reporting in Google Ads, your heatmaps, or your landing pages, looking at too little data can teach you the wrong lessons.
In other words, by drawing the wrong conclusions, you'll take the wrong course of action and miss the real lessons and opportunities.
So how do you learn the right lessons from your data? We can help! Acorn is a boutique, data-driven digital consulting agency. We've got 20 years of combined experience sifting through data to identify the right takeaways and get more out of your campaigns.
Reach out to us for a free 30-minute call. We’d love to hear how we can help.