In a typical month, the average sales rep spends about 31 hours responding to emails and managing their inbox.
That’s roughly one-fifth of their working life.
So you want to make sure they’re spending that time as effectively as possible, right?
One way to do that is through A/B testing. Done well, it can maximize the impact of each and every email your reps send.
Read on for all the fundamentals you need to start your A/B testing journey (including eight tried-and-trusted best practices for A/B testing cold emails).
Let’s get into it…
What Is Email Outreach A/B Testing?
A/B testing for email outreach involves building two variants of an email and sending each to a different subset of your audience, so you can see which performs better and optimize your campaigns accordingly.
For instance, say you’ve written two subject lines for the same email:
Variant A: Question about lead generation
Variant B: Lead generation, [First Name]?
You’re not sure which to use — so why not try both?
You split your audience in half, send Variant A to one subset, and Variant B to the other. A few days later, you check the analytics in your cold email platform: it turns out Variant B generated 15% more opens.
Now you can feel pretty confident that the personalized subject line plays better with your audience, so you can replicate this approach in future campaigns.
Simple, right?
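If it helps to see those mechanics laid out, here’s a minimal Python sketch of the workflow: shuffle the list, split it in half, then compare open rates once the results come back. The recipient list and open counts below are made up for illustration.

```python
import random

# Hypothetical recipient list exported from your outreach platform.
recipients = [f"prospect{i}@example.com" for i in range(200)]

# Shuffle before splitting so the halves aren't biased by list order
# (e.g., a list sorted by company size or sign-up date).
random.shuffle(recipients)
half = len(recipients) // 2
variant_a, variant_b = recipients[:half], recipients[half:]

# After the campaign runs, pull open counts from your platform's
# analytics and compare the two rates.
opens_a, opens_b = 41, 47  # hypothetical results
rate_a = opens_a / len(variant_a)
rate_b = opens_b / len(variant_b)
print(f"A: {rate_a:.0%}  B: {rate_b:.0%}  lift: {(rate_b - rate_a) / rate_a:+.0%}")
```

Any decent outreach platform handles the split and the counting for you; the point is simply that the comparison itself is basic arithmetic.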
8 Best Practices for A/B Testing Cold Emails
Running A/B tests is easy — but doing it effectively is a whole other matter. Follow these best practices for A/B testing cold emails to generate the most valuable, impactful insights.
Start With a Hypothesis
Before launching any A/B test, think about the outcome: which variant do you expect to perform best, and why?
Going back to the example we used in the previous section, we might come up with the following hypothesis: “We expect Variant B to generate a higher open rate because the subject line is personalized with the recipient’s first name.”
This sort of strategic thinking is an essential element of A/B testing because it helps you create tests that truly optimize your performance, rather than just testing for the sake of it.
Prioritize Your A/B Tests
There’s really no limit to the number of A/B tests you could run on a single email.
But, clearly, not all possible variants are equally valuable. Adding the recipient’s name to the subject line will almost certainly have a higher impact than swapping a colon for an em dash in the third paragraph of your email body copy.
It’s in your best interests to prioritize tests that are likely to produce the most significant results.
One way to identify those high-value variants is to use the ICE scoring model, which comprises three elements:
Impact: Simply put, how much of a difference is this variant likely to make to the performance of your email?
Confidence: How sure are you that the variant you’re testing will have a positive impact on your email outreach performance?
Ease: How easy is it to implement the necessary changes for this A/B test? The less time it takes, the higher the score.
For every A/B test you’re planning to run, give it a score out of 10 for each of those three elements, then multiply the three numbers to get your ICE score. Prioritize the tests with the highest scores.
For instance, we might give our planned subject line test the following scores:
Impact: 7/10 (it’s a minor change to the wording, but because it’s in the subject line, everyone will see it).
Confidence: 5/10 (we predict Variant B will drive more opens because it’s personalized, but we’re not totally confident).
Ease: 10/10 (it takes a matter of seconds to build an A/B test for two different subject lines).
Multiplying 7 x 5 x 10 gives us an ICE score of 350. As a rough benchmark, any test scoring above 125 (5 x 5 x 5) should be at least moderately impactful, so 350 sounds promising. But we’d have to see how it compares to any other tests we’re planning before deciding which to prioritize.
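If you’re weighing several candidate tests at once, the scoring is easy to automate. Here’s a quick sketch; the scores for the two ideas besides the subject-line test are invented for illustration.

```python
# Hypothetical backlog of A/B test ideas, each scored out of 10
# on Impact, Confidence, and Ease.
test_ideas = [
    {"name": "Personalized subject line", "impact": 7, "confidence": 5, "ease": 10},
    {"name": "Shorter body copy", "impact": 6, "confidence": 6, "ease": 7},
    {"name": "New CTA wording", "impact": 5, "confidence": 4, "ease": 9},
]

# ICE score = Impact x Confidence x Ease; run the highest scorer first.
for idea in test_ideas:
    idea["ice"] = idea["impact"] * idea["confidence"] * idea["ease"]

for idea in sorted(test_ideas, key=lambda i: i["ice"], reverse=True):
    print(f"{idea['name']}: {idea['ice']}")
```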
Test One Variable at a Time
When you begin playing around with A/B tests, it’s easy to get carried away and start testing everything at once.
But there’s a problem: testing multiple variables simultaneously makes it impossible to make sense of the data. Sure, you know that one email variant outperformed the other — but you don’t know if the result was down to the subject line, the call to action (CTA), the send day, or whatever else you tested.
All of which means you can’t draw any meaningful learnings from the test, so you might as well not have run it in the first place.
Instead, test one variable at a time so you can create a clear action plan from your results.
Use a Control Variation
Rather than pitting two totally random variants against one another, always use one “control” variant that follows your standard best practices.
Think of the control as a safe bet: it’s the sort of thing that’s worked in the past.
But if you just keep doing exactly the same thing, you’ll never improve your email outreach results, so you’ll want to push the boundaries with your non-control variants.
Using a control also helps ground your results. Think about it like this: if the control gets the same results as always but the variant performs much better, you know people loved the variant. Whereas if both get sky-high results, you know something else has changed that you didn't account for — like deliverability or timing.
Capture Sufficient Data
Be wary of drawing firm conclusions from small sample sizes, where the actions of a single recipient can massively skew the results in favor of one variant or the other.
Instead, wait until you’ve gathered a statistically meaningful sample. As a general rule, we recommend at least 100 sends for each variant.
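If you want something firmer than a gut check once the data is in, a standard two-proportion z-test (not a QuickMail feature, just textbook statistics) tells you how likely your observed gap is to be pure noise. Here’s a sketch using only Python’s standard library, with hypothetical counts:

```python
from math import sqrt, erfc

def two_proportion_z_test(opens_a, sends_a, opens_b, sends_b):
    """How likely is the observed gap in open rates if both
    variants actually perform the same? (Two-sided p-value.)"""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    return z, erfc(abs(z) / sqrt(2))

# 41 vs. 47 opens over 100 sends each: a 6-point gap, but is it real?
z, p = two_proportion_z_test(41, 100, 47, 100)
print(f"z = {z:.2f}, p = {p:.2f}")  # p ≈ 0.39: could easily be noise
```

Note the result: even a six-point gap over 100 sends per variant gives a p-value of roughly 0.39, far from conclusive. Treat 100 sends as a floor, not a finish line; small gaps need bigger samples before you can call a winner.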
Compare Like for Like
Be sure to test each variant under the same conditions so the results aren’t affected by external factors.
To give a simple example, it wouldn’t be fair to compare the open rates of emails sent from different domains because the sender reputation could impact deliverability.
Test With Different Audiences
List segmentation is a key element of effective A/B testing.
Rather than simply splitting your entire list 50:50 and sending each variant to half your audience, try sending to smaller segments based on factors like:
Job title
Industry
Location
You might find that one variant works best with recipients who work in healthcare, while those in the finance industry prefer the other.
Also, try testing variants within each list segment, because an email that resonates with CEOs might flop with HR leaders.
This sort of testing helps you tailor your approach based on the preferences of individual audience segments.
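One practical detail: assign variants within each segment rather than across the whole list, so every segment sees both emails and the comparison stays fair. Here’s a minimal sketch, with hypothetical prospect records:

```python
from collections import defaultdict
from itertools import cycle

# Hypothetical prospect records with an industry field.
prospects = [
    {"email": "a@clinic.com", "industry": "healthcare"},
    {"email": "b@bank.com", "industry": "finance"},
    {"email": "c@hospital.io", "industry": "healthcare"},
    {"email": "d@fund.com", "industry": "finance"},
]

# Group prospects by segment, then alternate variants *within* each
# segment so both variants get tested against every audience.
segments = defaultdict(list)
for prospect in prospects:
    segments[prospect["industry"]].append(prospect)

for industry, group in segments.items():
    for prospect, variant in zip(group, cycle("AB")):
        print(f"{industry}: {prospect['email']} -> Variant {variant}")
```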
Use the Right Tools
Last but not least, make sure you have the tech stack in place to support your A/B tests. At the absolute minimum, your chosen outreach platform should let you:
Segment your audience
Test variants of email elements like subject lines and preview text
Analyze the results to understand which variant performed best
Easily turn variants on or off to optimize your campaigns
How To Analyze and Act on A/B Test Results
Coming up with a bunch of A/B test ideas and variants is only part of the battle.
In many ways, the hard work begins when the data starts rolling in: this is where you dig into the numbers and make key strategic decisions regarding your future email outreach campaigns.
To understand which variant worked best, look at the following metrics (there’s a quick calculation sketch after this list):
Open rate: The percentage of recipients who opened your email, influenced by factors like the send day, subject line, and preview text. Clearly, the more people who open a message, the more chance you have of converting them!
Reply rate: The percentage of recipients who replied to your email. Nail your body copy and CTAs to drive more replies.
Click-through rate: The percentage of recipients who clicked a link in your email. It’s especially important for outreach campaigns that include links to Calendly pages, meeting signups, and other landing pages.
Deals won/lost: The number of closed deals attributed to a campaign. Each reply conversation can be marked “won” or “lost”, giving you an easy way to track meetings booked directly in your email analytics.
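Here’s that quick sketch: each of the first three metrics is just a count divided by total sends. The counts below are invented for illustration.

```python
# Hypothetical raw counts pulled from your outreach platform.
sends, opens, replies, clicks = 500, 210, 34, 18

print(f"Open rate:          {opens / sends:.1%}")    # 42.0%
print(f"Reply rate:         {replies / sends:.1%}")  # 6.8%
print(f"Click-through rate: {clicks / sends:.1%}")   # 3.6%
```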
If you’re using QuickMail for email outreach, you can gain further insights through unique analytics like:
Positive reply rate: The percentage of replies to your cold email campaign that are positive. Aim for a positive reply rate of at least 50% — any lower and you’ve likely got issues with your prospect list, value proposition, or email template (or all three).
Why people unsubscribe: QuickMail goes beyond the basics of unsubscribe rates by giving you detailed feedback on why people unsubscribe. Maybe the prospect wasn’t the right person to speak to; maybe it’s the wrong time. Or perhaps they were confused by your email — in which case it’s time to clarify the body copy.
Send-day attribution: Our advanced analytics view tracks your outreach activities back to the date you sent the email, making it easy to identify the days when you’re most likely to receive positive responses.
Once you’ve crunched the numbers, use the results to build your own set of best practices for each of your ideal client profiles.
How Frequently Should You Run A/B Tests?
If you send a high volume of emails (like thousands a day), you could theoretically run a different A/B test every single day.
But that’s not necessarily the best approach.
At QuickMail, we recommend setting a goal for each email outreach campaign — like a specific number of meetings booked, deals closed, or revenue generated.
Then run a different A/B test each week until you’re consistently hitting or surpassing your target.
What Email Outreach Elements Can You A/B Test?
Let’s look at all the different email elements you can test during an outreach campaign:
Subject Lines
The subject line is arguably the single most important element of any outreach campaign, with almost half of recipients deciding to open emails based on the subject line alone. We recommend limiting your subject lines to 60 characters to ensure they display in full across the widest range of devices and email clients.
Preview Text
This often-overlooked email element appears next to the subject line in the recipient’s inbox, so it plays a big role in driving email opens. Treat it as an extension of the subject line that compels the reader to click. We recommend writing no more than 90 characters of preview text, although some email clients display considerably more.
Email Length
This one is self-explanatory: how long should your email be? Most effective cold emails run around 50–150 words, but A/B testing will help you pinpoint the ideal length for your audience.
Greeting
The greeting is where you say “Hi!” (or “Hello”, “Hey”, or “Hiya”) to your prospects. Play around with the wording of your greetings, and test whether prospects respond better when you address them by name.
Intro Lines
The intro line leads people into your outreach email and makes them want to keep reading. It’s a key opportunity to personalize your message, such as by congratulating them on a new office opening or praising their latest LinkedIn post.
Pitch
This is the real meat of your outreach email: the point where you explain what you’re selling and why the recipient should care. Instead of just listing a bunch of features, mention specific benefits and how they apply to the prospect.
CTA
A CTA tells the recipient what you want them to do next — whether that’s replying to your email, booking a meeting via a Calendly link, or something else. Stick to one CTA per email: any more and you risk leaving prospects confused.
PS Lines
The PS line is a signoff at the bottom of your cold email that reinforces your pitch and compels the reader to follow your call to action. Experiment with highlighting different benefits in your PS lines to see which resonate best with your audience.
Send Days
As you probably guessed, this is the day you send a cold email. There are hundreds of studies suggesting the best days for sending cold emails, but we analyzed over 65 million cold emails to generate our own recommendations.
Our data shows that sending your campaigns early in the week works best, and we suggest a Sunday-through-Wednesday schedule as your control variable when testing send days.
The graph below shows the percentage of opened emails broken down by send day (get the full breakdown in this guide).
Channels
Remember, there are lots of other outreach channels — social media, SMS, cold calling. Using additional channels might actually enhance your email outreach. For instance, you could test whether your emails see higher open rates if you send a LinkedIn connection request the day before.
Final Thoughts
A/B testing is the single best thing you can do to optimize your email outreach campaigns.
And here’s the good news: there’s really no limit to the number of A/B tests you can run. There’s always another subject line variant, audience segment, or CTA to experiment with — and the more tests you run, the better you understand your audience.
All of which means more conversions and revenue.
At QuickMail, we go beyond standard A/B testing by offering A/Z testing, which lets you test multiple variants simultaneously. So if you’ve got five different subject lines you want to try out, you don’t need to wait: you can test them all (against your control variant, of course) in a single campaign.
Sign up for your free QuickMail trial today!