A/B testing (also called split testing) lets you send two or more versions of an email to a portion of your list, measure which version performs better, and automatically deliver the winner to the remaining contacts. It removes guesswork from subject line and content decisions by using actual recipient behavior to determine your best approach.

What you can test

The platform supports two types of A/B tests per campaign:

  • Subject line: Test different subject lines to maximize open rates. Vary length, personalization, urgency, questions, or emoji use.
  • Email content: Test different body layouts, images, CTAs, copy length, or section order to maximize clicks or conversions.

You can test either subject lines or email content — not both in the same campaign. Choose the variable most important to your current campaign goal.

Setting up an A/B test

1. Open or create a campaign

Go to Marketing > Emails > Campaigns and open a drafted campaign (or click + New to create one). Build the base email design in the Email Builder.

2. Enable A/B testing

On the left panel of the Email Builder, find the A/B Testing tab and toggle Enable A/B Testing to on.

3. Choose your test type

Select either Email Subject or Email Content. This determines which element you will create variations for.

4. Set the test duration

Choose how long the test runs before a winner is selected. Options range from 30 minutes to 24 hours. Set this based on your list size and how quickly your audience typically engages — a 4–8 hour window works well for most campaigns.

5. Set the test audience size

Use the slider to define what percentage of your total list receives the test variations. The remaining contacts receive the winning version after the test concludes. For example, with a test size of 40% and two variations, each variation goes to 20% of the list and 60% is reserved for the winner.
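
To make the split concrete, here is a minimal Python sketch of the arithmetic, assuming a 10,000-contact list for illustration (the platform performs this split for you):

```python
def ab_allocation(list_size: int, test_pct: float, n_variations: int):
    """Split a list into equal per-variation test groups plus a winner group.

    Illustrative arithmetic only; the platform performs this split for you.
    """
    test_contacts = int(list_size * test_pct)
    per_variation = test_contacts // n_variations
    winner_group = list_size - per_variation * n_variations
    return per_variation, winner_group

# The example above: 10,000 contacts, 40% test size, two variations.
per_variation, winner_group = ab_allocation(10_000, 0.40, 2)
print(per_variation, winner_group)  # 2000 per variation, 6000 held for the winner
```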

6. Create your variations

Add up to 6 variations. For subject line tests, enter a different subject for each variation — you can use the AI subject line generator to suggest alternatives. For content tests, each variation uses a separate email design.

7. Choose the winning metric

Select the metric used to determine the winning version:
  • Unique Open Rate — best for subject line tests
  • Unique Click Rate — best for content tests

8. Send or schedule

Click Send or Schedule to start the test. The platform sends the variations to the test audience, evaluates performance after the duration, and automatically sends the winner to the remaining contacts.

Variation limits and audience allocation

Setting                  Range
Maximum variations       6
Minimum test duration    30 minutes
Maximum test duration    24 hours
Test audience size       10%–50% of total list (varies by variation count)
When you increase the number of variations, the platform requires a proportionally larger test audience to maintain statistical validity. The slider adjusts automatically.
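
The exact floors are platform-defined, but the underlying idea is that each variation needs a workable share of the list. Here is a hypothetical sketch of that scaling, where the 5% per-variation floor is an assumption for illustration rather than the platform's documented rule:

```python
def min_test_pct(n_variations: int, per_variation_floor: float = 0.05) -> float:
    # Hypothetical rule for illustration: each variation must receive at least
    # `per_variation_floor` of the total list, so the minimum test audience
    # scales with the number of variations. The platform's actual floors differ.
    return n_variations * per_variation_floor

for n in (2, 4, 6):
    print(n, f"{min_test_pct(n):.0%}")  # 2 -> 10%, 4 -> 20%, 6 -> 30%
```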

Winning criteria

Unique Open Rate counts one open per contact, regardless of how many times they open the email. Use this metric when your goal is improving subject line performance. Unique Click Rate counts one click per contact. Use this metric when your goal is driving link clicks, CTA engagement, or conversions. The platform automatically identifies the variation with the highest rate after the test duration and delivers it to the remaining contacts without any manual action needed.
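
In practical terms, "unique" means events are deduplicated per contact before the rate is computed. A minimal sketch of that calculation, with illustrative field names rather than the platform's actual export schema:

```python
def unique_rate(events: list[dict], delivered: int) -> float:
    """One event per contact, divided by delivered emails.

    The "contact_id" field name is illustrative, not the platform's schema.
    """
    distinct = {e["contact_id"] for e in events}
    return len(distinct) / delivered if delivered else 0.0

opens = [{"contact_id": "a"}, {"contact_id": "a"}, {"contact_id": "b"}]
print(unique_rate(opens, 10))  # 0.2; contact "a" opened twice but counts once
```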

Viewing results

After the test concludes, navigate to Marketing > Emails > Campaigns, find the campaign, click the three-dot menu, and select Statistics. Results show:
  • Performance of each variation (open rate or click rate)
  • The variation declared the winner
  • Delivery metrics for the final winner send
Save the winning subject line or content design, and duplicate the winning campaign as a starting point for future sends to build on what works.

Subject line testing best practices

Test one variable at a time to get clear, actionable results. Common subject line dimensions to test:
  • Length: Compare a short subject (under 40 characters) against a longer, more descriptive one. Short subjects display fully on mobile inboxes, while longer subjects can communicate more context.
  • Personalization: Test a subject with the recipient’s first name (using the {{contact.first_name}} merge tag, as shown in the sketch after this list) against a generic version. Personalization typically lifts open rates, especially in promotional contexts.
  • Urgency: Compare a deadline-driven subject (“Offer ends tonight”) against an informational one (“Your April summary is here”). Urgency works well for time-sensitive campaigns.
  • Question vs. statement: Test a subject posed as a question against a direct statement. Questions create curiosity gaps that can lift open rates for educational or nurture content.
  • Emoji: Test a subject with a relevant emoji against the same subject without one. Emoji can increase visual standout in a crowded inbox, but performance varies by audience.
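
For the personalization test above, the {{contact.first_name}} merge tag is resolved per recipient at send time. The sketch below shows roughly how such a substitution works, including a fallback for contacts with no first name on file; the fallback behavior and syntax here are assumptions, so check your platform's merge tag documentation:

```python
import re

def render_subject(template: str, contact: dict, fallback: str = "there") -> str:
    # Replace {{contact.<field>}} tags with the contact's value, or a fallback
    # when the field is empty. Real merge tag engines have their own syntax.
    def substitute(match: re.Match) -> str:
        return str(contact.get(match.group(1)) or fallback)
    return re.sub(r"\{\{contact\.(\w+)\}\}", substitute, template)

subject = "{{contact.first_name}}, your April summary is here"
print(render_subject(subject, {"first_name": "Maya"}))  # Maya, your April summary is here
print(render_subject(subject, {}))                      # there, your April summary is here
```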

Content testing best practices

For content A/B tests, isolate a single change between variations to understand what drives the performance difference:
  • CTA text: “Get started” vs. “Claim your offer” vs. “See the demo”
  • Button color: High-contrast vs. brand-aligned
  • Image vs. no image: Text-only email vs. image-led layout
  • Long copy vs. short copy: Detailed explanation vs. brief teaser with a link
  • Content order: Lead with the offer vs. lead with the benefit story

Frequently asked questions

What happens if the results are inconclusive?
If both variations perform very close to each other, the platform still selects the variation with the highest metric value at the end of the test duration and sends it to the remaining contacts. Plan a follow-up test with a more distinct difference between variations to get a clearer result.

Does A/B testing work with scheduled or batch sends?
Yes. A/B tests work with all delivery methods, including immediate sends, scheduled sends, and batch sends. Set the test duration appropriately: for batch sends, ensure the test duration is shorter than the batch window so a winner can be determined before the remaining batches go out.
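
That batch constraint is easy to sanity-check before sending; here is a trivial illustrative helper, not a platform API:

```python
def fits_batch_window(test_duration_h: float, batch_window_h: float) -> bool:
    # The winner must be decided before the remaining batches go out.
    return test_duration_h < batch_window_h

print(fits_batch_window(4, 8))   # True: a 4-hour test fits an 8-hour batch window
print(fits_batch_window(24, 8))  # False: shorten the test or widen the window
```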

How many contacts do I need for reliable results?
As a general rule, each variation should reach at least 200–500 contacts for results to be statistically meaningful. For very small lists, A/B tests are less reliable, and the declared winner may not reflect a true preference difference.
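
To judge whether a winner's lead is more than noise, you can run a two-proportion z-test on the final counts after the fact. A minimal standard-library sketch, with made-up counts for illustration:

```python
from math import erf, sqrt

def two_proportion_p(x_a: int, n_a: int, x_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two rates (normal approx.)."""
    p_a, p_b = x_a / n_a, x_b / n_b
    pooled = (x_a + x_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Made-up counts: 24% vs. 20% open rate with 500 contacts per variation.
print(round(two_proportion_p(120, 500, 100, 500), 2))  # 0.13, above the usual 0.05 bar
```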

Can I pause an A/B test after it starts?
Yes. For scheduled and batch campaigns, you can pause the campaign from the campaign list by clicking the three-dot menu and selecting Pause. This stops all variants and the final winner send until you resume.

Can I A/B test emails in a workflow?
A/B testing through the campaign A/B feature applies to broadcast campaigns only. For workflow emails, use the Workflow Split Action to route contacts down different paths and compare performance between the branches over time.

Can AI help me write subject line variations?
Yes. The subject line field in the A/B test setup includes an AI generation option. Describe the email content or goal and the AI will suggest alternative subject lines to test.