# A/B Testing
Score CRM's A/B testing lets you compare multiple variations of an email and find what resonates with your audience before committing to a full send.
## How A/B Testing Works
- Test phase: A configurable percentage of your audience receives different variants
- Wait period: The system waits for a specified duration to collect engagement data
- Winner selection: The best-performing variant is identified
- Full send: The winning variant is sent to the remaining audience
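To make the sequence concrete, here is a minimal Python sketch of the test-then-send flow. The helpers (`send_variant`, `engagement_rate`) and the orchestration function are hypothetical stand-ins, not Score CRM's API; they only illustrate the order of the four phases.

```python
import random
import time

def send_variant(recipient: str, variant: str) -> None:
    """Stand-in for the actual email send (hypothetical helper)."""
    print(f"sending variant {variant!r} to {recipient}")

def engagement_rate(variant: str) -> float:
    """Stand-in for reading opens/clicks from tracking data (hypothetical)."""
    return random.random()

def run_ab_campaign(recipients: list[str], variants: list[str],
                    test_pct: float = 0.20, wait_hours: float = 4.0) -> None:
    random.shuffle(recipients)
    cutoff = int(len(recipients) * test_pct)
    test_group, remainder = recipients[:cutoff], recipients[cutoff:]

    # 1. Test phase: spread the test group evenly across variants.
    for i, recipient in enumerate(test_group):
        send_variant(recipient, variants[i % len(variants)])

    # 2. Wait period (a real system schedules this rather than blocking).
    time.sleep(wait_hours * 3600)

    # 3. Winner selection: best engagement under the chosen criteria.
    winner = max(variants, key=engagement_rate)

    # 4. Full send: everyone else receives the winning variant.
    for recipient in remainder:
        send_variant(recipient, winner)
```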
## Testable Fields
You can create variations across these fields:
| Field | Description |
|---|---|
| Subject Line | Different subject lines for each variant |
| Content | Different email body HTML for each variant |
| From Name | Different sender names for each variant |
## Combinations
Variants are created as combinations. For example, if you have 2 subjects and 2 content versions, you get 4 variants:
| Variant | Subject | Content |
|---|---|---|
| A | Subject 1 | Content 1 |
| B | Subject 1 | Content 2 |
| C | Subject 2 | Content 1 |
| D | Subject 2 | Content 2 |
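The variant set is a plain cartesian product of the field options, which you can reproduce in a few lines of Python:

```python
from itertools import product

subjects = ["Subject 1", "Subject 2"]
contents = ["Content 1", "Content 2"]

# Every subject is paired with every content version: 2 x 2 = 4 variants.
variants = [{"subject": s, "content": c} for s, c in product(subjects, contents)]

for label, variant in zip("ABCD", variants):
    print(label, variant)  # A..D match the table above
```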
## Setting Up A/B Testing
- In the campaign builder, enable A/B Testing
- Select the test field (subject, content, or from name)
- Add your variants (minimum 2)
- Configure the test percentage (e.g., 20% of total recipients)
- Set the test duration (hours to wait before selecting the winner)
- Choose the winner criteria: Open Rate or Click Rate
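As a rough mental model, the resulting setup might look like the structure below. The key names are illustrative only, not Score CRM's actual settings schema:

```python
# Hypothetical A/B test configuration; key names are illustrative,
# not Score CRM's actual schema.
ab_test = {
    "enabled": True,
    "test_field": "subject",          # or "content" / "from_name"
    "variants": [                     # minimum of 2
        {"subject": "Save 20% Today"},
        {"subject": "Limited Time Offer"},
    ],
    "test_percentage": 20,            # share of recipients in the test phase
    "test_duration_hours": 4,         # wait before picking a winner
    "winner_criteria": "open_rate",   # or "click_rate"
}
```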
## Distribution Modes
### Random
Recipients in the test group are assigned to variants at random, in equal proportions.
### Weighted
You assign a specific percentage to each variant:
- Variant A: 50%
- Variant B: 30%
- Variant C: 20%
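Both modes reduce to weighted random assignment, with Random being the special case of equal weights. A sketch of how one recipient in the test group might be assigned, treating the percentages as weights:

```python
import random

def assign_variant(weights: dict[str, float]) -> str:
    """Pick a variant for one recipient according to its configured share."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names])[0]

# Weighted mode; Random mode is the same call with equal weights.
print(assign_variant({"A": 50, "B": 30, "C": 20}))
```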
### Smart Bandit
An adaptive algorithm that dynamically adjusts the distribution during the test phase. Variants that perform better early receive more sends, maximizing the overall campaign performance. The algorithm balances exploration (trying all variants) with exploitation (favoring winners).
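The documentation doesn't specify the exact algorithm, but an epsilon-greedy bandit is one standard way to implement this exploration/exploitation trade-off. The sketch below is illustrative, not the actual implementation:

```python
import random

def pick_next_variant(stats: dict[str, tuple[int, int]],
                      epsilon: float = 0.1) -> str:
    """Epsilon-greedy choice: explore a random variant with probability
    epsilon, otherwise exploit the best observed open rate.

    stats maps each variant to (opens, sends) observed so far.
    """
    if random.random() < epsilon:
        return random.choice(list(stats))  # exploration
    def rate(v: str) -> float:
        opens, sends = stats[v]
        return opens / sends if sends else 0.0
    return max(stats, key=rate)            # exploitation

stats = {"A": (12, 100), "B": (25, 100), "C": (8, 100)}
print(pick_next_variant(stats))  # usually "B" as evidence accumulates
```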
## Winner Selection
After the test duration expires:
- Score CRM compares engagement metrics across all variants
- The winner is determined by your chosen criteria (open rate or click rate)
- If there's no statistically significant difference, Variant A is used as the default winner
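One conventional way to test whether two variants differ significantly is a two-proportion z-test. The source doesn't state which test Score CRM applies, so treat this as an illustration of the idea rather than the product's logic:

```python
from math import sqrt
from statistics import NormalDist

def significantly_better(opens_a: int, sends_a: int,
                         opens_b: int, sends_b: int,
                         alpha: float = 0.05) -> bool:
    """True if B's rate differs from A's at the given significance level."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    if se == 0:
        return False
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value < alpha

# A 22% vs 30% open rate on 300 test sends each clears alpha=0.05:
print(significantly_better(66, 300, 90, 300))
```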
## Manual Winner Selection
You can also skip automatic selection and manually choose the winner:
- Go to the Campaign View page during the test phase
- Review the per-variant performance metrics
- Click Select Winner on your preferred variant
## Variant Analytics
The campaign report provides detailed per-variant analytics:
- Performance metrics: Opens, clicks, bounces for each variant
- Heatmap view: Matrix visualization showing performance across all variant combinations
- Statistical comparison: Side-by-side metrics to identify the winner
## Inline Spintax
For simpler variations, you can use spintax directly in any field instead of formal A/B testing:
`Subject: {Save 20% Today|Limited Time Offer|Don't Miss This Deal}`
Each recipient is randomly served one of the options. Unlike formal A/B testing, there is no test-then-send logic; every variation goes out immediately across the full audience.
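Conceptually, each field is expanded per recipient by picking one option at random. A minimal sketch of that expansion (not Score CRM's actual parser):

```python
import random
import re

def spin(text: str) -> str:
    """Resolve {option|option|...} spintax by picking one option at random."""
    return re.sub(r"\{([^{}]*)\}",
                  lambda m: random.choice(m.group(1).split("|")),
                  text)

print(spin("{Save 20% Today|Limited Time Offer|Don't Miss This Deal}"))
```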
## Best Practices
- Test one variable at a time: Changing both subject and content makes it hard to know what drove the difference
- Use a meaningful test size: at least 15-20% of your audience, so results can reach statistical significance (see the sample-size sketch after this list)
- Allow enough time: Give at least 2-4 hours for opens and clicks to accumulate
- Test regularly: Small, consistent tests improve your email performance over time
- Keep variants distinct: Subtle differences may not produce measurable results
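If you want to sanity-check the 15-20% guideline against your list size, the standard two-proportion sample-size formula gives a rough per-variant floor. This is general statistics, not a Score CRM feature:

```python
from math import ceil, sqrt
from statistics import NormalDist

def min_sample_per_variant(p1: float, p2: float,
                           alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-variant sample size needed to detect a lift from
    open rate p1 to p2 with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)

# Detecting a lift from a 20% to a 25% open rate needs roughly 1,100
# test recipients per variant:
print(min_sample_per_variant(0.20, 0.25))
```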