A/B Testing

Score CRM's A/B testing lets you test multiple variations of your emails to find what resonates best with your audience before committing to a full send.

How A/B Testing Works

  1. Test phase: A configurable percentage of your audience receives different variants
  2. Wait period: The system waits for a specified duration to collect engagement data
  3. Winner selection: The best-performing variant is identified
  4. Full send: The winning variant is sent to the remaining audience
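The four phases above can be sketched as a simple audience split: a test slice is divided across variants, and the rest is held back for the winner. This is an illustrative sketch, not Score CRM's internal implementation; `split_audience` and its parameters are hypothetical names.

```python
import random

def split_audience(recipients, variants, test_fraction):
    """Split recipients into a test group (assigned round-robin across
    variants) and a holdout that later receives the winning variant.
    Hypothetical helper -- Score CRM's internals may differ."""
    shuffled = list(recipients)
    random.shuffle(shuffled)
    cutoff = int(len(shuffled) * test_fraction)
    test_group, holdout = shuffled[:cutoff], shuffled[cutoff:]

    # Test phase: even distribution across variants
    assignments = {v: [] for v in variants}
    for i, recipient in enumerate(test_group):
        assignments[variants[i % len(variants)]].append(recipient)
    return assignments, holdout

# 20% test slice over a 1,000-recipient audience, two variants
assignments, holdout = split_audience(range(1000), ["A", "B"], 0.2)
```

The holdout (80% here) waits out the test duration, then receives whichever variant wins.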

Testable Fields

You can create variations across these fields:

Field          Description
Subject Line   Different subject lines for each variant
Content        Different email body HTML for each variant
From Name      Different sender names for each variant

Combinations

Variants are created as combinations. For example, if you have 2 subjects and 2 content versions, you get 4 variants:

Variant   Subject     Content
A         Subject 1   Content 1
B         Subject 1   Content 2
C         Subject 2   Content 1
D         Subject 2   Content 2

Setting Up A/B Testing

  1. In the campaign builder, enable A/B Testing
  2. Select the test field (subject, content, or from name)
  3. Add your variants (minimum 2)
  4. Configure the test percentage (e.g., 20% of total recipients)
  5. Set the test duration (hours to wait before selecting winner)
  6. Choose the winner criteria: Open Rate or Click Rate
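Taken together, the six settings above amount to a small configuration object. The structure and field names below are hypothetical, shown only to summarize what a valid setup must satisfy:

```python
def validate_ab_config(cfg):
    """Check a hypothetical A/B test configuration against the rules
    described above. Returns a list of error messages (empty if valid)."""
    errors = []
    if cfg["test_field"] not in ("subject", "content", "from_name"):
        errors.append("test_field must be subject, content, or from_name")
    if len(cfg["variants"]) < 2:
        errors.append("at least 2 variants are required")
    if not 0 < cfg["test_percentage"] < 100:
        errors.append("test_percentage must be between 0 and 100")
    if cfg["winner_criteria"] not in ("open_rate", "click_rate"):
        errors.append("winner_criteria must be open_rate or click_rate")
    return errors

ab_config = {
    "test_field": "subject",
    "variants": ["Save 20% Today", "Limited Time Offer"],
    "test_percentage": 20,
    "test_duration_hours": 4,
    "winner_criteria": "open_rate",
}
```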

Distribution Modes

Random

Recipients in the test group are randomly assigned to variants with equal distribution.

Weighted

You assign a specific percentage to each variant:

  • Variant A: 50%
  • Variant B: 30%
  • Variant C: 20%
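Weighted assignment is equivalent to a weighted random draw per recipient; random mode is the special case where every weight is equal. A minimal sketch using Python's standard library (`assign_variant` is a hypothetical name):

```python
import random
from collections import Counter

def assign_variant(weights, rng=random):
    """Pick one variant according to its configured percentage weight."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

weights = {"A": 50, "B": 30, "C": 20}
random.seed(42)
# Over many recipients the observed split approaches 50/30/20
counts = Counter(assign_variant(weights) for _ in range(10_000))
```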

Smart Bandit

An adaptive algorithm that dynamically adjusts the distribution during the test phase. Variants that perform better early receive more sends, maximizing the overall campaign performance. The algorithm balances exploration (trying all variants) with exploitation (favoring winners).
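One common way to implement this explore/exploit trade-off is Thompson sampling, sketched below. This is an illustration of the general technique, not a description of Score CRM's actual Smart Bandit algorithm; the class name, the Beta prior, and the simulated open rates are all assumptions.

```python
import random

class ThompsonBandit:
    """Adaptive variant selection via Thompson sampling (illustrative)."""

    def __init__(self, variants):
        # Beta(1, 1) prior per variant: [opens + 1, non-opens + 1]
        self.stats = {v: [1, 1] for v in variants}

    def pick(self):
        # Sample a plausible open rate for each variant, send the max.
        # Uncertain variants still get explored; strong ones get exploited.
        draws = {v: random.betavariate(a, b)
                 for v, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def record(self, variant, opened):
        self.stats[variant][0 if opened else 1] += 1

# Simulation with hypothetical true open rates: A clearly beats B
random.seed(7)
true_open_rates = {"A": 0.30, "B": 0.10}
bandit = ThompsonBandit(["A", "B"])
sends = {"A": 0, "B": 0}
for _ in range(2000):
    v = bandit.pick()
    sends[v] += 1
    bandit.record(v, random.random() < true_open_rates[v])
```

In the simulation, the better-performing variant A ends up receiving the large majority of test sends, which is exactly the behavior the Smart Bandit mode aims for.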

Winner Selection

After the test duration expires:

  1. Score CRM compares engagement metrics across all variants
  2. The winner is determined by your chosen criteria (open rate or click rate)
  3. If there's no statistically significant difference, Variant A is used as the default winner
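A standard way to check whether two variants differ significantly is a two-proportion z-test. The sketch below shows the idea; whether Score CRM uses this exact test is not documented here, and `is_significant` is a hypothetical helper.

```python
import math

def is_significant(opens_a, sends_a, opens_b, sends_b, z_threshold=1.96):
    """Two-proportion z-test: True if the difference in open rates
    is significant at roughly the 95% confidence level."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    # Pooled rate under the null hypothesis (no real difference)
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    if se == 0:
        return False
    return abs(p_a - p_b) / se > z_threshold
```

A 30% vs. 20% open rate on 1,000 sends each is clearly significant; 20.5% vs. 20% is not, and in that case the default winner (Variant A) would be used.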

Manual Winner Selection

You can also skip automatic selection and manually choose the winner:

  1. Go to the Campaign View page during the test phase
  2. Review the per-variant performance metrics
  3. Click Select Winner on your preferred variant

Variant Analytics

The campaign report provides detailed per-variant analytics:

  • Performance metrics: Opens, clicks, bounces for each variant
  • Heatmap view: Matrix visualization showing performance across all variant combinations
  • Statistical comparison: Side-by-side metrics to identify the winner

Inline Spintax

For simpler variations, you can use spintax directly in any field instead of formal A/B testing:

Subject: {Save 20% Today|Limited Time Offer|Don't Miss This Deal}

Each recipient randomly gets one option. Unlike formal A/B testing, spintax has no test-then-send logic: all variations are sent immediately across the full audience.
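A single-level spintax expansion can be sketched with a regular expression. This is an illustrative implementation, not Score CRM's parser; real spintax engines may also support nested groups, which this sketch does not.

```python
import random
import re

SPINTAX = re.compile(r"\{([^{}]*)\}")

def spin(text, rng=random):
    """Replace each {a|b|c} group with one randomly chosen option."""
    return SPINTAX.sub(lambda m: rng.choice(m.group(1).split("|")), text)

random.seed(1)
subject = spin("{Save 20% Today|Limited Time Offer|Don't Miss This Deal}")
```

Text outside the braces is left untouched, so `"Hi {there|friend}!"` spins to either `"Hi there!"` or `"Hi friend!"`.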

Best Practices

  • Test one variable at a time: Changing both subject and content makes it hard to know what drove the difference
  • Use a meaningful test size: At least 15-20% of your audience for statistically significant results
  • Allow enough time: Give at least 2-4 hours for opens and clicks to accumulate
  • Test regularly: Small, consistent tests improve your email performance over time
  • Keep variants distinct: Subtle differences may not produce measurable results