A/B Testing Your Notifications
Use A/B testing to find which notification copy, design, and timing converts best on your site.
A/B testing lets you compare two versions of a notification and automatically determine which one performs better. Rather than guessing whether "Sarah just bought this" outperforms "Someone purchased this item," you let real visitor behavior decide.
When to run an A/B test
Wait until your baseline notification has accumulated meaningful data before splitting traffic. As a rule: run A/B tests only after you have at least 1,000 notification views on the original. Testing with less data produces unreliable results and can lead you to pick a loser.
If you have a high-traffic site, you can reach this threshold in a day or two. On lower-traffic sites, you may need to let the original run for a week or two first.
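To get a feel for why small samples mislead, the standard two-proportion sample-size formula estimates how many views each variant needs before a given lift is reliably detectable. This is a rough sketch, not Activly's internal math; the 2% baseline click-through rate and 50% relative lift are illustrative assumptions:

```python
from math import ceil, sqrt

def sample_size_per_variant(p_baseline, lift, ):
    """Approximate views needed per variant to detect a relative lift
    in click-through rate (two-sided test, 95% confidence, 80% power)."""
    z_alpha, z_beta = 1.96, 0.84  # z-scores for alpha=0.05 (two-sided) and power=0.80
    p_var = p_baseline * (1 + lift)          # expected rate on the variant
    p_bar = (p_baseline + p_var) / 2         # average of the two rates
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                 + p_var * (1 - p_var))) ** 2
    return ceil(numerator / (p_baseline - p_var) ** 2)

# Detecting a 50% relative lift on a 2% baseline CTR takes a few
# thousand views per variant; smaller lifts take far more.
n = sample_size_per_variant(0.02, 0.50)
```

The takeaway matches the rule above: thresholds like 1,000 views are a floor, and the smaller the improvement you hope to detect, the more data you need.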
What to test first
Not everything is worth testing. Start with the variables that have the highest potential impact:
- Message copy — the single biggest lever. Compare personalized copy ("Sarah from London just bought X") against generic copy ("Someone just purchased X"). Also test urgency framing versus neutral framing.
- Notification position — bottom-left versus bottom-right. Position affects visibility without changing the message, which makes it a clean, isolated test.
- Display timing — a 3-second initial delay versus an 8-second delay. Showing sooner earns more impressions; showing later reaches visitors once they have settled into the page and are more engaged.
- Notification duration — 5 seconds versus 8 seconds on screen. Longer gives visitors more time to read; shorter can feel more natural.
Test one variable at a time. If you change the copy and the position simultaneously, you cannot know which change caused the difference in results.
How to set up an A/B test
- Open the notification you want to test in your Activly dashboard and click Create A/B Test. This locks the original as Variant A.
- Activly duplicates the notification. Open Variant B and change exactly one variable — the copy, the position, or the timing.
- Set the traffic split. A 50/50 split is recommended for most tests. Uneven splits (90/10) are useful when you want to protect performance while exposing a small segment to the experiment.
- Set a test duration or a sample size goal. A duration of 14 days is a reasonable default. If you prefer to set a sample size, aim for at least 500 views per variant before drawing conclusions.
- Click Launch. Both variants go live immediately. Activly randomly assigns each visitor session to one variant and holds them there for the session.
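Activly handles the assignment for you, but the mechanics behind "assign each session to one variant and hold it there" are worth understanding: a common technique is to hash the session ID into a bucket, so the same visitor always lands on the same variant without any server-side state. A hypothetical sketch (the function name and `split_a` parameter are illustrative, not Activly's API):

```python
import hashlib

def assign_variant(session_id: str, split_a: float = 0.5) -> str:
    """Deterministically map a session to a variant. The same session_id
    always yields the same variant, so assignment is sticky by construction."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    # Map the first 8 hex digits of the hash to a uniform number in [0, 1)
    bucket = int(digest[:8], 16) / 0x100000000
    return "A" if bucket < split_a else "B"

# 50/50 split by default; an uneven 90/10 split would be split_a=0.9
variant = assign_variant("sess-42")
```

Because the split is a single parameter, a protective 90/10 test is just a different threshold on the same hash, and no visitor ever flips variants mid-session.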
Reading the results
The A/B test results panel shows side-by-side metrics for each variant:
- Views
- Click-through rate
- Conversion rate (if you have an e-commerce integration)
- Statistical significance
Activly declares a winner when one variant reaches 95% statistical confidence, meaning the observed difference is very unlikely to be explained by random variation alone. Below that threshold, the test is still running — do not end it early just because one variant looks ahead.
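Under the hood, this kind of check is typically a two-proportion z-test on the click-through rates. A minimal sketch, assuming hypothetical counts (this is the standard textbook test, not necessarily Activly's exact implementation):

```python
from math import erf, sqrt

def significance(clicks_a, views_a, clicks_b, views_b):
    """Two-sided two-proportion z-test on click-through rates.
    Returns the confidence level that the variants truly differ."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)  # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = abs(p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-sided p-value
    return 1 - p_value

# 2.0% vs 3.1% CTR over 2,000 views each: above the 0.95 threshold
conf = significance(40, 2000, 62, 2000)
```

Note how sample size drives the result: the same 2.0% vs 3.1% gap over only 200 views per variant would fall well short of 95% confidence, which is why ending a test early is risky.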
When the test completes, click Apply Winner to set the winning variant as your active notification and archive the loser.
Pro tip: test one thing at a time
Multi-variable tests (also called multivariate tests) require much larger sample sizes to be reliable. For most Activly users, running sequential single-variable A/B tests — one after another — is the fastest path to a well-optimized notification. Once you find the best copy, test position. Once you find the best position, test timing.