Overview
Experiments in Stringboot allow you to A/B test different string variations to find the most effective messaging for your users. Test headlines, CTAs, error messages, and more to make data-driven decisions about your app’s copy.

What You Can Test
Call-to-Action Buttons
Test different button text to improve conversion rates
Headlines & Titles
Find the most engaging headlines for your screens
Error Messages
Test clearer, more helpful error messaging
Onboarding Copy
Optimize onboarding flows for better retention
How It Works
- Create variants of your strings in the dashboard
- Set traffic distribution (e.g., 50% control, 50% variant-a)
- SDK automatically assigns users to variants based on device ID
- Track results in your analytics platform (Firebase, Mixpanel, etc.)
- Choose the winner and roll out to 100% of users
Prerequisites
Before creating an experiment, ensure you have:
✓ An application created in Stringboot
✓ String keys added to your application
✓ At least one active language configured
✓ SDK integrated in your app (recommended for tracking)
✓ Analytics handler configured (recommended for results)
Tip: Set up analytics integration in your SDK before running experiments. See the Android A/B Testing, iOS A/B Testing, or Web SDK guides for setup instructions.
Creating an Experiment
Navigate to Experiments in your dashboard sidebar and click Create Experiment. You’ll be guided through a 5-step wizard:

Step 1: Basics
Define your experiment’s foundation.
1. Enter Experiment Name
Choose a descriptive name that explains what you’re testing. Example: “Homepage CTA Button Test”. The system auto-generates an experimentKey (slug) from your name: homepage-cta-button-test
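If you’re curious how a name becomes a slug, the Kotlin sketch below mirrors the conversion (the dashboard does this for you automatically; toExperimentKey is an illustrative helper, not part of any SDK):

```kotlin
// Minimal sketch of slug generation: lowercase, replace non-alphanumerics
// with hyphens, and trim stray hyphens at the edges.
fun toExperimentKey(name: String): String =
    name.lowercase()
        .replace(Regex("[^a-z0-9]+"), "-")
        .trim('-')

fun main() {
    println(toExperimentKey("Homepage CTA Button Test")) // homepage-cta-button-test
}
```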
2. Add Description (Optional)
Explain the hypothesis or goal of your experiment. Example: “Testing whether ‘Get Started Free’ performs better than ‘Start Free Trial’ for conversions”
3. Add Notes (Optional)
Internal notes for your team (not visible to users). Example: “Recommended by growth team based on Q4 user research”
4. Select Languages
Choose which languages to test. You can test the same keys across multiple languages.
- Must select at least one language
- Only active languages for your app are available

Validation: experiment name must be 1-100 characters (required), and at least one language must be selected.
Step 2: Select String Keys
Choose which string keys to include in your experiment.
1. Search or Browse Keys
Use the search bar to find specific keys or browse the full list. The table shows:
- Key name
- Current values for each selected language
- Page/context where the string appears
2. Select Keys to Test
Check the boxes next to keys you want to test. You can test multiple keys in one experiment (e.g., test both a headline and CTA together).
- Must select at least one string key
Example selection: signup_headline, signup_cta_button, signup_subheadline
Step 3: Create Variants
Define the test variations for each selected key and language.
Understanding Control vs Variants
Control Variant:
- The current/live value from your app
- Automatically pulled from your string catalog
- Serves as the baseline for comparison
Test Variants:
- New variations you want to test
- Can create multiple (variant-a, variant-b, variant-c, etc.)
- Each variant gets a unique name and value
1. Review Control Values
For each key + language combination, the system shows the current live value as the “control” variant. Example:
- Key: signup_cta_button
- Language: English
- Control: “Start Free Trial”
2. Add Test Variants
Click Add Variant to create new variations. Example variants:
- variant-a: “Get Started Free”
- variant-b: “Try It Free”
- variant-c: “Start Your Free Trial”
3. Edit Variant Text
Type or paste the variant text for each variation. Keep variants similar enough to isolate what you’re testing (length, tone, specific words).
4. Delete Variants (Optional)
Remove variants you don’t need using the delete button.
Validation: at least one variant must have text for each key/language combination.

Tips:
- Test one variable at a time for clear results
- Keep variant lengths similar for UI consistency
- Use meaningful changes (not just punctuation)
Step 4: Set Traffic Weights
Define what percentage of users see each variant.
Traffic Distribution Modes
Per-Key Mode:
- Each key can have different weight distribution
- More flexibility but more complex
Global Mode:
- Same weights apply to all keys in the experiment
- Simpler and recommended for most experiments
1. Choose Distribution Mode
Toggle between Per-Key and Global mode. Recommendation: use Global mode for simplicity unless you have specific reasons to vary weights per key.
2. Set Percentages
Assign a traffic percentage to each variant. Example for 3 variants:
- Control: 34%
- variant-a: 33%
- variant-b: 33%
3. Verify Totals
Weights must add up to exactly 100% for each key/language. The system validates this before allowing you to proceed.
Validation: all weights must sum to exactly 100% per key/language combination.

Common presets:
- 50/50 split: 50% control, 50% variant-a (simple A/B test)
- Equal 3-way: 34% / 33% / 33% (test 2 variants against control)
- Challenger test: 80% control, 20% variant-a (low-risk testing)
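As a quick illustration of the 100% rule, here is a minimal Kotlin sketch of the check the wizard performs (validateWeights is a hypothetical helper, not a Stringboot API):

```kotlin
// Sketch: verify variant weights for one key/language combination total 100%.
fun validateWeights(weights: Map<String, Int>): Boolean =
    weights.values.sum() == 100

fun main() {
    val weights = mapOf("control" to 34, "variant-a" to 33, "variant-b" to 33)
    check(validateWeights(weights)) { "Weights must total exactly 100%" }
    println("Distribution is valid: $weights")
}
```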
Step 5: Review & Publish
Final review before launching your experiment.
1. Review All Settings
Check the summary of:
- Experiment name and description
- Selected languages
- String keys included
- All variants and their values
- Traffic distribution
2. Final Validation
The system runs final checks:
- All required fields completed
- Weights total 100%
- At least one variant per key/language
3. Save or Publish
Choose your action.

Save as Draft:
- Saves your work without starting the experiment
- Can edit later before publishing
- No users are assigned to variants yet
Publish:
- Immediately starts the experiment
- Users begin seeing variants based on traffic weights
- Cannot edit once started (can only pause/end)
Experiment Statuses
Experiments move through different statuses during their lifecycle:

| Status | Description | Available Actions |
|---|---|---|
| DRAFT | Experiment saved but not started | Edit, Delete, Start |
| RUNNING | Experiment is live and assigning users | Pause, End, Delete |
| PAUSED | Temporarily stopped | Resume, End, Delete, Edit |
| ENDED | Experiment completed | Delete, View Results |
Managing Experiments
Starting an Experiment
- Navigate to the experiment detail page
- Click Start Experiment
- Confirm the action
- Status changes from DRAFT → RUNNING
- Users immediately begin receiving variant assignments
Pausing an Experiment
Temporarily stop variant assignments while keeping data:
- Open a RUNNING experiment
- Click Pause Experiment
- Status changes to PAUSED
- No new users are assigned (existing assignments persist)
When to pause:
- Discovered an issue with a variant
- Need to make adjustments
- External factors affecting results (holiday, outage, etc.)
Resuming an Experiment
Continue a paused experiment:
- Open a PAUSED experiment
- Click Resume Experiment
- Status changes back to RUNNING
- Variant assignments resume
Ending an Experiment
Permanently complete an experiment:
- Open a RUNNING or PAUSED experiment
- Click End Experiment
- Confirm the action
- Status changes to ENDED
- Variant assignments stop
- Results are finalized
After ending, you can:
- Roll out the winning variant to 100% of users
- Update the string key with the winning text
- Archive the experiment
Deleting an Experiment
Remove an experiment completely:
- Navigate to the experiment
- Click Delete (available in any status)
- Confirm deletion
- Experiment and all data are removed
Understanding Results
Viewing Analytics
Experiment results are tracked through:

Dashboard Analytics (if integrated):
- Variant assignment counts
- Distribution percentages
- Total impressions
- Active user counts per variant
Your Analytics Platform (Firebase, Mixpanel, Amplitude, etc.):
- User properties show variant assignments
- Track conversions, events, revenue by variant
- Statistical significance testing
- Detailed user behavior analysis
How Variant Assignment Works
1. User Opens App
SDK sends device ID with API request
2. Server Assigns Variant
Backend uses device ID to deterministically assign user to a variant based on traffic weights
3. SDK Receives Assignment
SDK gets the string value for the assigned variant
4. Analytics Tracking
SDK calls your analytics handler with experiment assignment
5. User Sees Variant
App displays the variant text to the user
- Same device ID always gets same variant (consistent experience)
- Assignment persists across sessions
- Multiple experiments can run simultaneously
- Each experiment is independent
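To make the determinism concrete, here is a minimal Kotlin sketch of weighted bucketing, under the assumption that the backend hashes the device ID together with the experiment key; Stringboot’s actual server-side algorithm is not documented here and may differ:

```kotlin
// Sketch of deterministic weighted assignment: the same deviceId and
// experimentKey always map to the same bucket, so a user keeps their
// variant across sessions. The hashing scheme is an assumption.
fun assignVariant(
    deviceId: String,
    experimentKey: String,
    weights: List<Pair<String, Int>> // e.g. "control" to 50, "variant-a" to 50
): String {
    require(weights.sumOf { it.second } == 100) { "Weights must total 100" }
    // floorMod keeps the bucket in 0..99 even for negative hash codes
    val bucket = Math.floorMod((deviceId + experimentKey).hashCode(), 100)
    var cumulative = 0
    for ((variant, weight) in weights) {
        cumulative += weight
        if (bucket < cumulative) return variant
    }
    return weights.last().first // unreachable when weights sum to 100
}

fun main() {
    val v = assignVariant("device-123", "homepage-cta-button-test",
        listOf("control" to 50, "variant-a" to 50))
    println(v) // same output every run for this device/experiment pair
}
```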
SDK Integration
For complete analytics tracking, integrate the SDK with your analytics platform:

Android SDK
Set up analytics handler for Android
iOS SDK
Set up analytics handler for iOS
Web SDK
Configure analytics for Web
Best Practices
1. Test One Hypothesis at a Time
Good example:
Experiment: “CTA Button Text Test”
- Control: “Start Free Trial”
- variant-a: “Get Started Free”
2. Run Experiments Long Enough
✓ At least 1-2 weeks for statistical significance
✓ Minimum 100 users per variant (ideally 1000+; see the estimate sketch below)
✓ Include full week cycles to account for weekday/weekend differences
✓ Don’t end early even if one variant is clearly winning
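For the per-variant sample size in the checklist above, a common rule of thumb for a two-sided test at roughly 95% confidence and 80% power is n ≈ 16·p(1−p)/δ², where p is the baseline conversion rate and δ the minimum lift you want to detect. A Kotlin sketch (sampleSizePerVariant is an illustrative helper):

```kotlin
import kotlin.math.ceil

// Rule-of-thumb sample size per variant: n ≈ 16 * p(1-p) / delta^2.
fun sampleSizePerVariant(baselineRate: Double, minDetectableDiff: Double): Int {
    val variance = baselineRate * (1 - baselineRate)
    return ceil(16 * variance / (minDetectableDiff * minDetectableDiff)).toInt()
}

fun main() {
    // Detecting a 2-point lift from a 10% baseline needs ~3,600 users per variant.
    println(sampleSizePerVariant(0.10, 0.02))
}
```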
3. Use Meaningful Distributions
For Initial Tests:
- 50/50 split between control and one variant
- Fastest path to statistical significance

For Multiple Variants:
- Equal distribution: 33% / 33% / 34%
- Clear comparison between options

For Risky Changes:
- 90% control / 10% variant
- Test radical changes with limited exposure
4. Document Everything
Use the Notes field to record:
- Hypothesis being tested
- Expected impact
- Stakeholder discussions
- External factors during test period
- Learnings and results
5. Choose High-Impact Strings
Prioritize testing strings that affect:
- Conversion funnels (signup, purchase, subscription)
- User activation and onboarding
- Critical error messages
- Primary CTAs
6. Respect Statistical Significance
Don’t declare a winner until:
- Sufficient sample size reached
- 95%+ confidence interval
- Results are consistent over time
- Accounted for external factors
Tools for checking significance:
- Online A/B test calculators
- Your analytics platform’s experiment tools
- Statistical software (R, Python)
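If you want a rough in-house check before reaching for those tools, the following Kotlin sketch runs a standard two-proportion z-test on conversion counts (zScore is an illustrative helper; prefer your analytics platform’s experiment tooling for real decisions):

```kotlin
import kotlin.math.abs
import kotlin.math.sqrt

// Two-proportion z-test comparing conversion rates of two variants.
fun zScore(convA: Int, totalA: Int, convB: Int, totalB: Int): Double {
    val pA = convA.toDouble() / totalA
    val pB = convB.toDouble() / totalB
    val pooled = (convA + convB).toDouble() / (totalA + totalB)
    val se = sqrt(pooled * (1 - pooled) * (1.0 / totalA + 1.0 / totalB))
    return (pA - pB) / se
}

fun main() {
    // 120/1000 control conversions vs 150/1000 variant conversions
    val z = zScore(120, 1000, 150, 1000)
    // |z| > 1.96 corresponds to ~95% confidence for a two-sided test
    println("z = $z, significant at 95%: ${abs(z) > 1.96}")
}
```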
Common Use Cases
Testing CTA Buttons
Goal: Improve click-through or conversion rates.
Example keys: cta_signup_button, cta_subscribe_button, cta_purchase_button
Ideas to test:
- Action-oriented: “Get Started” vs “Start Now” vs “Begin Free”
- Value-focused: “Start Free Trial” vs “Try Free for 30 Days”
- Urgency: “Start Now” vs “Start Today” vs “Get Instant Access”
Testing Headlines
Goal: Increase engagement or time-on-page.
Example keys: homepage_headline, feature_section_title, pricing_page_headline
Ideas to test:
- Benefit-focused vs feature-focused
- Question vs statement format
- Short vs descriptive
Testing Error Messages
Goal: Reduce user frustration and support requests.
Example keys: error_network_offline, error_invalid_email, error_payment_failed
Ideas to test:
- Technical vs plain language
- Including vs excluding next steps
- Apologetic vs matter-of-fact tone
Testing Onboarding
Goal: Improve activation and retention.
Example keys: onboarding_welcome_title, onboarding_step1_description, onboarding_complete_message
Ideas to test:
- Length: Brief vs detailed instructions
- Tone: Formal vs casual
- Focus: Feature benefits vs use cases
Integration with SDKs
Once you create an experiment in the dashboard, your SDKs automatically receive variant assignments:

How It Works
- Dashboard: Create experiment with variants and weights
- Backend: Stores experiment configuration
- SDK: Requests strings with device ID header
- Backend: Assigns device to variant based on configuration
- SDK: Receives variant string value
- SDK: Calls analytics handler with assignment
- Analytics: Tracks which users see which variants
Platform-Specific Setup
- Android: configure the analytics handler in your Application class; see the Android A/B Testing Guide for complete setup (an illustrative sketch follows below)
- iOS: see the iOS A/B Testing guide
- Web: see the Web SDK guide
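The sketch below shows roughly what the Android wiring could look like. Stringboot.init and setAnalyticsHandler are placeholder names, not the SDK’s confirmed API (the real entry points are in the Android A/B Testing Guide); the Firebase call is the standard setUserProperty API:

```kotlin
import android.app.Application
import com.google.firebase.analytics.FirebaseAnalytics

class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        val analytics = FirebaseAnalytics.getInstance(this)
        // Hypothetical Stringboot calls -- placeholder names, not the confirmed API:
        Stringboot.init(this, apiKey = "YOUR_API_KEY")
        Stringboot.setAnalyticsHandler { experimentKey, variant ->
            // Tag the user so results can be segmented by variant in Firebase.
            // Firebase user property names are limited to 24 characters.
            analytics.setUserProperty("exp_$experimentKey".take(24), variant)
        }
    }
}
```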
Troubleshooting
Variants Not Showing in App
Possible causes:
- Experiment is in DRAFT status (not started)
- SDK not integrated or initialized correctly
- Device ID not being sent with requests
- Cache needs to be cleared
Solutions:
- Verify experiment status is RUNNING
- Check SDK initialization logs
- Confirm device ID in SDK configuration
- Clear app cache or reinstall
All Users Getting Same Variant
Possible causes:
- Traffic weights set to 100% for one variant
- Device ID generation issue
- Cache serving same value
Solutions:
- Verify traffic weights in Step 4 of the wizard
- Check device ID is unique per installation
- Force cache refresh in SDK
Can't Edit Running Experiment
This is expected behavior. Once an experiment is RUNNING, you cannot edit it; this preserves data integrity. Options:
- Pause the experiment (allows limited edits)
- End the experiment and create a new one
- Clone the experiment and modify the copy
Analytics Not Tracking Assignments
Possible causes:
- Analytics handler not configured in SDK
- Analytics integration issue
- Network blocking analytics calls
Solutions:
- Verify the analytics handler implementation
- Check analytics platform integration
- Test with SDK debug logging enabled
- See platform-specific A/B testing guides
Next Steps
SDK Integration
Set up analytics tracking in your SDKs
String Management
Learn how to manage string keys
Languages
Configure languages for your app
Best Practices
Advanced tips and troubleshooting
Questions? Contact support at [email protected]