A/B testing (also called split testing) lets you run two or more versions of a funnel page simultaneously and automatically measure which one converts better. HoopAI routes incoming traffic between variants, tracks performance independently, and lets you declare a winner when you have enough data.

How A/B testing works in funnels

Each funnel step can have multiple page variants. When A/B testing is active, HoopAI splits incoming visitors between the variants based on the traffic percentages you configure. Each variant gets its own design, content, and URL path. Stats are tracked separately per variant so you can compare performance directly.
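
For intuition, here is a minimal sketch of weighted variant assignment, assuming a simple random draw per visitor. The variant names, the weights, and the assign_variant helper are illustrative only; this is not HoopAI's actual implementation.

```python
import random

# Hypothetical weights matching the traffic percentages configured on the step.
VARIANT_WEIGHTS = {
    "control": 0.5,      # original page
    "variant-b": 0.5,    # challenger page
}

def assign_variant(weights: dict) -> str:
    """Pick one variant at random, proportionally to its configured weight."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Each incoming visitor is routed independently of previous visitors.
print(assign_variant(VARIANT_WEIGHTS))
```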

Setting up an A/B test

  1. Open the funnel step. Navigate to Sites > Funnels, open your funnel, and go to the Steps tab.
  2. Add a new page variant. Click the gear icon on the step you want to test and look for an option to Add Variant or Split Test. Enter a name for the variant and assign it a unique URL path.
  3. Design the variant. Click Edit Page next to the new variant to open it in the builder. Make the changes you want to test: headline, layout, copy, button color, or any other element. Changing one element at a time makes it easier to attribute performance differences to a specific change.
  4. Set the traffic split. Define what percentage of traffic goes to each variant. A 50/50 split reaches statistical significance fastest; you can weight the split differently if you want to protect revenue while still testing (for example, 80% to the control, 20% to the variant). See the sketch after these steps for how the split affects test duration.
  5. Activate the test. Enable the split test. HoopAI will begin routing traffic between variants immediately.
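
The split chosen in step 4 also determines how long a test takes, because the smaller variant accumulates visitors more slowly. A rough back-of-the-envelope sketch, using an assumed 100 visitors per day and the per-variant visitor target discussed under reading results below:

```python
import math

def days_to_reach(target_per_variant: int, daily_visitors: int, shares: list) -> int:
    """Days until the slowest variant reaches the target number of unique visitors."""
    slowest_share = min(shares)
    return math.ceil(target_per_variant / (daily_visitors * slowest_share))

daily_visitors = 100   # assumed daily traffic to this funnel step
target = 200           # assumed per-variant visitor target

print(days_to_reach(target, daily_visitors, [0.5, 0.5]))  # 4 days
print(days_to_reach(target, daily_visitors, [0.8, 0.2]))  # 10 days
```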

What to test

Focus your A/B tests on elements that have the biggest impact on conversion:
  • Headline: Benefit-focused vs. curiosity-driven vs. pain-point framing
  • Sub-headline: With vs. without; long vs. short
  • Call-to-action button: Text (Get Instant Access vs. Yes, I Want This), color, size
  • Hero image or video: Image vs. video; person-facing vs. product-focused
  • Form length: Full form vs. email only
  • Page length: Short vs. long copy
  • Offer framing: Price presentation, bonuses, guarantee wording
  • Social proof: With testimonials vs. without; different testimonial formats

Reading A/B test results

Monitor test performance in the Stats tab. Each variant appears as a separate row, showing:
  • Unique page views per variant
  • Opt-ins and opt-in rate per variant
  • Revenue per variant (if the step has an order form)
Do not declare a winner too early. A test with 50 visitors per variant is not statistically meaningful. As a general rule:
  • Run the test until each variant has at least 100–200 unique visitors.
  • Allow at least 7 days to account for day-of-week traffic variation.
  • Look for a difference of at least 10–15% in conversion rate to be practically significant.
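
To sanity-check a result before calling it, you can run a standard two-proportion z-test on the per-variant numbers from the Stats tab. This is general statistics rather than a HoopAI feature, and the figures below are made up for illustration:

```python
from math import erf, sqrt

def split_test_summary(visitors_a, optins_a, visitors_b, optins_b):
    """Return (rate_a, rate_b, p_value) for a two-sided two-proportion z-test."""
    rate_a = optins_a / visitors_a
    rate_b = optins_b / visitors_b
    pooled = (optins_a + optins_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    return rate_a, rate_b, p_value

# Example: 250 visitors per variant, 30 vs. 45 opt-ins.
rate_a, rate_b, p = split_test_summary(250, 30, 250, 45)
print(f"control {rate_a:.1%}, variant {rate_b:.1%}, p = {p:.3f}")
```

A p-value below roughly 0.05 is a common threshold, but it is no substitute for the minimum sample size and duration guidelines above.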

Declaring a winner

Once you have enough data and a clear winner:
  1. Open the step settings and disable the losing variant.
  2. Set the winning variant to receive 100% of traffic.
  3. The losing variant’s page can be kept as a reference or deleted.
Alternatively, you can set the winning variant as the new default and continue testing a new challenger against it.

Best practices

  • Test one change at a time to isolate what caused the performance difference.
  • Run tests on your highest-traffic steps first — low-traffic steps take too long to reach significance.
  • Keep track of your test results and what you learned in a simple log (a sketch of one follows this list). Past test data helps you make faster decisions on future funnels.
  • Never run a test without a clear hypothesis: “I believe changing [element] from [A] to [B] will increase [metric] because [reason].”
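
One lightweight way to keep that log is a small structured record per test. The fields below are suggestions only, not a HoopAI feature:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SplitTestRecord:
    funnel: str
    step: str
    element: str                      # what was changed (headline, CTA, ...)
    hypothesis: str                   # "changing [element] from [A] to [B] will increase [metric] because [reason]"
    started: date
    ended: Optional[date] = None
    control_rate: Optional[float] = None
    variant_rate: Optional[float] = None
    winner: Optional[str] = None
    learning: str = ""

test_log = [
    SplitTestRecord(
        funnel="Webinar signup",
        step="Registration page",
        element="Headline",
        hypothesis="A benefit-focused headline will lift opt-in rate because it states the payoff up front",
        started=date(2026, 3, 1),
    ),
]
```
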
Your control variant (the original page) should always be included in every test as a baseline. Never replace the control with something completely untested without running it through a split test first.
A/B testing traffic splitting is random — the same visitor may see different variants on different visits. For the most accurate results, let the test run until you reach your target visitor count and do not pause or modify the test midway through.
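
Because each visit is assigned at random, the observed split only settles near the configured percentages as traffic accumulates, which is one more reason not to judge a test on an early snapshot. A quick, purely illustrative simulation:

```python
import random

def observed_control_share(visits: int, control_weight: float = 0.5) -> float:
    """Fraction of simulated visits that land on the control variant."""
    hits = sum(random.random() < control_weight for _ in range(visits))
    return hits / visits

for n in (50, 500, 5000):
    print(n, round(observed_control_share(n), 3))
# The control's share typically drifts noticeably at 50 visits and sits close to 0.5 by 5,000.
```
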
Last modified on March 5, 2026