Classy research hub
Driven by innovation and relentless optimization, we continually test to deliver top-performing donor experiences. Discover insights from our latest research and A/B tests:
FAQs
How does Classy A/B testing work?
Much of Classy’s product innovation is built on A/B tests in which our data, product, and design teams test a single change or concept across many Classy nonprofit campaigns, rather than testing on one campaign at a time.
This allows a) all test learnings to be based on a variety of nonprofits and causes, b) the impact of testing on any single campaign or nonprofit to be minimized, and c) overall testing length to be shorter, so we can learn and innovate faster while minimizing donor impact.
Test lengths vary from 8 to 45 days, depending on the complexity of the test and how long it takes to receive sufficient traffic to reach statistical significance.
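To see why test length depends so heavily on traffic, here is a minimal sketch (illustrative only, not Classy's actual methodology) of the standard two-proportion sample-size estimate: the smaller the lift you want to detect, the more visitors each variant needs before the result can reach significance.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p_base: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant to detect a relative lift
    (minimum detectable effect, mde) over a baseline conversion rate."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired statistical power
    p_var = p_base * (1 + mde)                 # conversion rate under the variant
    p_bar = (p_base + p_var) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_base * (1 - p_base) + p_var * (1 - p_var))) ** 2
    return ceil(numerator / (p_var - p_base) ** 2)

# Detecting a 10% relative lift on a 10% baseline takes roughly 15,000
# visitors per variant; halving the detectable lift roughly quadruples it.
```

Because required sample sizes grow quickly as effects shrink, a subtle change on low-traffic campaigns can take weeks to resolve while an obvious one on high-traffic campaigns resolves in days.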
Does Research Hub include results from all Classy tests?
This hub includes findings from a selection of our recent tests. There are two main reasons we don't publish everything. First, much of our testing is iterative: our teams are constantly making small tweaks to confirm and expand on similar findings, so we generally prioritize a readout of the earliest findings and then validate with slight variations in the background. Second, we want every published finding to carry a "so what" that is interesting and different enough to be worth your valuable time. So even though we're always testing, we only publish again when there's a meaningful insight we think you should hear about.
If your platform is optimized, why do you have to do so much testing?
Donor needs are always changing. So we're constantly innovating not just on new product features, but also on each step and detail of the donor experience. Our priority is to make sure your campaigns continue to bring in as much value as they can. Few organizations have the time, specialized in-house team of experts, or research budgets to run their own ongoing testing programs. Even those that do would benefit from the quick iteration and scaled validation that only comes with a large volume of donors. Part of the value nonprofits get with Classy is a commitment to continuous improvement based on rigorous testing (and insights from the world's largest donor community).
What does A/B testing mean for my donors?
Our testing efforts are driven by our commitment to providing donors with a high-quality giving experience. As with any thoughtful A/B testing program, donors will not know a test is underway. We do not notify campaign visitors of the test, nor is it obvious to them that they have landed on a campaign being tested.
Instead, when a donor arrives on a campaign that is part of a test, they will see either a) the campaign exactly as you built it, or b) the campaign with the change being tested.
We build every test and take necessary precautions to ensure minimal donor impact and a positive donor experience on every campaign, regardless of whether or not a test is underway.
What precautions does Classy take when A/B testing?
Our data, product, and design teams take a number of precautions to ensure that A/B testing does not negatively impact website visitors or campaign performance. While the precautions we take vary by feature or test, some of these precautions include:
- Testing across many campaigns: We don’t run tests on a single campaign or handful of campaigns. Rather, we test across a wide variety of campaigns to minimize individual impact.
- Controlling testing traffic: In some cases we control traffic so as to leave some visitors' experiences untouched, testing on a smaller segment of total traffic.
- Slowly ramping test traffic: We often start testing with a small subset of traffic (e.g. 5%), monitor impact closely, and slowly increase the percentage of traffic in the test as confidence increases that there is not a negative impact.
- Limiting variables and outliers: We test one change at a time and watch closely for outliers that could skew results (holidays, crises, political changes, etc.).
- Real-time monitoring of test results: We closely monitor how each test variant is performing to ensure it is not negatively impacting campaign performance. If it is, we can disable the test and return all campaign settings to normal.
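The traffic-control and ramping precautions above can be sketched with deterministic bucketing, a common pattern in A/B testing infrastructure (the scheme and names here are illustrative, not Classy's implementation): each visitor hashes to a stable bucket, and only buckets below the current ramp fraction enter the test.

```python
import hashlib

def in_test(visitor_id: str, test_name: str, ramp_fraction: float) -> bool:
    """Deterministically decide whether a visitor enters a test.

    Hashing the visitor and test name together yields a stable, roughly
    uniform bucket in [0, 1), so the same visitor always gets the same
    experience, and the ramp can grow (e.g. 0.05 -> 0.50) without
    reshuffling anyone already in the test.
    """
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # map hash prefix into [0, 1)
    return bucket < ramp_fraction
```

Because the bucket is a pure function of the visitor and test name, raising the ramp fraction only adds new visitors to the test; everyone included at 5% remains included at 50%, which keeps measurements consistent as confidence grows.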
Will my organization be included on every A/B test?
No. We run tests with a variety of organizations across all Classy customers, ensuring that what we learn is applicable to a wide variety of nonprofits and campaigns.
Can I get results for my specific organization?
We run our tests across a wide array of customers and campaigns to ensure that results are statistically significant and applicable across the full range of Classy customers. For this reason, we cannot provide nonprofit-specific results.
Can I run my own A/B tests for my forms specifically?
Absolutely! Our goal is to make sure your campaigns are raising as much as possible to fuel your mission. That's why we're committed to a robust, continuous A/B testing program that keeps the donation experiences you create optimized on your behalf. So while you don't need to worry about testing donor flow and functionality, if you're interested in testing the content, imagery, and style that best suit your unique campaigns, we certainly encourage it! To get started, inject your testing tool's snippet* through a tag manager such as Google Tag Manager or Tealium.
* Optimizely is not currently supported, to avoid collisions with our platform-wide testing.
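For reference, injecting a testing tool through Google Tag Manager typically means creating a Custom HTML tag whose body loads the vendor's snippet. The script URL below is a placeholder; substitute the snippet your testing tool provides:

```html
<!-- GTM Custom HTML tag body; replace src with your tool's snippet URL -->
<script src="https://cdn.your-testing-tool.example/snippet.js" async></script>
```

Firing the tag on your campaign pages (e.g. via an All Pages or page-path trigger) makes the tool available without editing the pages themselves.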