Never date an A/B test design you wouldn't want to marry

  • Writer: Camron Lockeby
  • May 9, 2024
  • 4 min read

Updated: May 16, 2024

I once had a product manager tell me, “I don’t care if we fill the page with dancing hamsters if it tests well and increases revenue.” Thankfully, he later walked that back as an exaggeration — it’s the kind of statement that would make most designers spin uncontrollably in their Eames chairs. It does, however, stand as a great example of how people can lose sight of the bigger picture when chasing revenue boosts from A/B testing within the silo of any single part of a larger experience. If you’re unfamiliar with this type of testing, the folks at FullStory have a great deep dive into the topic, but this isn’t that. I can, however, offer a few personal insights from a design perspective.


Right Place, Wrong Time

I believe A/B and multivariate testing are a great way to fine-tune any aspect of your digital experience. At a high level, you release one or more variations of an experience to targeted segments of your audience, either in a test environment or in production, which can yield actionable, data-driven insights for an agile product team. That said, I feel A/B testing is NOT the place to try out bold new approaches to design or functionality. Taking these “first date” moments straight to an A/B test can skew the data with factors outside the test itself, producing a “false positive.” To keep from confusing or frustrating users, those kinds of changes are better suited to the iterative style of design and development a team can validate via user testing. Done early and often, user testing is a safe way to work through what can be clumsy first encounters between your innovative new concept and users who struggle with change. Yes, even good change.


Come Together

False positives and user confusion aside, A/B testing larger changes to an experience can negatively impact other teams working within your digital ecosystem. While working as a UX resource across three different teams in an e-commerce department, I discovered two teams running A/B tests affecting the same part of an experience. One team was trying to add more value to a stagnating add-ons page while another was testing circumventing the page altogether in an effort to streamline the checkout process. In product design and A/B testing, much like dating, communication is key! It’s important to know, at least at a high level, what other teams may be testing, when and for how long those tests will be run, and what goal the testing is working toward. Double-checking this information with the associated stakeholders can also shed light on many of the whys involved in the test plan. With big changes, a positive result can cause a ripple effect of unintended consequences elsewhere. With so many active players, over-communication is better than the awkward moment of running into your team’s experience in another team’s A/B test!


I Think We’re Alone Now

Many organizations first launching into company-wide A/B testing will create a “testing team” that acts as a chaperone for the A/B experience. By gatekeeping the who, what, and when, this ensures situations like the one I described above are avoided. It does not, however, prevent an overzealous product manager from circumventing proper design-leadership vetting and directly requesting a test to validate an incredible idea. It’s like having an amazing time on the first few dates: everything seems perfect, so why not book a cruise together? Adding a representative chaperone from the design team can help in those circumstances, but with more rigorous vetting of test ideas, a testing team can quickly become a bottleneck. Proofing a test idea evolves into a process of forms and emails passing back and forth between your product team and the testing team, and once both teams agree, booking a time slot to run the test could be months out. The chaperones prevent poor choices, but they also delay your idea from getting real data from meaningful customer connections.


A better goal to work toward is empowering each team to manage its own A/B testing environment. In organizations like Amazon, which have more mature design and research departments, A/B testing is often done for every change, with each team dialing the percentage of customers who see a given experience up or down depending on the performance data. This kind of system takes organizational foresight and more effort to set up in the beginning, but like all foundational processes, it makes for scalable results that pay higher dividends in the future.
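To make that “dialing” idea concrete, here’s a minimal sketch of how a team might deterministically bucket users into a test and adjust the rollout percentage over time. The function and experiment names are illustrative assumptions, not from any particular testing platform — real systems add exposure logging, mutual-exclusion groups, and more.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, rollout_percent: float) -> str:
    """Deterministically bucket a user into 'B' (new experience) or 'A' (control).

    Hashing the experiment name with the user ID means the same user always
    lands in the same bucket for a given test, and the team can dial
    rollout_percent up or down as performance data comes in.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # uniform value in 0.00–99.99
    return "B" if bucket < rollout_percent else "A"

# Example: expose the new checkout flow to 5% of traffic to start
variant = assign_variant("user-42", "streamlined-checkout", rollout_percent=5.0)
```

Because assignment is a pure function of the IDs, raising the percentage only adds new users to the “B” group — everyone already seeing the new experience keeps seeing it, which keeps the data clean as the rollout grows.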


Go With The Flow

The process I most prefer when introducing new ideas to an existing product experience includes plenty of cross-team and stakeholder communication, frequent user testing, and research throughout the iterative design and development process, ending with A/B testing to fine-tune the details and evolve the idea with real-world, data-driven insights. I like this flow because it provides transparency to everyone involved and helps expose any red flags from customers early. Once there’s agreement on the idea’s compatibility within the bigger picture, you can drill down and make sure the more detailed aspects are providing the best possible outcomes across the board. Before you put a ring on it.


© 2025 | Camron Lockeby, lockeby.design | All rights reserved