Shiva Manjunath

How to run A/B testing and stay compliant – Interview

October 15, 2021
Reading time: 5 minutes
Daniel Boltinsky
Kameleoon, Managing Editor, North America

This interview is part of Kameleoon's Expert FAQs series, where we interview leading experts in data-driven CX optimization, A/B testing and experimentation. Shiva Manjunath has nearly a decade of experience helping teams mature their experimentation programs, so we wanted to find out how companies can scale their A/B testing, especially when compliance is a concern.

 

Hi Shiva, thank you for your time. Could you please tell our readers a bit about yourself and what you do?

I’m a senior strategist at Speero, a leading experimentation consulting agency. I’ve been an experimentation geek for 7+ years now, building experimentation programs for companies like Norwegian Cruise Line and Gartner. One of the super fun things I’m proud to be working on now is helping teams uplevel their experimentation maturity; I’m really geeking out on the frameworks and processes behind it. I also consult as a strategist, helping a few other accounts improve and run their optimization programs!


Experimentation and staying compliant


Many organizations see experimentation as something for conversions, not retention. How do you get them to make a mindset shift? 

The way I tend to get folks to shift their mindset away from singular metrics (e.g. simply looking at conversion rate) is through education about experimentation and the power of the tool. This has been said ad nauseam, but I’ll repeat it: CRO ≠ conversion rate optimization. CRO = experimentation. It means risk mitigation, challenging the status quo, and running a ‘test to learn’ program. What can that manifest as? Running an experimentation program based on learning more about your audience. Bucketing audiences differently, such as new users vs. existing users, traffic sources, behavior on the site, desktop vs. mobile, etc.

It also means collaborating with UX, devs, and other teams to translate complex ideas into tangible tests you can run to learn more about concepts and marketing strategies. The more you run a ‘test to learn’ program, the more you’re able to iterate and build a website with a strong user experience. And that, at its core, is a major driver in building strong customer retention.


 

Companies say data is critical to optimizing their CX, but many feel overwhelmed by the volume. How can someone feel less overwhelmed by data?

So an interesting example of this is session recording. You have a tool which will maybe record 10,000 sessions in a month. If you simply go to the tool and click ‘play’ on the first session recording, watch it, then go to the next one, then the next one, you will spend days watching sessions and probably come up with nothing tangible.

An abundance of data is great, but there has to be direction to it. This is where UX and research are important. This is where a framework for analyzing data is important. Those will give you starting points to begin your journey of data investigation, and from there you can learn, experiment, iterate, and do it all over again.

 

Brands in regulated spaces like healthcare & banking are apprehensive about optimization. How can you put them at ease so they can compete with digital-first competitors?

The fear of testing things tends to be unfounded. Sometimes, people don’t want to have their ideas challenged. Sometimes, people want to move ‘fast’. To which I’d argue I’d rather be right, and know my efforts are truly working, than be ‘fast’. You wouldn’t, as a big pharma company, simply develop a drug, then skip clinical trials, right? Legality aside, you have those trials to ensure your drug is actually going to work. Why wouldn’t you do that with your digital property too?

Let’s use another example - healthcare. As a doctor, you don’t simply look at a person and diagnose their condition, right? You run tests. You ask questions. You try to understand what they’re going through, then come up with a solution. Then monitor. This is a perfect example of using research to identify problems - you should be doing this on your digital properties to make sure you’re actually solving problems users are facing.


 

Something like 90%+ of experiments “fail”. How can organizations harness this large body of testing as insightful data, rather than waste it?

That’s an easy one, conceptually. ‘Failure’ is in the eye of the beholder. I actually recently did a webinar with Experiment Nation on how to run a ‘test to learn’ program. Effectively, if you run a program aiming to ‘win’ with every test, you will lose far more often than if you structure your tests so you ‘learn’ from every test. With every test you run, even if it ‘loses’ on key metrics, you still ‘win’ because you learned something about your audience. I’ll shamelessly plug that y’all should check out my webinar to learn more!

 

Test velocity, or the rate at which a brand experiments and shares insights, is a key indicator of business success. What stops teams from scaling their optimization programs? What would you tell them?

Great question. This is what I’ve been working on a bit in terms of helping companies scale up in their experimentation maturity. There isn’t an easy answer, unfortunately. It really depends on the company, their resourcing, their org matrix… I could write a whole essay on this. Things which tend to limit experimentation team growth tend to be development resources, lack of collaboration within functions, lack of experimentation culture (i.e. let’s just do it, testing is too slow), and having far too many cooks in the kitchen.

Solutions are highly context-dependent, but outsourcing tasks (e.g. having Speero build your experiments for you), fostering collaboration with internal teams, and running a ‘test to learn’ program are all fantastic ways to scale velocity up. One thing I will call out, though, is the risk of focusing on quantity over quality. Simply scaling your velocity up doesn't mean anything if the quality of experiments is low (e.g. 30 button color tests). There has to be a balance between quality and quantity. So a fixation on velocity can be as misleading as fixating an experiment simply on ‘conversion rate’.

Some fun stuff...


What is one book not directly related to optimization that you would recommend to CROs?

Books? What are those? Heh. For real though, I’ve been reading Crucial Conversations on the side and I’ve really enjoyed it. Politicking, as much as we don’t want to do it, is important. Building relationships and making sure stakeholders are happy are both important. I’m on a chapter talking about ‘building safety’ within conversations, and I think that’s crucially important when it comes to being an optimizer. Making sure everyone is empowered to contribute to experimentation is a powerful weapon in building the best possible digital experience.

 

You’re at a conference (remember those?)—what’s your go-to networking strategy? Any icebreakers?

Honestly, I just walk up to people and introduce myself. Keep it simple I guess.
