
The important thing to remember about testing is not to stop at a failure: Amber Kirkwood, Envato

February 7, 2022
Reading time: 10 mins

This interview is part of Kameleoon's Expert FAQs series, where we interview leading experts in data-driven CX optimization and experimentation. Amber Kirkwood is a product manager at Envato, a leading creative marketplace.

How do marketers see experimentation vs. product managers? Who's gaining more influence inside companies?

While both product managers and marketers typically view conversion rate as a defining metric, there are key differences in how they approach experimentation.

For marketers, experimentation typically involves optimizing and testing campaign messaging and imagery to improve click-through rates, as well as creating consistency between ads and on-site landing pages. A lot of their experimentation sits here: optimizing campaigns and getting the best return on marketing spend.

Where marketers are focused on getting more traffic to the website, PMs look at how to convert that marketing traffic on-site, normally by reducing user anxiety and distractions and by providing better clarity and confidence in the product.

Previously, marketers owned a lot of the experimentation across websites, but in the past few years the strategy has shifted toward product managers. Why is this? I’d say it’s because of the learnings you get from running tests, which are invaluable for product managers to better understand their users and products and to inform the product roadmap going forward.

How important are developers on your team? How far can you get an experimentation program with only marketers?

Experimentation has really come a long way in the past few years, and it’s more accessible than ever.

I’ve seen a lot of companies avoid having an internal testing function because it has been seen as a complex, tech-heavy initiative in the past. The platforms we have available today, like Kameleoon, eliminate the need for a developer-led team.

If you use these testing platforms, few or no developers are required for small changes, which are usually made with the aid of a visual editor. This makes rapid experimentation quick and saves a lot of time on analyzing test results too. The only thing is, when it comes to shipping the winning variants, developers can come in really handy!

More complex tests, beyond text or color changes, will always require developer capacity. For example, testing hiding a field on a form may have consequences for system integrations that can’t be seen from the front end. When it comes to these bigger changes, it’s equally important to consider the feasibility of a test (effort) as well as its impact, and to really consider whether a few days of developer work is likely to deliver the expected returns.

What are some essential tools for non-developers looking to scale their experimentation using client-side testing?

There are so many great tools out there for non-developers looking at experimentation. One recommendation I make is to invest in a platform that allows you to accurately QA your experiment before your users see it. Many testing platforms can target a test to a specific IP address so you can make sure there are no issues before launch. This is important if you don’t have developers to check the code or identify breaks before your users do.

Another recommendation for anyone looking to scale their experimentation is to make sure you have a solid understanding of the basics of testing. HubSpot has a great resource on this.

Another tip: Make sure you’re investing in a testing archive. One of the most important factors when running tests is to document your learnings and record previous tests properly. I use Airtable for this, but other options include Trello or even just a spreadsheet. Tests don’t typically have a great win rate the first time, usually around 10-20%, but it’s so important to iterate and apply your learnings from the first round. This is a common mistake companies make: they test once and move on. From my experience, the second, follow-up test typically has a success rate of around 60-70%.
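The interview doesn’t prescribe specific fields for such an archive, so the snippet below is only a minimal sketch, assuming a plain CSV file as the simplest possible store; the same columns translate directly to an Airtable base, a Trello board, or a spreadsheet. The example row is based on the inquiry-form test described later in this interview, and the dates are purely illustrative.

```python
import csv
import os

# Column set is illustrative; the interview doesn't prescribe specific fields.
FIELDS = [
    "test_name", "hypothesis", "page_or_flow", "start_date", "end_date",
    "primary_metric", "result",          # e.g. win / loss / inconclusive
    "uplift",                            # observed change in the primary metric
    "learnings", "follow_up_iteration",  # link to the next test in the series
]

# Example record based on the inquiry-form test described later in this
# interview; the dates are made up for illustration.
example_record = {
    "test_name": "Inquiry form: reduce fields from 12 to 4 (v1)",
    "hypothesis": "Fewer fields reduce anxiety and lift form submissions",
    "page_or_flow": "Inquiry form",
    "start_date": "2021-06-01",
    "end_date": "2021-06-21",
    "primary_metric": "Form submission rate",
    "result": "loss",
    "uplift": "n/a",
    "learnings": "Users mistook the form for an application; fewer fields felt less trustworthy",
    "follow_up_iteration": "Inquiry form: clarify it is an inquiry (v2)",
}

archive_path = "test_archive.csv"
write_header = not os.path.exists(archive_path)

# Append the record, writing the header row only when the file is new.
with open(archive_path, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if write_header:
        writer.writeheader()
    writer.writerow(example_record)
```

The exact tool matters less than keeping every test, its result, its learnings, and its follow-up iteration in one searchable place.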

Tools such as CrazyEgg or Hotjar are incredibly useful for sourcing test ideas too; their click maps and scroll maps can really highlight key areas where your users are spending a lot of time. This is especially useful for teams that may not have sophisticated analytics tracking set up.

These tools can help identify huge areas of opportunity. For example, you might see a high percentage of users clicking on something that looks like a button but isn’t linked. One test idea could be removing it entirely, another could be improving the look/feel to ensure users understand it isn’t a link. Some test ideas can seem straightforward, and may not seem like they will have an impact, but more often than not, removing an obstacle in the path of your users can have the biggest impact of all.

Another great tool I use is the behavioral economics testing archive on Convertize. This website is great for test ideas that relate back to behavioral economic principles, the factors at play when we make decisions. These are a great area of focus for tests at a lot of leading websites, like Amazon, Google and Envato.

Envato is a huge marketplace with dozens of products. What are some challenges when experimenting on such a large scale?

It can be easy to want to test the same thing on multiple sites to save time and streamline the experimentation plan, but it can also be a waste of time if your products have variances in user behavior. When it comes to juggling multiple products, it’s important to understand what can be tested at once across all sites, as well as when to break it down further.

For example, if there is an aspect of your products that is very similar across sites, for instance, shared search functionality that frequently returns zero results to users, this is an area you may want to optimize at scale by adding suggested searches or a pre-search filter.

When it comes to actual searches, you may see a difference in how users on each product use them, and this is where you want to start testing more granularly. On one site, customers may tend to sort search results by newest first; on another, by highest rated. This might be a good opportunity to test setting the default sorting order differently site by site.

This can, of course, slow down testing velocity when you have a large product line, so it’s normally recommended to focus testing efforts quarter by quarter on a specific product or set of products with similar users and behavior.

You used to work in retail sales. Did that background teach you anything valuable for your current role?

It’s always fascinated me how much retail and online purchasing behavior overlap. Working in retail gives you first-hand experience of customers’ emotions when buying: their frustration when they have difficulty finding an item they are after, and their joy when purchasing something they’ve had their eye on for quite some time.

It’s easy when working on a digital product to see numbers instead of faces, and I think working in retail has really helped me remember who I’m creating an experience for, and that real people experience the same emotions when purchasing online.

For me, this is a great example of how important it is not to sacrifice user experience in favor of conversion rate. The two don’t always align, and it’s important not to lose sight of that.

Could you walk us through some of the most interesting experiments you’ve run?

There have been many over the years! Sometimes the most interesting ones aren’t the winning tests either. Something I’ve always found fascinating is any work I’ve done on optimizing forms or checkout pages. One test I remember was reducing the number of fields on an inquiry form from 12 to 4. Naturally, you’d think this would be a great improvement in UX, saving the user time on filling out unnecessary fields, as well as removing their anxiety over providing more personal information than they were comfortable with. We looked at the data and saw these fields had huge drop-off rates, so we thought this was a sure-fire win. This test, despite my extreme confidence in it, failed miserably.

The important thing to remember about testing is not to stop at a failure, because normally on the second or third try you succeed, and the insights from the first test are invaluable. For this one, we continued to iterate until we landed on a hypothesis: customers were confused by the inquiry form. They thought it was an application form and found the reduced number of fields less trustworthy, because they were used to seeing a lot of fields on applications.

It took us about 5 iterations to get to this point, but when we tested this hypothesis by making it clearer that it was an inquiry form, we saw a 23% uplift in conversion.

There's a lot of convergence of experimentation and feature management these days. Which is better, a feature management solution that has A/B testing? Or an A/B testing solution that adds feature management? 

There are pros and cons to both approaches. With an A/B testing solution, you normally see an entire team or multiple teams focused on improving conversion rate through rapid experimentation. This is great for quick learnings, but when you’re testing everything and anything, it’s easy to lose track of the bigger picture and how one change in a small area impacts the rest of the customer journey. A feature management approach is great too, but it can lead to thinking of testing as an afterthought; spending months creating something and only testing it at the end can fail, especially if you aren’t incrementally testing and validating the work along the way.

There’s a third option, which is having a testing team dedicated to improving conversion rate by focusing on themes of tests, which considers the holistic user journey and addresses related pain points you may have identified. For example, a theme could be focused on surfacing more trust signals to users to help them make a better purchase decision. Something like this can have dozens of tests related to it while still keeping the user experience in mind.

What would you recommend to others who want to learn CRO and enter the industry right now?

When I started in CRO I had to start from scratch, working at a relatively small company that hadn’t done much in that space before. One thing I regret, looking back, is thinking it would be fine to dive right in and learn as I went. What I recommend is making sure you have a plan.

A. How will you source your test ideas?

B. How do you plan on prioritizing test ideas?

  • My personal favorite is the PXL framework, but PIE and ICE are also very popular (see the sketch after this list).

C. What is your experiment design template going to look like?

D. Where will you store and record your test results?

  • As previously mentioned, I use Airtable, which has some great starting templates for testing archives.
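For the prioritization frameworks mentioned in B, here is a minimal sketch of how the simpler scoring models work, assuming the common convention of rating each factor from 1 to 10 and averaging: ICE scores Impact, Confidence, and Ease, while PIE swaps in Potential and Importance (PXL is a weighted checklist rather than a three-factor score). The test ideas and ratings below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    impact: int      # how big an uplift could this drive? (1-10)
    confidence: int  # how sure are we it will win? (1-10)
    ease: int        # how cheap is it to build and run? (1-10)

    @property
    def ice_score(self) -> float:
        # ICE convention assumed here: simple average of the three ratings.
        return (self.impact + self.confidence + self.ease) / 3

# Hypothetical backlog items; names and ratings are illustrative only.
backlog = [
    TestIdea("Clarify that the form is an inquiry, not an application", 8, 7, 9),
    TestIdea("Add suggested searches to zero-result pages", 7, 6, 4),
    TestIdea("Remove the unlinked, button-like element", 5, 8, 9),
]

# Highest score goes to the top of the test plan.
for idea in sorted(backlog, key=lambda i: i.ice_score, reverse=True):
    print(f"{idea.ice_score:.1f}  {idea.name}")
```

Sorting the backlog by score gives a defensible starting order, but any framework is only as useful as the honesty of the ratings behind it.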

 


To find out if your store can increase conversion rates and provide better customer experience with an advanced experimentation tool like Kameleoon, book a demo call.

Kameleoon is an advanced client and full stack A/B testing and personalization tool with Shopify Plus integration to make customer experience optimization easy for eCommerce stores. Its unlimited, flicker-free A/B/n testing, AI personalization and real-time data reporting help mid-size and enterprise brands create world-class online shopping experiences.
