For the user, nothing is more frustrating than a badly thought-out website. If they have to hunt for what they need, land on an error page, or slog through a never-ending form, you can be sure the (already low) goodwill they had when arriving on your website will disappear pretty quickly.
Unless you don’t want them to stick around (and more), UX must be one of your primary focuses. Before we continue, let’s get the difference between UX and UI out of the way:
- UI is short for User Interface. It’s how people interact with and use your product or website.
- UX (acronym for User eXperience) is how effective and enjoyable the use of your site or product is.
Think of it this way: if your UI is excellent but your UX is horrible, people will find your site pretty and see that everything works, but they won’t find what they need, or they’ll have a hard time understanding how to use it and most likely lose patience.
If you want to learn more about the differences between UI and UX, you can check out this article by Emil Lamprecht from CareerFoundry. The Bible of UX is a book titled “Don’t Make Me Think”. You don’t even have to read the book; the title alone gives you an idea of the mindset you have to adopt. UX is like a joke: if you have to explain it for people to understand, you failed.

A study (later corroborated by Google) showed that people form a first impression of your website in 50ms. It’s an uphill battle for you from that point. It’s your job to win them over and make their visit enjoyable and fruitful. This is all fine and dandy, but how do you evaluate or optimize your UX? How do you know what people understand and like best?
A/B Test everything on your website
A/B Testing is the best way to make informed design and marketing decisions and improve your UX. Today, we have the technology and tools to know exactly what people are doing on your website. But what do you do with this data, and how can you leverage this visibility for your UX? You can test everything on your website (navigation, design, specific elements, etc.) and get actionable insights. The days when you made new designs, pushed them live, prayed, then endured the aftermath are over. Every change must be tested against your audience before being implemented.
For those of you not so familiar with A/B Testing, here is an example. Say you saw in your analytics that a considerable number of visitors left at the second step of your checkout process.
Before jumping the gun and pushing a redesign live, you should run an A/B test. You would formulate a hypothesis as to why people are leaving.
Then create a design to address this problem. After that, you would divide a portion (or all) of your traffic into two equal parts, and send people randomly to either the current version of your step 2 (A) or your new design (B). You let things run for an appropriate length of time (more on that in a future article), then compare which design performed better against your goals. You can now safely implement your new design, backed by data rather than a gut feeling, without risking a drop in conversions.
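To make the mechanics concrete, here is a minimal sketch of how the 50/50 split and the final comparison might look. Everything here is illustrative: the visitor IDs, the conversion counts, and the choice of a simple two-proportion z-test are assumptions for the example, not a description of any particular testing tool.

```python
import hashlib
import math

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into A or B (50/50 split).

    Hashing the ID keeps the assignment stable across repeat visits,
    so the same person always sees the same version."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: is B's conversion rate different from A's?

    Returns the z statistic; |z| > 1.96 is roughly significant at 95%."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 120/2000 conversions on A, 156/2000 on B.
z = z_test(120, 2000, 156, 2000)
print(f"z = {z:.2f}")
```

The deterministic hash is one common design choice for the split; it avoids having to store each visitor's assignment while still keeping the experience consistent.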
A/B Testing will also guide all your redesigns. You don’t want to surprise your users, or implement things only to see your engagement and conversion rates drop significantly. We’ve seen cases where a company completely redesigned its site, implemented it, and saw a dramatic drop in conversions. That usually surprises them, since they all liked the new design better than the old one. But they didn’t test the new design. As a result, people were confused by the new navigation, and left. Sometimes, what converts better is counter-intuitive.
Everyone on your team could find a design way better, and it could still perform worse than the ugly one. 88% of online consumers are less likely to return to a site after a bad experience (study “Why web performance matters”, The Gomez Report). Yes, be afraid, very afraid. It’s hard to gain people’s attention, let alone their trust, and extremely easy to lose them. And good luck gaining them back. Long story short, you’d better not make design, UX, or marketing decisions based on gut feelings; you could pay for it greatly. So test, test, and test again.
Don’t change everything at once
Particularly if you have a loyal, recurring audience, make sure you don’t catch them off guard with a sudden, complete redesign, even if you’re convinced it’s better than what you currently have. That’s what Basecamp did in February 2014 when they officially became a company. They did a complete rebranding, discontinued some of their products, and redesigned basecamp.com without proper research and testing. One of their changes was to remove the signup form they had on the homepage. What happened? They watched their signup rate decrease for about six months, without knowing exactly why, before narrowing down what had happened. They lost millions in the process, and won the signups back by A/B Testing with the old signup form.
Of course, this doesn’t apply if you’re Facebook, with “Move fast, break things” (since changed to “Move fast with stable infra”) written on your wall. They push major changes (the launch of Messenger, for example), ignore the screams, and watch what people are really doing. Do users keep using the product every day even while venting everywhere that they aren’t happy, or did they really stop using and converting as a result of the change? Chances are you’re not Facebook, so tread carefully. There is a lesson here, though, about how you should handle feedback, which is our next point.
Ask your audience
It might seem a tad obvious, but sometimes you just have to ask. Qualitative data can be a blessing, if you know what to ask, how to ask it, and when not to listen. It’s not uncommon to ask and immediately regret it, or to not know how to discern what is actually valuable. So here are eight guidelines you can follow:
- Ask to see if you’re right or wrong
- Ask why they did something
- Ask very specific questions
- Ask only what you need, not what would be nice to know
- Keep it short, five questions is good, over 10 is too much
- Don’t ask open-ended questions in short surveys
- Don’t ask what to do
- Don’t ask what people want
As for how to collect this feedback, here are a few channels:
- Surveys (SurveyMonkey, Typeform, MyFeelBack)
- Feedback boxes (Qualaroo, Hotjar)
- User activity in your analytics
- Reach out manually
- Usability tests (usertesting.com)
Eliminate single points of failure
A single point of failure is a part of a system that, if it fails, stops the entire system from working. How does that translate to websites? Every event or element that stops visitors in their tracks and makes them leave, preventing them from accomplishing what you wanted them to, is a single point of failure. You must track these like your life depends on it (and figuratively, it kind of does).
The devil is in the details, so even if something seems insignificant, it can be the difference between a sale and an exit. But what should you look for?
- Bugs/error messages/404 pages
- Site not optimized for mobile (61% of users said that if they didn’t find what they were looking for right away on a mobile website, they’d quickly move on to another site.)
- Your site is slow to load (40% of people leave a website if it takes more than 3 seconds to load.)
- You’re using outdated technologies on your website (Using Flash? urgh)
- Too many ads
- Your Content is hard to read
One of our e-commerce clients had a situation like this. They sold out of a product (good, no?), but then people ended up on a 404 page (yikes). The result: exits from people who were about to buy, dissatisfaction, and, on top of that, the risk that a page left too long with a 404 error gets de-indexed by Google.
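Tracking 404s and other broken pages can be partly automated. Below is a minimal sketch that checks a list of URLs (for instance, pulled from your sitemap) and reports anything that doesn’t answer with HTTP 200; the URLs in the usage comment are hypothetical.

```python
from urllib import request, error

def check_urls(urls):
    """Return (url, status) pairs for pages that did not respond with
    HTTP 200, so they can be fixed or redirected before visitors hit them."""
    failures = []
    for url in urls:
        try:
            with request.urlopen(url, timeout=10) as resp:
                status = resp.status
        except error.HTTPError as exc:   # 404, 500, ...
            status = exc.code
        except error.URLError:           # DNS failure, timeout, ...
            status = None
        if status != 200:
            failures.append((url, status))
    return failures

# Hypothetical usage, e.g. with URLs pulled from your sitemap:
# for url, status in check_urls(["https://example.com/sold-out-product"]):
#     print(f"BROKEN: {url} -> {status}")
```

A cron job running a script like this against your sitemap is a cheap safety net for the sold-out-product scenario above.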
Don’t get attached to ideas, focus on process, impact and data
One characteristic of teams making decisions based on gut feeling is that they focus on ideas. It’s often, “I saw this opportunity/problem… I had this idea”. If you attach ego to ideas, you will try to defend and rationalize them when challenged. It’s human nature. You will give more importance to input than to output.
To avoid that, have a rigorous team process, from ideation to analysis. A quick reminder before you jump into the process: a successful UX has two components, in different proportions depending on your identity and industry: emotion (did they enjoy using your site/product?) and effectiveness (did they accomplish what they came to do?). With that in mind, you can adopt something like:
- Analyze quantitative and qualitative data
- Look for problems, points of improvement
- List and prioritize them by their impact on emotion and effectiveness, and correlate them with your business goals
- Ask the Brian Balfour question: “What is the highest-impact thing we can work on right now given our limited resources? Whether that’s people, time, or money.”
- Formulate hypotheses on how you could address what you found
- Test said hypothesis
- Analyze the results
- Learn why it worked or why it didn’t
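The prioritization step above can be as simple as a weighted score per candidate problem. The sketch below is a toy illustration: the weights, the 1–10 scale, and the example backlog items are all made-up assumptions you would tune to your own business goals.

```python
# Illustrative weights for the three criteria named in the process;
# the split between emotion, effectiveness, and business impact is
# an assumption, not a recommendation.
WEIGHTS = {"emotion": 0.3, "effectiveness": 0.4, "business": 0.3}

def priority(scores: dict) -> float:
    """Weighted score (each criterion rated 1-10) used to rank the backlog."""
    return sum(WEIGHTS[key] * scores[key] for key in WEIGHTS)

# Hypothetical backlog of problems found during analysis.
backlog = [
    ("Checkout step 2 drop-off", {"emotion": 6, "effectiveness": 9, "business": 9}),
    ("Blog typography", {"emotion": 7, "effectiveness": 3, "business": 2}),
]
for name, scores in sorted(backlog, key=lambda item: priority(item[1]), reverse=True):
    print(f"{priority(scores):.1f}  {name}")
```

Even a crude score like this gives the team a shared, arguable number to point at, which is exactly what keeps the discussion about impact rather than about whose idea it was.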
What this does is ensure that every action you take comes from data, has been prioritized according to its impact, and is tested and analyzed scientifically. Everything goes through several steps and your whole team, which removes ego and keeps decisions from resting on individual ideas. It’s not perfect by any means: you can’t completely remove ego, nor can you stay unaffected when something you had hopes for flops. But focusing on the why, the data, and the impact, and keeping a strict process, will definitely help.
You may notice something missing in all we’ve said so far, particularly if you often work with designers: we didn’t mention intuition and creativity. Well, in fact, we kind of implied them. But since we’re focusing on optimizing, they come into play later in the process, at step 5 of the process proposed above, where you formulate hypotheses. This is where those two will shine.
The solutions you come up with to answer the issues and opportunities you spotted will require a whole lot of creativity and intuition. Qualitative and quantitative data will give you the what and the why. Intuition and creativity will help you find the how.