With any website project, it’s critical to gather and evaluate performance metrics in order to identify what works, what doesn’t, and to determine changes that could improve the user experience or advance the site’s stated goals. But analysis can be tricky - what should you track and, most importantly, how do you effectively interpret and respond to that data?
Gathering Metrics: What’s happening on our page?
It’s important to dig into how the pages on your site are performing in order to determine how they can be further optimized. This involves taking a comprehensive view of all of the user activity metrics being gathered by your analytics software, such as Google Analytics. In some instances it also involves identifying what metrics aren’t currently being gathered but would help shape a more comprehensive view of user activity. In those instances we advise taking the time to gather those missing metrics so that any decision about next steps is fully informed.
As an example, a client once came to us with concerns about the performance of a new landing page. They had noticed that several gated downloads, located lower on the landing page, weren’t being downloaded. Their conclusion was that users weren’t traveling down the page to reach the gated downloads and, as a result, they wanted to add a navigation aid above “the fold” to help guide users further down the page. While this was a logical assumption to make from the data at hand, that data didn’t tell the full story of user behavior. To reach a more definitive conclusion about how users were behaving on the page, we recommended holding off on the navigation aid and instead setting up additional behavior tracking to answer several questions the existing metrics couldn’t clearly answer:
- How far down are users scrolling? Do users stay above the fold or do they get to helpful resources farther down the page?
- Were users clicking on downloadable resources but stopping when they realized the resource was gated?
To answer these questions, we turned to Google Analytics to track events. The basic structure of event tracking is that data is sent to Analytics via a Category (e.g., “Resources”), an Action (e.g., “PDF Download”), and optionally a Label (e.g., the name of the PDF downloaded). This event tracking can be added in two ways:
- Manual coding - for an identified interaction (say, that PDF Download example), this involves adding a snippet of tracking code to the link to the PDF itself.
- Google Tag Manager - a powerful tool for implementing a wide variety of tracking (Analytics itself, our event tracking, and more) through a web interface rather than coding each individual piece. You can learn more about Google Tag Manager here.
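As a rough sketch of the manual approach, a click handler on a resource link might send an event with the Category/Action/Label structure described above. This assumes the classic analytics.js `ga('send', 'event', ...)` call; the `trackDownload` helper and file name are illustrative, and `ga()` is stubbed here so the sketch is self-contained:

```javascript
// Minimal sketch of manually coded event tracking, assuming the
// analytics.js ga('send', 'event', category, action, label) API.
// On a real page, ga() is provided by the Analytics snippet; here we
// stub it so the sketch runs standalone and records what would be sent.
var sentEvents = [];
function ga(command, hitType, category, action, label) {
  sentEvents.push({ command: command, hitType: hitType,
                    category: category, action: action, label: label });
}

// Illustrative handler you might attach to a resource link, e.g.
// <a href="/files/whitepaper.pdf" onclick="trackDownload('whitepaper.pdf')">
function trackDownload(fileName) {
  ga('send', 'event', 'Resources', 'PDF Download', fileName);
}

trackDownload('whitepaper.pdf');
```

With Google Tag Manager, the same event would instead be configured as a tag fired by a click trigger, with no per-link code.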
Our approach in this situation was to manually code the tracking we needed, starting with triggers that could tell us more about how far down users were scrolling the page in question. These triggers fired on three conditions - the page loaded, the user scrolled about ⅓ of the way down, and the user scrolled all the way to the bottom.
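The scroll-depth logic behind those three triggers can be sketched as a small helper that decides which depth events should fire, with each event firing only once. The browser wiring (a throttled scroll listener forwarding events to Analytics) is omitted, and the threshold and event names are illustrative:

```javascript
// Sketch of scroll-depth triggers: given how far the user has scrolled,
// return the depth events (if any) that should fire. Thresholds mirror
// the three conditions above: page load (0), roughly one-third down,
// and the bottom of the page. alreadyFired tracks events sent so far,
// so each one fires only once per page view.
function scrollDepthEvents(scrollTop, viewportHeight, pageHeight, alreadyFired) {
  var depth = (scrollTop + viewportHeight) / pageHeight; // fraction of page seen
  var events = [];
  [{ label: 'Page Load', threshold: 0 },
   { label: 'One Third', threshold: 1 / 3 },
   { label: 'Bottom',    threshold: 1 }].forEach(function (t) {
    if (depth >= t.threshold && !alreadyFired[t.label]) {
      alreadyFired[t.label] = true;
      events.push({ category: 'Scroll Depth', action: t.label });
    }
  });
  return events;
}

// In the browser, a scroll listener would call this and forward each
// event with ga('send', 'event', ...).
var fired = {};
scrollDepthEvents(0, 800, 3000, fired);    // on load: fires 'Page Load'
scrollDepthEvents(2200, 800, 3000, fired); // at bottom: fires 'One Third' and 'Bottom'
```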
Second, we expanded on the click tracking the client already had in place. They were already tracking how many users logged in to access gated content, but not what happened after login. To better understand what users were doing, we set up additional triggers to identify the specific resource downloaded once a user logged in.
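One way to sketch that per-resource trigger is to derive the event label from the link being clicked, so each download reports its own file name. The `downloadEventFor` helper and URL below are illustrative; in Google Tag Manager this would typically be a click trigger using the built-in Click URL variable instead:

```javascript
// Sketch: build a per-resource download event from a link URL, so the
// event label identifies exactly which gated file was downloaded after
// login. Category/Action names follow the earlier example.
function downloadEventFor(href) {
  // Strip the path and any query string, keeping just the file name.
  var fileName = href.split('/').pop().split('?')[0];
  return { category: 'Resources', action: 'PDF Download', label: fileName };
}

// In the browser, a click listener on gated links would pass the result
// to ga('send', 'event', ...).
var evt = downloadEventFor('/downloads/case-study.pdf?src=landing');
// evt.label === 'case-study.pdf'
```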
Analyzing and Interpreting: What do all of these metrics mean?
Once all the data is gathered, it’s time to analyze and interpret it to see how users are interacting with the site. Going back to our client example, after a few weeks of gathering data, the metrics revealed behavior slightly different from what the client had first believed. Users were scrolling down the page. “The fold” did not appear to be the problem, which meant the navigation aid wouldn’t have solved it.
We did see that users did not make it all the way to the bottom of the page. Scroll fatigue, rather than “the fold”, appeared to be the problem. Additionally, we saw that many users were clicking on the different downloadable resources but not proceeding to fill out the form to get past the gate. The problem there was user hesitation to submit personal information.
Responding and Improving: How do we improve based upon what we know?
Solving the “problem” becomes a lot easier once it’s accurately identified. We can see what the holdups are and then make UX adjustments around them.
Returning to the client example: once we identified that the gated downloads weren’t being downloaded because of a combination of scroll fatigue and the gate itself, we recommended two changes. First, create an informative page navigation that clearly identifies the different layers of the page, giving users a glimpse of the content available as well as a quick and easy way to get to it. Second, since the client wanted the download gate to remain, change the language surrounding the gate and the fields of the form itself, making it clear to users what their information would be used for and why they should submit the form, and making the form easier to complete.
Better understanding leads to better decisions
As this client example shows, metrics can be dangerous if they only show a partial view. When analyzing and responding to metrics it’s important to identify what questions remain unanswered and then seek to answer them through additional tracking in order to make a well-informed adjustment.