Photo by filmingilman on Flickr.com
Every now and then someone asks me how our new media activities influence people’s engagement with and participation in our museum. In a quarterly internal report we try to quantify these intangible concepts for the sake of decision-making and project design. For instance, it helps us talk about the ROI of different media efforts. In this post (and probably some future ones) I’d like to share some of the experiments we did in measuring engagement, participation and other tricky statistics. They’re by no means perfect, and with your comments I hope to further develop tools to measure online success.
A model to determine different levels of interaction
Not every hit to your website or online collection is the same: some visits involve more interaction than others. To distinguish between different levels of interaction I use a simple model I was first introduced to by Marco Derksen (see below).
At the far left are all visits to your website. ‘Reach’ I define as true visits (not the ones that bounce within a couple of seconds). Good content gets people engaged, and an invitation has them participate. Finally, when participation is acknowledged, some visitors will become enthusiasts about your website or institution and spread the word.
Although you might use different terminology, you probably recognize the rationale behind this model. Every step to the right means more interaction as well as a smaller number of people who actually reach that phase. For a handful of enthusiasts you might need to welcome thousands of visitors.
Indices for reach, engagement and participation
I combined the above model with an interesting thought by Seb Chan (building upon work by Lois Beckett and Eric T. Peterson & Joseph Carrabis) about engagement metrics. In these posts the sum of a number of indices is used to measure the percentage of people “engaged” with your website.
The equation uses 7 indices. We’ve adjusted their values to make them suit our museum website.
- Ci — Click Index: visits must have at least 3 pageviews.
- Di — Duration Index: visits must have spent a minimum of 2 minutes (the time it takes to read our average article).
- Ri — Recency Index: visits that are repeated within a month.
- Li — Loyalty Index: visits coming from visitors who have come at least 7 times, never more than 10 days ago.
- Bi — Brand Index: visits that come directly to the site by either bookmark or directly typing our URL or come through search engines with keywords that directly refer to our brand(s).
- Ii — Interaction Index: visits that interact with the site via commenting, adding stories or photos, etc.
- Pi — Participation Index: visits that participate on the site via social sharing, liking, making connections, etc.
If you read the mentioned articles, you will find our settings are less strict. That is because I would like to measure the transition of a visitor through the interaction model. Here’s how I matched the above indices to the different phases of the interaction model:
- Reach: The sum of all variables.
Every visit that matches at least one of the above criteria is a true visit. (This means a >100% score is possible, as a single visit can match several criteria.)
- Engagement: The sum of Ri, Li, Ii and Pi.
Both returning visitors and interacting/participating visitors are engaged with our content.
- Participation: The sum of Ii and Pi.
Obviously, only visitors that actually do something with our content are considered participants.
- Enthusiasm: ?
Apart from Google Alerts and careful tracking of visitors (something beyond our capabilities right now) I don’t know how to measure enthusiasm.
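The three measurable phases can then be computed by summing the per-index percentages. A small sketch, assuming each visit has already been reduced to the set of index names it matched (the function name and input shape are my own for illustration):

```python
def phase_scores(visit_indices: list[set[str]]) -> dict[str, float]:
    """visit_indices: one set per visit, holding the names of the indices
    that visit matched, e.g. {"Ci", "Di", "Bi"}.
    Because one visit can match several indices, Reach can exceed 100%."""
    total = len(visit_indices)
    names = ("Ci", "Di", "Ri", "Li", "Bi", "Ii", "Pi")
    # Percentage of visits matching each index.
    pct = {n: 100.0 * sum(n in s for s in visit_indices) / total for n in names}
    return {
        "Reach": sum(pct.values()),                               # all indices
        "Engagement": pct["Ri"] + pct["Li"] + pct["Ii"] + pct["Pi"],
        "Participation": pct["Ii"] + pct["Pi"],
    }
```

For example, four visits matching {Ci, Di}, {Ri, Ii}, {Pi} and nothing at all would give a Reach of 125%, Engagement of 75% and Participation of 50%.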
Most of the above indices can be relatively easily translated into Google Analytics Advanced Segments. Additionally, I use our website’s own notification system and tools such as Twitter search and Facebook widgets to track visitor behaviour.
How to read the outcomes of the tool?
Before I show you some results, I’d like to stress that a tool like this can basically only help you discover trends. There’s no way of saying you need at least a 2% score on participation to have a successful website. It all depends on your objectives and strategy, as well as how your website (or parts of it) compares to others.
I used to think a 10% conversion between the phases was all right, so 1,000 people reached translated into 100 engaged, 10 participating and 1 enthusiast. However, Facebook like buttons, among other things, have made participation very easy, so I don’t use this rule of thumb anymore.
In short: you can only use this tool to say something about your website in comparison to another website, another time frame, etc.
The experiment: testing the tool on three situations
With that being said, I ran the above tool on three different datasets: 1) our website innl.nl, 2) a subproject within this website about architecture that had a targeted communication strategy, and 3) old data from the weblog we had before we launched our current website. Here’s what I found:
| | innl.nl | Architecture subsite | Old weblog |
|---|---|---|---|
| Reach | 63.7% | 100+% | 69.9% |
| Engagement | 8.2% | 26.4% | 5.3% |
| Participation | 0.6% | 4.9% | 0.4% |
The first thing that strikes me is that our “old” weblog had a better reach than our shiny new website. That’s mostly due to the Brand Index though (34.2 for the weblog versus 16.7 for the website). All other indices are lower. The low Brand Index of innl.nl has to do with the high percentage of search engine traffic. The tool seems biased against larger websites such as online collections that get a lot of Google traffic. Yet, one might ask: are these people really engaged?
The second, obvious thing that stands out is the performance of the architecture subsite. It’s a convincing case for targeting your communication towards special interest groups if you’re looking at anything from true reach to participation.
A third thing I see is participation levels much higher than I’d have expected. As I said before, tools such as the Facebook like button have made interaction easy, which is probably partly the reason behind these high numbers.
There’s more in the data, especially if you know the specifics of each of the datasets and websites involved. I’ll save these details for later.
Conclusion of the experiment
Does this tool help (me) to quantify participation with the objective of policy-making and project design? It has given me some great new insights, although I think its true strength is in comparing many different datasets, from many different projects over a long period of time. I haven’t yet had the datasets at hand to do so (our website only launched November 2010).
I’m not entirely sure about the Brand Index. I understand why it’s there, but I’m not certain it should weigh as heavily as it does. People checking opening hours aren’t really visiting your website, I guess, but the index counts them. Or maybe they are visiting your site, but not in the same way as the person reading one full page after receiving a tweet (who isn’t necessarily counted by any of the indices).
Regardless of the specifics of the tool, I think this experiment in measuring engagement has proved valuable for the experiment alone. Seb’s post that inspired all this is titled “Testing an engagement metric and finding surprising results”. My results were surprising as well. To me this means that some of the ideas we have about participation and engagement online might not be entirely right. And I guess the only way we’re ever going to find out is if we find a decent way to measure these intangible things a bit better than by hunches.