June 20, 2011

Elusive engagement – Part II – Visitor scoring

This is a follow-on to my previous post about measuring that elusive engagement.  This post focuses on applying a score to visitor interactions as visitors engage with your content and applications.

Visitor scoring is fairly simple to set up – especially in SiteCatalyst – and by leveraging the data in Discover through segmentation (and ultimately in SiteCatalyst 15), it’ll give you even more insight into visitor engagement.

Visitor scoring measures and assigns a relative value to individual customers and prospects based on their actions and behaviors over time. You can determine intent and engagement – even before visitors convert.

Once you’ve identified your most valuable visitors, you can dissect their actions to determine the campaigns, keywords, referring sites and offline touch points that engage them – and invest more on these efforts.


Visitor Scoring Tag Cloud

The basic premise of visitor scoring is to give visitors “points” for certain activities.  What score you decide to give them is entirely up to you…the important thing is that they accrue points over time, and you can use those points to create segments of visitors that have exceeded certain thresholds.

In doing so, you’ll be able to compare different visitors, for example, those with a high score, to those with a low score, from different traffic sources, or across different campaigns.

The theory behind this is that visitors with higher scores are likely to be more engaged – they’ve accrued more points over time, by doing the things you want them to do.  It’s not just looking at conversions, although they will probably feature in your scoring methodology.

Imagine for a moment that you have a number of visitors:

Visitor 1 – comes to your site from Organic search, looks at 5 different products, watches 2 videos, signs up for your newsletter, but doesn’t buy anything.

Visitor 2 – comes to your site from Organic search, looks at 1 product and buys it.

Visitor 3 – comes to your site through an email campaign, and views 8 of your products.

Visitor 4 – comes to your site through Paid Search, searches for a product, views it once and then looks at 3 FAQs.

Which visitor is more valuable to you?  Who is more engaged?  Visitor scoring will help you answer that – and when you slot this visitor scoring methodology into the Interaction Index and use Discover, you really do get a good proxy for engagement.

Ultimately, most sites exist for a specific reason, whether it be to convert the visitor to a purchase, allow them to self-serve, engage them with your content, etc.  And we all want visitors to do that.  But only a small percentage will.  In fact, in general, it’s around 3-5% of visitors that actually “convert”.

First things first

What you need to do initially is agree on a set of interactions across your site, and then apply an arbitrary score to each, either between 1 and 10 or between 1 and 100.

For example:

  1. Home page viewed or Landing Page Viewed = 1 pt
  2. View a general content page = 5 pts
  3. View a Product = 10 pts
  4. Search = 20 pts (I tend to think that if a visitor is going to take the time to search for something they can’t find, they’re more engaged, hence the higher score)
  5. Watch a video = 30 pts
  6. Use an interactive tool = 40 pts
  7. Sign Up for a newsletter = 50 pts
  8. Provide feedback/rate and review/comment = 75 pts
  9. Buy a product or conduct a self service transaction = 100 pts

Don’t worry too much about the actual scores, but do leave room for additional interactions that you can add later if you need to (and remember to communicate it if you do add others, as overall scores will go up).

Now that you’ve scored various activities, you need to implement the code to measure those interactions.

Implementing the scoring

In SiteCatalyst that’s easy enough to do – you need an eVar and a success event.

Set up the success event as a Counter Event.

Note that counter events can now take a value other than 1, which is what is needed for the scoring.  However, I believe you’ll need version H23 of the s_code to support this.

Create the eVar as a Counter eVar, not a text eVar.

Visitor scoring is done by simply passing the score value as a +number into the eVar, while the success event is set with an =number (yes, you no longer need to pass the event through the s.products string).

So, for example, on your homepage, you simply add the following:
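The original snippet didn’t survive the page export, but assuming event1 and eVar1 are used for visitor scoring and the homepage is worth 1 point (as in the example scores above), a minimal sketch would be:

```javascript
var s = {};            // stub for illustration: s is normally created by s_code.js
s.events = "event1=1"; // counter event carries the page's score (=number)
s.eVar1 = "+1";        // "+1" increments the visitor's running score by one point
```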


On your search page, you’d include:
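Again as a hedged sketch, with search worth 20 points per the example scoring:

```javascript
var s = {};             // stub for illustration: s is normally created by s_code.js
s.events = "event1=20"; // search is worth 20 points in the example scoring
s.eVar1 = "+20";        // increment the visitor's running score by 20
```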


(The above obviously assumes you’ve used event1 and eVar1 for Visitor Scoring.)

Complete that process for each of the key interactions and you’re set.  We implemented ours directly into our s_code through a variety of s.pageName value matches, or product views, or other success events occurring.
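As an illustrative sketch only (the function name, page names and scores here are hypothetical, not our actual implementation), the s_code approach of matching s.pageName values might look something like this:

```javascript
// Hypothetical helper: derive a score from the page being tracked and
// append it to the scoring event/eVar. Page names and scores are
// illustrative only.
function applyVisitorScore(s) {
  var score = 0;
  if (s.pageName === "Home Page") {
    score = 1;
  } else if (s.pageName && s.pageName.indexOf("Search Results") === 0) {
    score = 20;
  } else if (s.events && s.events.indexOf("prodView") > -1) {
    score = 10; // a product view already present in the events list
  }
  if (score > 0) {
    s.events = (s.events ? s.events + "," : "") + "event1=" + score;
    s.eVar1 = "+" + score;
  }
  return s;
}
```

Something like this would sit in (or be called from) doPlugins so every tracked page contributes its score automatically.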

To put this in context, we’ve implemented the following scoring:

  1. Homepage view = 1pt
  2. Searched for something = 5pts
  3. Viewed a course = 20pts
  4. Completed a form = 30pts
  5. Used one of our interactive tools = 50pts
  6. Opted in to something = 70pts
  7. Submitted an application = 100pts

Making it legible

The first thing you need to do once this is implemented is make the data a bit more usable.

When you look at the raw data (your eVar against conversions), you’ll see something like:

Visitor scoring raw report

Remember that the success event (Application Submitted) is associated back to the value in the eVar at the time the success event fired – so, in the example above, 37 apps were submitted when visitors had a score of 190, 22 when they had a score of 211, and so on.  Notice also that as we’re showing both Apps and Leads, the two don’t correspond; Leads has a very different scoring result (see below).

Overall, that doesn’t tell you much – you really want to group the eVar values together.  So, use SAINT to classify them into buckets.

Important Tip: When you use Excel to classify your eVar, the key column will be the interaction score.  You need to keep that column showing 2 decimal places.  When you export from SAINT, the keys have the decimal places on them; when you open the file in Excel, the decimal places disappear, and if you re-save without decimals and upload again, the keys will be different and the reports won’t work.  So, format the key column to 2 decimal places before you classify.
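For illustration, a hypothetical fragment of the tab-delimited SAINT file might look like this, with the keys held at two decimal places (the bucket names are examples only):

```text
Key	Score Bucket
75.00	40-160
190.00	160-320
211.00	160-320
```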

That raises the question of what your buckets should be…well, that’s up to you – only your data will tell you.  You can experiment with different buckets to see what works best.  To be honest, it’s easiest to do this in Excel rather than SAINT, as you’ll want instant gratification.  And you’ll obviously need data collected before you can classify…so it’s best to run without classifications for a while.

We classified ours following a bit of analysis using Excel to figure out the best buckets for Applications Submitted and Optins.  As it turned out, after a lot of playing around, we chose a logarithmic scale, as it seemed to group everything the best:
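A doubling (logarithmic) scale like the one we settled on can be sketched as follows – the bucket boundaries here are illustrative, not necessarily the exact ones we used:

```javascript
// Assign a score to a doubling bucket: 0-10, 10-20, 20-40, 40-80,
// 80-160, 160-320, and so on. Boundaries are illustrative.
function scoreBucket(score) {
  var upper = 10;
  while (score > upper) upper *= 2;
  return upper === 10 ? "0-10" : (upper / 2) + "-" + upper;
}
```

A score of 250, for example, lands in the 160-320 bucket.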

Application based scoring

What we see in Excel is that with these buckets, 80% of applications occur before a visitor has a score of 320.  We also note that most apps occur when visitors have a score of between 160-320…our sweet spot.

So, using SAINT, we classified our Engagement Value eVar into our buckets and uploaded the file back.

Tip: Due to the size of the file, we typically use an FTP upload now – it took less than 10 minutes for the classifications to appear in the reports.

Now re-run the above SiteCatalyst report, this time using your buckets:

SiteCatalyst Visitor Scoring Report

Remember above I said that leads have a very different score?  You can see in the report above that Lead Completes tend to happen when the visitor has a score of between 40 and 160.

Leads scoring

What we see from Excel with Leads is that 80% of leads occur before a visitor has a score of 80 – which means that those who become leads do so pretty quickly; they haven’t visited much other content (otherwise their score would be higher) – which is great news for us!

Another way to classify the scores is to set “low, medium, and high” buckets for engaged visitors.  I’m still trying to figure out what we should use as those buckets, as we have a very large spread.  Standard Deviation will probably assist in that one eventually.

Calculated Metrics

At this point, you’ll probably also want to create a few calculated metrics to get some averages out.

Ones that I’d recommend are:

  1. Score per Visit = [Engagement Score] / [Visits]
  2. Score per Search = [Engagement Score] / [Instances (Report-Specific)]
  3. Score per Referrer = [Engagement Score] / [Instances (Report-Specific)]

Why is Instances in there twice?  Because a) Visits are not available in every report type, and b) the naming convention just makes it a bit easier to understand (IMHO).
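Each of these is just a ratio of report totals.  For example, with hypothetical totals:

```javascript
// Score per Visit is simply total engagement score divided by visits.
// The totals below are made up for illustration.
var totalEngagementScore = 5820;
var totalVisits = 1000;
var scorePerVisit = totalEngagementScore / totalVisits; // 5.82
```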

Did you know you can also trend calculated metrics?

Now that you’ve got all this wonderful new capability, what do you do with it…?

Segment it (of course)

Discover Geo Scoring

There are all sorts of different ways you can use this…all basically segmentation-based.

You can look at traffic sources, campaigns, keywords, geographies, user-type segments, etc.  You can trend and compare your segmented values over time.

In the example on the right, we see that while Australian visitors have an average score of 5.82, the highlighted items have much higher scores, indicating that those visitors are engaging with more of the things we want them to do.  This example was extracted using Discover, as in SiteCatalyst 14 the geo-demographic reporting capability is somewhat limited to country/visits.


In the next example, we’re looking at scores by course interest.  The course category is actually an eVar that is set in various places across the site – similar to a product category.  When a visitor browses different bits of content, we set their course category accordingly, which is then used for Test & Target purposes…but it works well here too.

Product Interest Scoring

We see that while Undergrad had more course views during this short time frame, those visitors actually scored slightly lower than Postgrad visitors.  This would indicate that PG visitors tend to read more, which makes sense, as it’s a bigger purchase decision for them.

And as a final example, scores by different campaign types (Organic, Paid, Campaigns, etc).

Campaign Scoring

Hmmm…it seems Paid Search is doing really well; not a huge amount of traffic during this period, but those visitors are interacting a lot.  Break that down by keyword or AdWords group and you’ll get even more insight.

Here we’re looking at Branded vs. Non Branded keywords (and their top keywords):

Branded Non Branded Search Term Scoring

It seems that branded search terms generate higher interaction – and in fact, when visitors type in the full name of the university, they have the highest interaction of all.

Multi-scoring techniques

You’re not limited to just one scoring methodology.  We’re actually in the process of implementing a second one for a completely different reason.  All you need is a spare eVar, another success event, and a bit of time.  Rinse and repeat!
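For instance (the second event/eVar numbers here are hypothetical), a single page can contribute to both scoring schemes at once:

```javascript
var s = {};                      // stub for illustration: normally provided by s_code.js
s.events = "event1=20,event2=5"; // primary score plus a hypothetical second scoring event
s.eVar1 = "+20";                 // primary running score
s.eVar2 = "+5";                  // second, independent running score
```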

Comparative and Trending

Of course, you can also trend on the calculated metrics to ensure your overall interaction score is going in the right direction, and you can trend by different segments too.

Likewise, you can do comparative analysis using different dates to compare the interaction scores.

Extending the scoring

In SiteCatalyst 15, you can build segments for visitors with scores higher than X – you can either use the value of the event, or use the SAINT classifications.

Additionally, you can use these in DataWarehouse and Discover – or start from Discover and put the segments back into SiteCatalyst.

And, you can leverage the segments and scores in Test & Target to further optimise user journeys and conversions.

There’s just a whole heap of different ways to use this information.

Next post

In my next post, I’ll combine the visitor scoring above with Discover segments for a full-on engagement analysis, replacing the previously described interaction score with this one.
