Everything you think you know - every best practice - concerning digital measurement is probably wrong, according to Gary Angel, president of digital metrics firm Semphonic, who here not only defends that inflammatory claim but also details the field's current failings and how to work around them. This article is copyright 2012 The Best Customer Guide.

If you're a savvy marketing executive with your hands on the digital channel, you may already know the key best practices in digital measurement: focus on a small set of KPIs, benchmark your website's satisfaction against industry norms, trend your Net Promoter Scores, and track your 'mentions' and level of engagement across social media, among many others.

"But what you probably don't know is that every single one of these so-called 'best practices' is basically wrong. In fact, the digital dashboard you rely on is probably quite misleading or fundamentally flawed," argues Angel. "The key to successful dashboarding and reporting is finding a small set of site KPIs that are understandable and immediately actionable. And your measurement department has probably delivered exactly that - a small set of key metrics like Site Conversion Rate, Total Visits Trend, Overall Site Satisfaction, and so on. But these reports deliver neither true understanding nor actionability."

Suppose an analyst walks into your office and tells you that your Site Conversion rate is up 5%. You'd probably be delighted. Now suppose they tell you that your search engine traffic is down 20%. That's bad, surely? But in all probability, the two metrics are related and are in fact telling you exactly the same story: as you drive less early-stage traffic to your site via natural search, your Conversion Rate will go up. But, in this instance, that isn't a good thing. In the real world, there are many different reasons why your Conversion Rate might change. There is simply no way to know which of the nearly infinite explanations might be true, and by removing all but a few key metrics from your dashboard, it is almost impossible for you to ever find out.
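The arithmetic behind that paradox is easy to demonstrate. The sketch below uses entirely hypothetical traffic numbers (not figures from any real dashboard): natural search delivers a large volume of early-stage visitors who rarely convert, so cutting that traffic raises the blended rate even as total conversions fall.

```python
def conversion_rate(segments):
    """Overall conversion rate: total conversions divided by total visits."""
    visits = sum(v for v, _ in segments.values())
    conversions = sum(v * r for v, r in segments.values())
    return conversions / visits

# Hypothetical traffic mix: segment -> (visits, per-segment conversion rate)
before = {"natural_search": (10_000, 0.01), "direct": (5_000, 0.05)}
after  = {"natural_search": (8_000, 0.01), "direct": (5_000, 0.05)}  # search down 20%

print(f"before: {conversion_rate(before):.2%}")  # 2.33%
print(f"after:  {conversion_rate(after):.2%}")   # 2.54% - the 'good news'
print(sum(v * r for v, r in before.values()),
      "->", sum(v * r for v, r in after.values()))  # 350.0 -> 330.0 conversions - the bad news
```

The headline rate improved by roughly 9% while the business actually sold less, which is exactly why the single number on the dashboard cannot be read in isolation.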

The simple fact is that site-wide metrics, from Conversion Rate to Total Traffic, are nearly all worthless in their own right. To be meaningful, a metric needs to be placed in the context of "which customers it's about and what they were trying to accomplish".

So, when the analyst walks back in and says something like "Traffic is up 5%", you should ask, "With whom?" Because if you don't know the audience behind that traffic increase, you don't know anything meaningful.

Next comes the obvious question: "Why were they there?" Understanding what your customers are trying to accomplish on the web is vital to understanding whether you are successful or not. Your chances of making a product sale during a customer support visit? Zero. So why are those visits included in your Conversion Rate? In other words, if your digital metrics dashboards aren't built around this simple two-tiered segmentation concept, they probably aren't very useful.
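A minimal sketch of that two-tiered idea, on invented data: the `intent` label is assumed to come from some landing-page or behavioural classification you would have to build yourself, not from any off-the-shelf analytics tool.

```python
# Hypothetical visit log; 'intent' is an assumed segmentation, not a real API field.
visits = [
    {"intent": "purchase", "converted": True},
    {"intent": "purchase", "converted": False},
    {"intent": "purchase", "converted": True},
    {"intent": "support",  "converted": False},  # a support visit can never convert
    {"intent": "support",  "converted": False},
]

def rate(rows):
    return sum(r["converted"] for r in rows) / len(rows)

overall = rate(visits)                                             # 0.40 - diluted by support traffic
shoppers = rate([r for r in visits if r["intent"] == "purchase"])  # ~0.67 - the meaningful figure
```

The site-wide number understates how well the site sells to people who came to buy; segment first, then measure.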

So, if there is so little utility in site-wide metrics reported from web analytics systems, you might be tempted to steer your interest toward another staple of digital measurement: the online intercept survey. Online survey research is based on a sample of your site visitors, but have you ever thought through the implications of that? Every time you launch a new marketing campaign, every time you improve (or worsen) your Search Engine Optimisation (SEO), you effectively change the demographic coming to your website and, therefore, the sample of visitors in your online survey.

The result is that site-wide trends in satisfaction are also quite useless - it's like comparing apples with oranges, chalk with cheese. Instead of tracking true changes to Site Satisfaction, you're actually tracking changes to the site population caused by your marketing programme.

But what of those who use Net Promoter Scores instead of Sitewide Satisfaction metrics? Sadly, Net Promoter Scores suffer from exactly the same problems with sampling. Trending your top-level Net Promoter Score is no more valid than trending your top level satisfaction. Without careful controls on sub-populations, every online survey metric is basically devoid of meaning.
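The standard NPS formula - the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6) - makes the sampling problem easy to see. In the sketch below, with invented scores, neither sub-population's sentiment changes at all, yet the headline score swings wildly as the mix of respondents shifts.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6), as a whole number."""
    promoters = sum(s >= 9 for s in scores) / len(scores)
    detractors = sum(s <= 6 for s in scores) / len(scores)
    return round((promoters - detractors) * 100)

# Two hypothetical sub-populations whose sentiment never changes:
buyers  = [10, 9, 9, 8, 6]   # NPS +40
support = [3, 6, 7, 5, 2]    # NPS -80

# A marketing campaign shifts who lands on the survey, and the top-level number moves anyway:
print(nps(buyers * 3 + support))   # mostly buyers sampled:  +10
print(nps(buyers + support * 3))   # mostly support sampled: -50
```

Trend the segment-level scores and the "change" disappears; trend only the top line and you will chase a phantom.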

So what about those cool new 'Brand Mention' and 'Social Share' metrics on the digital dashboard? Unfortunately, social media measurement is just as prone to failure. If your marketing team is really on the ball, you probably have a social media programme along with a set of reports showing you how well it's doing. At the heart of those reports is something called Total Mention Counts - a simple measure of how often your brand is mentioned across social channels. It might be compared to other brands to demonstrate Brand Share, or trended over time to demonstrate Social Success.

But these are very poor measures, and yet they are used by nearly every social media agency. Social media chatter is nearly always tainted by commentary from professional influencers with some stake in a given industry. When you include all these mentions in a single count, you are measuring some impossible combination of PR messaging and actual consumer sentiment. Unless your team has aggressively, manually pruned the counts of professional posters and commentators, your figures have little real correlation with genuine consumer opinion. In that Total Mention Count, you've added mentions in the New York Times to blog posts by paid professionals in your industry to tweets at your customer support reps. And there's the problem: you know you're being talked about, but you can't tell by whom or why.
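The pruning Angel describes amounts to tagging each mention by source type before counting - a step no listening tool does for free. A sketch, with a hypothetical `source_type` label that assumes manual or rule-based author classification:

```python
# Hypothetical mention records; 'source_type' is an assumed manual tag, not a tool-provided field.
mentions = [
    {"text": "...", "source_type": "press"},         # e.g. a newspaper article
    {"text": "...", "source_type": "professional"},  # paid industry blogger
    {"text": "...", "source_type": "consumer"},      # an organic customer post
    {"text": "...", "source_type": "consumer"},      # a tweet at customer support
]

total = len(mentions)  # the headline 'Total Mention Count': 4, mixing PR with real opinion
consumer_only = sum(m["source_type"] == "consumer" for m in mentions)  # 2 - the count worth trending
```

Only the filtered count bears any relation to consumer sentiment; the raw total conflates it with your own PR echo.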

Marketing executives interested in the digital channel must realise that the figures produced by measurement teams are of little use even for casual review, let alone for business-critical decision making. The problem is actually quite basic in origin: people who never need to act on the data are all too happy to accept it without question, and, odd as it may sound, your measurement team probably never really uses the data.

But worse still, the idea that senior decision makers need simple, clear numbers has become the guiding mantra of the measurement community. There is nothing wrong with simplicity in itself, but when the numbers lack meaning, misrepresent reality, and inadvertently hide the truth, simplicity is no virtue.

In his next article in The Best Customer Guide, Angel charts out the ultimate solution to the Digital Metrics problem, and explains how to change the digital dashboard to better support actual decision making.