Research Targeting Web Analytics Websites

Never the Twain Shall Meet: Qualitative v. Quantitative Analytics

Posted by Andrew Edwards on May 25th, 2011 at 12:19 pm

Most of the time, if you are at an industry seminar and a “panel” is “debating” an issue, you can rest assured that no great drama will pass as you check and recheck your email to see if anything interesting has come in. Most industry panels are exhibitions of cautious meandering such that the panelist gets to tout some pet notion while being careful not to undermine or offend what may be a powerful potential adversary (or just plain friend) at the other end of the dais.

Not so at this week’s Web Analytics Association gabfest in Waltham, Massachusetts, when the subject of one panel turned to the relative values of “qualitative” analytics versus “quantitative” analytics.

I should probably translate. “Qualitative” means the study of attitude via panels, surveys and other non-numerical data, the better to understand what is sometimes gratuitously called “the human element”. “Quantitative” means the study of actual behavior via the capture of behavior-caused numerical data from website servers and the log files they create without human intervention—the better to understand not attitude, but actions that were in fact taken by a website visitor. Both methods are purportedly critical to a marketer’s ability to effectively predict outcomes and to somehow induce those outcomes by manipulation of content.
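To make the quantitative side concrete: a minimal sketch of pulling behavioral counts out of a server log, assuming Apache-style common log format. The log lines and function name here are invented for illustration, not drawn from any particular analytics product.

```python
import re
from collections import Counter

# Invented sample lines in Apache common log format -- the kind of
# behavior-caused data a web server writes with no human intervention.
LOG_LINES = [
    '1.2.3.4 - - [25/May/2011:12:19:00 -0400] "GET /pricing HTTP/1.1" 200 512',
    '1.2.3.4 - - [25/May/2011:12:19:05 -0400] "GET /checkout HTTP/1.1" 200 128',
    '5.6.7.8 - - [25/May/2011:12:20:00 -0400] "GET /pricing HTTP/1.1" 200 512',
]

# host, ident, user, [timestamp], "method path ..."
LOG_PATTERN = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')

def page_views(lines):
    """Count requests per page path -- pure behavior, no attitude."""
    views = Counter()
    for line in lines:
        match = LOG_PATTERN.match(line)
        if match:
            views[match.group(4)] += 1
    return views

print(page_views(LOG_LINES))
# Counter({'/pricing': 2, '/checkout': 1})
```

Notice what this does and does not tell you: two visitors viewed pricing and one checked out, but nothing about how any of them felt about it—which is precisely the panel’s argument.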

The two camps—the believers in qualitative data and the “quant” folks—tend not to see sufficient value in the “other method”. This curiously tense and rather long-running debate was entertainingly on display during one of the panels. For good measure, a generational gap was also in evidence, as the “human element person” was a bit more senior in her years than the youngish turks manning the quant guns.

She was a very accomplished woman with a company called KD Pearson, and, one would presume, had spent the better part of her career studying consumers the way it had always been done—by asking them what they thought, by splitting demographics into finer and finer slices, and via other useful but non-binary methods.

The rest were a fairly typical blend of thirty-something entrepreneurial types who had grown up digital and had a predilection for quantitative data as a guidepost on the way to predictive marketing.

The conversation would go something like the following (these are not direct quotes):

She: “It is important to understand how your customers feel, otherwise they may complain, and in a social-media world, that can cause damage.”

He: “But what is the importance of what they feel unless it somehow shows up in my behavior models? If there’s annoyance, can that be tied to diminished sales? If it can, then we have to do something. If not, then I don’t care.”

She: “But you have to take the human element into account, otherwise humans will eventually rebel.”

He: “I only care if it affects my numbers. And if I spend time trying to fix something that has not affected my [numerical] outcomes, then I have wasted my time.”

To me, these are mutually exclusive points of view.

It would be easy enough to side with the quant folks, as I am typically wont to do, in that the satisfaction of desired outcomes in sufficient numbers is the only type of verifiable success available to a marketer. The rest can be dismissed as voodoo.

But then, what about the fact that nearly every day a study is mentioned in the mainstream media where the headline begins: “Experts Baffled by [fill in the blank]”. This month it’s tornadoes. In March it was a nuclear meltdown due to, of all things, a tsunami. In February we had an uprising in Egypt that seemed to have been predicted by exactly no one.

Chances are the experts were relying on many different sources, but chances are even better that they relied rather too heavily on quantitative data to develop their predictive models. And the trouble with predictive models is that they get pretty badly banged up when the news starts to come in.

Reality is three and four-dimensional. At the cutting edge of quantum physics, some might even add another couple of dimensions to the pile. And who can say they know the nature of reality, really? Or of human motivation? Does human motivation show up in numbers? Only by inference. Does human motivation impact bottom-lines all across the business spectrum? You bet.

Does it make sense to care about your bottom line or your KPI execution to the exclusion of non-measurable attitudes? There is a strong argument that it does. It has often been demonstrated that cold calculation based on the numbers, together with deliberate attempts to improve them, almost always does improve those numbers.

But what about the tornado? Or the group that suddenly decides they are going rogue and you’ve got no idea why? Would something like this have bubbled up, perhaps, in a qualitative investigation? Not sure—but it certainly has a better chance of showing up there first than later, in plummeting sales.

My suggestion is that we live in a hybrid world of both digital and analog dimensions. People are hopelessly analog of course, and their innate complexity leads to an essential lack of predictability. That said, tempering the study of the human animal’s attitude with a healthy dose of the trending data associated with the myriad behaviors exhibited by customers, makes the most sense as far as I can tell.

We have not come far enough down the road with analytics to dispense with study of “the human element”. But that study needs to be justified in the face of quantitative analytics because, for businesses, if it never shows up in a key business indicator, it is kind of like that tree that falls in the forest. And I am a proponent of the notion that said tree has not made a sound.

So: if your trees (customers) are making movements, those movements have to be near enough the campground for you to hear them. Otherwise, more or less, as far as the business is concerned, they might as well not have happened.

Both sides have something to learn from the other. Quant needs to admit that softer insights might sometimes help them avoid the “baffled expert” syndrome while survey-folks need to admit that without tying soft data to hard numbers, they may well be wasting time and effort on study of dubious import.
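“Tying soft data to hard numbers” can be done with nothing fancier than a correlation. A rough sketch, with entirely invented figures: pair each customer’s survey satisfaction score (the soft data) with that customer’s subsequent order count (the hard number) and compute a plain Pearson coefficient.

```python
from math import sqrt

# Invented paired data: survey satisfaction (1-5) and later order
# counts for the same hypothetical eight customers.
satisfaction = [1, 2, 2, 3, 4, 4, 5, 5]
orders       = [0, 1, 0, 2, 2, 3, 3, 4]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

print(round(pearson(satisfaction, orders), 2))  # 0.94
```

A strong correlation like this (for these made-up numbers) is what would justify the qualitative study to the quant camp; a correlation near zero is the quant camp’s “then I don’t care.”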

So it is just as I have suggested: never the twain shall meet.

2 Responses to “Never the Twain Shall Meet: Qualitative v. Quantitative Analytics”

  1. R Fried says:

    Bob -

    Qual. v. Quant. isn't the issue. It's whether the research is "good" or "bad."

    Richard Fried

  2. Thanks for the thoughtful post on the social media panel, Andrew. As somebody who was there, I was struck by how little the panel focused on, well, analysis (being a fellow quant). I was also struck by the current inability to provide harder social media metrics across the industry (hence, the discussion seemed to resemble ideology more than anything else). However, I did learn about Pirate Metrics, which was a nice takeaway!
