Tuesday, June 9, 2009

When numbers just don't add up

I read last month about the row between the Fox network and Nielsen over American Idol’s low ratings with some amusement. It came hot on the heels of my sister's comment that networks really need to deepen their understanding of what's going on with show ratings because so much is riding on them. She works with a large network and is responsible for a couple of its big reality shows; she voiced the same angst as Fox’s CEO Tony Vinciquerra. The frustration the networks express is something I have heard time and time again, usually when the ‘unexplainable’ happens. I’ve also been privy to the war of methodology between two TV ratings providers in the Indian market (they finally merged through acquisitions).

Having been on the agency’s end for years now, I must take Nielsen’s side, if for no better reason than to talk about how agencies look at 'glitches' in analysis and why this may just be a case of bad communication on both sides combined with a product recalibration issue.

Analytical solutions provided by agencies, whether as products or customized research, are usually well accepted by clients till something goes wrong. The chaos that ensues is greater with established research and products than with new customized solutions. Agencies are usually the first to catch the ‘glitch’, and after the dreaded news is broken to the client, many nights are spent arriving at an internal explanation and either fixing the issue or accounting for it.

The ‘what went wrong’ analysis usually centers on three areas at the agency’s end:

  1. Quality control (human error is the first thing checked)
  2. The ‘non-normal event’ that could have caused the error
  3. Methodology, usually sample design and representation

While the first two are the easier items to find and fix or explain, it’s the third that is usually more troubling to discover and to correct.

On the Nielsen ratings issue with Fox, the first thing the agency would probably do is check whether the reporting of the Idol ratings was correct. For this they would look at the generation of the report itself, or sample various breaks and re-tally the results.

If this does not throw up anything, they would then search for other events that could explain what happened. For this, they would use information already available from other studies or look at past issues that point to the problem. In Nielsen’s case, my conjecture is that one of the hypotheses generated would have been non-compliance in homes, i.e. people using their meters incorrectly. This insight, which Nielsen said came from a separate study, would form one of the many hypotheses the agency would explore. I’m not sure it was the only reason for the low ratings communicated to the networks, but my feeling is that it was the one picked up and blown apart. Nielsen, for its part, is right to contend that the 8% discrepancy it talked about is not a number that can trickle down to a network show’s rating, since it was garnered in an entirely different study done for different reasons. But I doubt the networks are listening.

With so much money on the line, the way the client sees it is that ‘he better have answers or someone will pay’. In all fairness, if I were the client I would feel the same way: enraged, puzzled and frustrated. The problem is that ‘fixing the issue fast’ may not be easy or even appropriate unless a real causal link is established between the households that are not in compliance and the low ratings. To do this, the agency will have to test the hypothesis comprehensively, arrive at the x% figure for non-compliance or under-reporting, and then link it to the low ratings across a mix of disputed and non-disputed shows. Not a simple task.
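To make that concrete, here is a minimal sketch of the kind of comparison an agency might run. Everything in it is hypothetical and invented purely for illustration; a real panel would be far larger, weighted and demographically controlled.

```python
# A rough sketch of how an agency might probe the non-compliance hypothesis.
# The record layout (household_id, show, tuned_in, compliant) and every figure
# below are made up for illustration; they are not real panel data.

from collections import defaultdict

# Each record: (household_id, show, tuned_in, compliant_meter_usage)
panel = [
    ("hh01", "Idol",      True,  True),
    ("hh02", "Idol",      False, False),   # non-compliant meter, no tune-in logged
    ("hh03", "Idol",      True,  True),
    ("hh04", "OtherShow", True,  True),
    ("hh05", "OtherShow", False, True),
    # ... thousands more records in a real panel
]

def rating(records, only_compliant=False):
    """Share of households recorded as tuned in, optionally dropping
    households flagged as non-compliant meter users."""
    if only_compliant:
        records = [r for r in records if r[3]]
    if not records:
        return 0.0
    return sum(1 for r in records if r[2]) / len(records)

by_show = defaultdict(list)
for rec in panel:
    by_show[rec[1]].append(rec)

for show, recs in by_show.items():
    all_hh = rating(recs)
    compliant_only = rating(recs, only_compliant=True)
    print(f"{show}: all={all_hh:.2%}, compliant-only={compliant_only:.2%}, "
          f"gap={compliant_only - all_hh:+.2%}")
```

If the gap between the all-homes and compliant-only estimates shows up mainly for the disputed shows, and survives controls for time slot, season and demographics, the non-compliance story starts to hold water; if it is roughly uniform across shows, it cannot explain a show-specific drop.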

This brings me to the third point, and the one most agencies dread: questioning their own design if all else fails. Since established research and products go through a thinking and creation phase, casting doubts on or revamping the design is not the preferred response to most outages. The need to re-look at methodology arises only if the issue raised is not resolved and if the agency fundamentally comes to believe that a change in methodology would benefit clients. Methodological issues, especially those of sample selection, type of sampling, weighting, margin-of-error reporting and power calculations, still leave room for improvement in studies, particularly large ones like TV ratings, and deserve another blog entry. Nielsen may very well be investigating and learning from the first two possible reasons for the data error.
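As a taste of what that entry might cover, here is a back-of-the-envelope sketch of margin-of-error and sample-size arithmetic for a ratings-style proportion. The 10% rating and the 10,000-home panel are illustrative assumptions, not Nielsen's actual figures, and the formulas ignore weighting and design effects.

```python
# Back-of-the-envelope margin-of-error math for a ratings-style proportion.
# The 10% rating and the 10,000-home panel are illustrative numbers only.

import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from a
    simple random sample of n homes (no weighting or design effects)."""
    return z * math.sqrt(p * (1 - p) / n)

def required_sample(p, target_moe, z=1.96):
    """Sample size needed to hit target_moe for a proportion around p."""
    return math.ceil((z ** 2) * p * (1 - p) / target_moe ** 2)

show_rating, panel_size = 0.10, 10_000
print(f"MOE at n={panel_size:,}: +/-{margin_of_error(show_rating, panel_size):.2%}")
print(f"Homes needed for +/-0.5%: {required_sample(show_rating, 0.005):,}")
```

Real panels are weighted, so a design effect shrinks the effective sample size and these intervals widen; that is one reason quoting a single overall margin of error can mislead.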

The Fox CEO's feeling that things are very unclear is one a lot of clients echo when issues arise with research results. Some pointers that may help clients manage these situations better in future:

  • Understand the various elements of the study design, especially sampling (its type and representativeness) and weighting. Ask the agency for a pros-and-cons analysis of the design in place or proposed.
  • Evaluate the margin of error not just at an overall level but for the segments you care about, and understand what it means on the ground (the sketch after this list shows how quickly it balloons in small segments).
  • Invest in a statistician (or several) or another analytics agency at your end, if you don’t have one already. Their job should be to slice and dice the numbers, give you more insight and raise pertinent questions while working with the agency.
  • Calibrate results received from the agency against other findings, internal or external. This is harder in a monopolistic situation, but past data and parallel studies should guide you.
  • When data issues arise, work with the agency to fix them; once they are fixed, test the new situation. Play devil’s advocate; don’t rest after things stabilize. Try to get the agency to establish a cause-and-effect analysis for the data blip, controlling for other factors.
  • Question, question, question: it keeps the agency on its toes and helps preempt disasters.
  • Ask for the quality-control process the agency employs in its data collection and reporting. Review it, check for loopholes and facilitate correction.
  • Be kind (unless the agency is a repeat offender). Recognize that in the data analytics game, agencies usually try and give you the best that they’ve got. Data outages are also frustrating and traumatic for them.
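To illustrate the margin-of-error pointer above, here is a rough sketch of how the error balloons once you cut a panel into segments. The segment sizes and the 1.3 design-effect multiplier are made-up assumptions, not real panel figures.

```python
# Illustrative only: how the margin of error grows as the panel is sliced
# into segments. Segment sizes and the 1.3 design effect are assumptions.

import math

def moe(p, n, design_effect=1.0, z=1.96):
    """Approximate 95% margin of error for proportion p, inflating the
    variance to account for a weighted (non-simple-random) design."""
    return z * math.sqrt(design_effect * p * (1 - p) / n)

segments = {
    "All homes":       10_000,
    "Adults 18-34":     2_200,
    "Females 18-34":      900,
    "Hispanic 18-34":     350,
}

show_rating = 0.10  # hypothetical 10% rating
for name, n in segments.items():
    print(f"{name:>16}: n={n:>6,}  MOE=+/-{moe(show_rating, n, 1.3):.2%}")
```

A number that looks rock solid at the panel level can be statistically mushy in exactly the demographic an advertiser is paying for, which is why segment-level error ranges are worth demanding up front.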
