Question: Can research prove to me whether or not a particular ad will be effective before I run it?
Answer: No, it can’t. This is a judgment that you are paid to make, having heard the advice and arguments of your agency. Research can help you to understand the strengths and weaknesses of a particular ad or treatment and to understand the market background against which the campaign has to work. The final decision is yours, however, and you have to accept that it must be taken against a background of very incomplete and imperfect information—like most other business judgments.
From The Institute of Practitioners in Advertising (IPA), Testing to Destruction: A critical look at the uses of research in advertising
Data Is King
In our frenzy for data to guide and direct our marketing programs and tactics, testing is all the rage. Some would even proclaim, “If you can’t test or measure it, don’t do it.” As this frenzy to test continues to gain momentum, the question at hand is: does testing ads really get us better results? Or does testing give us an easy out if an ad fails, because we can say, “Well, we tested it”? Or does testing cause us to create safe, or worse yet, mediocre ads with ho-hum results?
Regardless of your position on testing, we all do it sometimes. So when you do want to test, what type of test should you use? With advances in neuroscience, marketers can use eye tracking, functional magnetic resonance imaging (fMRI), or electroencephalogram (EEG) research to get into a buyer’s mind; these tests probe a buyer’s thoughts and yield more emotionally centric data that is believed to be more reliable. But while these methods can be more accurate predictors of ad fitness, they can be too costly for many marketers; thus the self-report method of ad testing continues to reign supreme.
A meta-analysis of 880 IPA Effectiveness Award-winning case studies revealed that the effectiveness of ads that had been quantitatively pretested was 27 percentage points lower than that of ads that had not. (“Neuromarketing: Boost ad response through neuroscience,” Admap, February 2011.) In addition, according to a November 2010 report titled “WARC Briefing: Pre-testing”: (1) More than half of all pretested TV campaigns lose money in both the short and long terms. (2) Using recall as the main pretesting metric risks rewarding campaigns that are “boring, ordinary and even unlikable,” a Unilever study found.
In the past few decades, ad testing using self-report measures has been the main method through which advertising campaigns have been quantitatively pretested. The accuracy of assessing a customer’s cognitive and emotional states regarding an ad through a questionnaire should concern anyone in marketing. The bottom line, according to numerous studies, is that consumers do not behave as they say they will in testing. They do not say what they think, and they do not think what they feel. So self-reports may give marketers some ideas about customer behavior, but the results should not be blindly trusted as a way to validate advertising.
So, does this mean that testing is a waste of time? Absolutely not!
Research and testing can heighten our understanding of the market and of the consumer so that we can better define the job that advertising has to do and the climate in which it has to operate. If this is done properly, it not only guides and stimulates the creative process but also provides a much better basis for eventual decisions about the likely worth of a campaign. Early research can guide and stimulate the creative process; it can ensure that the efforts of the creative department are not wasted by pursuing a wrong direction.
Research and testing should help inform the creative process, not determine its fate.
Research properly carried out and applied can be invaluable. Research at the earliest stage of a campaign or ad needs to be illuminating and insightful, so as to ignite the creative idea that will reach the target audience. It must enrich everyone’s understanding of the sort of responses that particular ideas and executions are likely to provoke in the marketplace. Research must be seen as a learning experience, not as a black-and-white mandate.
by Carolyn Ladd
Vice President Account Planning and Digital Strategy
Cross-posted at Ignite Something on the Forbes CMO Network