Despite operating in an increasingly complex and nuanced world, too many cultural organizations over-rely on simplistic research methods that can impede progress instead of empowering it.
Cultural organizations are striving to become more data-informed – and that’s great news! After all, we insider experts are not our audiences. If we want to expand and diversify attendance to be more representative of the US population, then we need data to understand not only our current audiences, but also those inactive visitors who are choosing not to visit us.
Becoming data-informed means much more than simply “having” data, though. This article is going to make some people angry because it calls out select shortsighted practices that jeopardize our sector’s competitiveness and long-term growth potential. That said, it’s worth the ire if it inspires even a modicum of critical thinking about the research our sector relies upon to inform decision-making.
Not everything that can be measured is worth measuring or is necessarily helpful. It’s important to consider not only fundamental flaws in broad sector numbers, but also the difference between a key performance indicator and a diagnostic metric. Key performance indicators (KPIs) are measurements that show progress toward business- or mission-based goals. They provide critical checkpoints on the route to desired objectives and include things like visitor satisfaction, value-for-cost perceptions, and how well an organization is believed to be executing its mission. These indicators are often composites of several underlying metrics that collectively allow for deeper analysis and a better understanding of audience behaviors and perceptions.
However, most metrics are diagnostic metrics, not KPIs. Diagnostic metrics are data points that contribute to assessing KPIs and pinpointing opportunities for organizations. They are best used as quick check-ins – and they can be incredibly valuable when properly designed, deployed, and assessed. But these metrics cannot stand in for KPIs because they are less closely related to desired outcomes and can be misleading in more comprehensive efforts to understand complex motivations and behaviors. They are best considered occasional flashes of light rather than guiding stars.
As cultural organizations strive to become more data-informed, we’re spotting some diagnostic metrics – and the flawed methodologies related to them – serving as primary decision-making inputs. When these specious data points or methodologies are the driving force behind strategic decisions, cultural organizations risk alienating audiences rather than engaging them. These bad practices tend to support a false sense of being “data-informed” when, at best, their practitioners are merely “data adjacent.”
Here are three bad practices that cultural organizations disproportionately rely upon at their own peril:
1) Championing Net Promoter Score
Net Promoter Score claims to “measure customer experience and [predict] business growth” by using a 0-10 scale to assess how likely someone is to recommend an organization’s experience, product, or service to a friend. The calculation is simple (get used to this choice of word – it’s the best descriptor of NPS in both its positive and negative connotations): subtract the percentage of detractors from the percentage of promoters. The related ratings have a straightforward classification. Those giving a rating of 9 or 10 (promoters) “are loyal enthusiasts who will keep buying and refer others, fueling growth.” Those scoring 7 or 8 (passives) “are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.” Finally, those giving a score of 0-6 (detractors) “are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.”
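For the data geeks, here’s a minimal Python sketch of that standard formula – the ratings are purely hypothetical, not data from our research:

```python
# Minimal sketch of the standard NPS formula, using hypothetical ratings.
def net_promoter_score(ratings):
    """Percentage of promoters (9-10) minus percentage of detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

survey = [10, 9, 9, 8, 7, 6, 4, 10, 2, 9]  # ten hypothetical responses
print(net_promoter_score(survey))  # 20.0 (50% promoters - 30% detractors)
```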
While Net Promoter Score can be an interesting diagnostic metric in certain circumstances, its innate simplicity risks interpretative flaws – so much so that we often find ourselves flabbergasted by the widespread misunderstanding of and overreliance on this very basic metric. We’re hardly the first ones to note the limits of Net Promoter Score as a meaningful information resource for business. Reviews and resources ranging from the Harvard Business Review to the MIT Sloan Management Review and The Wall Street Journal have warned executives of the “dubious management fad sweeping corporate America” for good reason.
NPS can be helpful – when rightly considered amidst a robust array of research. But NPS has two major flaws that must be taken into account.
If you’re a museum data geek, you’ve spotted the first one immediately and you’re already cringing: Having a “2” experience is definitely NOT the same as having a “6” experience! One person needs immediate brand first aid and stands to provide very active negative feedback, while the other may just be slightly ambivalent. Not only that, having a “9” experience is different than a “10” experience! A “10” experience is perfect! How do these count the same?! Numbers matter, and NPS fuzzies up the important nuance of the actual visitor experience such that it cannot be further assessed or more productively understood without additional analysis.
And there’s the second issue with treating something as simple as NPS as anything more than the most basic of diagnostic metrics: The simplicity of the system doesn’t allow for robust additional analysis. Someone gave you a “7”? You may never know why. You can observe a potential problem, but you lack the information to fix it.
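To make that information loss concrete, here’s a toy comparison (again, hypothetical samples) in which two audiences with wildly different experiences produce the exact same score:

```python
# Two hypothetical visitor samples with identical NPS but very different realities.
def net_promoter_score(ratings):  # same formula as the sketch above, condensed
    return 100 * (sum(r >= 9 for r in ratings) - sum(r <= 6 for r in ratings)) / len(ratings)

mildly_mixed  = [9, 9, 9, 9, 9, 6, 6, 6, 6, 6]       # ambivalence: fixable with tweaks
love_and_fury = [10, 10, 10, 10, 10, 1, 1, 1, 1, 1]  # polarization: urgent brand first aid

print(net_promoter_score(mildly_mixed))   # 0.0
print(net_promoter_score(love_and_fury))  # 0.0 -- identical score, opposite stories
```

The score alone cannot tell you which of these organizations needs emergency intervention and which needs a minor adjustment.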
Let’s apply the flaws of this simple measure to our complex contemporary condition. In our analysis of 224 visitor-serving organizations in the US, we have observed a 17% decline over the past 30 months in recent visitors’ willingness to recommend visiting the same organization to a friend or family member. (In our research, “willingness to recommend” is a measure that closely approximates the basic premise of the NPS construct.)
This is stunning, right? Doomsday stuff. Get to the lifeboats because our ship is sinking.
Except, the logical follow-up question to those visitors withholding endorsement – “Why not?” – yields important context. Why are people hesitant to recommend a visit to a friend or family member? The number one answer is that they are concerned about the health of a potentially vulnerable friend or family member during a continuing pandemic and don’t like suggesting an increased exposure to the coronavirus to someone they care about. The second most popular response is also related to the pandemic – they don’t want to recommend a visit to a crowded, public gathering place to people whose children may not have been vaccine-eligible at the time.
Taken at face value, the observed 17% decline in willingness to recommend visiting an organization could send leadership into a spiral of false crisis, a rush of directionless activity, and low morale. Considered with more context, however, the same metric might merely suggest tweaking a mask policy or modestly adjusting hours of operation to accommodate more vulnerable populations.
Simple data risks simple-mindedness. But folks, our world is rarely simple.
2) Relying on non-representative, limited data collection methodologies
Let’s cut to the chase: Data collected by unmanaged, unregulated means – onsite kiosks (e.g., “booths with buttons” or touchpads), mass email solicitations, or even a simple onsite sampling of visitors – can certainly provide near-immediate feedback on very specific conditions. In this way, these findings can be helpful in gathering broadly directional information. However, we should also recognize the limitations and weaknesses of these types of data collection processes.
Like Net Promoter Score, data collected via methods that favor rapidity over representation are best regarded as directional in their insight. By their very nature, these data collection methods risk being neither random nor representative because the people who choose to respond are neither random nor representative. Thus, data derived from these methodologies tend to be among the least scientific – and the least reliable in terms of speaking for a diverse constituency.
These methods are also subject to complex challenges such as survey bias, where the design or method of the survey tends to elicit or encourage a less representative or inaccurate measure of opinion. Survey biases can be flaws of design such as unartfully phrased questions that risk “leading” a customer to default to more favorable responses (i.e., acquiescence bias), or even flaws of location such as placing a kiosk within easy reach of a youngster with a predilection for pressing buttons or touching interactive screens!
Likewise, email blasts may have greater appeal to extreme respondents. Those who had a great time and those who had a horrible experience alike may be more likely to let you know their feelings than the much greater number of people who may have had a perfectly pleasant afternoon at your organization but are too busy making dinner, packing lunches, or checking homework to respond to your email survey.
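A toy simulation makes the mechanism visible. The response rates below are entirely invented for illustration, but the pattern they produce is the one described above: extreme voices crowd out the pleasant-afternoon majority.

```python
import random

random.seed(1)

# Hypothetical market: most visitors had a pleasant (7-8) afternoon.
population = random.choices([10, 8, 7, 3], weights=[15, 40, 35, 10], k=10_000)

def responds(rating):
    """Invented response rates: extremes answer the email blast far more often."""
    return random.random() < (0.30 if rating >= 9 or rating <= 4 else 0.05)

sample = [r for r in population if responds(r)]

def extreme_share(ratings):
    return sum(1 for r in ratings if r >= 9 or r <= 4) / len(ratings)

print(f"extreme voices in the market: {extreme_share(population):.0%}")  # ~25%
print(f"extreme voices in the inbox:  {extreme_share(sample):.0%}")      # ~65-70%
```

The survey didn’t lie – it simply listened to a skewed slice of the market.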
Some of the largest enterprises in the world seem to be aware of the limits of these types of data collection methodologies, but retain them more as “research theater” to imply concern for their constituencies than as proper feedback tools. Have you had the following experience at an international airport? After waiting thirty minutes uncomfortably close to strangers in a security line, partially disrobing, unpacking electronic devices, and having your hands swabbed for illicit particulate, you were then invited to press a button indicating your level of satisfaction with the experience. Despite the millions of responses collected over several decades, has your security experience meaningfully improved? Probably not – but for some, it might feel good to press the bright red frowny-face button twelve times on the way to the departure gate.
Cultural organizations are better than ever at reaching the same kinds of people, and relying largely on audience research alone (vs. market research) is a reason for this narrow success. Organizations have become so good at speaking to people who already visit that they have a rather good pulse on the types of programming that most successfully reach current visitors, and they intelligently use this information to develop even more programs that current visitors love. This has its advantages in cultivating a honed group of supporters that can be strategically leveraged to encourage membership opportunities and grow this increasingly critical audience.
The downside, of course, is that current audience research does not help us to create programs that welcome new visitors or to overcome barriers to attendance in order to expand visitor engagement. When organizations rely primarily on audience research without also considering market research, market potential risks shrinking rather than expanding.
Successful market research meets people where they are. And, of course, onsite visitors are onsite. That’s the blessing and the curse of kiosk data, whether it involves simple buttons or more advanced touchscreens: It only succeeds in monitoring onsite visitors – and only those onsite visitors inclined to participate in the research process. Does this wholly invalidate the findings of these processes? Of course not! But it does suggest a limit to the appropriate applicability of the research. Should these less-scientific methodologies inform an organization’s strategic planning processes for audience expansion or long-term strategy? Probably not.
These diagnostic metrics can certainly provide quick and affordable flashes of insight. (“There might be an issue here!”) When these data repeatedly suggest an issue, they represent an opportunity to develop a comprehensive research process – using more scientifically robust methodologies – to determine whether the issue is “real” (as opposed to an artifact of the collection method) and to analyze its root cause.
3) Only collecting information in English
No survey methodology deployed solely in English is representative of the United States. Period. The US is increasingly diverse and cultural organizations need to expand audiences to sustain – let alone grow – attendance over time.
Today, nearly 70 million US residents speak a language other than English at home – a number that has more than doubled since 1990. Indeed, nearly half of the residents of America’s five largest cities speak a language other than English at home – the same large cities that host many of America’s most prominent and popular cultural organizations.
Organizations seem to understand the need to expand their respective audiences to be more welcoming and accessible to a greater diversity of their constituents, and many have developed programming or delivered content and advertising in languages other than English to be more relevant to their target audiences. Generally, cultural organizations have grown better at talking at people in multiple languages, but too many listen solely in English.
When you’re only listening in English, you cannot hear some of our most underserved voices. We double-dog dare you to find an industry report within the cultural sector that collected data in the multiple languages required to be truly representative of American audiences. Why? Because gathering data in multiple languages (especially using the sophisticated methodologies that allow for open-ended queries and appropriate follow-up questions) is expensive and time-consuming. While we at IMPACTS Experience understand these mitigating rationales as an explanation for incomplete research, we don’t think they are an excuse. Don’t call data representative if it’s not.
Let’s return to a theme: Collecting data only in English is relatively simple. But doing so ignores the complexity and diversity of our markets and imperils our ability to benefit from the valuable feedback of important emerging audiences. An obligation of market research is to amplify the voices of many – not only those with the privilege of living in a major metropolitan market, having reliable broadband access, and speaking English. Throughout our industry, the organizations that invest in better understanding the totality of their market opportunities will be the ones that successfully engage these audiences.
So, where does all this demystification of troublesome data practices bring us? Hopefully, to a better understanding of the advisable limits of certain simple, data-related processes. Net Promoter Score can be a helpful directional signal. Kiosk- and email-collected data can indicate potential issues. And collecting data exclusively in English can inexpensively highlight an (arguably overserved) subset of current constituents. But each of these methodologies and practices is only helpful when kept in its own lane, with an appropriately sized seat at the table and an awareness of its often less-scientific qualities. Sure, these practices may be simple, cheap, and easy – but rarely do those descriptors point to reliable, long-term results.
This is far from an exhaustive list of research-related practices to avoid, but these three in particular are alarmingly prevalent within the cultural sector and represent immediate opportunities for improvement. Sound, scientific data help organizations soar toward their ambitions. Bad, misused, or misunderstood data tether organizations to distractions and misinterpretations. Improving practices in these three research areas can immediately help organizations gain a better, more accurate understanding of their audiences.
Did we ruffle some feathers? Good. A little feather-ruffling is prerequisite to taking flight.