A few baseline misunderstandings about data and analysis may be diverting cultural executives’ paths toward institutional success.
If you started off 2018 thinking, “This is my golden opportunity to brush up on the fundamental aspects of data analysis so that I can lead my organization successfully this year,” then I like the way that you think. We’re brainwave buddies.
If you did not, then you were probably a lot cooler than I was in high school.
The three misunderstandings that I lay out here are not subjective encouragement for opening your mind. I’m not saying, “I suggest that you perhaps kindly consider please pondering these other alternatives for how to think about data.” No. These are three very fundamental realities for accurately analyzing data within the nonprofit, visitor-serving industry and beyond.
… And I’m a bit worried that there is some confusion about them.
It’s important that more and more entities are considering how they can become data-informed cultural organizations! Indeed, developing a data-informed professional culture may be our pathway to success in effectively and efficiently engaging new audiences.
But good data can be challenging. Ours is still largely an “I think” industry, and evolution is slow on the whole. When faced with challenging information, it may be natural to become a bit defensive at first. After all, with the difficulties already attendant to operating nonprofit organizations, the plates of many cultural executives are already constantly overflowing. Good data can help cultural entities to see the decisions they’ve been making that may be unknowingly holding them back…and help them to move forward confidently!
I am pinch-myself lucky to have the opportunity to share data with executive leaders around the world, and I have noticed that three common misunderstandings sometimes arise. I’m not sure if they arise as a defense against difficult findings, or if they are simply the neural firings of brains that are not nerdy in the same way as mine.
Either way, let’s tackle them.
Here are three fundamental misunderstandings about data analysis that may be holding cultural executives back from maximizing success.
1) Thinking sample size is more important than a representative sample
Sample size is important, no doubt about it! In fact, “What’s the sample size?” might be the most popular question that I get about data. But the popularity of sample size may be overshadowing something that is often much more important: whether the sample is representative of the market. In other words, do the data come from people who are representative of the audience you are trying to learn about in terms of demographic, psychographic, and behavioral attributes?
If you’re trying to understand how favorably the US population perceives Donald Trump, you are more likely to get accurate data by surveying 2,000 people from a representative group that matches US demographics than you are from sampling 2,000,000 Republicans.
In fact, sample sizes in political approval polling from highly credible sources such as The Washington Post tend to hover around 1,000 adults. More important than the relative difference between a sample of 1,000 or 1,500 adults is that the people being polled comprise a true representation of the nation. (Here’s one of my favorite articles about how to think about approval rating polls.)
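To see why the jump from a sample of 1,000 to 1,500 buys relatively little precision, here is a back-of-envelope sketch of the standard margin-of-error formula for a simple random sample (the function name and the 95% confidence setup are illustrative choices, not figures from any poll cited above):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of
    size n, evaluated at the worst-case proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# Adding 500 respondents narrows the margin by well under a point:
print(round(margin_of_error(1000) * 100, 1))  # 3.1 (percentage points)
print(round(margin_of_error(1500) * 100, 1))  # 2.5 (percentage points)
```

Note that this formula assumes the sample is already representative; no amount of extra respondents shrinks the error introduced by a biased sample.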
More data is not better. More representative data is better.
More NON-representative data is often distracting and misleading – and when cultural organizations are not taking steps to make sure that their data is representative, they can collect more distraction than insight.
Your cultural entity could theoretically obtain survey data from every single person who walks through your door, but it is unlikely that those data will represent what the US population or even your regional audiences (which include people who do not come in your door) want, think, or expect. That’s okay! These data are still incredibly valuable for better understanding items such as repeat visitation and visitor satisfaction among current audiences. However, these data are not necessarily valuable for informing programs and initiatives to engage new audiences.
Audience research is not market research.
A survey at the end of a special exhibit may not reliably inform an organization as to what general audiences want from special exhibits. It informs an organization about what the specific people who cared enough to overcome visitation barriers and attend that exhibit want and think. These data do not represent “people.” They represent “a subset of people who have done X, Y, and Z.”
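The gap between audience research and market research can be simulated in a few lines. This is a hypothetical sketch (the 20% baseline interest and the 9-to-1 self-selection weighting are made-up numbers for illustration only), but it shows how a door survey of self-selected visitors can wildly overstate market-wide interest:

```python
import random

random.seed(42)

# Hypothetical market: 20% of regional adults are "interested in special
# exhibits" (1 = interested, 0 = not). A made-up figure for illustration.
market = [1] * 200 + [0] * 800

# A door survey only reaches people who already visit; suppose interested
# people are nine times as likely to walk in (also a made-up assumption).
door_sample = random.choices(
    market, weights=[9 if person else 1 for person in market], k=500
)

# A representative sample draws uniformly from the whole market.
representative_sample = random.choices(market, k=500)

print(f"Door survey says interested: {sum(door_sample) / 500:.0%}")
print(f"Representative sample says:  {sum(representative_sample) / 500:.0%}")
```

With these assumed weights, the door survey reports interest several times higher than a representative draw from the same market, even though both samples contain exactly 500 responses. Size is equal; representativeness is not.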
Data is math, not magic. But when it is collected, analyzed, and applied correctly, it can work like magic.
Lest anyone think that I am saying this in defense of the data shared here: With over 108,000 respondents, the National Awareness, Attitudes and Usage Study – where most of the data shared on this website come from – is believed to be the largest ongoing data collection concerning visitor-serving enterprises in the United States…but that’s not what’s cool about the NAAU. What makes it cool is that it is representative of the United States adult population.
Sample size is important. Being representative is arguably every bit as important. When organizations collect and analyze their own data and boast about sample size, but cannot articulate a strategy for ensuring that the sample is representative of the people they are trying to survey in the first place, it’s a red flag.
2) Going blind to numbers when data includes visual comparisons
It seems that perhaps the easiest way to make some folks go “data blind” is to show data with comparisons in it…
Here’s an example of a bit of data that I share very frequently, and is probably the best for explaining this misunderstanding. Take a look. It shows primary sources of information for those who profile as likely visitors to cultural organizations cut by generational cohort. As you can see, digital sources of information (including social media, web, and mobile web) comprise the top three leading sources of information for all three age groups.
…So you can imagine my shock when this data is used by people to “prove” that Baby Boomers do not use digital information sources. (Yes. That truly is a popular misinterpretation of this chart.)
I’ve come to learn that these folks may be seeing the comparison between Baby Boomers and Millennials and – because the value is lower for Baby Boomers than it is for Millennials – the takeaway seems to be something like, “Baby Boomers use digital engagement platforms less than Millennials.” *Cue data brain meltdown* “See?! Baby Boomers do not use digital engagement platforms!”
Both things are true: Baby Boomers use digital engagement platforms less than Millennials, and digital engagement platforms still represent the top three sources of information for Baby Boomers.
This is but one example that I’ve chosen because I cut and share this information frequently, and I’ve seen comparisons cause confusion. For instance, I’ve heard folks look at the data below and say, “See! Current visitors do not use digital platforms! Inactive visitors do!” HUH?!
And I’ve heard people look at the data cut below and say, “See! We’re a symphony so we don’t need to care about social media! Museums do!” Wait. WHAT?!
Visual comparison, folks! I’m not sure if it’s that we’re simply an eyeball-driven industry or desperately seeking defenses against inconvenient information (as humans are wont to do), but when comparisons are on a chart, some folks go data blind. Make sure it’s not you!
It’s often valuable to consider the relationship between data sets…but it’s arguably more valuable not to let the comparison blind you to key takeaways from the individual data sets. Don’t let your eyes trick your brain – keep them focused on the data!
(If you are interested in the data itself shown in this example, you can read more here.)
3) Disregarding meaningful findings because initiatives will not yield magic bullet results
There are many data-informed best practices that are increasingly necessary in order to run a stable cultural organization, but magic bullets are a myth. If I (or anyone else) had a single, simple magic bullet, cultural organizations would not have a shrinking visitor base.
The closest thing that I have to an overarching magic bullet is this: Success is about strategy, not tactics.
But the lack of a single, data-informed “magic bullet” that will reliably triple attendance forevermore and send folks flying from all over the country to your organization in droves for decades does not mean that it’s smart to give up on data. Data suggest that certain strategies/initiatives are meaningful, impactful, and increasingly necessary in order to thrive…even though they are not single-item, tactical “magic bullets.”
Personalized interactions between attendees and staff or volunteers can increase visitor satisfaction by over 10%. A person who uses their mobile device onsite to look something up or post on social media about the experience has, on average, a 6% higher satisfaction rate than a person who does not. These are huge potential increases in visitor satisfaction that drive the entire visitor engagement cycle! These findings matter!
And, yet, I sometimes hear folks say, “It’s only 10%! You have to admit that increase doesn’t really matter!”
It super matters! A 10% increase in a critical measure of your audience’s satisfaction – a key measure that impacts value-for-cost perceptions, pricing, intent to revisit, and willingness-to-recommend metrics – is incredibly noteworthy!
Fact: Data suggest that building a new, multi-million dollar wing is not a reliable strategy for increasing long-term attendance…but training guards and ushers to be friendly is. It tends to be these incremental, satisfaction-boosting initiatives that meaningfully increase satisfaction and visitation over the long term.
That numbers may not seem “big” without additional context does not mean that they should be written off. Don’t be the person who leads their organization into a slow decline toward obsolescence while awaiting a single, tactical, magic-bullet opportunity to effortlessly triple attendance forevermore.
Even a curious leader with the best of intentions may fall into these common traps! They might forget for a moment that collecting representative data of the audience they are trying to understand is every bit as important as sample size (if not more), and that’s okay…as long as it’s only for a moment.
Representative data is important, data comparisons should not make us blind to key takeaways, and having realistic expectations – not only about data findings, but about the path to success – may be critical.
Becoming more of a data-informed industry is a long game, but it’s one that we are learning to play with more skill every week. Let’s keep our eyes on the ball and our data fundamentals in check.