It is as obvious as it is profound: data has surpassed oil as the world’s most valuable resource.
Ever since The Economist wrote about this in 2017, I have thought about the many things that oil and data have in common. Both have an enormous impact on the world economy. Both have the capacity to do great harm to us if we’re not careful. And quite often, both start the same way: over the years they transform from being long-forgotten nothingness to becoming something precious.
Given the fact that data is (a) valuable and (b) often originating as perceived garbage, it’s important for us to occasionally consider the data we’re ignoring.
I am not referring to the data that you put in your CRM–you already know that’s valuable or you wouldn’t be keeping it. I’m referring to the data you probably discard. Whether a specific constituent opens a particular email on a particular date may seem like data that is both too insignificant and too voluminous to keep, but you may want to reconsider both sides of your ROI equation.
Today it is neither insignificant nor expensive to keep. Things change. Just ask the long-vanished dinosaurs now living in your gas tank.
When I wrote this article two weeks ago, I didn’t anticipate that it would become a two-part series. But it’s been so great to hear your thoughts that I wanted to continue what is really an extended conversation.
The conversation began a few weeks ago when my colleague, Mike Brucek, posted this on LinkedIn: “Honest question, asked with the hope of learning from our Advancement Services friends – Why aren’t we centralizing access to more granular Annual Giving data?”
Mike went on to illustrate how CRMs don’t capture a donor’s full call center experience at all.
Despite the enormous volume of data we keep on alumni and donors, advancement services teams have had to be very choosy about the data they store. Traditionally, there were two key factors: (1) whether data was valuable enough to occupy available storage capacity, and (2) whether it was valuable enough to justify the staff time to enter and maintain it. In that calculus, it’s clear why phonathon call logs were typically ignored.
But as I mentioned in part one, the ROI calculations are changing for two reasons. One, cloud storage is cheap. And two, the data can be uploaded as-is: stored in the exact form it was received, it will always be correct, with no updates required. CRMs are not designed to store enormous amounts of microdata about email open rates and video views, but that doesn’t mean such data shouldn’t be warehoused somewhere in case it can be put to good use.
Mike and I heard back from a lot of you on this topic. And so, by sharing some of your questions below, we offer this second article in the hope of continuing our conversation.
How does data convert from garbage to value?
Once advancement teams put raw data into the cloud, how does the data get used, and what tools are needed to use it?
We can all realistically assume that no one will ever pull up a constituent’s CRM record to check whether that specific person opened an email twenty years ago. Given that, there is no need to feed this microdata to the CRM. Instead, it can simply be joined to data from your CRM whenever it’s being analyzed.
A data scientist can achieve this using software built for statistical analysis and/or business intelligence.
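As a concrete (and entirely hypothetical) sketch of what that join looks like, here is how raw email-open events, stored exactly as received, might be rolled up and joined to CRM records at analysis time. Every file, column name, and value below is invented for illustration; the point is only that the CRM never has to ingest the microdata itself.

```python
import pandas as pd

# Hypothetical raw engagement microdata, stored as-is in cloud storage.
opens = pd.DataFrame({
    "constituent_id": [101, 101, 102, 103],
    "email_id": ["fy24-appeal-1", "fy24-newsletter", "fy24-appeal-1", "fy24-appeal-1"],
    "opened_at": pd.to_datetime(["2024-03-01", "2024-03-08", "2024-03-01", "2024-03-02"]),
})

# Hypothetical slice of CRM data, exported for analysis.
crm = pd.DataFrame({
    "constituent_id": [101, 102, 103],
    "lifetime_giving": [500.0, 0.0, 1250.0],
})

# Summarize the microdata, then join it to CRM records on the shared key.
open_counts = opens.groupby("constituent_id").size().rename("open_count")
enriched = crm.merge(open_counts, on="constituent_id", how="left").fillna({"open_count": 0})
print(enriched)
```

The CRM stays lean; the raw events stay raw; the join happens only when someone actually wants the analysis.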
Should shops without data scientists still keep raw data?
Most advancement shops don’t employ full-time data scientists. But even if there isn’t a data scientist down the hall, you should still preserve raw engagement data. There are plenty of companies on the for-profit side with data scientists working on a per-project basis. In fact, I happen to know one.
And right now he is so eager to work with call center data that he’s out on LinkedIn telling EverTrue customers he’ll analyze theirs for free.
Regardless of whether you want to take him up on it, the fact that he made the offer underscores just how valuable that data is.
Why not just keep phonathon data in the aggregate?
A report summarizing phonathon performance is of short-term value to the phonathon manager. A file with microdata on every call attempt, caller, date, time, constituent, etc. is of long-term value to everyone on your team.
It gives you an ocean of information about how your call recipients engage with you. That data can then be converted into predictions about which constituents are most likely to answer your calls and recommendations on when and how to call them.
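To make that concrete, here is a minimal, hypothetical sketch of one such conversion: aggregating call-attempt microdata into a “best hour to call” recommendation per constituent. All column names and values are invented for illustration; a real model would use far more features, but even this simple aggregation is impossible if only summary reports were kept.

```python
import pandas as pd

# Hypothetical microdata: every call attempt, its hour of day, and its outcome.
attempts = pd.DataFrame({
    "constituent_id": [7, 7, 7, 8, 8],
    "hour": [18, 18, 12, 19, 12],
    "answered": [True, False, False, True, False],
})

# Historical answer rate for each constituent at each hour of day.
rates = (attempts.groupby(["constituent_id", "hour"])["answered"]
         .mean().reset_index(name="answer_rate"))

# Recommend, per constituent, the hour with the highest answer rate so far.
best_hour = rates.loc[rates.groupby("constituent_id")["answer_rate"].idxmax()]
print(best_hour)
```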
Should we be working on an AI solution to standardize aggregated microdata across organizations?
You are speaking my love language. In my original piece, I envisioned projects where custom models could be developed for each organization. That would have value for the organization in question.
However, if there’s an appetite in our industry to understand this type of alumni and donor behavior in the aggregate, we need to translate all of our idiosyncratic codes into a standard taxonomy. If we were to do that (and we’d be doing it, of course, to learn which practices work best), we would also want to categorize specific appeals/campaigns by type.
This way, we’d know how recipients of early bird LYBUNT mail campaigns typically compare to, say, giving day email audiences.
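In its simplest form, that translation layer is just a lookup from each shop’s local codes into shared categories. Here is a toy sketch; every appeal code and category name is invented for illustration:

```python
# Hypothetical mapping from each organization's local appeal codes
# to a shared, cross-organization taxonomy.
standard_taxonomy = {
    "EB-LYBUNT-M24": "early_bird_lybunt_mail",
    "SPRING-LYB-PC": "early_bird_lybunt_mail",
    "GDAY24-EM-A": "giving_day_email",
    "DAYOFGIVING-BLAST": "giving_day_email",
}

def standardize(appeal_code: str) -> str:
    """Map a local appeal code to the shared taxonomy, flagging unknowns."""
    return standard_taxonomy.get(appeal_code, "unmapped")

print(standardize("GDAY24-EM-A"))
```

The hard part, of course, isn’t the code; it’s getting organizations to agree on the categories.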
What if we had access to all the data?
In other words, imagine a world where we could integrate advancement data with cross-campus datasets from sources like the athletics ticketing booth, enrollment management recruitment content, museum exhibit openings, etc. Wouldn’t it be amazing if we could see the complete picture of how a constituent engages with the entire institution? (Answer: yes!)
Theoretically, it’s possible for institutions to do this. However, it would require a staff member leading a cross-campus project to bring all of that together, with all of the meetings and consensus-building exercises that such a project would require. And so, realistically, I think most institutions would struggle to accomplish this today. But let’s keep these kinds of “what if” ideas in the public square so that we can collaborate on creating the future we want to have.
If we do this, will people really act on the findings?
Can you hear that soft, faraway hissing sound? That’s the sound of all of my dreams being deflated. Because what’s implicit in your question is true: we so often analyze our data without taking a next step. Information is just noise unless we act on it. Too often we invest in acquiring or developing data only to find ourselves chasing our tails, being derailed once more by an angry text from an alum or an emergency email fire drill. We all bear the battle scars.
But here is the dream: what if we can work together as a community to shed light on our work and what we are finding? What if phase one is understanding our own audiences, and phase two is understanding advancement audiences generally? If we were to do this, we would transform all of our work for the better. Eventually, even people who are easily distracted from operating plans will be forced to admit that they are not behaving in accordance with (say it with me everybody) established best practices.
If we create the capacity to know as an industry that specific time-intensive, value-ambiguous projects have been proven to be ineffective, maybe they will finally be abandoned. (I’m looking at you, donor rolls and hand-signed departmental holiday cards!)
We are excited to talk more with you on March 27 at 12:00 ET / 9:00 PT when Mike and I will be joining Louis Diez of the Donor Participation Project for a LinkedIn Live. So, if you’re interested in talking with us about how the value of data is a changing equation, keep your eyes open on our feeds and we will share the link soon. And when you’re ready to change the industry with us, let us know!