
February 2, 2013 Krista Donaldson

The Quest for Measurement and its New Champion. Let’s Leverage It.

[Photo: Bill Gates]

We love that Bill Gates is banging the measurement gong. Looooove it! For those of you who may have missed it, Mr. Gates’ 2013 Annual Letter came out a few days ago. He mentioned measurement, or variations of it, 38 times (compared to 23 mentions of innovation, last year’s theme). The letter was preceded by a high-visibility op-ed, “My Plan to Fix The World’s Biggest Problems,” on the front page of the Wall Street Journal’s Review section last Saturday.

The need and desire for measurement isn’t an issue in the social sector. Not supporting measurement is sort of like not liking babies. The challenge is: how do we as a global community use data and measurement to fix problems, as Mr. Gates writes, not just talk about it? At D-Rev, we want to support and push our and our communities’ efforts to get the most out of measurement. We love that Mr. Gates simplified measurement to something actionable and digestible: you have a goal, you figure out the best approach, you measure, and you refine. (In design-speak, though, we’d say iterate rather than refine, allowing for more risk.) Yet believing and measuring aren’t enough. The issue isn’t whether we should measure; it is measuring the right thing to understand impact—and funding.

Measuring the right thing(s)—what would fall under ‘approach’—is about getting and using the right data. (Insider test: Who is famous in the social sector for saying ‘Measure the right thing’? Answer here.) From my perspective, there are three areas where the “right thing” translates to useful data and measurement:

  1. What you are measuring—and what it means. Linking meaningful data to your goal is key. While that sounds obvious, impact and outcomes aren’t necessarily simple to measure even with a clear intervention. Too often, we see activities (outputs) used as a measure of impact. Activities are indicators of potential impact and often provide context to the work, but they aren’t measures of impact. For example, too much impact reporting cites the number of nurses trained or libraries/schools/hospitals built, when we wish it also reported outcome metrics like reduced maternal mortality or increased literacy. The metrics must be meaningful for the goal.

  2. What is being used for comparison. No one talks openly about poor or inaccurate baseline or control data—or about poor conclusions drawn from perfectly good data. Sometimes poor data means meaningful data isn’t available or readily collectable (think: kernicterus—brain damage or death from severe jaundice—in rural areas of low-income regions). Insufficient data doesn’t mean there isn’t a problem—it means we need to collect more and better data. Inaccurate data are data that were never fully vetted in the first place—statistics that sound compelling, and may even be widely cited, but that can’t be verified or lack valid references once you dig in. It benefits our sector and our users if we are transparent about these real challenges.

  3. What the data means to users and the problem. DALYs (Disability Adjusted Life Years), a public health measure of impact, are useful in meta-analysis—but they are unhelpful to 99.9% of doctors and patients on the ground. Similarly, the Millennium Development Goals have united professional communities and given us much-needed common goals, but we can’t lose sight of what we are doing and why. Too often in our work at D-Rev, we see a focus on a single MDG indicator (#4.2, infant mortality) at the expense of infant morbidity (sickness and disease). The on-the-ground reality is that morbidity issues can be easier to solve and can bring as great a benefit, or greater, to families living in poverty. We’d like to see better goals and measurements that benefit our users.

Finally—and most critical for our and others’ ability to use measurement effectively—the reality is that there is next to no funding to support impact assessment in the social sector. The only way we have found to fund it is through unrestricted grants or through forward-thinking groups like Stanford’s SEED program, which partially supports our Director of Impact. If measurement is going to be (better) used in the social sector, there needs to be specific and directed funding for it. From a practitioner’s standpoint, particularly a young and innovative one, funding to support great measurement with meaningful data is—hands down—the greatest challenge to truly addressing some of the world’s biggest problems.

Related + recommended (Send us yours!):