
April 22, 2014 Sara Tollefson

How Fieldwork Has Improved the Way We Calculate the Impact of Brilliance


Photo taken by D-Rev Fellow Barrett Sheridan of a baby being treated with a Brilliance device in Kerala, India. D-Rev recently updated how it estimates impact based on fieldwork conducted in early 2014.


Key takeaways from this post:

  1. D-Rev has updated the algorithm used to measure the impact of Brilliance devices.

  2. The change has reduced our previously stated impact estimates by 28% overall.

    • Number of babies treated who otherwise would not have received effective treatment: now 10,876 vs. 15,065 previously

    • Number of deaths and disabilities averted: now 257 vs. 355 previously

  3. The long-term effect of the change is a more nuanced and accurate impact data measure.

  4. The commitment to measuring impact is central to D-Rev’s strategy and work; therefore, this may not be the end of the changes to the algorithm.


As many of you know, D-Rev is an impact-driven organization. This means that understanding the impact of our products is central, not incidental, to our work. Impact is the number one factor that drives our strategy and decision-making. For this reason, we are always looking for ways to improve how we measure impact. We also believe strongly in transparency and knowledge-sharing, so when we make major findings that influence how we measure and understand our impact, you can count on us to share the lessons that we’ve learned.

We recently conducted two months of fieldwork in order to update the algorithm that we use to estimate the impact of one of our products, Brilliance, and we hope you will find the underlying reasons as interesting as we did. The end result was a 28% reduction in the three numbers that we track to measure the impact of Brilliance, but a significant increase in the confidence we have in our estimates and a much more nuanced understanding of how Brilliance is being used in hospitals.

Background

The problem that we are tackling with Brilliance is that over six million babies requiring treatment for severe jaundice each year are not receiving the treatment they need. One of the main reasons for this is a lack of access to affordable devices that provide phototherapy, the standard treatment for severe jaundice. By introducing a low-cost, high-quality phototherapy device to the global market, D-Rev aims to increase the number of babies receiving treatment who otherwise would not have been treated effectively, and thereby reduce the number of deaths and disabilities due to untreated severe jaundice.

To measure our progress against this goal, we track three indicators:

  1. The number of babies treated with Brilliance.


  2. The number of babies treated with Brilliance who otherwise would not have received effective treatment.


  3. The number of deaths and disabilities averted through the use of Brilliance.


We calculate these numbers on a per-unit basis, using an algorithm based on machine data and assumptions drawn from fieldwork and academic research, and then sum the results to determine our total impact. For a step-by-step explanation of how we calculate impact, click here (updated 10/20/14).

Listening to the field data

One of the key data points in our impact calculations is the total number of hours that each Brilliance device is turned on. This information (known as “total machine time”) appears on Brilliance’s LCD screen, and is available to anyone with access to the device. During Brilliance’s first year on the market, we were able to collect this information (via phone calls or in-person visits) for 37 (17%) of the 216 units installed through the beginning of November 2013.

By the end of that first year, we felt we had enough data and resources to evaluate the accuracy of our assumptions and take action if necessary. We ran our analyses, and determined that while we had been assuming, for purposes of our impact calculation, that hospitals were using their Brilliance devices 14 hours per day, the data we had collected from units in the field suggested that hospitals might actually be using their Brilliance machines about 3.8 hours per day.

This was a red flag for us: if true, it meant we needed to update our algorithm as soon as possible. Instead of simply adopting the 3.8-hour daily average in place of our existing assumption, however, Neonatal Jaundice Initiative Product Manager AJ Viola and I concluded that we needed to better understand how hospitals were using Brilliance, which underscored the need to put someone into the field.

Gathering additional data

In January, we sent D-Rev Fellow Barrett Sheridan to India, where 93% of D-Rev’s Brilliance devices were installed, to survey doctors and nurses on how Brilliance units were actually being used in hospitals. (Until we are able to conduct additional country-specific research, we plan to apply our research findings in India to the other lower-middle-income and low-income countries where Brilliance devices are installed.) Specifically, Barrett was tasked with investigating two main questions: (1) how much Brilliance units were being used, and (2) how long doctors were using Brilliance, on average, to treat a baby with jaundice. Answers to these questions would directly inform what action, if any, we needed to take to update our algorithm.

During January and February of this year, Barrett visited 33 hospitals in six states across northern and southern India. He collected data from 53 Brilliance units installed in India between November 7, 2012 and January 25, 2014 (or 17% of the 306 units installed in India at the time), and interviewed dozens of medical personnel, including doctors, nurses, administrators, and equipment managers. (For more about his fieldwork experiences, check out his blog post, “Ground Truthiness.”)

Fieldwork findings and action taken

Based on Barrett’s detailed, on-the-ground findings, we’ve been able to update how we measure and think about the impact of our Brilliance devices. Figure 1 shows the updates (highlighted in yellow) that we have made to assumptions underlying our impact algorithm based on this new research.


Figure 1. Key Brilliance impact algorithm assumptions and their sources

Here, we describe the findings in more detail and explain how we used them to update two key assumptions in our algorithm: (1) average utilization rate and (2) average treatment time.

(1) Average utilization rate

Based on the early Brilliance installations, we determined that a more accurate average utilization rate is 5.4 hours per day. The raw data that Barrett collected showed an average utilization rate of 3.6 hours per day, consistent with the 3.8 hrs/day average we had noted earlier. Yet data about how people use products are never straightforward. The following field observations convinced us that we needed to filter, factor, and weight this average:

  • New hospitals tend to underutilize their machines at first because of delays in set-up, or because they are not operating at full capacity for the first year or so.

  • Some doctors in Northern India reported lower utilization in the winter months. These doctors said that when it was colder, they preferred to use fluorescent lamps that gave off more heat than Brilliance’s LED lights. Since many of the units surveyed had been installed in late summer, their data were drawn primarily from the winter months and therefore skewed the year-round results downward.

  • In public hospitals, doctors often treat multiple babies at once, in some cases up to 75% of the time. While we strongly encourage hospitals to treat only one baby at a time, the reality is that these target hospitals are over-burdened and demand for phototherapy outstrips supply.

To account for the above, we filtered the data to exclude hospitals less than a year old, and weighted the remaining data to account for seasonality (factor of 1.3) and doubling-up at high-burden public hospitals (factor of 1.5). This gives us an average utilization rate of 5.4 hours per day (see fig. 2 below). Now, when we use our algorithm to estimate the total machine time (total usage hours) of a unit, we multiply the number of days the unit has been installed by 5.4, instead of 14.
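The filter-and-weight step above can be sketched in a few lines of Python. This is a minimal illustration, not D-Rev’s actual code: only the under-one-year exclusion rule and the 1.3 and 1.5 factors come from our analysis, and the per-unit readings in the example below are hypothetical.

```python
from dataclasses import dataclass

SEASONALITY_FACTOR = 1.3  # corrects winter-skewed readings (fluorescent-lamp preference)
DOUBLING_FACTOR = 1.5     # accounts for multiple babies under one lamp at public hospitals

@dataclass
class UnitReading:
    hours_per_day: float       # raw utilization derived from the unit's total machine time
    hospital_age_years: float  # hospitals less than a year old are excluded
    winter_skewed: bool        # data drawn mostly from low-use winter months
    public_hospital: bool      # high-burden site likely to treat multiple babies at once

def average_utilization(readings):
    """Filter out new hospitals, then weight each remaining reading and average."""
    weighted = []
    for r in readings:
        if r.hospital_age_years < 1.0:
            continue  # new hospitals underutilize their machines at first
        hours = r.hours_per_day
        if r.winter_skewed:
            hours *= SEASONALITY_FACTOR
        if r.public_hospital:
            hours *= DOUBLING_FACTOR
        weighted.append(hours)
    if not weighted:
        raise ValueError("no units at hospitals at least one year old")
    return sum(weighted) / len(weighted)
```

For example, four hypothetical units, one excluded for being under a year old, one winter-skewed, and one at a public hospital, would average out to roughly 4.1 hours per day.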


Figure 2. Average utilization rate using filtered and factored data

(2) Average treatment time

We also determined that doctors are treating babies for an average of 40 hours, not the roughly 48 hours we had estimated before launching Brilliance. Just to be clear: we aren’t saying that average treatment times decreased, simply that our initial estimate of how long doctors would treat babies, on average, differed from what we saw in the field. In fact, phototherapy dosage is not one-size-fits-all; treatment times can range from a few hours to multiple days and depend on a number of factors. We believe that clinicians are in the best position to know how long their patients need to be treated, so our update reflects the average we saw in the field.

However, since we estimate the total number of babies treated based on the machine time (i.e., lamp time) it takes to treat one baby, we realized that we needed to understand how much of the 40-hour average treatment time a baby actually spends under the Brilliance lamp (what we refer to as “active treatment time”). Two insights helped clarify the adjustment we needed to make to that 40-hour average to get to the “active treatment time” number that we needed for our impact calculation:

  • Babies are removed from Brilliance for about six hours a day for feedings, diaper changes, and evaluations. In our fieldwork, we found that babies were removed from the lights for 20 to 30 minutes every 2 to 3 hours (a range of 2 hours and 40 minutes to 6 hours per day). The upper bound of this finding (6 hrs/day) was consistent with the general estimate provided to us by Dr. Vinod Bhutani, a leading neonatal jaundice expert at Stanford University’s School of Medicine. Accordingly, using the 6 hrs/day figure, we now discount the average treatment time by 25% to determine the “active treatment time” (i.e., actual time spent under the lamps) value that we require for our calculations.

  • Brilliance lights are turned off when not in use. According to our fieldwork, nurses turn the units off when they are not in use; the blue light is visually disruptive, and many nurses indicated that they were energy-conscious. This means that we can safely assume that when the unit is on, it is treating a baby.

For these reasons, we feel confident that 40 hours, corresponding to 30 hours of active treatment time, is a better estimate of the average time we should assume it takes to treat a baby with jaundice. Now, to estimate the number of babies treated by a Brilliance unit, we divide the unit’s total machine time (usage hours) by 30, instead of 48.
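Putting the two updated assumptions together, the per-unit estimate can be sketched as follows. Again, this is an illustrative sketch rather than our production code, and the 300-day example unit is hypothetical; the constants are the ones described above.

```python
UTILIZATION_HRS_PER_DAY = 5.4  # updated from 14 (see fig. 2)
AVG_TREATMENT_HRS = 40.0       # updated from 48, based on fieldwork
ACTIVE_FRACTION = 1.0 - 0.25   # babies spend ~6 of every 24 hours off the lights
ACTIVE_TREATMENT_HRS = AVG_TREATMENT_HRS * ACTIVE_FRACTION  # 30 hrs under the lamp

def estimated_machine_hours(days_installed):
    """Estimate a unit's total machine time when no LCD reading is available."""
    return days_installed * UTILIZATION_HRS_PER_DAY

def babies_treated(total_machine_hours):
    """Each baby accounts for roughly 30 hours of lamp (machine) time."""
    return total_machine_hours / ACTIVE_TREATMENT_HRS
```

For a hypothetical unit installed for 300 days, this gives 300 × 5.4 = 1,620 machine hours, or about 54 babies treated; summing this per-unit estimate across all installed units yields our first indicator.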

What does this all mean?

These valuable field data and findings have enabled us to update our impact algorithm and have greater confidence in the impact numbers we calculate for Brilliance.

In Figure 3, you can see how the updated variables changed our impact estimates for the number of babies treated by the 372 Brilliance units currently installed. (Another lesson learned from fieldwork—it can take up to 270 days for a sold device to be installed!)


Figure 3. Difference in estimated number of babies treated by Brilliance after April 2014 algorithm update (12,881 vs. 17,886)

Similar results (a reduction of 28%) can be found in estimates of our other two main indicators:

  • Number of babies treated who otherwise would not have received effective treatment: now 10,876 vs. 15,065 previously

  • Number of deaths and disabilities averted: now 257 vs. 355 previously

At D-Rev, we are always looking to improve how we do our work. For AJ and me, this meant that when we saw that red flag in our impact review, we were determined to get to the bottom of it. We feel relieved and re-energized now that we better understand our impact and the opportunities before us. This won’t be the last time we update our impact assessment methods, but every time we do, we understand and serve our users better.
