
August 28, 2014, by Randy Schwemmin

Effective Phototherapy is Effective: Why Clinical Trials are not always the Best Evaluators of Health Innovations

At D-Rev, we often face a conundrum when we approach large foundations or government agencies to support our work. In the global health community, funding committees are frequently staffed by academics or global health experts who have spent their careers publishing peer-reviewed papers. They are familiar with the standards of academic publishing, and expect to see clinical data proving the effectiveness of a specific product as a prerequisite to supporting its scaling activities. Clinical trials can be useful when they demonstrate that a novel technology is more effective than what is currently available.

However, this is not always the case. For Brilliance, our phototherapy device that treats neonatal jaundice, D-Rev did not invent a new treatment paradigm. Instead, we applied our innovative product design approach to adapt a proven technology (LEDs) for a radically affordable version of an existing product (fluorescent tube phototherapy lights). Because Brilliance is not introducing a new treatment method, we have a difficult time justifying the use of our scarce resources on a clinical trial when its only purpose is to satisfy the academic expectations of grant reviewers. At the same time, D-Rev has received explicit feedback from some institutions that our grant applications were rejected because we did not have clinical outcomes data for Brilliance. Hence our dilemma—do we spend time and money on a clinical trial that will not provide any new scientific information, but will satisfy potential donors?

Scores of peer-reviewed journal articles demonstrate that blue light with a wavelength peak of 460–490 nm and power density of 30 µW/cm²/nm will decrease the total serum bilirubin in newborns with hyperbilirubinemia. There are minor debates about the minimum and maximum power densities, and whether a slightly greener wavelength of light could increase efficiency by 15–20%, but there is no disagreement that blue phototherapy works. The therapy has been in use since the late 1950s. The American Academy of Pediatrics (AAP) publishes the above specifications in its treatment recommendation for delivering intensive phototherapy to jaundiced newborns.
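To make this concrete, the recommendation boils down to a simple pass/fail check. The sketch below is purely illustrative; the function and argument names are ours, not part of any standard or of D-Rev's tooling.

```python
# Illustrative sketch of the AAP intensive-phototherapy criteria described
# above; function and argument names are hypothetical, chosen for clarity.

def meets_aap_spec(peak_wavelength_nm: float, irradiance_uw_cm2_nm: float) -> bool:
    """True if a light source peaks in the 460-490 nm blue band and delivers
    at least 30 uW/cm^2/nm, the intensive-phototherapy threshold."""
    in_blue_band = 460.0 <= peak_wavelength_nm <= 490.0
    intensive = irradiance_uw_cm2_nm >= 30.0
    return in_blue_band and intensive

print(meets_aap_spec(470, 35))  # True: within spec
print(meets_aap_spec(470, 22))  # False: not intensive phototherapy
```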

Because the scientific basis for phototherapy treatment is settled, the only question we believe worth answering is whether Brilliance can meet the AAP standard. Fortunately, this is easy to prove with a benchtop test using a calibrated flux meter and a grid to measure surface area. By measuring the power output in the designated wavelength range at points across the treatment surface grid, we can show that Brilliance can cover an entire full-term baby in therapeutic light. For illustration, an irradiance heat map for Brilliance can be seen below.

[Figure: Irradiance heat map for Brilliance across the treatment surface]
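For readers curious what this benchtop check amounts to in practice, here is a rough sketch of the arithmetic, not our actual test script: take flux-meter readings at grid points across the treatment surface, compare each against the 30 µW/cm²/nm floor, and plot the result. The readings below are invented for illustration only.

```python
# Illustrative sketch (not D-Rev's actual test script): given a grid of
# irradiance readings (uW/cm^2/nm) taken with a calibrated flux meter at
# points across the treatment surface, report how much of the surface
# meets the 30 uW/cm^2/nm intensive-phototherapy floor and render a
# simple heat map. The sample readings are made up for illustration.
import numpy as np
import matplotlib.pyplot as plt

AAP_MIN_IRRADIANCE = 30.0  # uW/cm^2/nm at the 460-490 nm peak

# Hypothetical 5 x 9 grid of flux-meter readings across the treatment surface
readings = np.array([
    [31, 33, 35, 36, 37, 36, 35, 33, 31],
    [33, 36, 39, 41, 42, 41, 39, 36, 33],
    [34, 38, 41, 44, 45, 44, 41, 38, 34],
    [33, 36, 39, 41, 42, 41, 39, 36, 33],
    [31, 33, 35, 36, 37, 36, 35, 33, 31],
], dtype=float)

# Fraction of measured points at or above the intensive-phototherapy floor
coverage = (readings >= AAP_MIN_IRRADIANCE).mean()
print(f"Surface meeting {AAP_MIN_IRRADIANCE} uW/cm^2/nm: {coverage:.0%}")

# Render the grid as a heat map, similar in spirit to the figure above
plt.imshow(readings, cmap="viridis")
plt.colorbar(label="Irradiance (uW/cm^2/nm)")
plt.title("Irradiance across the treatment surface (illustrative data)")
plt.savefig("irradiance_heatmap.png")
```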

We believe benchtop results, combined with the AAP standard and the product’s CE mark, are sufficient evidence that Brilliance is a safe and highly effective product. If we were to conduct a trial, it would merely prove that “Effective Phototherapy is Effective,” repeating the work that went into setting the AAP recommendations in the first place.

We also question whether it is ethical to subject babies to the local “standard of care” as the control arm of a randomized trial. Doctors treating babies in low-resource countries are doing the best they can with malfunctioning and substandard equipment that is expensive to purchase and maintain. These doctors know that their patients would have better outcomes if they could get brighter, more reliable lights. Withholding effective equipment from the control arm of a trial to prove an undisputed point just doesn’t feel right. Incidentally, our customers agree. When we show the Brilliance irradiance map to neonatologists, they are quickly convinced that the device will work because they are intimately familiar with phototherapy and standard clinical practice. We have never heard a potential customer ask for clinical data before deciding to buy Brilliance, yet the absence of clinical data specific to Brilliance is the reason most funders cite when rejecting our grant applications.

Fortunately, we are starting to hear more arguments against measurement for measurement’s sake from trusted colleagues at these same agencies and in the global health community. This excellent article in the Stanford Social Innovation Review by Mary Kay Gugerty and Dean Karlan points out the need for pragmatism in measuring the impact of social programs, and advises against measuring things that are already known. Our hope is that this concept of appropriate measurement for impact will catch on with institutional funders, replacing a one-size-fits-all approach that can be wasteful and run counter to the social good we are all trying to achieve.
