My blog is a forum for discussing timely issues in nonprofit management, often citing demonstrations of excellence in the field. This post features the opposite—an example of a nice idea gone wrong.
Charity Navigator (CN) is a nonprofit with the ambitious objective of helping charitable donors understand the performance of nonprofits and make informed donation decisions. CN has been around since 2001 and has developed a system for evaluating public charities. The evaluation process consists of a set of calculations and checkpoints that result in ratings, which together form a sort of “report card” for individual organizations. The idea is that an unbiased, objective system can help donors direct funds to organizations with the capability to execute on their missions, which in turn can make the nonprofit sector more effective. CN’s process is designed to allow direct comparisons between nonprofits.
CN is perhaps the most prominent of several organizations that rate and purport to validate nonprofits, and in fact describes itself as “America’s leading independent charity evaluator.” Many organizations publicize their CN scores, and CN’s ratings are widely accessible. To the extent that CN ratings may influence resource allocation in the sector, it’s essential that its evaluation system be fundamentally sound. However, CN has received surprisingly little scrutiny.
Public Interest Management Group published a white paper this month, titled “Evaluating the Evaluator: Unpacking Charity Navigator’s Rating System.” As the title suggests, this is a critical look at CN’s system. The paper goes into detail about the components of the CN rating process. Our findings are that, despite noble intentions and one beneficial contribution, CN’s rating system as a whole is deeply flawed. We recommend that, until the system is properly overhauled, donors disregard much of its information and nonprofits take alternate steps to communicate their performance to the public. (Note that subsequent to this post CN made modest tweaks to its methodology, which I discuss in a follow-up blog entry; our findings and recommendations are unchanged.)
That’s pretty scathing, and deservedly so. To be fair, however, I’ll acknowledge two positives about the CN evaluation methodology. First, CN’s broad framework is logically sound. It consists of three elements: a “financial health” rating, an “accountability and transparency” rating and a “results and impact” rating. Each nonprofit, in principle, could receive ratings on all three elements, giving donors a balanced view of an organization’s performance. A donor could then compare the “grades” of nonprofit A to those of nonprofits B and C. Second, the “accountability and transparency” rating process establishes a helpful standard for the sector and a specific way to measure compliance.
There, the good news ends. CN’s “financial health” rating is poorly constructed and does not measure financial performance in a useful way. Worse, the rating reinforces a disquieting concept known as “The Overhead Myth,” and fails to constructively address a common (and toxic) problem, coined “The Nonprofit Starvation Cycle.” CN’s third component, the “results and impact” rating, is not operational at present (it’s been wrapped up in a multi-year development process) and offers no comparative data.
Regarding the Overhead Myth, we identify a curious paradox. In 2013, three prominent national organizations signed a joint letter to “Donors of America,” attempting to debunk the myth that the percentage of funds nonprofits spend on administration and fundraising is related to their effectiveness. The letter calls this metric “a poor measure of a charity’s performance.” It goes on to say that many nonprofits should spend more on these functions, and that “Research shows that the overhead ratio is imprecise and inaccurate.” One of the three signatures on this letter is that of Charity Navigator’s CEO. Yet, three years later, CN continues to reinforce the idea that overhead spending is bad; the more a nonprofit spends there, the lower its rating.
In other words, CN has publicly criticized the Overhead Myth, while continuing to perpetuate the myth through its ratings!
That oddity aside, the net result of CN’s rating methodology is that its evaluation of nonprofits is, and has been, a nearly hollow exercise. This is no crime in itself, but some folks pay attention to this “information,” and more than a few take it seriously.
In the white paper, we dissect CN’s metrics and overall evaluation system, discuss the context for its use, and issue a range of recommendations.
While I acknowledge (and hope) that Charity Navigator’s evaluation platform can be improved in the future, it’s particularly troubling to me that a system so unsound could have become widely used and respected. What does it say about nonprofit culture that we so readily accept imposed standards and “best practices” without examination?