Navigating the Evaluators

I recently gave a keynote address for a conference of nonprofit leaders in Oregon. At one point, I asked people to raise their hands if they thought that the evaluation methodology of Charity Navigator, the country’s most popular nonprofit rating system, had validity. Nobody moved a muscle.

Then I asked: “So for those of you whose organization has received a top 4-star rating from Charity Navigator, raise your hand if you placed that rating on the home page of your website.” Dozens of hands reluctantly rose, accompanied by embarrassed laughter.

That moment crystallized both the charitable sector’s lack of respect for nonprofit evaluators and its recognition of the evaluators’ influence.

Indeed, over the past decade nonprofit evaluators have gained a considerable following. Charity Navigator’s website gets nine million hits a year. The other major evaluators, the BBB Wise Giving Alliance and Charity Watch, have their fans as well. (I will leave aside, for now, GiveWell, which follows a different model: rather than review and pass judgment on thousands of domestic nonprofits, GiveWell singles out a handful of international aid groups as the most effective charities in the world.)

Charity Navigator, Charity Watch, and the BBB Wise Giving Alliance differ in what they measure, how they report their results, and how they are funded. For example, Charity Navigator places a great emphasis on the financial strength of organizations, and therefore rewards groups that have a lot of money in reserve. Charity Watch, on the other hand, prefers that nonprofits deploy their capital to maximize impact, and so if a particular charity has what Charity Watch considers too much money in reserve, the rating drops. (So, yes, an organization can get rewarded by one evaluator and dinged by the other for the same action.)

As for the results, Charity Navigator ranks organizations on a zero- to four-star basis. Charity Watch grades on an A+ to F scale. The BBB Wise Giving Alliance takes a standards-based, pass/fail approach: if an organization meets all 20 of its financial and governance standards, it receives approved-charity status.

And how are these groups funded? Charity Navigator seeks grants and individual donations. (It’s not unusual to find a pop-up solicitation window while perusing the Charity Navigator website.) Charity Watch invites individuals to become members, and donor/members then help set the agenda by asking Charity Watch to review particular organizations. The BBB Wise Giving Alliance, meanwhile, actually sells “Accredited Charity” seals for use by organizations that have passed all the standards. The charities pay $1,000 to $30,000, depending on their size, for the right to use these seals. (Those seal sales underwrite virtually the entire BBB Wise Giving Alliance budget.)

But what all three organizations have in common is that they evaluate nonprofits entirely on financial performance and measures of good governance, and not at all on impact.

Obviously, it’s important for charities to be in the black, not to overpay their top executives, and to direct an appropriate amount of their resources to programming. It’s also important for nonprofits to have independent boards of directors and to be transparent in their decision-making.

But there’s so much that the evaluators ignore. Most notably, they don’t – and really, can’t – evaluate the impact and effectiveness of the organizations. The evaluators dabble in this territory, but they don’t really take it on. For example, the BBB Wise Giving Alliance simply requires that nonprofits provide the results of program evaluations to their boards of directors at least once every two years – but the results need not be shared publicly, and it doesn’t seem to matter whether the evaluation methodology is valid or the results positive.

For its part, Charity Navigator floated an impact evaluation matrix called “CN 3.0” a couple of years ago. The draft included calls for each nonprofit to share its “logic model” and “theory of change,” jargon drawn from the upper echelon of the private foundation world and bearing little relevance to nonprofits attempting to carry out their missions of feeding hungry people or comforting veterans with PTSD. There must have been some real blowback to CN 3.0, because Charity Navigator pulled back its plans and now reports that its impact measurements are “in research and development.” I’m guessing they’ll remain in R&D for some time.

And so we have highly influential evaluation reports that don’t touch on the single most important question: what kind of impact is the charity having?

Imagine picking up Car and Driver magazine to research the latest-model Toyota Camry. You’re expecting to see test-drive results. You’re looking for estimates of annual maintenance costs. You’re hoping to see the Camry compared in terms of cost and performance to the Honda Accord, and you want to know how the Camry handles in the snow, how the air bags perform in a collision, and whether the deluxe upgrade package is worth the money.

Instead, the magazine reports how much Toyota is spending to market the Camry, how well the corporate board’s minutes are kept, and whether the company has a whistleblower policy.

This scenario is ridiculous, of course. Those measures fail to tell consumers what they need to know. But that’s more or less how Charity Navigator and the other self-appointed evaluators go about reporting to donors about America’s charities.

If you’re now expecting me to offer a simple way to measure charitable impact, you’ll be disappointed. I think it’s hard enough to measure the impact of a single nonprofit. To evaluate thousands and to then assign a comparative grade is an almost impossible task. Yet the major evaluators demonstrate great certitude in proclaiming that they are the go-to source for all things charitable, even though they don’t measure the most important aspect of nonprofit work: impact.

In the 1960s, sociologist William Bruce Cameron said, “Not everything that can be counted counts, and not everything that counts can be counted.” Charity Navigator and its ilk do a good job of counting the things that don’t count all that much. It’s a shame that their evaluations count at all.

Copyright Alan Cantor 2016. All rights reserved.


8 Comments

  • Hi Al,
    This is all very enlightening and important, but you must know that critical thinking in America has gone the way of blacksmiths and corsets. Whatever of it might be left is sure to be outlawed by the incoming administration. Your trying to make us think critically is only going to cause a lot of trouble, and make many good Americans unnecessarily self-conscious and uncomfortable. If we are deprived of our fantasies, what will we have left? Why can’t we just count how many stars an organization gets and leave it at that?? Huh, huh?

  • Paul VanDeCarr
    December 13, 2016 3:12 pm

    Break it down, Al! Another excellent post. I’m tempted to give it a ranking, but I’ll refrain.

  • Not surprisingly, this excellent treatise makes me even prouder of having been involved for so long with United Way. They were innovative, then evolved into results, and can still pass the CN kind of tests!

  • William Bruce Cameron must have worked passionately in a small social service non-profit. How else could he have made such a profound statement? While foundations, trusts and donor advised funders ask us for a host of validating metrics, so often the most telling verification of impact stands before us, when the beneficiary of our programs and services returns to say, “thank you for changing my life, thank you for saving my life, I could not have made it without you.” How many stars is that worth?

