The responsible use of artificial intelligence (AI) is a hot topic for many of us. We’re moving quickly to better understand what it can do, where limitations remain, and how to use it ethically and thoughtfully in our daily work. These conversations take me back to 2001, when Wikipedia was first introduced. We were urged to proceed cautiously, unsure whether the information could be trusted, and we were taught that it was a useful resource but not one that should stand alone.
Across nonprofit organizations and government agencies, AI is increasingly positioned as a solution for complex challenges: improving service delivery, targeting resources more effectively, and demonstrating impact to funders and communities. The promise is compelling, but the results remain uneven.
At MRC, we consistently see that when organizations begin to adopt AI before strengthening their measurement foundations, the outcome is frustration rather than insight. AI does not clarify unclear systems. It reveals them.
AI systems rely on existing data to detect patterns and forecast outcomes. If that data is incomplete, inconsistently defined, or disconnected from real‑world context—as is often the case in public and nonprofit systems—AI will not correct those limitations. It will scale them.
What once might have been a small reporting inconsistency becomes a system-wide conclusion. Decisions are made quickly, confidently, and incorrectly. For organizations accountable to the public, boards, funders, and communities, this is not just a technical issue; it puts the organization’s credibility and mission at risk.
A colleague recently shared eye-opening insights about the hidden math of poor data quality. More than three decades ago, George Labovitz and Yu Sang Chang introduced the 1-10-100 rule of data quality, a framework that continues to hold true for mission-driven organizations today:
- $1 to prevent an error at the source of data collection.
- $10 to correct it after it is recorded in the data system.
- $100 if the error is never addressed and is used to make decisions.
While the exact figures vary, the principle is clear: the longer data issues go unaddressed, the more expensive and damaging they become.
AI dramatically accelerates this cost curve.
This is where the work of MRC differs fundamentally from software solutions. Impact measurement is not solely about the data; it is a discipline grounded in governance, alignment, and decision-making. AI can analyze information, but it cannot determine which outcomes matter most, reconcile competing definitions across programs, or surface the assumptions embedded in reporting systems.
Our work at MRC focuses on helping nonprofit organizations and government agencies clarify what success means, build shared definitions, assign accountability for critical metrics, and design measurement processes that board members, leaders, and program staff trust.
This work must happen before automation. Without it, AI amplifies noise rather than producing insight.
Many organizations hope technology will bring discipline to fragmented data environments. In reality, technology reflects the systems and frameworks it is built on. When measurement foundations are weak, AI pushes organizations into the most expensive stage of the 1-10-100 rule, causing them to waste resources, erode trust, and mask what is actually happening on the ground.
When measurement foundations are strong, problems are caught early, context is preserved, and data becomes a tool for learning rather than compliance. At MRC, we can help ensure your organization’s data reflects reality before it is used to predict it. This work moves missions, strengthens accountability, and creates the conditions for technology to support, rather than distort, your impact.