Every year, your government spends trillions of dollars and employs millions of people. But how do you know if all that activity actually makes your life better?
The answer lies in understanding two completely different ways of measuring what government does: outputs and outcomes. One tells you how busy government is. The other tells you whether that busyness actually worked.
Most citizens hear about government through outputs – the concrete things agencies do. Your city filled 500 potholes this month. The school district hired 50 new teachers. The health department conducted 1,000 restaurant inspections.
But outputs don’t tell you if those activities solved any problems. Did filling potholes actually make roads safer? Did hiring teachers improve student learning? Did restaurant inspections reduce food poisoning?
That’s where outcomes come in. They measure whether government actions created real change in people’s lives and communities.
The Difference That Changes Everything
What Government Does vs. What Government Achieves
Outputs are the “what” of government work – the direct products, services, and activities agencies produce. Think of them as government’s deliverables. When a city agency maintains roads, outputs include “number of potholes filled” or “miles of road resurfaced.” These are direct measures of agency activity.
Outputs are typically easy to count because they’re concrete and happen quickly. Splunk.com notes that outputs are “the tangible or direct results of a process, task or activity” that “can be directly measured and often include deliverables such as products or services which can be measured in terms of quality and quantity.”
The Washington State Office of Financial Management defines output measures as “the number of units of a product or service produced or delivered.” Examples include “Eligibility interviews conducted,” “Children immunized,” or “Number of non-compliant woodstoves replaced.”
Because outputs are visible and easy to measure, they dominate public reports and media coverage. Politicians love announcing how many miles of road they paved or how many people they served. But this visibility creates a trap – focusing only on outputs can hide whether these activities actually solved problems.
The “So What?” Question
Outcomes answer the crucial “So what?” question. They measure actual changes, benefits, or impacts on people, communities, and the environment that result from government programs.
Going back to road maintenance: while “miles of road resurfaced” is an output, related outcomes could be “reduced vehicle damage reported by citizens,” “shorter commute times,” or “fewer traffic accidents on those roads.” These outcomes show the real-world value of the resurfacing work.
Outcomes often take longer to appear and are harder to measure than outputs. Splunk.com explains that outcomes “measure the long-term effects of a process, task or activity and may not be directly observable. Outcomes can take a long time to manifest and may be difficult to measure, which is why so many people overlook measuring the outcomes.”
The New Zealand government’s social services guide defines outcomes as “what you want or expect to happen as a consequence of a service.”
The U.S. Department of Justice’s Office of Juvenile Justice and Delinquency Prevention explains that outcome measures “document the benefit or change in an individual’s life or the result of a change in a program, system policy, or practice.”
Quick Reference: Outputs vs. Outcomes
| Feature | Output Measures | Outcome Measures |
|---|---|---|
| Definition | Direct products, goods, and services produced by a program | Results, effects, benefits, or changes that occur for individuals, communities, or the environment |
| Focus | What was done? How much? How efficiently? | What difference did it make? What was achieved? |
| Timeframe | Short-term and immediate | Often intermediate or long-term |
| Question Answered | “Did we deliver the service as planned?” | “Did the service achieve the desired change?” |
| Nature | Tangible, countable, directly observable | Often less tangible, may require complex assessment |
| Examples | Number of workshops conducted, miles of road paved, applications processed | Increased participant skills, reduced traffic congestion, improved public health |
Why This Matters for Your Tax Dollars
The Busy Work Trap
Government that focuses only on outputs risks engaging in elaborate busy work – lots of activity that doesn’t solve problems or improve lives. High output numbers can create an illusion of productivity while failing to address root causes.
Consider a city that boasts about filling thousands of potholes each year. If roads remain in poor condition, causing vehicle damage and unsafe driving, the city isn’t achieving its real goal of safe, reliable transportation. The focus on filling potholes might mask the need for comprehensive road reconstruction that would deliver better long-term outcomes.
Splunk.com warns that “many people focus on only the outputs, missing the bigger picture from the outcomes.” This narrow focus can be dangerous because it lacks context for why goals were set, potentially misaligns priorities, leads to inefficient resource allocation, and fails to adapt to changing needs.
Public administration scholars note that “policymakers often make the mistake of focusing on outputs rather than on outcomes or impacts.” A classic example: simply adding more police officers to patrol streets doesn’t automatically reduce crime or increase public safety if underlying issues like poverty or lack of opportunity aren’t addressed.
An exclusive focus on outputs creates perverse incentives within government agencies. Employees and managers prioritize hitting easily measurable output targets, even when those actions don’t contribute to desired outcomes. If an agency is judged solely by the number of inspections completed, inspectors might rush through them to meet quotas, potentially compromising thoroughness and the actual outcome of improved public safety.
This output obsession also erodes public trust. If citizens observe lots of government activity but don’t experience corresponding improvements in their lives, it breeds cynicism about government effectiveness. Citizens experience government through outcomes – are there accessible community facilities? Are trained individuals finding good jobs? – not through outputs like “we built 10 community centers” or “we trained 500 people.”
Smarter Spending Through Better Measurement
Understanding outcomes helps government make informed decisions about program design, resource allocation, and policy adjustments. When agencies have reliable data on what actually works, they can spend taxpayer money more efficiently.
If a program consistently produces outputs but fails to achieve intended outcomes, it signals a need for re-evaluation, redesign, or discontinuation. Outcome data identifies successful strategies that should be scaled up and ineffective ones that should be modified or eliminated.
The Washington State Office of Financial Management “requires agency budget requests to be linked to performance measures so budget analysts can understand what results or improvements to expect from an investment of resources.” This practice connects outcome-oriented thinking directly to budget allocation, compelling agencies to articulate the expected impact of proposed spending.
Federal performance management, shaped by the Government Performance and Results Act and its Modernization Act, encourages using performance information, including outcome data, to inform budget decisions and improve program effectiveness.
A genuine commitment to measuring outcomes fosters continuous learning and adaptation within government agencies. When outcomes are the primary focus, poor results become valuable learning opportunities, prompting analysis and adjustments rather than being recorded as failures.
Accountability Through Results
Output and outcome measures provide the foundation for government accountability and transparency. They give the public, elected officials, and oversight bodies necessary information to assess whether agencies are achieving stated goals and using public funds responsibly.
The Government Performance and Results Act of 1993 and its 2010 modernization mandate that federal agencies develop strategic plans with long-term goals, set annual performance goals, measure progress, and report results publicly.
Performance.gov serves as a central hub for federal agency strategic plans, Agency Priority Goals, and Cross-Agency Priority Goals, providing “an integrated view of agency strategic goals and objectives.”
Transparency in government operations is widely regarded as fundamental to democracy. It “demands that public institutions disclose their activities and decisions to citizens, directly impacting their accountability and contributing to improved public administration performance.” Performance measures are a key component of this disclosure.
The U.S. Open Government initiative, managed by the General Services Administration, works to “advance the principles of transparency, accountability, and citizen engagement.”
True accountability requires more than data dumps. Information must be usable and interpretable for average citizens. Government reports can be dense, technical, and filled with jargon inaccessible to non-experts. If citizens can’t understand what’s being reported or how it relates to their concerns, transparency efforts fall short.
Genuine accountability also requires responsive mechanisms where underperformance leads to demonstrable corrective action and public explanation. If poor outcomes don’t lead to tangible changes or clear explanations, accountability is weakened regardless of how much data is made transparent.
Real Examples from Your Government
Education: Teaching vs. Learning
Education provides clear examples of the output-outcome distinction.
Common education outputs include:
- Number of students enrolled in specific programs
- Number of teachers trained in new curricula
- Hours of tutoring provided
- Number of educational workshops conducted
- Number of Pell Grants awarded
Related education outcomes include:
- Increases in student reading proficiency levels
- Higher high school graduation rates
- Increased college enrollment and completion rates
- Successful placement in employment or advanced training
The U.S. Department of Education and National Center for Education Statistics track these measures. For instance, NCES monitors postsecondary outcomes like retention rates, persistence rates, and graduation rates.
Education faces a significant challenge: considerable time lag between interventions and desired long-term outcomes. While the ultimate goal might be enhanced lifetime earning potential, intermediate outcomes like improved test scores or higher college enrollment provide more immediate feedback on program effectiveness.
The Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) initiative tracks the “percentage of program participants that enroll in college” as a key outcome measure, bridging the gap between immediate outputs and long-term impacts.
Healthcare: Treatment vs. Health
Healthcare distinguishes between process measures (outputs) and health improvements (outcomes).
Healthcare outputs (process measures) include:
- Number of patients receiving vaccinations or screenings
- Number of health clinics established
- Number of informational brochures distributed
- Adherence to clinical guidelines like “Timely Initiation of Care”
Healthcare outcomes include:
- Reduction in vaccine-preventable diseases
- Lower rates of chronic illnesses in specific populations
- Increased rates of early cancer detection
- Improved patient-reported health status and quality of life
The Centers for Medicare & Medicaid Services, Agency for Healthcare Research and Quality, and Department of Health & Human Services provide federal healthcare performance data.
CMS’s Home Health Quality Reporting Program uses outcome measures derived from patient assessments and Medicare claims. The HCAHPS survey captures patients’ perspectives on hospital care, providing valuable patient-reported outcome data.
Healthcare increasingly emphasizes patient-reported outcomes, recognizing that clinical indicators alone don’t capture the full picture. Patients’ assessments of functional improvement, pain relief, or satisfaction provide vital dimensions for understanding true outcomes of medical interventions.
A particular challenge involves measuring preventive services impact. While tracking “percentage of people receiving preventive services” is straightforward, the desired outcome is often the absence of illness. Demonstrating such “non-events” as direct results of specific interventions requires sophisticated, longitudinal data collection strategies.
Public Safety: Patrols vs. Safety
Public safety outputs describe law enforcement activities, while outcomes relate to actual community safety and security.
Public safety outputs include:
- Number of police patrols conducted
- Number of arrests made or citations issued
- Number of security training sessions conducted
- Number of emergency preparedness drills held
- Number of “high-visibility enforcement campaigns”
Public safety outcomes include:
- Reduction in specific crime rates in particular areas
- Increased citizens’ perception of safety and trust in law enforcement
- Faster and more effective emergency response times
- Improved community resilience to disasters
The Department of Justice, including the Bureau of Justice Assistance and FBI, along with the Department of Homeland Security, provides public safety performance data. SchoolSafety.gov focuses on safety outcomes within educational settings.
Public safety faces strong political pressure to focus on visible outputs like arrest numbers or police presence intensity. However, these activities may not correlate directly with desired outcomes like reduced crime rates or increased community safety if underlying socio-economic issues aren’t addressed.
Measuring citizens’ “perception of safety” or “trust in law enforcement” is crucial for assessing public safety outcomes. Actual crime statistics and how safe people feel can diverge significantly. A community might have statistically low crime rates, but if residents don’t feel secure due to past incidents, media portrayals, or lack of trust in law enforcement, their quality of life remains negatively impacted.
Transportation: Building vs. Moving
Transportation programs involve infrastructure outputs and service operations, while outcomes focus on system efficiency, safety, and accessibility.
Transportation outputs include:
- Miles of highway resurfaced or repaired
- Number of new bridges built or modernized
- Number of new buses or railcars purchased
- Number of traffic signals upgraded
- Number of runway safety briefings conducted
Transportation outcomes include:
- Reduced traffic congestion and shorter commute times
- Decrease in traffic accidents, fatalities, and injuries
- Improved public transit ridership and customer satisfaction
- Enhanced efficiency of goods movement
- Improved runway safety with fewer incidents
The U.S. Department of Transportation, including the Federal Highway Administration, FAA, and National Highway Traffic Safety Administration, provides transportation performance data.
Transportation projects, particularly large infrastructure initiatives, often involve massive capital investments. Success is ultimately judged by long-term outcomes like sustained economic benefits, significant quality of life improvements, enhanced environmental sustainability, and lasting safety improvements. These outcomes can take decades to fully materialize and are subject to numerous external economic, demographic, and technological factors.
The strong emphasis on “safety” in transportation demonstrates clear prioritization of a critical outcome: reducing fatalities and injuries. This focus drives specific output measures directly linked to achieving safety outcomes, illustrating how universally prioritized and measurable outcomes can effectively drive targeted strategies and resource allocation.
Environmental Protection: Actions vs. Results
Environmental protection involves regulatory outputs and cleanup activities, with outcomes focused on measurable environmental quality improvements and public health benefits.
Environmental outputs include:
- Number of environmental compliance inspections conducted
- Number of permits issued for new technologies
- Tons of recyclable materials collected
- Number of communities assisted with green infrastructure planning
- Number of SO2 allowances allocated under cap-and-trade systems
Environmental outcomes include:
- Measurable reductions in air or water pollutant concentrations
- Restoration of contaminated land to safe, productive use
- Increased populations of endangered or threatened species
- Higher national recycling and composting rates
- Reductions in greenhouse gas emissions from various sectors
The Environmental Protection Agency provides federal environmental performance data, covering pollution prevention, acid rain programs, risk assessment, and Superfund cleanups.
Environmental outcomes often involve complex scientific measurements and unfold over very long time horizons. Restoring contaminated water bodies or achieving measurable air quality improvements can take years of sustained effort. Environmental conditions are frequently affected by factors beyond direct governmental control, such as global weather patterns, international industrial activities, or natural geological processes.
Market-based mechanisms, like the SO2 allowance trading system under the Acid Rain Program, represent innovative approaches to link outputs more directly to desired outcomes. The success of such programs in achieving “significant emission reductions” demonstrates their potential effectiveness, though they depend on robust monitoring and enforcement to ensure integrity.
| Government Sector | Example Output Measure | Example Outcome Measure | Key Data Sources |
|---|---|---|---|
| Education | Number of students completing federal job training programs; Number of teachers receiving professional development | Percentage of job training graduates securing employment within 6 months; Improvement in student achievement scores | Department of Education, National Center for Education Statistics, Performance.gov |
| Healthcare | Number of children vaccinated through federally supported programs; Number of hospitals implementing new patient safety protocols | Reduction in vaccine-preventable diseases; Lower rates of hospital-acquired infections | Department of Health & Human Services, Centers for Medicare & Medicaid Services, AHRQ |
| Public Safety | Number of law enforcement officers receiving federally funded crisis intervention training; Number of community safety workshops conducted | Reduction in use-of-force incidents by trained officers; Increased citizen-reported feelings of safety in targeted communities | Department of Justice, FBI Crime Data, Department of Homeland Security |
| Transportation | Miles of interstate highway resurfaced with federal funds; Number of airports receiving federal grants for runway safety improvements | Reduction in highway fatality rates on resurfaced sections; Decrease in runway incursions at improved airports | Department of Transportation, FHWA, FAA, NHTSA |
| Environmental Protection | Number of EPA inspections of industrial facilities; Number of Superfund sites where cleanup construction is completed | Reduction in illegal pollutant discharges from inspected facilities; Number of Superfund sites where human exposure to contaminants is controlled | Environmental Protection Agency, EPA Superfund Performance Measures |
How Government Actions Connect to Real Change
The Results Chain
Government programs aren’t designed randomly. They’re based on a “theory of change” – assumptions about how specific actions will lead to specific results. This theory maps out as a results chain or logic model showing a sequence: inputs (resources like funding and staff) support activities, which produce outputs, which then lead to outcomes.
The Administration for Children and Families explains that logic models “clearly and concisely show how interventions affect behavior and achieve a goal” and provide a “visual way to illustrate the resources or inputs required to implement a program, the activities and outputs of a program, and the desired program outcomes.”
The Institute of Education Sciences emphasizes that logic models show “how each component will influence another to attain the intended outcomes.”
The “if-then” logic in these models represents assumptions made by program designers. For example: IF government provides funding for job training programs, AND training providers conduct workshops for unemployed individuals, THEN individuals will complete training. IF individuals complete training, THEN their job skills will improve. IF job skills improve, THEN they’ll find and retain employment, leading to reduced poverty and increased economic self-sufficiency.
Effective government performance management involves testing whether these “if-then” links actually hold true. If data shows individuals complete training but their skills don’t improve, or improved skills don’t translate into better employment rates, the program’s theory of change is flawed and needs re-examination.
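Testing those “if-then” links can be sketched in code. The following is a minimal, hypothetical illustration – the stage names, counts, and the `link_conversion` helper are invented for this sketch, not drawn from any real program’s data – showing how each link in a results chain can be checked as a conversion rate, so a weak link flags a flawed assumption:

```python
# Hypothetical results chain for a job-training program.
# All stage names and counts are invented for illustration.
results_chain = [
    "outputs",               # participants completing training
    "intermediate_outcome",  # participants with improved skill scores
    "long_term_outcome",     # participants employed six months later
]

observed = {
    "outputs": 500,
    "intermediate_outcome": 430,
    "long_term_outcome": 220,
}

def link_conversion(counts, from_stage, to_stage):
    """Share of people carried from one link of the chain to the next."""
    return counts[to_stage] / counts[from_stage]

# Check each "if-then" link in sequence; a low rate means the
# program's theory of change breaks down at that step.
for from_stage, to_stage in zip(results_chain, results_chain[1:]):
    rate = link_conversion(observed, from_stage, to_stage)
    print(f"{from_stage} -> {to_stage}: {rate:.0%}")
```

In this invented example, training converts to improved skills at 86% but improved skills convert to employment at only 51% – pointing evaluators to the second link, not the training itself, as the place to re-examine the theory of change.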
It’s Not Always a Straight Line
The journey from output to outcome is rarely simple or guaranteed. Many external factors influence whether government programs achieve desired outcomes, including economic conditions, societal trends, actions of other organizations, unexpected events, and complex individual and community behaviors.
Government outcomes are often “systemically complex,” with agencies frequently relying on “collaboration with other public, private and nonprofit sector organisations for an outcome to be achieved.” This complexity makes “isolating the specific impact of a policy from other contributing factors” a major challenge.
The Government Accountability Office acknowledges this by noting that impact evaluation is employed “when external factors are known to influence the program’s outcomes, in order to isolate the program’s contribution to achievement of its objectives.”
External factors mean government agencies can’t always be solely credited for positive outcomes, nor blamed when desired outcomes aren’t achieved. A well-designed program might fall short due to economic downturns or counteracting efforts by other entities. Conversely, positive outcomes might be partially attributable to favorable external conditions.
This reality complicates direct accountability based solely on outcome achievement. However, this complexity doesn’t negate the importance of striving for outcomes. It highlights the need for agencies to be highly adaptable, continuously monitor their environment, and engage in robust collaboration with other relevant actors.
The Challenges of Measuring What Really Matters
Defining Complex Outcomes
Many important government goals are inherently complex and intangible. Desired outcomes like “improved public trust,” “enhanced national security,” “greater social equity,” or “increased community well-being” are difficult to define in precise, measurable terms.
“Government outcomes are often intangible. Most government agencies exist to influence social stability, justice or welfare; not to produce widgets.” Policies may have multiple, sometimes conflicting goals or vaguely stated intended outcomes, making it challenging to determine appropriate metrics and data sources.
This challenge often leads agencies to use “proxy measures” – more tangible indicators believed to correlate with desired intangible outcomes. Instead of directly measuring “community well-being,” an agency might use park usage rates, volunteerism levels, community satisfaction surveys, or local crime statistics.
The choice and validity of proxies are critical. A proxy isn’t the outcome itself, and its relationship to the true underlying concept must be carefully considered. Park usage might increase while overall community well-being declines due to economic stress or social fragmentation. Proxy selection can sometimes be driven more by data availability than conceptual strength, potentially creating distorted pictures of true outcomes.
Data Collection Challenges
Even with clearly defined outcomes, practical data collection presents significant challenges. Gathering reliable, accurate, and consistent outcome data can be costly, time-consuming, and require specialized expertise.
Key questions agencies face include: Is necessary data currently being collected? If not, what new data is needed, how will it be collected, and what are the associated costs? Is the benefit of collecting new data sufficient to justify added expenditure and effort?
In many government areas, existing data systems are inadequate for robust outcome measurement. Criminal justice sector reporting on critical outcomes like recidivism is often “inconsistent at best,” with many states issuing only periodic reports or lacking the ability to compare current performance with past trends.
Data may be incomplete, inconsistent across sources, or not originally designed for research purposes, requiring significant effort to clean, harmonize, and validate before reliable use. USAFacts.org, dedicated to making government data accessible, notes that “limited or deficient data makes it difficult to address key issues” facing the nation.
These limitations can perpetuate focus on outputs. If high-quality outcome data is difficult or expensive to obtain, agencies may default to measuring what they can measure easily and dependably – which is often outputs. This creates a challenging cycle where lack of good outcome data reinforces output-centric approaches to performance management.
The Attribution Problem
Perhaps the most persistent challenge in outcome measurement is attribution: proving that specific government programs directly caused particular outcomes, especially broad societal changes influenced by many factors.
The Federal Highway Administration notes regarding safety programs: “Outcome evaluation is challenging because the link between SHSP implementation and crash reduction is indirect. Causality is difficult to establish because scientific evaluation conditions and controlled studies are simply not possible in the transportation safety field.”
Government departments often “struggle to isolate their contribution toward outcomes they can only partially influence” because they operate within complex systems involving multiple actors and forces. Many confounding variables and external factors can influence observed results, making it hard to pinpoint specific policy or program impacts.
Program evaluation methodologies, particularly “impact evaluation,” attempt to address this challenge. The GAO defines impact evaluation as assessing “the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program.”
However, conducting rigorous impact evaluations can be expensive, technically complex, and not always feasible for every program. In many government performance reports, “evidence of influence” or “demonstration of contribution” often replaces definitive “proof of causation.”
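The GAO’s definition – comparing program outcomes with an estimate of what would have happened without the program – can be made concrete with a difference-in-differences sketch, one common way evaluators estimate that counterfactual. All figures below are invented for illustration, not real program data:

```python
# Minimal difference-in-differences sketch of impact evaluation.
# The crash-rate figures are hypothetical, chosen only to illustrate the arithmetic.

def did_estimate(treated_before, treated_after, comparison_before, comparison_after):
    """Net program effect: the change in the treated group minus the
    change in a comparison group (the counterfactual estimate)."""
    treated_change = treated_after - treated_before
    comparison_change = comparison_after - comparison_before
    return treated_change - comparison_change

# Hypothetical crash rates (per 100M vehicle-miles) before and after
# a safety program, for corridors in the program vs. similar corridors outside it.
effect = did_estimate(
    treated_before=1.8, treated_after=1.2,
    comparison_before=1.7, comparison_after=1.5,
)
print(f"estimated net effect: {effect:+.1f}")  # -0.4
```

The treated corridors improved by 0.6, but similar untreated corridors also improved by 0.2 – so only 0.4 of the improvement is attributed to the program. This is exactly the attribution problem in miniature: without the comparison group, the program would be credited with the full 0.6.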
Time Lags in Seeing Results
Many significant and desirable government outcomes don’t materialize quickly. Improvements in public health, higher educational attainment leading to better economic opportunities, ecosystem restoration, or sustained poverty reduction often become apparent only over extended periods, sometimes years or decades after program implementation.
“Government outcomes are often long-term… Some outcomes can take many years to change.” This “response lag” is well-recognized. The full economic and social benefits of major early childhood education investments may not be seen until those children reach adulthood and enter the workforce.
This inherent time lag poses significant challenges for performance measurement and accountability. Political and budget cycles typically operate on much shorter timeframes than many important outcomes require. This creates a fundamental mismatch: the time horizon for important outcomes extends far beyond the typical tenure of elected officials or the duration of specific budget allocations.
This can pressure agencies and political leaders to demonstrate quick, visible results, often meaning focus on outputs or very short-term outcomes, potentially at the expense of investing in long-term strategies that yield more significant but delayed societal benefits.
Making Measurement Work Better
Best Practices for Government Performance
Organizations and experts have outlined principles for effective public sector performance measurement. These best practices ensure measures are meaningful, reliable, and useful for decision-making and accountability.
The Washington State Office of Financial Management, citing the Governmental Accounting Standards Board, highlights that good performance measures should be:
- Relevant: Clearly relate to the activity being measured and matter to the intended audience
- Understandable: Clear and easy to comprehend
- Timely: Available soon enough to be useful for decision-making
- Comparable: Allow comparisons over time or with similar entities
- Reliable: Accurate and consistent
- Cost-effective: Benefits justify collection and reporting costs
The U.S. Office of Personnel Management adds that performance standards should be “objective, measurable, realistic, and stated clearly in writing.”
The Government Finance Officers Association offers comprehensive best practices, including:
- Focus on outcomes, not just outputs
- Use a variety of data sources for accuracy and completeness
- Use metrics with time series to track performance over time
- Communicate performance internally and externally
- Ensure measures are Useful, Relevant, Adequate, Collectible, Consistent, and consider the Environment
- Clearly identify Responsibility for data collection, storage, and dissemination
Other experts emphasize:
- Establishing Clear Objectives and Metrics: Define specific, measurable, achievable, relevant, and time-bound goals
- Adopting Mixed Methods: Utilize both quantitative and qualitative data for comprehensive understanding
- Ensuring Transparency and Accountability: Maintain thorough documentation, engage diverse stakeholders, consider peer review
- Timeliness and Continuous Feedback: Integrate ongoing monitoring and use feedback loops for iterative learning
- Keeping Purpose Scoped: Avoid “vanity metrics” by focusing on indicators that directly impact outcomes
A common thread is the critical importance of communication and collaboration around performance measures. Effective performance measurement isn’t solely technical data collection and analysis. It’s also a social and political process requiring buy-in, shared understanding, and ongoing dialogue among diverse stakeholders.
Federal Oversight and Improvement
Within the federal government, several key entities and legislative frameworks guide efforts to improve performance measurement and ensure agencies focus on achieving results.
The Government Accountability Office, an independent, nonpartisan agency working for Congress, plays a crucial role. GAO’s mission includes helping to “improve the performance and ensure the accountability of the federal government for the benefit of the American people.” It conducts evaluations of federal programs, identifies high-risk areas, and provides recommendations on making government more efficient, effective, ethical, equitable, and responsive.
GAO has produced extensive guidance on performance measurement and evidence-based policymaking, such as Performance Measurement and Evaluation: Definitions and Relationships and Evidence-Based Policymaking: Practices to Help Manage and Assess Federal Efforts.
The Office of Management and Budget, part of the Executive Office of the President, oversees agency performance and implements performance management legislation. OMB provides detailed instructions to federal agencies through OMB Circular A-11, Part 6, covering strategic planning, performance plans and reports, and performance reviews.
Legislative foundations include the Government Performance and Results Act of 1993 and its 2010 modernization. These laws require agencies to:
- Develop multi-year Strategic Plans with long-term, outcome-oriented goals
- Prepare annual Performance Plans with specific, measurable performance goals linked to strategic objectives and budget
- Produce annual Performance Reports detailing progress made toward goals
The Foundations for Evidence-Based Policymaking Act of 2018 further strengthened efforts by requiring agencies to develop evidence-building plans, designate evaluation officers, and improve data management to support evidence-based decision-making.
The federal government maintains Performance.gov as a central public-facing website providing information on government-wide priorities, including Cross-Agency Priority Goals and agency-specific Agency Priority Goals and Strategic Objectives.
Data Collection and Evaluation Strategies
To effectively measure outcomes and understand program impact, government agencies employ various data collection and evaluation strategies.
Robust data collection systems should facilitate data sharing across different programs and organizations, where legally and ethically appropriate, to provide holistic views of individual or community interactions with government services. Data can be sourced from administrative records, professional expertise and judgment, or directly from individuals through surveys, assessments, exit interviews, and follow-up communications.
The Economic Development Administration uses specific data collection instruments to gather performance data on construction and non-infrastructure grant awards, tracking both outputs and outcomes annually for several years post-award.
Logic models provide foundational tools for planning data collection and evaluation. They illustrate how project activities are expected to lead to desired results, helping clarify assumptions and identify key measurement points.
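To make the idea concrete, a logic model can be pictured as a simple chain of stages, each of which suggests a measurement point. The sketch below uses a hypothetical road-maintenance example (all stage contents are invented for illustration, not drawn from any agency's actual model):

```python
# A minimal sketch of a logic model as a plain data structure.
# The road-maintenance stages and items below are hypothetical.
logic_model = {
    "inputs": ["paving budget", "road crews", "asphalt"],
    "activities": ["inspect roads", "schedule repairs"],
    "outputs": ["potholes filled", "miles resurfaced"],          # what was done
    "outcomes": ["fewer vehicle-damage claims", "safer travel"],  # what changed
}

def measurement_points(model):
    """List where data should be collected: one point per item in each stage."""
    return [(stage, item) for stage, items in model.items() for item in items]

for stage, item in measurement_points(logic_model):
    print(f"{stage}: {item}")
```

Walking the chain left to right makes the assumptions visible: if "miles resurfaced" (an output) is not plausibly linked to "safer travel" (an outcome), the model flags that gap before any data is collected.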
Regular data reviews are key components of performance management, involving leadership and program staff examining performance data to assess progress, diagnose problems, identify opportunities, and make course corrections.
Different types of program evaluation answer different performance questions:
- Process evaluations assess whether programs are implemented as intended and activities delivered efficiently
- Outcome evaluations focus on the extent to which programs achieve intended outcomes and explore how those outcomes are produced
- Impact evaluations determine causal effects – what changes occurred because of the program, distinct from changes that would have happened anyway
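The core logic of an impact evaluation can be shown with arithmetic: compare the change in a group served by the program against the change in a similar group that was not, so that background trends are netted out (a simple difference-in-differences comparison). All figures below are invented for illustration:

```python
# Hypothetical illustration of impact-evaluation logic: subtract the change
# that a comparison group experienced anyway from the change in the group
# the program served. All numbers are invented.

def difference_in_differences(served_before, served_after,
                              comparison_before, comparison_after):
    """Estimate a program's effect net of background trends."""
    served_change = served_after - served_before
    background_change = comparison_after - comparison_before
    return served_change - background_change

# e.g., food-poisoning cases per 10,000 residents
effect = difference_in_differences(
    served_before=12.0, served_after=8.0,          # districts with inspections
    comparison_before=12.5, comparison_after=11.0, # similar districts without
)
print(effect)  # a negative value: cases fell more where the program operated
```

The key insight is in the subtraction: cases fell everywhere, so crediting the program with the full drop in the served districts would overstate its impact.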
Outcome monitoring systems track key program outcomes over time, allowing detection of trends, identification of lagging performance areas, and assessment of improvement or deterioration.
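A minimal sketch of that monitoring idea, assuming hypothetical outcome series and a deliberately simple "wrong direction" check (real systems would use longer histories and statistical tests):

```python
# Tiny sketch of outcome monitoring: flag any outcome series whose latest
# value moved in the wrong direction. Series names and data are hypothetical.
outcomes = {
    "high_school_graduation_rate": [82.1, 83.0, 83.4, 84.2],  # higher is better
    "avg_emergency_response_min": [7.2, 7.5, 7.9, 8.3],       # lower is better
}
higher_is_better = {
    "high_school_graduation_rate": True,
    "avg_emergency_response_min": False,
}

def lagging(series, better_high):
    """True if the most recent value moved the wrong way versus the prior one."""
    change = series[-1] - series[-2]
    return change < 0 if better_high else change > 0

flags = [name for name, s in outcomes.items()
         if lagging(s, higher_is_better[name])]
print(flags)  # → ['avg_emergency_response_min']
```

Note that each measure needs an explicit direction: a rising number is good news for graduation rates but bad news for response times, which is exactly the kind of context a raw data table omits.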
A critical aspect of effective outcome measurement is the ability to link and share data across different systems and organizations. Many societal challenges are complex and addressed by multiple government programs. Understanding true cumulative outcomes for individuals or families interacting with various services often requires integrating data from different programs.
This is exemplified by calls in criminal justice to assign unique identifiers and link data across agencies to better track recidivism outcomes and intervention effectiveness. However, achieving data interoperability involves technical complexities, differing data standards, bureaucratic silos, and important privacy considerations.
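The mechanics of that linkage can be sketched in a few lines: with a shared unique identifier, records from two agencies join into one view that can answer an outcome question neither dataset answers alone. The agencies, field names, and records below are all hypothetical, and a real system would add the privacy safeguards noted above:

```python
# Hypothetical sketch of linking records across agencies via a shared
# unique identifier (here, corrections and workforce data). All fields
# and records are invented; real linkage must satisfy privacy law.
corrections = {
    "id-001": {"released": "2023-01", "job_training": True},
    "id-002": {"released": "2023-02", "job_training": False},
}
workforce = {
    "id-001": {"employed_within_year": True},
    "id-002": {"employed_within_year": False},
}

# Join on identifiers present in both systems to build one outcome view.
linked = {
    pid: {**corrections[pid], **workforce[pid]}
    for pid in corrections.keys() & workforce.keys()
}
for pid, record in sorted(linked.items()):
    print(pid, record["job_training"], record["employed_within_year"])
```

Without the shared identifier, the corrections agency can only report outputs ("people trained") and the workforce agency only its own counts; the join is what turns those into an outcome ("did training participants find work?").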
Your Guide to Government Performance Information
Where to Find the Data
The federal government provides extensive performance information to the public. Here are key starting points:
Performance.gov serves as the main public-facing website for U.S. government performance management. It provides an “integrated view of agency strategic goals and objectives, and detailed information on each Agency Priority Goal” as well as Cross-Agency Priority Goals. It functions as a “one-stop shop for links to agency performance information.”
Agency Websites feature required strategic plans, annual performance plans, and annual performance reports. Federal agencies must post multi-year Strategic Plans, annual Performance Plans, and annual Performance Reports on their websites.
USAFacts.org provides “a data-driven portrait of the American population, US governments’ finances, and governments’ impact on society” by compiling and standardizing publicly available government data from federal, state, and local levels. Their goal is making government data more understandable and accessible.
Government Accountability Office Reports feature numerous evaluations of federal programs and operations, often including agency performance assessments and improvement recommendations.
The sheer volume and technical nature of government performance data can overwhelm many citizens. Centralized, curated portals like Performance.gov and independent initiatives like USAFacts.org play crucial roles in aggregating, standardizing, and presenting complex information in digestible formats.
Questions That Cut Through the Spin
Armed with an understanding of outputs and outcomes, citizens can ask more insightful questions when encountering government performance data in official reports, news articles, or public statements.
Outputs or Outcomes?
- Is this measure describing an activity or product (output), or change or impact (outcome)?
- If outputs are reported, what are the intended outcomes? Is there evidence linking these specific outputs to desired outcomes?
Meaningfulness and Measurement:
- How is this outcome being measured? Is it direct measurement or a proxy?
- If it’s a proxy, how well does it represent the actual outcome we care about?
- Does this outcome measure something that truly matters to the community?
Context and Trends:
- What was baseline performance before this program?
- Is the trend improving, declining, or staying the same over time?
- How does this year’s performance compare to previous years or similar jurisdictions?
Influencing Factors and Attribution:
- What other factors might be influencing this outcome?
- What is the agency doing to address these external factors?
- How confident can we be that this program actually caused the observed change?
Equity and Distribution:
- Who is benefiting from this program?
- Are benefits distributed equitably across different groups?
- Are there disparities in outcomes, and what’s being done to address them?
Instead of accepting a reported output like “X miles of road paved,” an informed citizen might ask, “How has this paving improved average travel times or reduced vehicle accidents on these specific roads compared to last year or similar roads that were not paved?”
Your Role in Better Government
Understanding the distinction between outputs and outcomes, knowing where to find performance information, and being equipped to ask critical questions empowers citizens to play more active and effective roles in governance.
The U.S. Open Government initiative states, “Empowering informed citizens to actively engage with their government ensures it remains a government of, by, and for the people.”
Access to understandable government performance information is a precondition for citizens to hold decision-makers accountable and participate effectively in democratic processes. When citizens can discern whether government efforts are producing outputs or achieving meaningful outcomes, they can:
- Better interpret government actions and priorities
- Ask more informed questions of elected officials, candidates, and public servants
- Advocate more effectively for policies and programs likely to deliver real results
- Contribute to public discourse focused on impact and effectiveness rather than just activity or spending
- Support initiatives promoting transparency and robust performance measurement
The journey toward more outcome-focused government isn’t solely top-down administrative reform driven by laws or initiatives. It’s also powerfully propelled by bottom-up citizen demand for meaningful results and genuine accountability.
An educated citizenry that understands the critical difference between outputs and outcomes, and consistently demands evidence of positive impact, serves as a vital catalyst for better governance. By seeking out and critically engaging with performance information, every individual can contribute to making government more accessible, responsive, and effective in serving the public good.
Our articles make government information more accessible. Please consult a qualified professional for financial, legal, or health advice specific to your circumstances.