On the first Friday of the month, the U.S. Bureau of Labor Statistics typically releases its Employment Situation Summary. The report contains the headline “jobs number” that is immediately dissected by financial markets, policymakers, and the public.
Over the following weeks, months, and years, these numbers get revised through a series of planned updates. Revisions sometimes confuse people or create suspicion that the initial numbers were wrong or manipulated.
But to anyone familiar with this process, the revisions aren’t mistakes. They’re a deliberate, transparent part of a statistical process designed to balance two competing goals: providing timely economic information while ensuring the greatest possible accuracy.
The initial release is a first draft of economic history—a quick snapshot based on incomplete data. Later revisions add detail and clarity, creating a progressively clearer picture of the U.S. labor market.
This process has four primary layers: monthly updates from additional survey responses, comprehensive annual benchmark revisions, routine seasonal adjustments, and periodic updates to population estimates.
How the Jobs Report Works: Two Different Surveys
The monthly jobs report combines data from two distinct, large-scale surveys. Each has its own methodology, scope, and purpose. While their different approaches can occasionally produce conflicting short-term results, this system provides a more robust picture than any single source could.
One survey might show strong growth in company payrolls while the other reveals weakness in self-employment. This gives policymakers a more complete view of economic currents.
The Establishment Survey: Counting Jobs
The number most frequently cited in headlines—the change in nonfarm payroll employment—comes from the Current Employment Statistics (CES) survey, often called the “establishment” or “payroll” survey.
The CES is a monthly survey conducted as a federal-state cooperative program. It gathers data from approximately 121,000 businesses and government agencies, representing about 631,000 individual worksites across the country.
The survey estimates the number of nonfarm jobs on business and government payrolls. It also collects data on average weekly hours worked and average hourly earnings. Because it counts jobs at the “place of work,” it reflects employment within a specific geographic area, regardless of where workers live.
The CES includes any full-time or part-time employees who received pay for any part of the pay period that includes the 12th of the month. A person holding two jobs at two different companies gets counted twice.
The survey excludes several categories of workers: the self-employed, unpaid family workers, agricultural workers, and private household employees like nannies or housekeepers.
The Household Survey: Counting People
The official unemployment rate comes from the Current Population Survey (CPS), also known as the “household” survey.
The CPS is a monthly survey of about 60,000 eligible U.S. households, scientifically selected to represent the entire country. The U.S. Census Bureau conducts it on behalf of the BLS.
The survey gathers data on the labor force status of the civilian noninstitutional population aged 16 and over. Based on their activities during a specific reference week, individuals get classified as employed, unemployed, or not in the labor force. These classifications are used to calculate the unemployment rate, the labor force participation rate, and the employment-population ratio.
The CPS counts people, not jobs. An individual with multiple jobs gets counted only once as “employed.” Its scope is broader than that of the CES, including groups the establishment survey excludes: self-employed individuals, agricultural workers, and unpaid family workers who put in at least 15 hours a week in a family business.
The survey provides demographic data, allowing analysis of employment trends by age, sex, race, and ethnicity. It excludes individuals in the Armed Forces and those in institutions like prisons or nursing homes.
| Characteristic | Current Employment Statistics (CES) | Current Population Survey (CPS) |
|---|---|---|
| Common Name | “The Payroll Survey” or “Establishment Survey” | “The Household Survey” |
| Source of Data | Survey of ~121,000 businesses & government agencies | Survey of ~60,000 households |
| Key Metric Produced | Change in Nonfarm Payroll Employment (“Jobs Added/Lost”) | The Unemployment Rate |
| Unit of Measurement | Counts Jobs | Counts People |
| Multiple Jobholders | Counted for each job held | Counted only once as “employed” |
| Key Exclusions | Self-employed, farm workers, private household workers | Active-duty military personnel, institutionalized persons |
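
The jobs-versus-people distinction is easier to see with a toy example. The Python sketch below uses entirely invented records and is not how either survey is tabulated; it simply shows why one worker with two payroll jobs adds two to a CES-style count but only one to a CPS-style count.

```python
# Invented records for illustration: (worker, payroll employer).
payroll_jobs = [
    ("Ana", "Acme Manufacturing"),
    ("Ana", "Weekend Cafe"),        # Ana's second job at a different employer
    ("Ben", "Acme Manufacturing"),
    ("Cruz", "City Library"),
]

# CES-style count: one tally per payroll job, so Ana is counted twice.
ces_job_count = len(payroll_jobs)

# CPS-style count: one tally per employed person, however many jobs they hold.
cps_employed_count = len({worker for worker, _ in payroll_jobs})

print(f"Jobs on payrolls (CES-style): {ces_job_count}")       # 4
print(f"Employed people (CPS-style): {cps_employed_count}")   # 3

# A self-employed worker would raise the CPS-style count but never the
# CES-style count, since self-employment is outside the payroll survey's scope.
```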
Monthly Revisions: Why First Numbers Change
The most frequent and publicly visible revisions are monthly updates to the CES payroll numbers. These occur in the two months following the initial release due to practical realities of data collection.
The Speed Problem
Policymakers, financial markets, and the public want the earliest possible information on the economy’s state. To meet this demand, the BLS publishes its first estimate quickly. The Employment Situation report typically comes out on the first Friday of the month, covering data from just a few weeks prior.
This speed comes at a cost. At the time of the first preliminary estimate, the BLS hasn’t received survey responses from all businesses in its sample. Historically, the average collection rate for the initial release is around 73%.
To produce a headline number, the BLS uses available data and assumes that employment trends at businesses that haven’t reported yet are similar to those that have. This makes the first release a quick but lower-resolution snapshot of the job market.
How Late Data Changes Numbers
Over the next two months, BLS staff continue collecting responses from remaining businesses. This additional information gets incorporated into two subsequent releases:
- The Second Preliminary Estimate: Released one month after the initial report
- The Third and Final Sample-Based Estimate: Released two months after the initial report
By the third estimate, the collection rate typically rises to about 95%, providing a much more complete and accurate picture.
Monthly revisions occur when employment data from late-responding firms differs from data provided by initial responders. If businesses that submitted data late had weaker job growth or more layoffs than early responders, the initial jobs number gets revised down. If they had stronger growth, the number gets revised up.
This process is standard and expected, designed to systematically improve data accuracy as more information becomes available.
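
The arithmetic behind these monthly revisions is simpler than the survey machinery that produces them. The sketch below uses invented figures and a deliberately simplified estimator: project the growth rate observed among respondents onto the whole sample, then recompute as late responses arrive. The actual CES estimator uses weighted link relatives by industry, so treat this only as an illustration of why late, weaker responses pull the headline number down.

```python
def estimate_total_change(prior_total_jobs, reported_prior, reported_current):
    """Scale the growth rate observed among respondents up to the whole sample.

    Simplified stand-in for the CES estimator: assume firms that have not yet
    responded changed at the same rate as those that have.
    """
    growth_rate = reported_current / reported_prior
    return round(prior_total_jobs * growth_rate - prior_total_jobs)

# Invented figures for illustration only.
PRIOR_MONTH_TOTAL = 10_000_000   # estimated jobs covered by the sample last month

# First release: roughly 73% of sampled firms have responded.
early_prior, early_current = 7_300_000, 7_315_000    # respondents grew ~0.21%
first_estimate = estimate_total_change(PRIOR_MONTH_TOTAL, early_prior, early_current)

# Third release: roughly 95% have responded, and the late responders were weaker.
full_prior, full_current = 9_500_000, 9_512_000      # respondents grew ~0.13%
final_estimate = estimate_total_change(PRIOR_MONTH_TOTAL, full_prior, full_current)

print(f"First estimate: {first_estimate:+,} jobs")    # about +20,500
print(f"Final estimate: {final_estimate:+,} jobs")    # about +12,600
print(f"Revision:       {final_estimate - first_estimate:+,} jobs")
```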
The Response Rate Problem
A significant challenge facing the BLS is declining survey response rates from businesses over the past decade. This has direct consequences for public perception of the jobs report.
As fewer businesses respond by the initial deadline, the first estimate becomes more reliant on statistical modeling and assumptions about non-responders. This increases the probability of larger differences between the initial estimate and more complete data that arrives later.
These larger revisions, while statistically understandable, are more easily framed in the political arena as “errors” or “incompetence.”
Case Study: May and June 2025 Revisions
A clear example occurred with data for May and June 2025, which saw unusually large downward revisions. As reported in the July 2025 Employment Situation Summary:
May 2025: The initial estimate of 144,000 jobs gained was revised down by 125,000 to a final sample-based estimate of just 19,000 jobs.
June 2025: The initial estimate of 147,000 jobs gained was revised down by 133,000 to a final sample-based estimate of just 14,000 jobs.
Combined, these revisions meant employment growth in May and June was 258,000 jobs lower than first reported. This provided a significantly weaker picture of the economy’s momentum than initial headlines suggested.
Annual Benchmark Revisions: The Gold Standard
While monthly revisions refine data based on a more complete sample, the most comprehensive correction comes from the annual “benchmark” revision. This process moves beyond survey data altogether and aligns estimates with a near-complete count of American jobs.
Beyond Surveys: The QCEW
The foundation of the benchmark revision is the Quarterly Census of Employment and Wages (QCEW) program. Unlike the CES, the QCEW isn’t a sample survey. It’s a comprehensive census derived from administrative unemployment insurance tax records that nearly all employers must file with their state workforce agencies four times a year.
Because it’s based on mandatory tax filings, the QCEW covers approximately 97% of all nonfarm payroll jobs in the United States and is considered the gold standard for employment counts. The main drawback is timeliness: this comprehensive data is only available with a five- to six-month lag.
How Benchmarking Works
Once a year, the BLS undertakes the benchmark revision to re-anchor its monthly CES survey estimates to the more accurate QCEW data.
The process works as follows:
Establish a Benchmark Point: The BLS uses comprehensive QCEW data to establish a highly accurate count of total nonfarm employment for March of the previous year.
Calculate the Revision: This March QCEW count gets compared to the sample-based CES estimate for the same month. The difference between the two is the total benchmark revision. The size of this revision measures the overall accuracy of the monthly survey over the past year.
Adjust the Historical Series: The BLS assumes this total error accumulated steadily over the prior year. It uses a “linear wedge-back” procedure to smoothly distribute the revision across the 11 months leading up to the March benchmark. Data for the nine months following the March benchmark also get revised, using the new, more accurate March level as their starting point.
This annual recalibration corrects for any sampling error or modeling error that may have crept into monthly estimates over time.
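
The wedge-back step itself can be sketched in a few lines. In the simplified example below, April of the prior year absorbs 1/12 of the benchmark difference, May 2/12, and so on, with March carrying the full amount. The figures are invented, and the real procedure is applied to detailed industry series before aggregation, so this is a conceptual sketch rather than the BLS implementation.

```python
def wedge_back(monthly_levels, benchmark_revision):
    """Linearly taper a benchmark revision across the months before March.

    monthly_levels: employment levels for April (prior year) through March
                    (benchmark month), 12 values in thousands.
    benchmark_revision: QCEW-based March level minus the CES March estimate.

    April receives 1/12 of the revision, May 2/12, and March the full amount,
    reflecting the assumption that the error accumulated evenly over the year.
    """
    return [
        level + benchmark_revision * (i + 1) / 12
        for i, level in enumerate(monthly_levels)
    ]

# Invented example: a flat series of 158,000 (thousands) and a -818 thousand
# revision, echoing the size of the preliminary March 2024 benchmark revision.
old_levels = [158_000] * 12
revised = wedge_back(old_levels, benchmark_revision=-818)

months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep",
          "Oct", "Nov", "Dec", "Jan", "Feb", "Mar"]
for month, old, new in zip(months, old_levels, revised):
    print(f"{month}: {old:,.0f} -> {new:,.1f}")

# Estimates for the months after March are then rebuilt from the revised March level.
```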
Fixing the Birth-Death Model
A key source of potential error in the monthly CES survey is accounting for the U.S. economy’s dynamism. There’s an unavoidable lag between when a new business opens and when it appears on official lists from which the survey sample gets drawn. Likewise, businesses that close may not be immediately removed from the sample.
To account for this, the BLS uses a statistical model known as the “birth-death model” to estimate the net effect of job creation from new businesses and job loss from closing businesses each month. This model is based on historical data from the QCEW.
The annual benchmark process provides a vital check on this model’s performance. By comparing the model’s estimates to actual job changes captured in comprehensive QCEW data, the BLS can measure any error and recalibrate the model for the upcoming year.
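
In rough terms, each month’s published change is the sample-based change plus the model’s net birth-death adjustment, and the benchmark comparison later reveals how far that adjustment drifted from reality. The sketch below uses invented numbers purely to illustrate that accounting relationship; the real model is an industry-level time-series forecast built from QCEW history.

```python
# Invented numbers; illustrates the accounting, not the actual BLS model.

def published_change(sample_based_change: int, birth_death_adjustment: int) -> int:
    """Monthly estimate = change measured in the sample + modeled net jobs from
    business openings and closings not yet visible in the sample frame."""
    return sample_based_change + birth_death_adjustment

# In a stable expansion, a historically calibrated adjustment works well.
print(published_change(sample_based_change=60_000, birth_death_adjustment=90_000))

# At a turning point, actual net business births may fall well short of the
# model's projection; the QCEW benchmark later measures and corrects the gap.
projected_adjustment = 90_000
actual_net_births = 20_000      # what tax records eventually show (invented)
print(f"Overstatement corrected at benchmark: "
      f"{projected_adjustment - actual_net_births:,} jobs")
```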
Economic Turning Points
The size and direction of the annual benchmark revision can serve as a powerful indicator of major turning points in the economy that monthly survey models failed to capture in real time.
The birth-death model relies on historical trends to project future job gains from new firms. Economic inflection points—like the beginning of a recession or start of a robust recovery—represent a sharp break from past trends. During such periods, the model is most likely to be inaccurate.
As an economy enters a downturn, the model may continue projecting job creation based on recent good times, even as firm creation slows and closures accelerate. This leads to systematic overestimation of jobs in monthly reports. The QCEW data, based on actual tax filings, eventually reveals this overestimation months later.
The result is often a large, negative benchmark revision that corrects the historical record. A prime example was the revision for March 2009, during the Great Recession, which retroactively erased 902,000 jobs from initial estimates. Similarly, the large preliminary downward revision announced for March 2024 was consistent with a period of slowing employment growth that monthly models hadn’t fully grasped.
This makes the benchmark revision a crucial economic signal in its own right.
The Revision Schedule
The benchmark revision process follows a predictable annual schedule:
Preliminary Announcement: In late August or early September, the BLS releases a “preliminary benchmark revision” announcement. This gives data users a first look at the likely size and direction of the upcoming revision for the previous March.
Final Release: The final, official benchmark revision gets incorporated into the full historical dataset and released in early February of the following year, concurrent with publication of the January Employment Situation report.
The table below tracks how the estimates for a single month, March 2024, evolved over time, from the initial over-the-month change to the final benchmarked level:
| Date | Event | Data Point | What’s Happening |
|---|---|---|---|
| Early April 2024 | Initial Release | Example: +250,000 jobs reported for March | First look based on ~73% of survey responses |
| Early May 2024 | Second Estimate | Example: Revised to +230,000 jobs | More late survey responses are incorporated |
| Early June 2024 | Final Sample-Based Estimate | Example: Revised to +235,000 jobs | Nearly all (~95%) survey responses are included |
| August 21, 2024 | Preliminary Benchmark Announcement | Preliminary revision for March 2024 is -818,000 | First look at how survey data compares to QCEW tax records |
| February 2025 | Final Benchmark Release | March 2024 level revised down by a final 598,000 | Final alignment with QCEW counts. All data from April 2023 to Dec 2024 is revised |
Statistical Adjustments Behind the Scenes
Beyond revisions driven by new data from surveys and tax records, two other important statistical processes occur behind the scenes. These adjustments refine both CES and CPS data and lead to regular data revisions.
Seasonal Adjustment
Many economic data series exhibit predictable patterns tied to the time of year. Retailers hire for winter holidays, construction activity slows in cold weather, and school employment drops in summer. These regular fluctuations are seasonal effects.
Seasonal adjustment is a statistical technique used to estimate and remove these predictable patterns from data. By filtering out seasonal noise, analysts can more clearly see underlying cyclical and long-term trends in the economy.
This process leads to revisions for two main reasons:
Evolving Patterns: Seasonal patterns aren’t perfectly static; they can change over time. To capture these shifts, the BLS re-estimates its seasonal adjustment factors at the end of each calendar year, feeding a full new year of data into models such as the widely used X-13ARIMA-SEATS program. This annual recalculation leads to revisions in seasonally adjusted data for the previous five years. The revisions are typically largest for the most recent year’s data, because the initial seasonal factors for that year were based on projections.
Handling Major Shocks: Seasonal adjustment models are built to handle normal fluctuations. Unprecedented events, like the economic shutdown caused by the COVID-19 pandemic in 2020, create massive outliers that aren’t seasonal in nature. To prevent these shocks from distorting seasonal factors, the BLS had to make special, real-time modifications to its models. These interventions were then reviewed and refined in subsequent annual revisions as more data became available.
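
To make “removing a predictable seasonal pattern” concrete, here is a deliberately crude sketch with made-up data: it estimates each calendar month’s average deviation from its year’s mean and subtracts it. X-13ARIMA-SEATS models trend, outliers, and evolving seasonal factors far more carefully, so this shows only the basic idea.

```python
# Toy seasonal adjustment with invented data. A recurring December hiring bump
# is estimated from history and removed, revealing the underlying trend.

data = {
    2022: [100, 101, 102, 103, 104, 103, 102, 102, 104, 105, 107, 112],
    2023: [104, 105, 106, 107, 108, 107, 106, 106, 108, 109, 111, 116],
    2024: [108, 109, 110, 111, 112, 111, 110, 110, 112, 113, 115, 120],
}

# Seasonal factor per calendar month: average deviation from that year's mean,
# averaged across years.
seasonal_factor = []
for m in range(12):
    deviations = [values[m] - sum(values) / 12 for values in data.values()]
    seasonal_factor.append(sum(deviations) / len(deviations))

# Seasonally adjust the most recent year.
adjusted_2024 = [value - seasonal_factor[m] for m, value in enumerate(data[2024])]
print([round(x, 1) for x in adjusted_2024])
# The December spike largely disappears once the seasonal factor is removed.
```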
Population Control Adjustments
Data from the household survey (CPS) is based on a sample of about 60,000 households. To ensure results are representative of the entire country, the data must be weighted to match independent estimates of the U.S. population. These official estimates, known as population controls, are produced by the U.S. Census Bureau.
Each year, the Census Bureau updates its population estimates to reflect the latest information on births, deaths, and net international migration. With the release of the January jobs report each year, the BLS incorporates these new, updated population controls into the CPS weighting process.
This annual update can cause a significant one-time adjustment, or “break,” in the data series for the total size of the labor force, number of employed people, and number of unemployed people. This adjustment represents the cumulative correction for any under- or over-estimation of population growth since the last decennial census.
This process highlights a critical distinction between economic levels and economic rates. A large population control adjustment can dramatically change the levels—the raw number of people in the labor force or employed—while having minimal impact on rates that economists typically use to gauge labor market health, such as the unemployment rate or labor force participation rate.
This is because these rates are ratios. The unemployment rate is the number of unemployed people divided by the total labor force. When population controls are updated, both the numerator and denominator of this ratio are adjusted proportionally, based on characteristics of the newly accounted-for population.
This can lead to a large change in raw numbers but only a small change in the final rate. This disconnect is a major source of public confusion and can be exploited to create misleading political narratives about the data.
Case Study: January 2025 Population Adjustment
A powerful example occurred with data for January 2025. The BLS incorporated new population controls from the Census Bureau that reflected, among other things, higher-than-previously-estimated levels of net international migration.
The effects were substantial:
- The estimated civilian noninstitutional population (age 16+) increased by 2.9 million people
- The estimated size of the civilian labor force increased by 2.1 million people
- The level of employment increased by 2.0 million, and unemployment increased by 105,000
These are massive changes in data levels. However, the impact on key rates was negligible. The labor force participation rate, employment-population ratio, and unemployment rate each changed by only 0.1 percentage point.
This demonstrates that while the estimate for total U.S. population size was significantly updated, the underlying health of the labor market as measured by the survey’s ratios remained essentially unchanged by the adjustment. As is standard practice, the BLS didn’t revise historical data for prior months, meaning January 2025 data isn’t directly comparable to data from December 2024 and earlier.
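
A back-of-the-envelope calculation shows why levels can jump by millions while the rates barely move. The base figures below (a labor force of about 168 million and roughly 6.9 million unemployed) are assumed round numbers used only for illustration; the adjustment amounts are the ones reported with the January 2025 population controls.

```python
# Stylized arithmetic: large level shifts, tiny rate change.
# Base levels are assumed; the adjustments are the reported January 2025 amounts.

labor_force = 168_000_000      # assumed pre-adjustment labor force
unemployed = 6_900_000         # assumed pre-adjustment unemployed

rate_before = unemployed / labor_force

# Apply the population-control adjustment.
labor_force_after = labor_force + 2_100_000
unemployed_after = unemployed + 105_000

rate_after = unemployed_after / labor_force_after

print(f"Unemployment rate before: {rate_before:.2%}")   # ~4.11%
print(f"Unemployment rate after:  {rate_after:.2%}")    # ~4.12%
print(f"Change: {(rate_after - rate_before) * 100:.2f} percentage points")
```

Because the adjustment raises both the numerator and the denominator, the ratio is nearly unchanged even though the level of the labor force jumps by more than two million in this stylized example.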
When Data Meets Politics
While data revisions are a standard and necessary part of producing accurate economic statistics, they have recently become a flashpoint for political controversy. The size and direction of some recent revisions have been used by critics to question the competence and even integrity of the Bureau of Labor Statistics.
The Accusation: Incompetence or Manipulation
The central accusation leveled against the BLS is that its data revisions are evidence of either gross incompetence or deliberate political manipulation. Critics point to large downward revisions as proof that the agency initially overstated job growth to benefit the incumbent political administration.
This narrative gained significant traction following several events:
The March 2024 Benchmark Revision: The BLS announced a preliminary benchmark revision for the year ending in March 2024 that revised employment down by 818,000 jobs, one of the largest such revisions on record. This was cited as primary evidence that job growth had been systematically exaggerated.
Large Monthly Revisions: The significant downward revisions for May and June 2025, totaling 258,000 fewer jobs, further fueled these claims.
The Firing of the BLS Commissioner: The controversy reached its peak in August 2025 when President Donald Trump fired BLS Commissioner Erika McEntarfer, a Ph.D. economist with a long career in federal service who had been confirmed with bipartisan support. The firing came hours after release of a weaker-than-expected July jobs report that included large downward revisions for May and June.
President Trump explicitly accused the commissioner of having “faked the Jobs Numbers” for political purposes, stating, “Important numbers like this must be fair and accurate, they can’t be manipulated for political purposes.”
The Defense: A Hallmark of Integrity
In response to these accusations, a broad coalition of economists, former government officials from both parties, and non-partisan organizations mounted a vigorous defense of the BLS and its processes. Their core argument is that the act of revision itself is a hallmark of the agency’s integrity and commitment to accuracy, not a sign of failure.
The defense rests on several key points:
Institutional Safeguards: The process for producing the jobs report is designed with numerous safeguards to prevent political interference. The data is collected by thousands of career civil servants at the BLS and Census Bureau and is kept under strict lock and key until its official release. The BLS Commissioner doesn’t see final numbers until just a day or two before publication, making it virtually impossible for them to personally alter the outcome.
William Beach, who served as BLS Commissioner under President Trump, stated unequivocally that there was “no way” for a commissioner to rig the figures and that there were “no grounds at all for this firing.”
The Danger of Politicization: Experts warned that attacking and politicizing the work of federal statistical agencies is a dangerous path that undermines public trust in all government data. The non-partisan group Friends of the Bureau of Labor Statistics, which includes two former BLS commissioners, condemned the firing, stating, “When leaders of other nations have politicized economic data, it has destroyed public trust in all official statistics and in government science.”
This sentiment was echoed across the political spectrum, with officials warning that such actions threaten the “gold standard” reputation of U.S. economic data.
Revision as a Sign of Honesty: Far from being a cover-up, the process of revising data to align with more comprehensive sources like the QCEW is an act of transparency. The agency openly publishes its revision schedule and methodology, and the revisions themselves are proof that the BLS is committed to correcting its initial estimates with the best available information, regardless of political implications.
A Collision of Statistical Uncertainty and Political Polarization
A complete picture requires acknowledging that the modern controversy over BLS revisions isn’t simply about politics. It’s the result of a collision between two powerful forces: a genuine increase in statistical uncertainty and an environment of extreme political polarization.
The BLS faces real and difficult methodological challenges in the post-pandemic economy. Declining survey response rates make initial data snapshots inherently less precise. Modeling a dynamic economy with significant structural shifts is more difficult than modeling a stable one, making the birth-death model more prone to error at turning points. These statistical realities mean larger revisions are more likely to occur.
This increased statistical uncertainty is happening within a political environment of intense polarization and declining trust in institutions. In this context, any data that contradicts a preferred political narrative is vulnerable to being labeled as biased or fraudulent.
The large revisions—which are a predictable outcome of the underlying statistical challenges—provide ammunition for these political attacks.
Understanding this dynamic is key to seeing the full picture: the controversy isn’t about the numbers themselves, but about how the inherent uncertainty in economic measurement is being interpreted and exploited in a deeply divided political landscape. The debate over revisions is less about a single “truth” and more about the fundamental challenge of producing objective data in an era of subjective interpretation.
The Real Story Behind the Numbers
The Bureau of Labor Statistics operates one of the most sophisticated and transparent statistical systems in the world. The process of revising employment data represents the careful work of career professionals committed to accuracy over convenience.
These revisions tell a story about the complexity of measuring a roughly $30 trillion economy in real time. They reflect the trade-offs between speed and precision that define modern economic data collection.
When monthly job numbers get revised down by tens of thousands, it doesn’t represent failure—it represents the system working as designed. When the annual benchmark reveals that initial estimates were off by hundreds of thousands of jobs, it demonstrates the value of cross-checking survey data against comprehensive administrative records.
The political attacks on this process threaten something more valuable than any single month’s job report: the integrity of the institutions that help Americans understand their economy. In democracies around the world, the politicization of official statistics has preceded the erosion of public trust in facts themselves.
The complexity of labor market measurement means there will always be uncertainty, revisions, and competing interpretations. What matters is preserving the independence and professionalism of the agencies tasked with collecting this data.
The monthly jobs report will continue to be revised. The annual benchmark will continue to surprise. These features aren’t bugs in the system—they’re evidence that it’s working properly, prioritizing accuracy over political convenience.
Understanding why the numbers change helps citizens navigate an increasingly complex information environment. It provides the context needed to separate legitimate statistical challenges from manufactured political controversies.
The integrity of American economic data depends on public understanding of how it gets made. The revisions that sometimes frustrate observers are actually proof of the system’s commitment to getting the numbers right.