
The U.S. federal government spends more than $100 billion each year on information technology. Yet some of the world’s most advanced artificial intelligence systems are being acquired for just one dollar per agency.

Through the General Services Administration, the federal government has signed deals with OpenAI, Anthropic, and xAI that put cutting-edge generative AI tools in the hands of millions of public servants for a nominal fee.

What These AI Deals Look Like

Generative AI creates new content – text, images, videos, music, and code – based on user prompts. This isn’t a software update. It’s technology that could fundamentally change how government operates, from processing veterans’ benefits to defending against cyberattacks.

The GSA’s “OneGov” initiative makes this rapid adoption possible. The strategy centralizes technology procurement, allowing the government to negotiate government-wide deals instead of agency-by-agency purchasing that has hindered modernization for decades.

The deals came in quick succession. OpenAI announced its $1-per-agency offer first. Anthropic matched it to maintain its position in the federal market. Then Elon Musk’s xAI entered with an even more aggressive offer.

This isn’t about the revenue from a single license. It’s a competition for platform dominance. Companies compete for the opportunity to become the foundational AI platform for the entire U.S. government.

The strategy is classic “land and expand”: get adopted across agencies for a nominal fee, then secure future multi-million dollar contracts for specialized services once the platform is embedded in government workflows.

| AI Provider | Product(s) Offered | Cost per Agency | Agreement Duration | Scope & Key Features |
| --- | --- | --- | --- | --- |
| OpenAI | ChatGPT Enterprise, Advanced Voice Mode | $1 | 1 year | Access for all executive branch agencies; includes training via OpenAI Academy; data firewall for security |
| Anthropic | Claude for Enterprise, Claude for Government | $1 | 1 year | Access for all three branches of government; FedRAMP High certification for sensitive work |
| xAI | Grok 4, Grok 4 Fast | $0.42 | 18 months | Longest duration deal; includes dedicated engineering support for implementation |

Why the Government Wants This

The government’s decision to pursue these deals stems from a national strategy. It’s driven by geopolitical ambition, an urgent need to modernize outdated public infrastructure, and a streamlined approach to acquiring technology.

The Global AI Race

The primary force behind these agreements is the White House’s America’s AI Action Plan. This blueprint frames the development and deployment of artificial intelligence as a national security and economic imperative.

The policy assumes the world is in a global AI race, and the nation that achieves dominance will set global standards and reap significant security and economic benefits. The policy aims to ensure the United States leads in this competition against global competitors, particularly China.

GSA Acting Administrator Michael Rigas made this clear when he stated that the government’s “effective use of AI is critical to demonstrating we are the world’s AI leader.” The rapid adoption of AI is seen as an instrument of foreign policy and national power.

Traditional government procurement is notoriously slow and risk-averse, often taking years to finalize major contracts. The administration views the risk of falling behind technologically as greater than the procurement risks associated with these deals.

Fixing Legacy Systems

The federal government’s IT infrastructure is sprawling, complex, and often outdated. Annual spending exceeds $100 billion. Many agencies rely on legacy systems that are decades old, inefficient, and difficult to maintain.

AI is viewed as a way to leapfrog these challenges and fundamentally modernize government operations.

The potential for efficiency gains isn’t theoretical. A pilot program in Pennsylvania found that state employees using ChatGPT Enterprise saved an average of 95 minutes per day on routine tasks like drafting emails, summarizing documents, and conducting research.

Extrapolated across more than two million federal employees, such time savings could translate into billions of dollars in increased productivity.
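The extrapolation above can be sketched as a back-of-envelope calculation. All inputs other than the 95-minute pilot figure are illustrative assumptions (workforce size, workdays, and hourly cost are round numbers, not figures from the deals):

```python
# Rough estimate of annual productivity value from AI-assisted time savings.
MINUTES_SAVED_PER_DAY = 95      # reported in the Pennsylvania pilot
EMPLOYEES = 2_000_000           # approximate federal civilian workforce (assumed)
WORKDAYS_PER_YEAR = 230         # assumed
LOADED_HOURLY_COST = 40.0       # assumed average fully loaded labor cost, USD

# Hours saved per employee per year, then valued at the assumed hourly cost.
hours_saved_per_employee = MINUTES_SAVED_PER_DAY / 60 * WORKDAYS_PER_YEAR
annual_value = hours_saved_per_employee * LOADED_HOURLY_COST * EMPLOYEES

print(f"~${annual_value / 1e9:.1f} billion per year")  # on the order of tens of billions
```

Even with more conservative assumptions, the result lands comfortably in the billions, which is what makes the efficiency argument so compelling to agency leadership.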

This demonstrated potential has created massive demand within federal agencies. A Government Accountability Office report found that the number of AI use cases in the federal government nearly doubled in a single year. The use of generative AI specifically increased ninefold, from 32 to 282 documented cases between 2023 and 2024.

How OneGov Works

Historically, each federal agency would have to independently navigate the complex process of procuring new technology. OneGov centralizes this purchasing power, allowing the GSA to negotiate government-wide agreements.

GSA Federal Acquisition Service Commissioner Josh Gruenbaum described it as “revolutionizing how the federal government acquires AI technology – delivering unmatched value, accelerating modernization, and opening the door for America’s best AI companies to work at the scale our nation demands.”

The initiative works with the GSA’s Multiple Award Schedule, a long-term government-wide contract vehicle that screens vendors and their products in advance. By adding leading AI companies like OpenAI, Google, and Anthropic to this schedule, the GSA created a streamlined marketplace where any federal, state, or local agency can acquire these tools without starting the procurement process from scratch.

This combination of centralized negotiation and a pre-approved marketplace is the engine that makes the $1 AI deals possible.

Why Tech Companies Accept $1

For multi-billion-dollar AI companies, the immediate revenue is irrelevant. The true value lies in a long-term strategy focused on market capture, policy influence, unique data insights, and reputational gain.

Capturing the Federal Market

The U.S. federal government is the largest single buyer of goods and services on Earth. Securing it as a flagship customer is a monumental business achievement that provides a stable, long-term revenue stream.

Former U.S. Air Force Chief Software Officer Nicolas Chaillan has described this approach as “a lock-in strategy.” The logic is straightforward: once an agency integrates a specific AI platform into its core workflows, trains thousands of employees to use it, and builds new processes around its capabilities, the cost and disruption of switching to a competitor become prohibitively high.


The initial $1 deal is the foot in the door. The real financial return comes from future contract renewals, which Chaillan warns could transform the nominal fee into “a seven-figure renewal with no room to negotiate.”

By embedding their technology deep within the government’s operational fabric, these companies position themselves to become indispensable, long-term partners.

The agreements with companies like OpenAI go beyond simple software access. They include comprehensive training programs through the OpenAI Academy, implementation support from experienced partners like Slalom and Boston Consulting Group, and the creation of dedicated government-user communities.

Similarly, xAI’s deal includes dedicated engineering support to ensure successful adoption.

This package weaves the company’s technology, methodology, and personnel into the culture of government work, making their platform the default standard.

Shaping Policy From Inside

Becoming an embedded partner of the federal government provides an opportunity to influence the future of AI policy and regulation from the inside.

As federal agencies develop standards for AI procurement, security, and ethical use, the companies whose tools are already in use will have a significant voice in shaping those standards.

When a product becomes the de facto platform across government, future requirements and regulations are often written to align with its existing capabilities. This creates a formidable barrier to entry for competitors who may have different technical architectures.

The incumbent company can actively shape the market in its favor.

Learning From Government Data

A critical incentive for these companies is the unique learning opportunity from working with the federal government.

All major providers have publicly committed to creating a “data firewall” – they will not use sensitive government inputs to train their publicly available AI models. However, the partnership provides invaluable insights.

The U.S. government possesses some of the largest, most complex, and most valuable datasets in the world. These span everything from census and economic data to public health, climate, and national security information.

By collaborating directly with federal agencies, AI companies gain a deep understanding of how the government needs to use this data, what its unique challenges are, and what specific capabilities are most critical for its missions.

This knowledge is highly valuable market research that cannot be obtained from the outside. It allows them to develop highly specialized AI models and services tailored specifically for the public sector, such as Anthropic’s Claude for Government, which is certified to handle sensitive data.

The $1 deal functions as a massively scalable, government-funded research and development program.

The Government-Approved Stamp

A partnership with the U.S. government serves as a powerful seal of approval that confers reputational benefits.

To be selected as a government-wide vendor, companies must undergo rigorous security and compliance reviews. Achieving certifications like FedRAMP High or receiving a GSA “authority to use” signals to the global market that a company’s technology meets some of the most stringent standards for data protection and operational security in the world.

This government-approved status is a valuable marketing asset. It can be leveraged to win contracts with other governments – at the state, local, and international levels – as well as with large private sector enterprises in highly regulated industries like finance and healthcare.

The credibility gained from a successful federal deployment can open doors to new markets and significantly accelerate a company’s growth trajectory.

The Potential Benefits

The rapid infusion of advanced AI into the federal government presents immense promise for a more efficient, responsive, and effective public sector.

Faster, More Accessible Services

For the average American, the most direct impact will be felt in the quality and speed of public services. AI-powered systems can transform how citizens interact with federal agencies.

Sophisticated chatbots and virtual assistants can provide 24/7 support for common inquiries, from checking the status of a passport application to getting information about tax filings. This drastically reduces wait times and makes government more accessible.

The applications go far beyond simple Q&A. Federal agencies are already exploring AI for complex, mission-critical tasks.

The Department of Veterans Affairs uses AI to automate the analysis of medical images to enhance diagnostic services for veterans. The Department of Homeland Security has used AI models to enhance old images in investigations of crimes against children, leading to the identification of hundreds of previously unknown victims.

At the Social Security Administration, AI can help expedite the processing of disability benefits claims. The Department of Education can use it to streamline student loan applications, getting critical support to people faster.

In public health, AI can analyze vast datasets to predict disease outbreaks, as the Department of Health and Human Services has done to track poliovirus.

Cost Savings for Taxpayers

Behind the scenes, AI promises to drive significant operational efficiencies that can lead to substantial cost savings.

A core benefit is the automation of repetitive, time-consuming back-office functions like data entry, document processing, and records management. By freeing federal employees from these tasks, AI allows them to focus on higher-value work that requires human judgment, creativity, and empathy.

The potential economic impact is staggering. Boston Consulting Group estimates that agencies can save up to 35% of their budget costs in areas like case processing by adopting AI. Another forecast suggests that productivity gains from generative AI in the U.S. public sector could collectively reach $519 billion by 2033.

These efficiencies mean government can do more with existing resources.

National Security Advantages

In the high-stakes arena of national security, AI is a game-changing technology. The Department of Defense and intelligence agencies are investing billions to integrate AI into their operations to maintain a competitive advantage over strategic rivals.


AI can automate the analysis of vast quantities of intelligence data, such as satellite imagery, far faster than human analysts can. It can enhance cyber defense systems by detecting and responding to threats in real time. It’s also being integrated into advanced weapons systems, such as autonomous drones and naval vessels.

Specific use cases are already demonstrating AI’s impact. U.S. Customs and Border Protection uses machine learning models to analyze border crossing patterns and cargo data to identify and intercept illegal shipments of drugs like fentanyl.

The National Security Agency has established a dedicated Artificial Intelligence Security Center to defend the nation’s AI systems from adversarial attacks.

The Serious Risks

Experts have identified serious concerns about the long-term consequences of this rapid, large-scale AI deployment. The risks span economic, security, and ethical domains.

Vendor Lock-In

A significant concern is the danger of vendor lock-in. By offering their platforms for a nominal fee, large, well-funded AI companies can effectively corner the massive federal market.

This practice disadvantages smaller, innovative firms that cannot afford to give their products away. It stifles competition and potentially leads to a less diverse and less resilient technology ecosystem for the government.

Legal analysts have questioned whether these deals comply with the spirit of federal procurement laws. The Federal Acquisition Regulation generally requires that payment terms for commercial products be “appropriate or customary in the commercial marketplace.”

A $1 price for an enterprise-grade AI platform is anything but customary.

Data Security Threats

Placing the government’s vast and sensitive data into the hands of a few private technology platforms creates profound security risks.

Cybersecurity experts warn that consolidating massive amounts of data from across different agencies creates a “bigger target for adversarial hackers.” A single breach of one of these central platforms could be catastrophic, exposing the personal information of millions of Americans.

There’s also the risk of data leakage, where an AI model might inadvertently reveal sensitive information it was trained on in response to a cleverly crafted query.

The widespread adoption of AI tools introduces the problem of “Shadow AI” – the unauthorized use of AI applications by employees without official oversight. This practice can create significant security vulnerabilities, as sensitive government information may be entered into unsecured, commercial AI systems.

Algorithmic Bias

A critical risk is algorithmic bias. AI models learn from the data they are trained on. If that data reflects historical societal biases, the AI will learn, replicate, and even amplify those biases at a massive scale.

This isn’t theoretical. It has already occurred in numerous high-stakes contexts.

In the Netherlands, a government AI system designed to detect childcare benefits fraud incorrectly targeted families based on factors like dual nationality and low income. This led to thousands of families being wrongly accused of fraud, plunging them into debt. In some cases, children were removed from their homes.

In the U.S. criminal justice system, an algorithm called COMPAS, used to predict the likelihood of a defendant reoffending, was found to be biased against Black defendants. It incorrectly flagged them as high-risk at nearly twice the rate of white defendants with similar histories.

Amazon had to scrap an AI recruiting tool after discovering it had learned to penalize resumes that included the word “women’s” (such as “women’s chess club captain”). It was trained on a decade of resumes submitted by mostly male engineers.

When applied to government services, the consequences of such biases could be devastating. An algorithm used to determine eligibility for housing assistance, small business loans, or medical care could systematically deny services to certain demographic groups.

Job Displacement

The widespread automation enabled by AI raises concerns about the impact on the federal workforce. Roles heavily reliant on routine, administrative tasks – such as data entry, customer service, and some programming and legal assistant roles – are at high risk of being displaced by AI systems.

Goldman Sachs Research estimates that 6-7% of the U.S. workforce could be displaced by AI if widely adopted.

The outlook is nuanced. While some jobs will be eliminated, AI is also expected to augment human capabilities and create new roles that require skills in managing, overseeing, and collaborating with AI systems.

The key challenge for the government will be to invest in robust upskilling and retraining programs to help the federal workforce adapt to this technological shift.

The Governance Deficit

These concerns point to a larger, systemic challenge. The government’s rush to adopt AI, driven by geopolitical urgency, is happening far faster than its ability to develop necessary guardrails.

This creates a “governance deficit,” where powerful technology is being deployed at scale without mature ethical guidelines, transparency standards, data governance policies, and workforce transition plans needed to manage it responsibly.

The overarching concern is the risk of moving fast in a domain where the consequences of failure can be devastating to public trust and individual lives.

Historical Context

The scale and speed of today’s AI deals are unprecedented, but public-private partnerships to advance technology are deeply woven into American history.

When Government Led Innovation

For much of the 20th century, the federal government wasn’t just a customer of technology. It was the primary patron and, in many cases, creator.

Foundational technologies that define modern life were born from government-led research and development initiatives. The internet itself began as ARPANET, a Defense Advanced Research Projects Agency project designed to create a resilient communications network.

The Global Positioning System was developed and is still operated by the U.S. military. From semiconductors and jet aircraft to nuclear power, federal funding and defense contracts were the driving force behind many of the nation’s most significant technological breakthroughs.

A pivotal moment came with the passage of the Bayh-Dole Act in 1980. Before this act, there was no uniform policy for what happened to inventions created with federal funding. Promising technologies often remained in government labs without being commercialized.


The Bayh-Dole Act created a standardized framework that allowed universities, small businesses, and nonprofits to retain the intellectual property rights to their federally funded inventions and license them to the private sector for commercialization.

This legislation dramatically accelerated the process of technology transfer, creating a powerful engine for economic growth by bridging the gap between public research and private industry.

An Inverted Model

The current AI partnerships represent a fundamental reversal of this model.

In the 20th-century paradigm, government investment created public or quasi-public technological goods – like the internet – which the private sector then built upon. The government was the seed investor, the initial risk-taker, and the owner of the foundational technology.

Today, the innovation pipeline is inverted. The foundational large language models that power generative AI were not developed in government labs. They were created by private companies, funded by billions of dollars in venture capital and corporate investment.

These companies own the core intellectual property, the vast computational infrastructure, and the top-tier research talent.

The government, through the $1 license deals, is now a customer seeking access to this critical, privately-owned infrastructure.

This shift marks a significant transfer of power and influence from the public to the private sector. Instead of the government setting the terms for technology it created, it’s now negotiating for access to technology it needs but does not control.

This new dynamic has profound long-term implications for public oversight, accountability, and national sovereignty. It raises critical questions about who truly sets the agenda for the nation’s technological future and how the public interest can be guaranteed when the digital infrastructure upon which government increasingly depends is owned and operated by a handful of commercial entities.

The $1 deals are a symbol of a new era in the relationship between the state and the market.

What Happens Next

Federal agencies are now in the early stages of deployment. Some have moved quickly to integrate these tools, while others are proceeding more cautiously.

The agreements typically include training programs to help government employees learn how to use these systems effectively. OpenAI’s deal includes access to the OpenAI Academy. Anthropic and xAI have similar support structures.

Early adopters within agencies are already using AI for tasks like drafting policy documents, analyzing data, summarizing research, and responding to routine inquiries.

The real test will come over the next 12-18 months as these tools become more deeply embedded in government workflows. Usage patterns will emerge. Problems will surface. The companies will gather feedback and refine their offerings.

This period will be critical for understanding whether the promised efficiency gains materialize at scale and whether the security and ethical safeguards hold up under real-world conditions.

The competitive landscape is also evolving. While OpenAI, Anthropic, and xAI have secured early deals, other major players like Google, Microsoft, and Amazon are also vying for government contracts through different channels.

The GSA has added multiple AI providers to its Multiple Award Schedule, creating a marketplace where agencies can choose from various options. This competitive pressure may benefit the government by preventing any single vendor from becoming too dominant.

Congressional oversight will play a crucial role. Lawmakers from both parties have expressed interest in understanding how these tools are being used, what safeguards are in place, and whether taxpayer dollars are being spent wisely.

Several committees have jurisdiction over different aspects of AI in government, from appropriations to oversight to technology policy. Expect hearings, reports, and potentially new legislation as the impacts of these partnerships become clearer.

Civil society organizations, academic researchers, and watchdog groups are also paying close attention. They’re monitoring for signs of bias, privacy violations, security breaches, and other problems that could undermine public trust.

The government’s ability to respond to criticism and adapt its approach will be crucial for the long-term success of these partnerships.

What This Means for You

If you interact with federal agencies – filing taxes, applying for benefits, seeking information – you’ll likely encounter AI-powered systems in the coming years.

Some changes will be visible, like chatbots that answer your questions or automated systems that process your applications faster. Other changes will be invisible, happening behind the scenes as AI helps government employees do their jobs more efficiently.

The hope is that these systems make government more responsive and easier to navigate. The risk is that they introduce new forms of bureaucratic complexity or, worse, discriminate against certain groups through algorithmic bias.

Your rights as a citizen remain the same. You can still request human review of automated decisions. You can still file complaints if you believe you’ve been treated unfairly. You can still use Freedom of Information Act requests to understand how government agencies are making decisions that affect you.

As a taxpayer, you have a stake in whether these partnerships deliver value for money. The $1 deals may sound like a bargain, but the total cost over time could be substantial once renewal fees and additional services are factored in.

You also have a stake in whether government maintains its ability to innovate independently or becomes too dependent on a handful of private technology companies.

As someone who lives in a democracy, you have a right to transparency about how AI is being used to make decisions that affect your life. You have a right to expect that these systems are fair, secure, and accountable.

Our articles make government information more accessible. Please consult a qualified professional for financial, legal, or health advice specific to your circumstances.

Authors


  • Editor:

    Barri is a former section lead for U.S. News & World Report, where she specialized in translating complex topics into accessible, user-focused content. She reviews GovFacts content to ensure it is up-to-date, useful, and nonpartisan.