AI, Drones, and Self-Driving Cars: The Deregulation Debate Explained

Alison O'Leary


The United States is at a crossroads for three technologies poised to redefine society: Artificial Intelligence, drones, and self-driving cars.

On one side is a powerful push for deregulation, championed as the key to unlocking rapid innovation, ensuring American global competitiveness, and reaping massive economic rewards. On the other side are urgent calls for robust regulatory guardrails to protect public safety, ensure ethical development, prevent algorithmic discrimination, and build the public trust necessary for long-term adoption.

In This Article

This article explores how three powerful technologies — artificial intelligence, drones, and self-driving cars — are caught between two conflicting forces: a push for rapid deployment through deregulation, and demands for protective oversight via regulation.

It summarizes the current U.S. regulatory landscape: scattered state rules for AI and autonomous vehicles, and centralized but restrictive federal control over drones. It outlines the arguments on both sides, with pro-deregulation advocates emphasizing innovation, economic growth, and global competitiveness, and proponents of guardrails stressing safety, fairness, and public trust. And it reviews the potential economic, social, and legal consequences of different regulatory paths, including risks like job displacement, biased algorithms, and unclear liability when things go wrong.

So what?

The stakes are high: if deregulation wins, AI, drones, and autonomous vehicles may arrive sooner and reshape whole industries, but possibly at the cost of safety, equity, and public trust. On the other hand, careful regulation may slow adoption but make deployment more sustainable, fair, and secure. The outcome will influence not just innovation and business strategy, but everyday life: who benefits from these technologies, who bears the risks, and what kinds of societal trade-offs become acceptable. It’s a pivotal moment for policymakers, companies, and citizens alike.

Current Regulatory Landscape

The current regulatory environment for these transformative technologies is complex, fragmented, and rapidly evolving. There’s no single, coherent U.S. strategy. Instead, a collection of federal, state, and executive actions creates a shifting landscape that can sometimes be contradictory, reflecting the deep divisions at the heart of the policy debate.

Artificial Intelligence: Federal Vacuum, State Patchwork

The regulation of artificial intelligence in the United States is characterized by a notable absence of comprehensive federal law and a flurry of activity at the state level, creating a dynamic and often confusing environment for developers and the public alike.

No Comprehensive Federal Law

At present, no single, overarching federal law broadly regulates the development or use of AI in the private sector. The federal government’s approach has been described as cautious, with a greater focus on overseeing the use of AI within federal agencies themselves rather than imposing broad rules on private industry.

This legislative vacuum has left federal policy to be shaped primarily by competing executive orders from different administrations, leading to significant policy whiplash. The Biden administration’s Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” emphasized risk mitigation, testing standards, and the protection of equity and civil rights.

In a stark reversal, the Trump administration’s July 2025 Executive Order and its accompanying “America’s AI Action Plan” revoked the prior order. The new policy’s stated goal is to remove “barriers to American AI innovation” and “onerous regulation” in order to sustain “global AI dominance.”

This back-and-forth at the executive level creates profound uncertainty about the long-term direction of federal AI policy.

State-Level Patchwork

In the absence of a unifying federal law, states have become the primary regulators of AI. This has led to what critics describe as a “patchwork” of laws that creates significant compliance challenges for companies operating nationwide.

In the 2025 legislative session alone, all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI-related legislation, with 38 states enacting approximately 100 different measures.

These state laws address a wide array of immediate public concerns:

  • Government Transparency: New York enacted a law requiring state agencies to create and maintain a public inventory of the automated decision-making tools they use, enhancing transparency.
  • Worker Protections: The same New York law includes provisions ensuring that AI systems used by the state government cannot supersede the existing rights of unionized employees under a collective bargaining agreement.
  • Criminal Law: North Dakota expanded its anti-stalking and harassment statutes to explicitly include acts committed using AI-powered robots.
  • Consumer and Professional Standards: Oregon passed a law that prohibits a non-human entity, including an AI agent, from using the protected titles of licensed medical professionals like “registered nurse.”
  • Election Integrity: Responding to a wave of public concern, many states have enacted “deepfake” bills. These laws create civil or criminal penalties for using deceptively manipulated AI-generated audio or video to interfere in elections.
  • Healthcare Decisions: Some states are moving to regulate the use of AI in healthcare insurance, with legislation providing that an insurer cannot deny a claim based on medical necessity solely on the recommendation of an AI system without an individual review by a human medical director.

This situation creates a fundamental tension. The federal executive branch’s push for deregulation is intended to accelerate innovation and ease burdens on industry. However, by not establishing a clear legislative framework, it has created a regulatory vacuum that states are rushing to fill.

The resulting patchwork of disparate state laws can create a more complex, uncertain, and ultimately more burdensome environment for businesses than a single, predictable federal standard would. This has led some proponents of federal regulation to argue that a nationwide structure would actually be better for business and innovation by providing clarity and a level playing field.

Self-Driving Cars: State-by-State Experiment

The regulatory road for autonomous vehicles in the U.S. is being paved one state at a time, with limited federal direction creating a diverse and sometimes contradictory set of rules across the country.

Limited Federal Oversight

Much like with AI, federal efforts to establish a comprehensive oversight framework for AVs have stalled in recent years. The primary federal body involved is the National Highway Traffic Safety Administration (NHTSA). However, NHTSA has so far provided only voluntary guidance and technical assistance to states rather than issuing binding, universal rules for the deployment of AVs.

The agency’s most significant power remains its existing authority under the Federal Motor Vehicle Safety Standards, which allows it to prevent the sale of any vehicle—autonomous or otherwise—that it deems unsafe or non-compliant.

A critical piece of context from NHTSA is its classification system for vehicle automation. It’s crucial for the public to understand that there are no “fully self-driving” (Level 5) cars available for consumer purchase today. The most advanced systems currently on the road, such as Tesla’s Autopilot, are classified as Level 2, or “Driver Assistance.”

At this level, the system can assist with steering and acceleration/braking, but the human driver is fully responsible for monitoring the environment and must remain engaged at all times. Higher levels of automation, such as Level 4 systems that can operate without a human driver within a limited service area, are currently restricted to testing and pilot programs by companies like Waymo in select cities.

States in the Driver’s Seat

With the federal government taking a hands-off approach, states have become the de facto regulators of autonomous vehicles. Since 2012, at least 41 states and Washington, D.C., have considered legislation related to AVs, with 29 states having enacted specific laws.

These state laws are far from uniform and generally fall into three broad categories:

Permits Full Operation
  • Example states: Alabama, Arkansas, Florida, Georgia, Kentucky, Michigan, North Carolina, North Dakota, Texas, and Utah, among others
  • Common requirements: Vehicles must comply with federal and state traffic laws and must be able to achieve a “minimal risk condition” upon system failure. Insurance requirements vary widely (e.g., Alabama: $100,000; Kentucky: $1 million)

Permits Testing/Pilots Only
  • Example states: California, Colorado, Connecticut, Illinois, Maine, New York, Pennsylvania, Virginia, and Washington, D.C., among others
  • Common requirements: Often requires a licensed human operator to be present or a remote operator to be designated, and may require submission of safety and testing plans to state agencies. California requires $5 million in insurance for testing

No Specific Statute (Silent)
  • Example states: Alaska, Iowa, Kansas, Montana, New Jersey, New Mexico, Rhode Island, South Dakota, and Wyoming, among others
  • Effect: AVs are governed by existing motor vehicle laws, which may not be equipped to handle the unique aspects of the technology, with federal safety standards as a baseline

The differences among the states that do have laws can be stark, particularly concerning safety and financial responsibility. For example, a new California law requires that manufacturers testing AVs have a designated remote human operator who can immobilize a vehicle if necessary and carry a minimum of $5 million in liability insurance.

In contrast, a new law in Alabama allows for the operation of fully autonomous vehicles with only $100,000 in liability insurance, an amount comparable to that required for a conventional car. This vast disparity in safety and insurance requirements from one state border to the next illustrates the challenges of a state-led regulatory approach and the lack of a national standard.

Drones: Federal Control

In sharp contrast to the fragmented regulation of AI and AVs, the governance of unmanned aircraft systems (UAS), commonly known as drones, is firmly centralized at the federal level. The Federal Aviation Administration (FAA) considers drones to be aircraft and asserts primary jurisdiction over their operation in the national airspace.

A Two-Tiered Federal System

The FAA has established two distinct pathways for drone operation, based on the pilot’s intent:

Recreational Flyers: This category is for individuals flying for fun or personal enjoyment. Recreational pilots are not required to obtain a license but must pass The Recreational UAS Safety Test (TRUST), a free online knowledge and safety exam. They must also register any drone weighing more than 0.55 pounds (250 grams), fly at altitudes below 400 feet, and always keep the drone within view.

Commercial Operators (Part 107): Anyone flying a drone for work, business, or any form of compensation falls under Part 107 of the FAA regulations. These pilots must obtain a Remote Pilot Certificate from the FAA by passing a more rigorous aeronautical knowledge test. Commercial operations are generally restricted to daylight hours, altitudes below 400 feet, speeds under 100 mph, and are prohibited from flying over people or from a moving vehicle. However, the FAA has a process for granting waivers to some of these restrictions on a case-by-case basis.

Key FAA Rules for All Operators

The FAA enforces a strict set of rules that apply to nearly all drone flights:

Registration: All drones weighing more than 0.55 pounds must be registered with the FAA, and the registration number must be displayed on the drone’s exterior.

Airspace Restrictions: The FAA prohibits drone flights in certain sensitive areas. This includes the controlled airspace around airports (which requires specific authorization to enter), over military bases, critical infrastructure like power stations, and national landmarks. The FAA also issues Temporary Flight Restrictions over areas like major sporting events or emergency response scenes.

Visual Line of Sight (VLOS): Perhaps the most significant operational constraint is the requirement that the pilot or a designated visual observer must be able to see the drone with their own eyes at all times. This rule effectively prohibits long-distance flights.
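
For readers who want the operating limits at a glance, they can be reduced to a simple pre-flight checklist. The sketch below is illustrative only: the thresholds mirror the rules described above, but the function itself is our own construction, not an FAA tool, and no substitute for checking current regulations and airspace before flying.

```python
# Illustrative pre-flight checklist based on the FAA rules summarized above.
# A simplified sketch for explanation only -- not an FAA tool.

REGISTRATION_THRESHOLD_LBS = 0.55  # drones above this weight must be registered
MAX_ALTITUDE_FT = 400              # general ceiling for recreational and Part 107 flights
MAX_SPEED_MPH = 100                # Part 107 speed limit for commercial operations

def preflight_issues(weight_lbs, altitude_ft, speed_mph, is_registered,
                     pilot_can_see_drone, is_commercial, has_credential):
    """Return a list of likely rule problems (empty list = none found)."""
    issues = []
    if weight_lbs > REGISTRATION_THRESHOLD_LBS and not is_registered:
        issues.append("Drones over 0.55 lbs must be registered with the FAA.")
    if altitude_ft > MAX_ALTITUDE_FT:
        issues.append("Planned altitude exceeds the 400-foot ceiling.")
    if not pilot_can_see_drone:
        issues.append("Visual line of sight (VLOS) is required absent a waiver.")
    if is_commercial:
        if not has_credential:
            issues.append("Commercial flight requires a Part 107 Remote Pilot Certificate.")
        if speed_mph > MAX_SPEED_MPH:
            issues.append("Part 107 limits speed to under 100 mph.")
    elif not has_credential:
        issues.append("Recreational flyers must pass the free TRUST exam.")
    return issues

# Example: a 1.2-lb drone, registered, flown commercially at 350 ft within view.
print(preflight_issues(1.2, 350, 40, True, True, True, True))  # -> []
```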

The requirement to maintain a visual line of sight has become the central battleground in the drone deregulation debate. While the FAA has a mature regulatory framework, industry operators and proponents of expanded drone use argue that the VLOS rule is the single greatest bottleneck to unlocking the technology’s full commercial potential.

The most valuable and transformative applications—such as long-distance package delivery, large-scale agricultural surveys, and inspecting miles of pipelines or power lines—are economically unfeasible or simply impossible under current VLOS restrictions.

Consequently, the push to “deregulate” drones is not about dismantling the entire FAA system, but rather a targeted effort to create a clear, streamlined, and predictable pathway for obtaining approval for beyond-visual-line-of-sight (BVLOS) operations. Legislative proposals like the LIFT Act are a direct response to this specific challenge, aiming to force the FAA to establish rules for BVLOS flight rather than relying on the current slow and costly case-by-case waiver process.

State and Local Influence

While the FAA controls the skies, state and local governments retain authority over activities on the ground. They can pass laws related to drone use concerning privacy, surveillance, and law enforcement operations. For example, California State Parks can prohibit drone flights to protect sensitive wildlife or preserve visitor experience, and Florida law explicitly prohibits equipping a drone with a weapon.

The Push for Deregulation

A powerful and influential movement is advocating for a significant reduction in the regulatory oversight of AI, drones, and self-driving cars. This perspective is rooted in the belief that government rules are the primary obstacle to technological progress, economic prosperity, and national security in the 21st century.

Spurring Innovation and Global Competitiveness

The central philosophy of the pro-deregulation argument is that speed and agility are paramount. Proponents contend that “stringent bureaucratic oversight” and “onerous regulation” slow the pace of innovation, creating hurdles that prevent cutting-edge products from reaching the market quickly and efficiently.

The desired alternative is a “build-first” or “permissionless innovation” approach, where companies are free to experiment and develop new technologies without seeking prior government approval.

This push is driven by a powerful economic imperative. Advocates for deregulation argue that it’s the fastest way to unlock massive economic benefits, including the creation of entirely new industries and the empowerment of American workers through increased productivity and opportunity.

The logic is that the sooner these technologies are deployed at scale, the faster society can realize tangible gains like safer roads, lower transportation costs, more efficient supply chains, and greater personal productivity.

A critical driver of this urgency is the perception of a high-stakes technological race against global competitors, particularly China. There’s a frequently expressed fear that if the United States becomes entangled in complex regulatory debates, adversaries who adopt a more aggressive “do now, worry later” approach will gain a decisive technological, economic, and ultimately geopolitical advantage.

This viewpoint frames deregulation as a matter of national security, with the stated goal being to “achieve and maintain unquestioned and unchallenged global technological dominance.” By moving faster, proponents argue, U.S. companies can take the lead in setting global standards for these emerging technologies, rather than being forced to comply with frameworks developed by other economic blocs, such as the European Union’s more regulatory-heavy approach.

Policy in Practice

This deregulatory philosophy isn’t merely theoretical; it’s being actively translated into policy proposals at both the executive and legislative levels.

The Trump administration’s “America’s AI Action Plan” serves as a clear blueprint for this approach. The plan explicitly calls for rescinding existing regulations deemed burdensome, fast-tracking the permitting process for critical infrastructure like data centers and semiconductor facilities, and using federal financial leverage to encourage deregulation at the state level.

The plan suggests that the federal government could withhold funding from states that maintain “burdensome AI regulations” that are seen as wasting federal investment.

In Congress, similar initiatives have emerged. A bill drafted by Senate Commerce Chair Ted Cruz, for example, would aim to speed up AI development by creating a formal process for companies to apply for waivers from existing regulations, allowing them to pilot new AI programs more quickly.

For drones, the LIFT Act is a legislative attempt to force the FAA to establish a clear and predictable regulatory pathway for beyond-visual-line-of-sight flights, aiming to replace the current slow, costly, and uncertain waiver system that industry sees as a major impediment to growth.

As part of this push, some advocates for deregulation have also begun to question the motivations of those calling for stricter safety rules and ethical guardrails. The argument has been made that some calls for regulation might be driven by incumbent companies seeking to use rules to stifle competition and cement their market dominance, rather than being motivated purely by the public interest.

Ideological Neutrality and Free Expression

A significant new front has opened in the deregulation debate, moving beyond purely economic and safety considerations into the realms of ideology and speech. This argument posits that current regulatory trends are injecting inappropriate political and social biases into technology.

A key element of this is the push for “ideological neutrality” in AI. The Trump administration’s executive orders explicitly mandate that AI systems procured or used by the federal government must avoid what is termed “woke” content. This is defined to include topics related to Diversity, Equity, and Inclusion (DEI), critical race theory, and the recognition of gender identity.

The policy frames the removal of these considerations as a return to a “neutral” and “nonpartisan” form of AI.

This policy goal is supported by a broader argument from some free-market and libertarian advocacy groups, such as the Cato Institute, which contend that AI regulation poses a fundamental threat to free expression. Their case rests on several points:

  • They argue that regulations, particularly those aimed at combating “misinformation” or “hate speech,” will inevitably be used to limit the range of viewpoints and ideas that AI systems are permitted to generate and discuss.
  • They contend that the high cost of complying with complex regulations will serve as a barrier to entry for smaller startups, solidifying the market dominance of a few large tech firms and thereby limiting the diversity of AI models and perspectives available to the public.
  • They characterize the growing fears over AI’s potential harms as a “moral panic,” similar to historical reactions to other new technologies, which they believe is leading to calls for overly restrictive and censorious rules.

However, this push for a “neutral” AI, free from what are defined as ideological safeguards, presents a profound technical challenge. The political argument is straightforward: remove perceived biases from AI systems. Yet, AI experts and critics highlight a fundamental technical reality: AI models are not created in a vacuum.

They are trained on vast datasets of human-generated text, images, and data, which inherently reflect the biases, both conscious and unconscious, that exist within human society.

In this context, “neutrality” isn’t a natural default state that can be achieved by simply removing things. Achieving fairness and mitigating harmful biases in AI requires active, deliberate intervention. This includes carefully curating training data, implementing sophisticated bias-mitigation techniques, and engineering specific ethical guardrails into the system’s architecture.

Therefore, the act of stripping away these active safeguards doesn’t result in a “neutral” AI. Instead, it causes the AI to default to the embedded societal biases present in its raw training data. The practical consequence is that a policy intended to create “non-ideological” AI could inadvertently produce AI systems that are demonstrably less fair and more discriminatory, particularly against women and people of color, because the “default” biases found in vast swaths of historical data often disadvantage these very groups.
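
The dynamic is easier to see in miniature. The toy sketch below, using entirely fabricated numbers, “trains” a naive screening rule on historical hiring outcomes and then audits it: left alone, the rule simply automates the disparity it was given, and only an explicit intervention changes that.

```python
# Toy illustration of how a model inherits bias from its training data.
# All numbers are fabricated for demonstration; real audits use real
# outcomes and far more sophisticated models and fairness metrics.

# Fabricated historical records: (group, was_hired)
history = ([("A", True)] * 60 + [("A", False)] * 40 +
           [("B", True)] * 20 + [("B", False)] * 80)

def hire_rate(records, group):
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive "model": score applicants by their group's historical hire rate.
# It was never told to discriminate -- it simply optimizes for matching
# past decisions, and so automates the disparity baked into them.
model = {g: hire_rate(history, g) for g in ("A", "B")}
print(model)  # {'A': 0.6, 'B': 0.2}

# Disparate-impact ratio: the disadvantaged group's selection rate divided
# by the advantaged group's. Values below ~0.8 are a common red flag.
print(model["B"] / model["A"])  # ~0.33, far below 0.8

# "Neutrality" is not a default: leaving the model alone preserves the
# 3-to-1 disparity. One of many possible deliberate interventions is to
# equalize screening rates across groups explicitly.
mitigated = {g: max(model.values()) for g in model}
print(mitigated)  # {'A': 0.6, 'B': 0.6}
```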

The Call for Guardrails

Countering the push for rapid deregulation is a broad coalition of safety advocates, ethicists, civil rights organizations, and policymakers who argue that deploying these powerful technologies without sufficient oversight poses unacceptable risks. Their position is that thoughtful regulation is not an impediment to innovation but an essential prerequisite for its long-term success and safe integration into society.

Mitigating Real-World Harms

The most urgent concern for advocates of regulation is public safety and the potential for real-world, physical harm.

For Self-Driving Cars: The promise of safer roads remains a future goal, while the present reality presents documented risks. A 2024 study published in Nature Communications found that, compared to human drivers, autonomous vehicles currently have a 5.25 times higher crash risk during the low-light conditions of dawn and dusk and are twice as likely to be involved in a crash while making a turn.

Test vehicles continue to struggle with correctly detecting and predicting the behavior of vulnerable road users like pedestrians and cyclists, and they can be confounded by unpredictable hazards such as road construction, emergency vehicles, or the winter weather conditions common in many parts of the country.

For Drones: The risks are also tangible. Drones can and do crash due to a variety of factors, including signal loss, battery failure, or sudden changes in weather, posing a direct threat to people and property on the ground. Furthermore, the risk of a mid-air collision with manned aircraft, such as helicopters or small planes, remains a significant safety concern for the FAA and aviators.

Beyond physical accidents, critics of deregulation point to the significant potential for ethical failures and societal harm. These include:

Social Manipulation and Disinformation: The ability of AI to generate highly realistic “deepfake” audio and video content creates a powerful new tool for malicious actors seeking to interfere in elections, spread propaganda, and manipulate public opinion on a massive scale.

Privacy and Surveillance: The proliferation of AI-powered facial recognition technology and the vast data collection required to train AI systems raise profound privacy concerns. Without strong regulatory safeguards, these tools could be used for widespread surveillance by governments or for data exploitation by corporations, threatening fundamental civil liberties.

Economic Harms: In the commercial sphere, AI can be weaponized to exploit consumers. Advanced algorithms can be used for sophisticated forms of price discrimination or for behavioral manipulation in advertising, identifying and targeting individuals at their “prime vulnerability moments” to encourage impulsive purchases, thereby harming consumer welfare.

Why “Neutral” AI Is a Myth

A central pillar of the argument for regulation is the issue of algorithmic bias. AI systems learn from the data they are given. If that training data reflects historical societal biases related to race, gender, age, or other characteristics, the resulting AI model will not only replicate but can actually amplify those biases.

This isn’t a theoretical problem; it has led to documented discriminatory outcomes in high-stakes areas:

Hiring and Employment: AI-powered tools used to screen resumes or analyze video interviews can perpetuate discrimination. A prominent collective action lawsuit filed against the software firm Workday alleged that its AI-enabled hiring platform discriminated against job applicants over the age of 40.

Civil rights advocates warn that because these biases are often embedded in the algorithms themselves, they can systematically filter out qualified candidates from Black, Hispanic, and other underrepresented communities, putting a “high-tech gloss on old-fashioned discrimination.”

Healthcare and Financial Services: Biased algorithms can lead to profoundly inequitable outcomes in other critical sectors, such as denying deserving individuals access to loans, credit, or even essential healthcare services based on flawed, discriminatory data.

For these reasons, critics argue that the political push to remove DEI and other fairness safeguards from AI development is not a move toward neutrality but rather an act of “deliberate amnesia.” They contend that it cements the default inequities of the past into the automated systems of the future.

This could lead to a “two-track system,” where a stripped-down, less fair, and less accurate version of AI becomes the government and corporate standard, while more equitable versions are only available in markets that demand them. Such a scenario would disproportionately harm women, people of color, and other marginalized groups who are already on the losing end of systemic biases.

Regulation as Pro-Innovation

Advocates for regulation fundamentally reject the premise that rules and innovation are in opposition. Instead, they argue that thoughtful, clear regulations are a catalyst for innovation because they build the public trust that is essential for the widespread adoption and long-term success of any new technology.

Markets, they argue, require rules to function efficiently and fairly. Without clear guardrails, uncertainty and public distrust can erode the very conditions needed for sustainable innovation to flourish. Businesses, consumers, and investors all benefit from knowing where the legal and ethical boundaries lie.

The state of California is often cited as a case in point. As home to a majority of the world’s leading AI companies, it has also been a leader in enacting thoughtful regulations. This, proponents argue, shows that regulatory clarity and a thriving innovation ecosystem are not mutually exclusive and can in fact reinforce one another.

The alternative—a purely deregulated environment—risks creating a “race to the bottom,” where companies feel pressured to cut corners on safety, security, and ethics to be the first to market. This approach could lead to catastrophic failures, high-profile scandals, and a public backlash that could set the entire industry back for years.

The reported decimation of staff at the Department of Transportation’s Office of Automation Safety is highlighted as a particularly alarming development, suggesting that the government’s capacity to ensure safety is being eroded at the very moment the industry is pushing for faster and broader deployment.

The debate reveals a fundamental disagreement about the nature of risk and the timeline of consequences. The pro-deregulation camp tends to operate on a “do now, worry later” principle, prioritizing immediate economic and competitive gains and viewing the risks as secondary or manageable. Their primary fear is the risk of not innovating fast enough.

In contrast, the pro-regulation camp operates on a precautionary principle, arguing that the potential negative consequences—from catastrophic accidents to the erosion of civil rights and public trust—are so significant that they must be proactively addressed before widespread deployment. Their primary fear is the risk of deploying powerful but flawed technology too soon.

Economic Impact and Social Change

The regulatory decisions being made today are critical because their consequences will be felt across the entire economy and will reshape the fabric of daily life. The debate is not abstract; it’s about managing a transition that promises trillions of dollars in economic growth while also threatening to displace millions of workers and alter the way we live, work, and move.

Economic Promise

The economic potential of these technologies is staggering. The autonomous vehicles market alone, which was valued at approximately $208 billion in 2023, is projected to grow exponentially, with some forecasts predicting it will reach over $4.2 trillion by 2032. This represents the birth of a massive new sector of the global economy.
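
A quick back-of-the-envelope calculation shows just how aggressive that projection is:

```python
# Back-of-the-envelope: what annual growth rate does a jump from
# $208 billion (2023) to $4.2 trillion (2032) imply?
start, end, years = 208e9, 4.2e12, 2032 - 2023
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~39.6% -- roughly 40% growth every year for nine years
```

Compounding at roughly 40% per year for nearly a decade would be extraordinary for any industry, which helps explain why both sides see the regulatory ground rules for this market as such high stakes.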

This growth will be fueled by immense gains in productivity and efficiency:

For Autonomous Vehicles: The widespread adoption of AVs is estimated to generate economic benefits of up to $936 billion per year in the United States alone. These gains would come from a variety of sources, including savings from reduced accidents (which cost the U.S. economy over $800 billion in 2010), increased worker productivity during what was formerly commuting time, and dramatically lower costs from traffic congestion (which costs the average driver over $1,300 per year).

For businesses, particularly in logistics and transportation, AVs promise to revolutionize operations by eliminating human driver labor costs, optimizing fuel consumption through smoother driving, and enabling supply chains to run 24 hours a day, 7 days a week.

For Drones: Drones offer similar efficiency gains, particularly in tasks that currently require expensive manned aircraft. For example, using a helicopter for aerial photography or infrastructure inspection can cost hundreds of dollars per hour, a cost that can be dramatically reduced by deploying a drone.

One forecast for the United Kingdom, a much smaller market than the U.S., projected that drones could have a £42 billion positive impact on its economy by 2030, suggesting the potential in the U.S. is many times larger.

While these technologies will inevitably displace some jobs, they are also projected to create new ones. One study focused on the U.S. projected that the adoption of AVs could create 2.4 million new jobs. These new roles will emerge in fields directly related to the new technology, such as computer science, AI development, and robotics, as well as in new service industries like the management of large vehicle fleets and the specialized cleaning and maintenance required for shared-use autonomous shuttles.

Social Impact

The societal impacts of this technological shift will be just as profound as the economic ones, offering both revolutionary benefits and significant challenges.

On the positive side, autonomous vehicles hold the promise of being a truly transformative technology for millions of Americans. For the 49 million Americans over the age of 65 and the 53 million with some form of disability, AVs could provide an unprecedented level of independence, granting them the freedom to access employment, healthcare, and social activities without having to rely on others.

Furthermore, the rise of shared, autonomous ride-hailing services could dramatically lower the cost of transportation, improving mobility and access to opportunity for lower-income groups who may not be able to afford a personal vehicle.

However, the most significant and challenging social consequence is the potential for widespread job displacement. The same labor cost savings that make AVs so economically attractive to the trucking and taxi industries will mean the loss of millions of jobs for human drivers.

This disruption is likely to disproportionately affect workers with lower levels of formal education and fewer easily transferable skills, creating a major societal challenge. Addressing this will require a massive and coordinated effort from both the public and private sectors to invest in worker retraining and reskilling programs.

This challenge is acknowledged even by proponents of deregulation; the Trump administration’s “worker-first AI agenda,” for example, calls for prioritizing AI skill development in federal education and workforce funding streams to help workers navigate this transition.

These technologies could also radically reshape our cities. The reduced need for parking—as shared AVs could remain in near-constant use—could free up vast amounts of valuable urban land for housing, parks, or other uses.

However, there’s also a risk that the convenience of AVs could lead to a significant increase in total vehicle miles traveled, potentially worsening urban sprawl and traffic congestion. To prevent this, cities may need to implement new policies, such as dynamic congestion charges, to manage traffic flow and incentivize the use of shared rides and public transit.

This analysis reveals that the enormous economic benefits and the significant social costs of this technological transition are deeply intertwined. A primary driver of the projected economic efficiency and savings is the elimination of labor costs—specifically, the jobs of human drivers.

Therefore, the very mechanism that creates trillions of dollars in economic value is also the source of the greatest social disruption. This creates a complex policy dilemma. A government that pushes for deregulation to accelerate AV deployment without simultaneously creating a robust and well-funded plan for worker transition and social support is only addressing half of the equation.

Furthermore, the promise of greater equity and mobility for marginalized groups is directly threatened if the AI systems guiding these new services are built with biased data. An autonomous taxi that is cheaper but whose routing algorithm is biased to avoid low-income neighborhoods does not represent a net social good.

This demonstrates that the ethical concerns about bias are not separate from the economic and social impacts; they are fundamentally and inextricably linked.

Liability and Accountability

Perhaps the single greatest legal and practical hurdle to the widespread deployment of autonomous systems is a simple but profoundly complex question: when something goes wrong, who is responsible? Resolving this issue of liability is not just a matter for lawyers and insurance companies; it’s fundamental to building the public trust necessary for these technologies to succeed.

When Technology Fails

In a conventional car crash, the framework for determining fault, while sometimes complex, is well-established. When an autonomous system is involved, the lines of responsibility blur into a complicated web of possibilities.

If an AV causes an accident, is the manufacturer liable for a flaw in its software? Is the owner responsible for failing to properly maintain the vehicle’s sensors? Is the human “operator” at fault for not intervening in time? Or could a third-party hacker who spoofed a GPS signal be to blame?

Currently, there are no clear answers. Despite dozens of states passing AV-related laws, not one has enacted a statute that clearly delineates and apportions liability in the event of a crash. Most state laws simply default to existing tort and product liability law, a legal framework that is ill-equipped to handle the novel challenges posed by autonomous systems.

This legal vacuum has led to a vigorous debate among legal scholars, policymakers, and manufacturers about what the new standard of liability should be. The discussion primarily centers on several competing models:

Traditional Negligence
  • Who is held liable: The manufacturer
  • What must be proven: That a specific design or manufacturing defect directly caused the crash
  • Argument for: Uses the existing, familiar legal framework
  • Argument against: Extremely difficult for victims to prove, given the “black box” nature of AI and the information asymmetry between manufacturers and plaintiffs

Strict Liability
  • Who is held liable: The manufacturer
  • What must be proven: Only that the AV was in autonomous mode and its failure caused the crash; no proof of a specific defect is needed
  • Argument for: Provides clear compensation for victims and predictable rules for industry
  • Argument against: May place a heavy financial burden on manufacturers, potentially stifling innovation

Reasonable Human Driver Standard
  • Who is held liable: The manufacturer
  • What must be proven: That the AV’s actions were unreasonable and would not have been taken by a competent, attentive human driver
  • Argument for: Easier for juries to understand and apply
  • Argument against: May not incentivize the creation of systems that are significantly safer than human drivers

Traditional Negligence: Under this standard, a victim would have to prove that the manufacturer was negligent—that there was a specific design or manufacturing defect in the AV and that this defect directly caused the crash. This places a very high burden on the plaintiff, given the “black box” nature of AI and the immense information asymmetry between a multinational corporation and an individual consumer.

Strict Liability: This model would hold the manufacturer strictly liable for any accident that occurs while the vehicle is operating in autonomous mode, regardless of whether a specific “fault” can be proven. This approach provides a clear path to compensation for victims and incentivizes manufacturers to build the safest possible systems, but it could also place a heavy financial burden on the industry. In a move to build consumer confidence, some manufacturers, like Volvo, have already voluntarily pledged to accept this form of liability for their vehicles.

The “Reasonable Human Driver” Standard: This is a hybrid approach that would create a new legal category for the “computer driver” and judge its actions against what a competent, attentive human driver would have done in the same situation. This standard is more intuitive for judges and juries to apply but may not create a strong enough incentive for manufacturers to develop systems that are significantly safer than the human average.

For drones, the liability question is currently simpler. Because most drones are under the direct control of a human pilot, liability for an accident typically falls on the operator, especially if they are found to have violated FAA rules by flying recklessly or in a prohibited area.

However, as drones become more autonomous and rely on AI for navigation and decision-making, they will inevitably face the same complex product liability questions that are currently dominating the AV debate.

Building Trustworthy Systems

Ultimately, the successful integration of AI, drones, and AVs into society hinges on public trust. If people don’t believe these systems are safe, fair, and accountable, they won’t use them, and the promised economic and social benefits will never be realized.

A broad consensus is emerging among technologists, ethicists, and policymakers about the core principles required to build this trust. These pillars of “Trustworthy AI” are increasingly being adopted as benchmarks for responsible development:

  • Transparency and Explainability: People need to have some level of understanding of how these complex systems arrive at their decisions. The “black box” problem, where even the developers cannot fully explain a system’s reasoning, must be addressed to build confidence.
  • Accountability: There must be clear lines of responsibility. When an automated system causes harm, there must be a clear and accessible process for recourse and for holding the responsible parties accountable.
  • Human Oversight: For the foreseeable future, especially in high-stakes domains like transportation and healthcare, humans must remain in the loop. Systems should be designed to allow for human monitoring, intervention, and the ability to override an automated decision when necessary.
  • Bias Mitigation and Fairness: Systems must be actively and continuously engineered, tested, and monitored to ensure that they are fair and do not produce discriminatory outcomes.
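
In engineering terms, these pillars often take the form of a “human-in-the-loop” gate. The sketch below is hypothetical (the threshold, names, and review mechanism are invented), but it shows how an audit log for accountability, a review queue for human oversight, and an escalation path can be built into a system, much like the state insurance laws described earlier that require human review before an AI-driven claim denial.

```python
# Hypothetical sketch of a human-in-the-loop gate for high-stakes automated
# decisions. The names, threshold, and review mechanism are invented for
# illustration; real systems would be far more elaborate.

from dataclasses import dataclass

RISK_THRESHOLD = 0.7  # above this, the system must defer to a human

audit_log = []     # accountability: every decision is recorded
review_queue = []  # human oversight: high-risk cases wait for a person

@dataclass
class Decision:
    action: str        # what the automated system wants to do
    risk_score: float  # the system's own estimate of potential harm
    rationale: str     # logged explanation, supporting transparency

def route(decision):
    """Auto-apply only low-risk decisions; escalate the rest to a human."""
    audit_log.append(decision)
    if decision.risk_score >= RISK_THRESHOLD:
        review_queue.append(decision)
        return "escalated to human reviewer"
    return f"auto-applied: {decision.action}"

print(route(Decision("approve routine claim", 0.1, "matches standard pattern")))
print(route(Decision("deny claim: medical necessity", 0.9, "model is uncertain")))
```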

These principles are moving from theory to practice. Governments are beginning to create governance frameworks, with Canada’s Directive on Automated Decision-Making serving as a leading public-sector example. It classifies AI systems by risk level and mandates algorithmic impact assessments and human oversight for high-risk applications.

In the private sector, companies like IBM and Cisco have established internal AI ethics boards to review their products and are contributing to open-source tools that help developers worldwide identify and mitigate bias in their systems.

The concepts of liability and trust are not separate issues; they are two sides of the same coin, linked by the linchpin of accountability. The legal and financial question of liability is, in essence, society’s formal mechanism for enforcing accountability after a failure has occurred.

Public trust, in turn, is built on the perception and promise of that accountability. People are far more willing to place their confidence in a new technology if they believe that a clear system is in place to hold someone responsible if it fails.

Therefore, solving the complex liability puzzle is not merely a legal exercise for the courts to figure out later. It’s a fundamental and urgent requirement for building the public trust needed for these technologies to move from the lab to the mainstream.

A regulatory framework that leaves liability ambiguous will inherently undermine public confidence, regardless of how safe the technology may actually be.

Practical Guide: Your Questions Answered

Drones: What You Need to Know

Do I need a license to fly a drone?

If you’re flying purely for fun and recreation, you don’t need a formal license. However, you’re required by federal law to pass The Recreational UAS Safety Test (TRUST), which is a free online safety and knowledge test. If you’re flying for any commercial purpose (for work, selling photos, etc.), you must obtain a Remote Pilot Certificate from the FAA.

Where can I fly my drone?

Generally, you can fly in most locations as long as you stay below 400 feet in altitude and keep the drone within your visual line of sight. However, there are many restrictions. You must avoid flying near airports and military bases, in national parks, and over stadiums or large crowds. The best practice is to check the FAA’s B4UFLY airspace awareness service, now available through FAA-approved apps, before every flight for any local restrictions or temporary flight advisories.

Do I have to register my drone?

Yes, if your drone weighs more than 0.55 pounds (250 grams), you must register it with the FAA through their DroneZone portal. This applies to both recreational and commercial drones.

What if a drone flies over my house?

The FAA’s authority is over the safety of the airspace; it doesn’t regulate privacy. However, state or local laws regarding privacy, trespassing, or nuisance may apply. If you believe a drone is being flown in an unsafe or reckless manner, you should contact your local law enforcement agency.

Can I shoot down a drone over my property?

No. A drone is considered an aircraft under federal law. It’s a federal crime to damage or destroy an aircraft, and shooting at a drone could result in significant civil penalties and criminal charges. It also poses a serious safety risk, as the falling drone could injure someone or cause property damage.

Self-Driving Cars: Understanding Today’s Technology

Can I buy a self-driving car today?

No. According to the National Highway Traffic Safety Administration, there are no fully automated (Level 5) vehicles available for public purchase today. The most advanced systems on the market, like Tesla’s Autopilot or GM’s Super Cruise, are considered “Driver Assistance” systems (Level 2). They can help with steering, braking, and accelerating, but they require the driver to be fully attentive and ready to take control at all times.

What are the different levels of automation?

NHTSA defines six levels of driving automation, from Level 0 to Level 5:

  • Level 0: No automation. The human performs all driving tasks
  • Level 1 (Driver Assistance): The vehicle can assist with either steering or speed control, but not both at the same time
  • Level 2 (Partial Automation): The vehicle can assist with both steering and speed control simultaneously. The human driver must remain fully engaged. This is the highest level currently available for consumer purchase
  • Level 3 (Conditional Automation): The vehicle can handle all aspects of driving under specific conditions, but the human driver must be available to take back control when requested
  • Level 4 (High Automation): The vehicle can perform all driving tasks and monitor the environment on its own, but only within a limited, geofenced area or under certain conditions (e.g., good weather). No human attention is required within that limited domain. This is currently in the testing phase (e.g., Waymo robotaxis)
  • Level 5 (Full Automation): The vehicle can perform all driving tasks, under all conditions, on all roadways. It requires no human driver
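
As a quick reference, the levels reduce to a simple lookup. The sketch below restates NHTSA’s descriptions as an illustrative data structure; the code itself is ours, not an official NHTSA artifact.

```python
# The six NHTSA/SAE driving-automation levels as a lookup table, restating
# the descriptions above: (name, driver attention required).

LEVELS = {
    0: ("No Automation", True),
    1: ("Driver Assistance", True),       # steering OR speed, not both
    2: ("Partial Automation", True),      # steering AND speed; driver stays engaged
    3: ("Conditional Automation", True),  # driver must take over on request
    4: ("High Automation", False),        # within a geofenced area/conditions only
    5: ("Full Automation", False),        # all roads, all conditions, no driver
}

def describe(level):
    name, needs_driver = LEVELS[level]
    attention = "driver attention required" if needs_driver else "no driver attention required"
    return f"Level {level} ({name}): {attention}"

print(describe(2))  # the highest level available for consumer purchase today
print(describe(4))  # e.g., Waymo robotaxi pilots in select cities
```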

Are they safer than human drivers?

They have the potential to be significantly safer. Human error is a factor in an estimated 94% of serious crashes, so automation could dramatically reduce accidents, injuries, and fatalities. However, the technology isn’t yet perfect. Current test vehicles have been shown to have higher crash rates than human drivers in certain specific situations, such as in low light or when making turns, and they can still struggle to detect and react to pedestrians or unexpected road hazards.

AI in Your Life: Key Questions

How does AI develop bias?

AI systems aren’t inherently biased, but they learn from the data they’re trained on. If the data reflects historical patterns of discrimination—for example, hiring records from a company that historically favored male applicants—the AI will learn that pattern and replicate it, leading to biased and unfair outcomes.

Who is accountable when an AI makes a mistake?

This is one of the most significant unresolved ethical and legal questions in AI. Accountability could lie with the software developer who wrote the code, the company that deployed the system, or the human user who relied on its output. A major goal of proposed AI regulations is to establish clear rules and lines of responsibility to ensure that when an AI system causes harm, there’s a clear path for recourse.

What is a “deepfake”?

A deepfake is a piece of audio or video content that has been created or manipulated by AI to be highly realistic but is actually fabricated. For example, an AI could create a convincing video of a politician saying something they never said. Because they pose a significant risk for spreading misinformation and interfering in elections, many states have passed laws to prohibit their fraudulent or deceptive use.
