- How the Technology Works
- The Fair Labor Standards Act Has No Answer
- Collective Bargaining Rights Face the Same Void
- Discrimination Law Isn’t Ready Either
- Who Owns Your Voice
- State Regulation Creates a Patchwork
- Federal Agencies Have Been Silent
- Impact on Workers’ Rights and Compensation
- Other Workplace AI at CES 2026
- What Needs to Happen
- The Fundamental Question
- International Approaches
- Why This Matters Now
At the Consumer Electronics Show in Las Vegas on January 7, 2026, a Texas software company called IgniteTech unveiled MyPersonas, software that creates AI-powered digital replicas of employees from their voice, video, and written materials. These digital clones can answer questions, conduct video chats, respond in 160 languages, and work autonomously. They allow an employee to “be in two places at once.”
The federal statutes that determine who gets paid, who can organize, and who is protected from discrimination never contemplated that an employer might create a digital version of Sarah from HR that works evenings and weekends while the real Sarah is home with her family.
How the Technology Works
Feed the system an employee’s voice recordings, video of them working, and samples of their written communication. The AI trains on these materials and produces a digital agent that replicates that specific person’s communication style, knowledge base, and manner of speaking.
A chatbot is generic—it doesn’t sound like anyone in particular. A digital clone sounds like Jennifer. It uses Jennifer’s phrases, Jennifer’s tone, Jennifer’s way of explaining the dental insurance options. A new employee emails a question at 9 PM and gets a response that seems to come from Jennifer, even though Jennifer logged off at 5:30.
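MyPersonas’ internal architecture isn’t public, so the following is only a minimal sketch of the general pattern described above: person-specific training inputs go in, a persona-specific agent comes out. All names and structures here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PersonaProfile:
    """Hypothetical training inputs for a digital employee clone."""
    employee: str
    voice_samples: list[str] = field(default_factory=list)    # audio recordings
    video_samples: list[str] = field(default_factory=list)    # recorded meetings
    writing_samples: list[str] = field(default_factory=list)  # emails, docs, chats

def build_clone(profile: PersonaProfile) -> dict:
    """Stand-in for the training step. A real system would fine-tune speech,
    video, and language models on the profile; this stub only records what
    the resulting agent would be imitating."""
    sources = (profile.voice_samples + profile.video_samples
               + profile.writing_samples)
    return {"persona": profile.employee, "trained_on": len(sources)}

profile = PersonaProfile(
    employee="Jennifer",
    voice_samples=["benefits_call_2025-11-03.wav"],
    writing_samples=["dental_faq_replies.txt", "onboarding_emails.mbox"],
)
print(build_clone(profile))  # {'persona': 'Jennifer', 'trained_on': 3}
```

Even in this stub, the legally significant point is visible: everything the agent produces traces back to one named person’s materials.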
The questions this raises aren’t only ethical. They’re legal questions about property rights, compensation, and the boundaries of the employment relationship. And on every one of them, federal law is silent.
The Fair Labor Standards Act Has No Answer
Start with the most basic question: does an employer have to pay an employee when her digital clone works?
The FLSA requires that covered employees receive at least minimum wage for all hours worked, plus overtime at time-and-a-half for anything over 40 hours per week. It treats work as compensable when the employer “suffers or permits” it: in other words, work the employer knows about and allows to happen.
Consider Sarah, the HR manager earning $75,000 annually. Her employer builds a digital clone trained on her voice and expertise. The clone operates evenings and weekends, handling 40% of employee inquiries that would otherwise require Sarah’s time. Sarah herself isn’t working those hours—she’s not at her desk, not on call, not doing anything.
Does the employer owe her additional compensation?
One argument: yes, because the employer is benefiting from work that derives directly from Sarah’s knowledge, voice, and communication patterns. The digital clone generates value traceable to Sarah’s human expertise. Without Sarah, the clone wouldn’t exist. Compensation should follow value.
Another argument: no, because Sarah isn’t working. She’s not investing her time or effort. The clone is software, and employers don’t traditionally pay wages for software performing functions. That would be like paying an employee every time someone uses Excel macros they built.
A third possibility: the clone’s work hours should count toward Sarah’s weekly total for overtime purposes. The clone works 20 hours per week, and those count as Sarah’s hours, potentially triggering overtime obligations. But this seems perverse—Sarah would face overtime liability for work she didn’t perform, during hours she wasn’t present.
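To see what’s at stake in dollar terms, here is a back-of-envelope calculation of the third theory using the numbers from Sarah’s example. It sets aside the threshold question of whether a salaried HR manager is overtime-exempt at all; the figures are purely illustrative.

```python
ANNUAL_SALARY = 75_000   # Sarah's salary, from the example above
BASE_HOURS = 40          # hours Sarah herself works each week
CLONE_HOURS = 20         # hypothetical weekly hours worked by her clone

# Implied hourly rate on a 52-week, 40-hour schedule
hourly_rate = ANNUAL_SALARY / (52 * BASE_HOURS)

# Third theory: the clone's hours count toward Sarah's weekly total
total_hours = BASE_HOURS + CLONE_HOURS
overtime_hours = max(0, total_hours - 40)
weekly_overtime_pay = overtime_hours * hourly_rate * 1.5

print(f"Implied hourly rate:  ${hourly_rate:.2f}")          # $36.06
print(f"Weekly overtime owed: ${weekly_overtime_pay:.2f}")  # $1,081.73
```

That is roughly $56,000 a year in overtime liability for hours Sarah never worked, which is exactly why the third reading seems perverse.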
The statute provides no framework for answering these questions. It was written for a world where work meant a human being doing something, and hours worked meant time that person invested. Digital clones break that model entirely.
Collective Bargaining Rights Face the Same Void
The National Labor Relations Act gives workers the right to organize unions and bargain collectively over wages, hours, and working conditions. Employers can’t make changes to terms and conditions of employment without negotiating with the union first.
If a firm with 500 unionized customer service representatives builds digital clones of its top performers and uses those clones to handle 30% of evening calls, the available work-hours for human employees drop. Overtime opportunities vanish. Shift assignments change.
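The displacement arithmetic is simple. In the sketch below, only the 30% figure comes from the scenario; the weekly evening-hours total is assumed for illustration.

```python
# Assumed: 1,000 human work-hours per week across evening shifts.
# From the scenario: clones absorb 30% of evening calls.
evening_hours = 1_000
clone_share = 0.30

hours_displaced = evening_hours * clone_share
print(f"Evening hours lost each week: {hours_displaced:.0f}")  # 300
# 300 fewer hours available for overtime and shift assignments
```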
Under traditional NLRA interpretation, this looks like a unilateral change to terms and conditions of employment, precisely the kind of change that requires bargaining. The volume of available work affects employee compensation. But the employer will argue this is a management decision about technology, not an employment decision requiring union negotiation.
Unions represent human workers. A union can’t represent a digital clone. The clone has no interests to advocate, no grievances to file, no capacity to strike. Yet the clone directly competes with and displaces the human workers the union does represent. The NLRA’s framework of worker representation and collective action—developed for identifiable humans performing labor—doesn’t fit.
Discrimination Law Isn’t Ready Either
Title VII, the Age Discrimination in Employment Act, and the Americans with Disabilities Act all assume employment decisions are made by humans about humans.
An employer builds a digital clone that replicates not only job expertise but also communication patterns and behavioral tendencies. Does the clone replicate discriminatory attitudes too? Suppose the original employee unconsciously communicated differently with customers of different racial backgrounds. Will an AI trained on that employee’s patterns do the same? Federal law provides no framework for addressing discrimination baked into a digital clone by the way the person it copies actually behaved.
Consider Marcus, a 68-year-old sales manager with deep client relationships. His employer builds a digital clone trained on his voice and sales conversations, then reduces Marcus’s responsibilities and compensation, telling him the firm is “transitioning to digital service models.” The organization reassigns him to an administrative role while the digital clone handles his former accounts.
Is this age discrimination? Marcus could argue that deploying a digital clone to replace his work, followed by demotion and pay cuts, is age-motivated discrimination accomplished through technological means. The employer will say the decision was business-driven—cost reduction and modernization, not age bias.
The ADEA doesn’t explicitly address using AI or digital technology to accomplish what would otherwise constitute illegal discrimination. Courts must apply existing frameworks, asking whether the decision was motivated by age. But an organization smart enough to frame this as “digital transformation” rather than “replace the older employee” makes the discrimination harder to prove. At a minimum, employers should be examining whether their digital clone deployments disproportionately harm protected groups such as older workers and racial minorities.
Who Owns Your Voice
An employer builds a digital clone using an employee’s voice, likeness, and communication patterns. The organization is taking and using something that belongs to that person. Your distinctive voice, the particular way you explain things, your mannerisms—these have value, especially in commercial use.
But the employment context complicates this. Employees generally consent to employers using their likenesses in limited ways—directories, internal communications, industry-specific purposes.
The question is whether building a digital clone exceeds the reasonable scope of that consent. An employee who agrees to have her photo in the directory hasn’t necessarily consented to the creation of a fully autonomous AI replica that operates independently, potentially expressing ideas or making decisions on her behalf.
Federal law provides no framework defining the boundaries of permissible likeness use in employment. No mandatory disclosure requirements for likeness used to create digital replicas. No mechanism for employees to withdraw consent or limit scope after the fact.
What happens after someone leaves? Can an employer continue using a digital clone of a former employee indefinitely? Can the clone keep interacting with customers, answering questions, conducting business under the illusion that the original employee is still there?
No federal statute prohibits this.
State Regulation Creates a Patchwork
In the absence of federal action, states are beginning to regulate AI in employment contexts. But the result is fragmentation—different rules in different states, creating compliance nightmares for national employers.
Consider California’s disclosure requirement: a customer believes she’s communicating with a human HR manager but is instead interacting with a digital clone. The organization may be in violation. Complying would force employers to reveal the AI nature of the interaction, undermining a primary selling point of the technology: seamless, human-quality service that customers can’t distinguish from the real person.
New York City requires bias audits of automated hiring tools to test whether they disadvantage protected groups. While focused on automated decision tools rather than digital cloning specifically, the requirement establishes precedent for local jurisdictions imposing employment-specific AI governance obligations.
An organization operating in California must comply with companion chatbot disclosure requirements. The same organization in Texas must ensure compliance with deepfake prohibitions. In Colorado, impact assessments. In New York City, bias audits. Identical technology, different legal obligations depending on geography.
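A compliance team mapping one deployment against this patchwork ends up maintaining something like the lookup below. It is an illustrative sketch only; the obligation labels paraphrase this article, not statutory text.

```python
# Per-jurisdiction duties for the same clone deployment (illustrative).
OBLIGATIONS = {
    "CA":  {"chatbot_disclosure"},
    "TX":  {"deepfake_prohibition_review"},
    "CO":  {"impact_assessment"},
    "NYC": {"bias_audit"},
}

def duties_for(jurisdictions: list[str]) -> set[str]:
    """Union of obligations triggered by everywhere the clone operates."""
    duties: set[str] = set()
    for j in jurisdictions:
        duties |= OBLIGATIONS.get(j, set())
    return duties

print(sorted(duties_for(["CA", "CO", "NYC"])))
# ['bias_audit', 'chatbot_disclosure', 'impact_assessment']
```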
This fragmentation makes national deployment of employee cloning technology legally complex and incentivizes firms to avoid certain states or modify their technology by jurisdiction. Labor law is an area where federal regulation has traditionally dominated, particularly through the FLSA and NLRA. Employers subject to federal requirements will look to the Department of Labor and the NLRB for guidance. Without it, everyone is guessing.
Federal Agencies Have Been Silent
As of the CES announcement on January 7, 2026, the federal agencies with direct authority over employment—the Department of Labor, the National Labor Relations Board, and the Equal Employment Opportunity Commission—have issued no public guidance addressing digital employee cloning or how existing federal labor laws apply to this technology.
Federal regulators either didn’t anticipate the rapid commercialization of employee cloning technology, or they’re uncertain how to approach the novel legal questions it raises.
The Department of Labor, through its Wage and Hour Division, enforces the FLSA. The agency has published guidance on various novel employment arrangements: gig work, independent contractor classification, joint-employment situations where multiple employers share responsibility for a worker. But nothing on AI digital cloning. The Wage and Hour Division must eventually grapple with how to count hours worked by autonomous AI systems generating work product, whether compensation is owed for AI-generated work derived from an employee’s likeness, and how to classify the relationship between an employer and a digital clone.
The DOL’s enforcement approach will set practical standards that employers follow. If the Wage and Hour Division investigates complaints from employees whose digital clones were deployed without compensation and brings enforcement actions, employers will be forced to respond. If the DOL remains silent, employers will interpret the silence as permission and deploy the technology more broadly.
The NLRB faces similar questions about whether deploying AI digital clones constitutes a unilateral change to working conditions that should have been negotiated with the union. As unions file unfair labor practice charges challenging digital clone deployments that reduce available work, the Board will have to develop doctrine on where managerial prerogative ends and mandatory subjects of bargaining begin. The Board’s current composition and philosophy will shape this evolution, but either way, workers and employers face uncertainty until the Board clarifies its position through case-by-case adjudication, a process that takes years.
The EEOC has begun acknowledging that AI systems can embed discrimination and that existing civil rights statutes apply to automated decision-making. The agency may eventually be called upon to investigate whether digital clones replicate discriminatory attitudes from their human sources, whether deployment disproportionately affects protected classes, or whether the decision to deploy clones was motivated by discriminatory intent. But as of January 2026, the EEOC has issued no guidance specific to employee digital cloning.
Impact on Workers’ Rights and Compensation
Employers can now create digital versions of employees that work around the clock, and there’s no clear legal requirement to compensate the human whose expertise and likeness made that digital worker possible.
If you’re an HR manager, a customer service representative, a sales associate, or anyone in a knowledge-intensive role where your expertise can be captured and replicated, your employer might soon create a digital version of you. That digital version could handle inquiries while you’re off the clock, reducing the firm’s need for your labor. Your overtime opportunities might disappear. Your hours might get cut. Your role might be “transitioned” to something less valuable while your digital clone does the work you used to do.
Under current federal law, you might have no recourse.
You can’t file a wage claim for work your digital clone performed, because the statute doesn’t recognize that as your work. You can’t file an unfair labor practice charge over the change to your working conditions, because the NLRB hasn’t established that digital clone deployment triggers bargaining obligations. You can’t file a discrimination complaint for being replaced by your own digital likeness, because the EEOC hasn’t clarified that such replacement constitutes discrimination.
If you’re in a union, your representatives can try to bargain over this, but they’re operating in a legal vacuum. If you’re not in a union, you’re negotiating with your employer individually about whether and how your likeness can be used—assuming your employer even asks permission rather than treating your likeness as property because you built it during work hours.
Other Workplace AI at CES 2026
MyPersonas wasn’t the only workplace AI technology showcased at CES. Hyundai announced plans to deploy humanoid robots at its U.S. manufacturing plant in Georgia starting in 2028, initially for parts sequencing, potentially expanding to component assembly and more complex operations by 2030.
Caterpillar announced an AI-enhanced customer solution called Cat AI Assistant—an intelligent operator assistant providing personalized insights, real-time coaching, productivity tips, and safety alerts for equipment operators.
Siemens announced Digital Twin Composer, software that builds Industrial Metaverse environments enabling organizations to apply industrial AI, simulation, and real-time physical data to make manufacturing decisions virtually.
These technologies perform tasks and make decisions previously performed by human workers. They raise questions about labor displacement, wage requirements, and the applicability of worker protection statutes. MyPersonas is distinct because it builds a digital replica of a named employee, inheriting that person’s voice, communication patterns, and likeness. This specificity implicates identity protection and personal agency in ways that more generic AI tools don’t.
The technology represents a shift from automating work functions to replicating work personas.
What Needs to Happen
Federal agencies should issue joint guidance immediately, clarifying how existing labor law applies to digital employee cloning. The Department of Labor should address whether work performed by digital clones constitutes compensable work under the FLSA, how hours should be calculated, and what wage obligations employers have. The NLRB should clarify whether deployment constitutes a unilateral change requiring bargaining. The EEOC should establish that discrimination prohibitions apply to decisions to deploy digital clones and to AI systems that may replicate embedded biases.
Congress should consider targeted statutory amendments. Amendments to the FLSA might provide that work performed by autonomous AI systems trained on an employee’s knowledge, likeness, or expertise requires compensation for the original employee, whether through direct compensation for the clone’s work or through a likeness-licensing fee. Amendments to the NLRA might clarify that deploying technology that displaces workers requires bargaining with unions representing affected employees.
Amendments to Title VII, the ADEA, and the ADA might require employers to assess whether AI deployment decisions disproportionately affect protected classes and to audit digital employee clones for embedded bias. The amendments might also provide that decisions to deploy digital clones in ways that replace or reduce the work of employees in protected classes receive heightened judicial scrutiny.
Rather than extending existing labor law frameworks—which assume human employees working for compensation—to digital systems, policymakers might develop a new framework recognizing that work performed by autonomous AI systems represents a distinct category requiring distinct legal treatment.
This framework might focus less on compensation for work hours (which assumes human time and effort) and more on rights over likeness and intellectual property. Employees whose expertise and likeness are used to train digital clones might have the right to control uses of their digital replica, to receive compensation for value the replica generates, and to ensure their digital replica operates only in approved ways and is deleted after employment ends.
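If the rights-based framework above were adopted, the consent artifact it implies might look something like the record below. Everything here is hypothetical: the field names track the rights the paragraph proposes, not any existing legal or technical standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LikenessLicense:
    """Hypothetical consent record for a digital employee clone."""
    employee: str
    permitted_uses: tuple[str, ...]  # approved contexts for the clone
    revenue_share: float             # fraction of clone-generated value owed
    delete_on_separation: bool       # clone destroyed when employment ends
    consent_date: date
    revoked: bool = False

    def use_allowed(self, use: str) -> bool:
        return not self.revoked and use in self.permitted_uses

consent = LikenessLicense(
    employee="Sarah",
    permitted_uses=("internal_hr_faq",),
    revenue_share=0.10,
    delete_on_separation=True,
    consent_date=date(2026, 1, 7),
)
print(consent.use_allowed("external_sales_calls"))  # False: outside scope
```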
The Fundamental Question
Labor law has always assumed a relatively simple relationship between human workers and employers, in which workers invest their time and effort and receive compensation in return. Digital employee cloning disrupts this fundamental relationship.
A digital clone generates work product—customer interactions, information delivery, complex problem-solving. Who is the laborer? The original employee? The firm that built the platform? The employer that deployed it?
The answer matters for allocating both rights and obligations. If the employee is considered the laborer, wages are potentially owed and labor protections apply. If the firm that built the platform is the relevant actor, questions about the rights of developers and proper regulation of the AI industry become central. If the employer deploying the system is the primary actor, employer liability for compliance becomes the relevant frame.
Federal labor law assumes clarity on this question that digital cloning eliminates.
Traditionally, labor law has prioritized protection of human workers on the theory that workers are vulnerable, have relatively little bargaining power, and require legal protection. The employment relationship has been viewed as inherently unequal—the employer has greater economic and informational power. Labor regulations exist to counterbalance this inequality.
When work is performed by an autonomous digital system, the vulnerability rationale for labor protection becomes less obvious. A digital system has no need for minimum wage, no vulnerability to exploitation, no capacity to suffer workplace injuries. Yet the human whose likeness and knowledge trained the system may suffer economically as their labor is effectively replaced by the clone.
This suggests the legal framework might need to shift from individual worker protection toward property rights and compensation for the appropriation of likeness and expertise. Rather than regulating digital clones as if they were human workers subject to wage and hour laws, regulators might treat the creation of a digital clone as a taking or appropriation of the employee’s likeness, requiring compensation at the point of cloning rather than for the clone’s ongoing work.
International Approaches
The European Union’s AI Act, whose obligations began phasing in during August 2025, creates duties both for providers of general-purpose AI models and for organizations deploying AI in high-risk decisions. It requires providers of the foundation models that other AI tools are built on to publish detailed summaries of their training data, and it imposes downstream requirements that deployers ensure their AI systems don’t fall into prohibited categories.
While the EU AI Act doesn’t explicitly address employee digital cloning, its framework for regulating high-risk AI uses and requiring transparency and documentation suggests an approach that could be extended. Were the EU to apply its AI Act framework to employee cloning, it might require organizations to disclose digital clone deployments, to conduct impact assessments examining risks to workers, to obtain explicit consent, and to establish mechanisms for workers to request deletion of their digital clones.
The EU’s General Data Protection Regulation imposes strict requirements on the collection, use, and retention of personal data, including biometric data such as voice and facial likeness, which receives heightened protection as a special category. Under GDPR, organizations must obtain explicit consent for collecting and processing such data, must collect only the data they actually need, and must honor individuals’ right to erasure. An organization that builds a digital clone from an employee’s voice and likeness is processing exactly this kind of data, and the employee has the right to request its deletion.
But GDPR complicates this: the right to erasure clearly reaches the original data used to train the digital clone, but may not extend to the trained model itself. Once personal data has been processed and embedded into an AI model, whether deleting the original data satisfies the right to be forgotten is disputed. The clone retains the employee’s distinctive patterns even after the original training data is deleted. Has the right to erasure been satisfied?
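The technical core of the dispute can be shown in a few lines. In this toy sketch, simple statistics stand in for a fine-tuned clone: deleting the raw training texts leaves the learned parameters, and so the employee’s stylistic fingerprint, fully intact.

```python
# Toy "training": extract a stylistic fingerprint from an employee's emails.
training_emails = [
    "Happy to help! Let me walk you through the dental options.",
    "Great question! The enrollment window closes Friday.",
]
model = {
    "avg_words_per_message": sum(len(e.split()) for e in training_emails)
                             / len(training_emails),
    "uses_exclamations": any("!" in e for e in training_emails),
}

# Honor a deletion request against the raw personal data...
training_emails.clear()

# ...but the learned parameters survive, so the clone still "sounds like" her.
print(training_emails)  # []
print(model)  # {'avg_words_per_message': 9.0, 'uses_exclamations': True}
```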
The answer remains unsettled in European regulatory practice. But Europe is grappling with the question. The United States isn’t yet.
Why This Matters Now
The unveiling of MyPersonas at CES 2026 represents a critical moment. For the first time, tools capable of building fully autonomous AI replicas of employees that perform work, interact with customers, and generate business value have moved from theoretical possibility to commercial reality.
Yet the federal legal framework governing employment—more than a century of statutes enacted to protect workers and regulate the employment relationship—contains no provisions addressing digital employee cloning.
The gaps span the fundamental architecture of labor law: how work is counted and compensated, how workers organize collectively and bargain with employers, how discrimination law protects workers from bias-motivated decisions, how workers’ rights to control their own likenesses and identities are protected.
These gaps exist not because policymakers consciously decided digital employee cloning should be unregulated, but because the technology emerged faster than regulatory frameworks could anticipate and respond.
Federal regulators now face urgent questions. Will they issue guidance clarifying how existing law applies, or allow regulatory uncertainty to persist? Will Congress intervene with legislative amendments, or will the law be left to evolve through litigation and agency enforcement? Will state regulation continue developing piecemeal, creating complex, fragmented requirements, or will federal action set national standards?
The answers will shape whether employees whose likenesses and expertise are taken and used for digital cloning receive fair compensation and protection, or whether employers can derive substantial value from these appropriations while workers receive nothing. The answers will affect whether workers can collectively organize and bargain as their labor is supplemented or replaced by digital clones, or whether the legal framework for collective action becomes obsolete as work becomes automated. The answers will determine whether discrimination law provides meaningful protection as employment decisions increasingly involve AI systems, or whether AI becomes a tool for accomplishing discrimination while evading legal accountability.
As of January 7, 2026, these questions remain unanswered. But they can no longer be deferred.