
AI can convincingly mimic a real person. That capability is well established, and in many cases the ethics of using it are just as clear-cut. But increasingly, new applications are producing ethically murky results.
The good
For example, a company CEO or a politician could use AI tools to create a clone, a chatbot plus an avatar (in effect, a digital twin) that can interact with people on their behalf. Silicon Valley is big on the idea: Meta’s Mark Zuckerberg and LinkedIn co-founder Reid Hoffman are working on, or have already created, digital twins of themselves.
Cloned politicians include Pakistan’s Imran Khan, who used an authorized voice clone to campaign from prison, and New York City Mayor Eric Adams, who used voice-cloned robocalls to speak with constituents in languages like Mandarin and Yiddish.
This kind of use case is probably ethical — as long as the people interacting know that they’re dealing with a digital clone and not a real person.
The bad
The flip side of ethical AI cloning is non-consensual (and therefore unethical) use, and there are already many examples. For instance:
- In 2019, in the first widely documented case, scammers used AI to mimic the voice and German accent of the chief executive of a UK energy firm’s parent company, tricking the firm’s CEO into transferring €220,000 to a fraudulent account.
- In 2023, an Arizona mother, Jennifer DeStefano, was targeted by extortionists who used an AI clone of her 15-year-old daughter’s voice to demand a $1 million ransom.
- And in 2024, a finance worker at a multinational firm in Hong Kong was tricked into transferring $25 million after attending a video conference call featuring deepfake recreations of his CFO and several other colleagues.
Other unethical, non-consensual uses of AI cloning include deepfake videos in which a celebrity’s face is superimposed on a porn actor’s body. In all the above examples, the ethics are clear: this is all very wrong.
But as China leads the way in the rise of AI clones, the ethics are becoming far murkier.
And the ugly
One emerging trend involves workers using specialized software to build digital versions of their bosses or colleagues. The most prominent project driving this trend is Colleague Skill, which was posted in late March by its creator, a 24-year-old Shanghai-based engineer named Zhou Tianyi.
Colleague Skill and its forks and copycats, which tend to be open source, enable people to upload chat histories, emails, and internal documents to create a functional persona that mimics a specific coworker’s professional expertise and communication style. The technology stack includes tools like Claude, Kimi, ChatGPT, DeepSeek API, OCR (Tesseract), and sentiment analysis modules.
Colleague Skill uses a person’s past communications to build a talking replica of their personality. If a general-purpose AI is like a student who knows a little about everything, this tool acts like a specialized mask that forces the model to behave like one specific individual.
In other words, it produces a chatbot with the knowledge and patterns of speech of a real person.
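Mechanically, this is less exotic than it sounds. Below is a minimal sketch of the core idea, not Colleague Skill’s actual code: feed an LLM verbatim samples of one person’s messages through the system prompt so replies come back in that person’s voice. It assumes an OpenAI-compatible endpoint (DeepSeek’s API follows that convention); the file name, model, and prompt wording are all hypothetical.

```python
# A minimal sketch (not Colleague Skill's actual code) of persona cloning:
# condition a chat model on verbatim samples of one person's messages.
from openai import OpenAI

# Any OpenAI-compatible endpoint works; DeepSeek's API follows the convention.
client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

# Hypothetical export of the target's past messages, one per line.
with open("chat_export.txt", encoding="utf-8") as f:
    samples = [line.strip() for line in f if line.strip()][:200]

# The "mask": a system prompt that pins the model to one specific individual,
# grounded in real examples of how that person actually writes.
persona_prompt = (
    "You are role-playing a specific coworker. Match their tone, vocabulary, "
    "and typical message length exactly. Examples of their real messages:\n\n"
    + "\n".join(f"- {s}" for s in samples)
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": persona_prompt},
        {"role": "user", "content": "What do you think of the launch plan?"},
    ],
)
print(response.choices[0].message.content)
```

Real tools reportedly layer retrieval over uploaded documents, OCR, and sentiment analysis on top of this, but the core mechanism is just prompt conditioning.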
Colleague Skill started as a satirical commentary on AI-driven layoffs. But some employees began using it in earnest to clone their colleagues. There are several stated reasons for doing so, including retaining institutional knowledge and having an instant sounding board to “discuss” plans and ideas with.
A similar motivation drives employees to clone their bosses: the clone helps them predict how the boss might react to their work.
In most of these instances, according to reports out of China, the creation of the boss-bot or colleague clone is non-consensual.
Is non-consensually basing a custom chatbot on a colleague or boss unethical?
And then it got personal (and weird)
Zhou, the creator of Colleague Skill, later forked it into something called Ex-Partner Skill. The idea is to re-create a former partner with AI so the user can continue the relationship.
It runs on the same technical engine but applies it to a much more personal part of life. Users upload photos, social posts, chat logs, and other content, and the resulting chatbot mimics the former partner’s tone and subtle linguistic quirks, producing something that “truly sounds like them — speaks with their catchphrases, replies in their style, remembers the places you went together.”
This allows a person to simulate conversations with someone who is no longer in their life.
If Colleague Skill is in a grey area, Ex-Partner Skill is in a darker grey area.
(Note: many of the original repositories for Ex-Partner Skill have been removed from public view in China or “sanitized” after regulatory pressure. But the framework reportedly continues to circulate in private developer circles, and similar tools are increasingly used for “digital resurrection.”)
Ethically, the concept feels like it exists on a wide spectrum, with therapy at one end and revenge porn at the other. (It’s like revenge porn in the sense that “content” consensually made by two people for one purpose is later used by one of them, without the other’s consent, in a way the other might find objectionable.)
Or maybe it’s closer to the “deathbot” phenomenon, where an AI-generated simulation provides a fake version of the dearly departed. (In both cases, the user interacts with a digital twin of someone who is no longer present in their life.) In fact, some people in China are using Ex-Partner Skill as a deathbot for a deceased loved one.
The lack of consent feels like an ethical lapse. But we don’t consider it unethical to think about, remember, imagine conversations with, or journal about ex-partners — or dead family members.
Boosters of the Ex-Partner Skill idea say that conversations with digital exes are therapeutic. They point out that because it’s private, it’s not harassment or stalking or an invasion of privacy. Instead, they argue, it helps with personal reflection and emotional healing.
As for people who have died, according to Chinese media reports, some users say the tool gives them a sense of closure and allows them to say the things they wish they could have said to the real person. But is it really closure if one person is still obsessively trying to interact — or pretend to interact — with the other person?
It’s healthy to communicate. But it’s not communication when a person sits alone, talking to no one, sending messages the other person will never receive.
While ex-bots are a thing these days in China, the trend is showing up elsewhere. Some Character.AI users outside of China have created chatbots based on ex-partners, even though the company has changed its Terms of Service to explicitly ban the creation of bots using the likenesses of private individuals without their permission.
The emergence of non-consensual cloning of coworkers, bosses, and ex-partners poses a new test of our sense of right and wrong, and is yet another way AI is challenging us to step up and figure out how to respond.
