Soon, anyone with enough data will be able to build a digital version of themselves. But should they?


After writing more than one article a day for the last 23 years, I’ve accumulated a body of text large enough to train an AI model that could convincingly write “like me.” With today’s technology, it would not be difficult to build a system capable of generating opinions that sound as if they came from Enrique Dans—an algorithmic professor that keeps publishing long after I’m gone. 

That, apparently, is the next frontier of productivity: the digital twin. Startups such as Viven and tools like Synthesia are building “AI clones” of employees and executives—trained on their voices, writing, decisions, and habits. The idea is seductive. Imagine scaling yourself infinitely: answering emails, recording videos, writing updates, etc., while you do something else, or nothing at all. 

But seductive doesn’t mean sensible.

A world full of digital ghosts 

We are entering an era where professionals will not just automate tasks; they will replicate their personas. A company might build a digital copy of its best salesperson or customer service agent. A CEO might train a virtual twin to respond to inquiries. A university might deploy an AI version of a popular lecturer to deliver courses at scale. 

In theory, this sounds efficient. In practice, it invites a form of existential confusion: If the replica is convincing enough, what happens to the person? What does it mean to “be productive” when your digital version is the one doing the work? 

The fascination with cloning ourselves digitally reflects the same temptation that has driven automation for centuries: outsourcing not just labor, but also identity. The difference is that AI can now replicate the voice of that identity, both literally and metaphorically. 

What I would look like as an algorithm

I could easily do it. Feed a large language model the millions of words I’ve written since 2003 — every article, every post, every comment — and you’d get a fairly accurate simulation of me. It would probably have the right tone, vocabulary, and rhythm. It could write plausible articles, maybe even publish them at the same pace. 

But it would miss the point entirely.

I don’t write to fill a schedule or a database. I write to think or to teach. Writing, for me, is not an act of production, but of reflection. That’s why, as I explained recently in “Why I let AI help me think — but never think for me,” I never let AI write my articles for me. It makes no sense. Asking a model to think for me would defeat the very reason I sit down every morning to write. 

Of course, I use AI constantly: summarizing sources, checking facts, exploring counterarguments, and finding references. But I never let it finish my sentences. That’s the boundary that keeps my work mine. 

The illusion of scaling yourself

The promise of digital clones is rooted in a single misconception: that replicating output equals replicating value. Companies now talk about “bottling expertise” or “scaling human capital” as if personality were a production line.

But cloning output is not the same as extending competence. A person’s professional value is not their words or gestures. It’s their judgment, built over time through context and curiosity. A model trained on your past decisions may imitate your tone, but it cannot anticipate your evolution. It’s a fossil, not a future. 

An AI clone of me could mimic my writing style from 2025. But if I let it publish, it would freeze me in that year forever, a museum piece updated daily. 

From productivity to presence 

Executives, entrepreneurs, and creators should be careful what they wish for. A “digital twin” may handle the inbox or record video briefings, but it also dilutes what makes leadership or creativity meaningful: presence. 

In Axios’s coverage of CEO clones, many executives confessed that they liked their AI doubles but didn’t fully trust them. The clone could handle repetitive interactions, but not empathy, timing, or nuance—the qualities that define credibility. 

Delegating those things to an algorithm is like sending a mannequin to a meeting: technically present, emotionally vacant. 

Corporate immortality and the ethics of legacy

There’s also the question of what happens when your digital twin outlives you. Some companies already treat employee data as assets, so why wouldn’t they treat their digital clones the same way? 

Imagine a firm continuing to deploy the “AI version” of a beloved leader or educator after they’ve passed away. It might seem like a tribute, but it’s really a form of corporate necromancy: using a person’s intellectual remains to perpetuate a brand. 

It’s not hard to picture universities selling “virtual professors” or corporations reusing former CEOs as permanent avatars. In a recent academic paper on digital twins, researchers warned that the boundary between “representation” and “possession” is getting blurry. Who owns the clone? Who profits from it? 

When we replicate people as data objects, we risk turning identity into infrastructure, into something that can be licensed, monetized, or rebranded at will. 

The right way to use AI for personal scale 

There is, however, a rational way to use AI for scale: as augmentation, not imitation. 

I use AI every day as a thinking partner. It reads drafts, proposes structures, suggests sources, and critiques my logic. It’s like having a tireless research assistant, one that never gets offended when I ignore its advice. But the act of reasoning, the decision of what to say and how to say it, remains mine. 

That’s the key difference between using artificial intelligence and becoming it. When we outsource thinking, we lose the feedback loop that makes us human: the constant process of reflection, revision, and growth. 

Professionals who embrace AI responsibly will amplify their reach without diluting their essence. Those who don’t will eventually find their own voices indistinguishable from those of their machines.

What businesses should learn from this

For companies flirting with employee clones or AI avatars, here’s a checklist worth remembering: 

  1. Define purpose, not imitation. Don’t build AI twins to replicate people. Build systems that free them to do higher-value work. 
  2. Keep the human in the authorship loop. AI can assist in drafting, coding, and summarizing, but final judgment must remain human. 
  3. Treat data as legacy, not property. Respect employee and creator autonomy. No one should become a perpetual digital asset without consent. 
  4. Focus on augmentation, not automation. Use AI to enhance collective intelligence, not to eliminate the need for it. 

AI is not here to replace human expertise; it’s here to challenge how we apply it. 

The paradox of self-replication

Soon, anyone with enough data will be able to build a digital version of themselves. Some will see it as immortality; others, as redundancy. I see it as a mirror, a test of what truly matters in human work. 

When my own digital twin can write a decent article about AI, I won’t be impressed. The question isn’t whether it can write. It’s whether it can care, and whether it serves the purpose I’m trying to achieve.

And until algorithms can care about truth, nuance, curiosity, or purpose, I’ll keep doing what I’ve done for the last 23 years: sit down, think, and write. Not because I have to, but because I still can.
