The Future Virtual You: How Will Digital Twins and Afterselves Change Our Identities?
About
We can already make deepfakes of ourselves and others. This raises an issue of consent: ideally, people should consent to having digital twins of themselves made, even if it's your grandma.
But if we start making digital twins of ourselves, which is already being done, they can go out and act on our behalf. Many may not object to this application, but it raises questions about identity. We want them sophisticated enough, and attentive enough to us, to mimic us perfectly, but not so sophisticated that they develop goals or an identity of their own.
What if they do something criminal – are we liable?
What if your deepfake of grandma isn't just her best, friendliest, most knowledgeable version, but starts to get creative or go rogue?
When would we have qualms about turning them off?
People can make ‘Afterselves’ (or death-bots) for the dying or dead, but if we start making our own and refining them over time, they become digital assets that our descendants could inherit. As with copyright, the need for the dead's consent is probably a matter of time: we can make Einstein bots as much as we want, but we should get grandma's permission. Gifting our digital assets, including our agents, to our inheritors is even better than consent.
If the person died within the last 30 years, you need permission; if they died earlier, you do not.
The utility question is moot – having a specific personality is only a gimmick for humans. We would build a “Top Engineer” rather than bots with Einstein's personality, and you definitely don't want them to have a self of their own.
Also see grief-tech.
This talk by James Hughes was recorded at Future Day 2025.
Timestamps and chapters
00:00 Intro
00:42 Talk Starts – The Future Digital You
01:30 Blockchain and Agents – ChatGPT inspired James
05:30 Origins of the Distributed Self
08:09 Co-Evolution of Humans & Tech
11:59 Self-Interest, Preferences, Agency
16:08 Is Cognitive Offloading Bad?
17:58 The Illusion of Agency and Autonomy
23:15 Digital Twins and AI Superego
23:31 Deathbots and Respect for the Dead
27:23 Suffering and Compassionate Machines
29:43 Talk End
30:14 Would you want your death-bot to experience consciousness?
33:13 What if your future ‘digital you’ shifted away from your values?
36:05 Which values should we lock in, and which should evolve?
37:52 Foundational principles for your digital twin representing you?
39:26 The ‘Great Library’, and sentient books
42:10 People editing your digital-twin’s values
43:46 Value updating via persuasion by good arguments vs ideological implant
45:58 Implanting vast amounts of philosophical/ethical literature in your brain – normative enhancement?
49:18 Why do LLMs generally exhibit liberal values?
52:42 What moral obligations/leanings should be given to digital twins?
55:29 Are LLMs more morally consistent than humans?
58:28 Will authoritarian takeover of LLMs work?
1:00:04 Worries about digital twins being caught in samsara
1:01:38 Should death-bots/digital twins liberate people from samsara?
1:03:02 Should multiple copies of your digital self be allowed?
1:04:11 Governance of and cooperation with digital twins
1:06:35 Should ‘elder’ digital-twins play the role of moral advisors?
1:10:03 Digital selves joining hive minds – and do we want an afterself for America?
1:12:41 Should your digital selves be allowed to convert to extremist ideologies?
1:16:04 Reconstructing digital versions of famous historical figures?
1:17:13 Concluding thoughts – the future is so unpredictable
Digital twins are an intriguing concept, but they can capture only surface-level aspects of human personality and behavior. While they might offer some emotional comfort to those grieving, they cannot replicate the complex internal experiences that make us human, such as our inner dialogues and private thoughts. Looking ahead, advances in artificial superintelligence may enable more sophisticated technological approaches to preserving human consciousness and identity.