Key Questions: AI and Identity

Digital estate continuity is a young discipline operating at the intersection of rapidly evolving technology, developing law, and emerging professional practice. Many of its most important questions remain open — contested among practitioners, unresolved in courts, underdetermined by statute, and insufficiently studied by researchers. This document presents the key questions that define the frontier of the field. It is intended to guide research, inform policy, and orient professional practice toward the areas of greatest uncertainty.

The following questions address Artificial Intelligence and Identity.


Q2.1 — Who owns an AI model trained on a deceased individual’s personal data?

When a platform trains an AI model using a user’s data — their writing, their voice, their behavioral patterns — and the user dies, who owns the model? The estate? The platform? No one? The answer has direct implications for commercialization rights, deletion rights, and successor governance.


Q2.2 — What constitutes informed consent for post-mortem AI continuation, and can it be given prospectively?

If a user clicks a checkbox during platform onboarding authorizing AI use of their data, does that constitute informed consent for post-mortem AI continuation? Can consent given years before death, without knowledge of what AI systems would later be built, determine how those systems are governed? What is the standard for meaningful consent in this context?


Q2.3 — How should the EU AI Act apply to AI systems that model deceased individuals?

The EU AI Act establishes risk categories for AI systems that interact with natural persons. AI systems that simulate deceased individuals raise novel regulatory questions: is the deceased a “natural person” for the purposes of the Act? Does the potential for psychological harm to survivors constitute a risk that triggers regulatory requirements? How should these questions be resolved?


Q2.4 — At what point does AI continuation of a deceased individual’s identity constitute fraud, defamation, or violation of post-mortem rights?

If an AI system trained on a deceased celebrity’s data generates statements the celebrity never made, endorses products they never endorsed, or participates in conversations they could not have authorized, what legal theories apply? How do existing tort frameworks (defamation, right of publicity, false endorsement) interact with AI-generated content depicting deceased individuals?