By Jessica Adelson

AI continues to advance and open new possibilities – but the opportunities come with serious ethical questions. As we’ve covered over the past few years, deepfakes and voice synthesis have become so advanced that telling real from fake can be impossible. After a hologram of Harry Caray was displayed at the Field of Dreams Game last month, internet users are experiencing déjà vu, recalling Amazon’s recent announcement that Alexa will soon be able to sound like a late relative and last year’s controversy surrounding the Anthony Bourdain documentary. Ethics aside, it seems inevitable that identity replication will continue whether people like it or not.

On one hand, this synthetic content is opening up new opportunities for creatives to work with digital twins: Abba is touring as avatars (or “Abbatars”); the Weeknd “spoke” one-on-one with fans at the start of the pandemic; brands are exploring deepfake technology to streamline influencer marketing. But as we’ve seen, this tech is also an evolving tool for misinformation and threatens the reputations of individuals and brands: the deepfake of Ukraine’s president surrendering; women in the public eye being edited into pornographic content.

While the creation of this visual misinformation may not always be ethical, it is, in many cases, legal. Technology has a way of evolving faster than governments can regulate it – so what happens when the laws catch up? What would it look like to regulate the use of AI identity in media?

For years, public figures have wanted to “own” their image, and given these technological advancements, they may soon be able to. Intellectual property law could allow elements of identity (e.g., voice and appearance) to be protected much like other forms of creative work. Needless to say, things get complicated when identity is treated as a form of content – does your identity belong to your estate after death? Will your identity ever enter the public domain?

Laws to curb the misuse of visual misinformation will likely be necessary, but used ethically, synthetic content is already a promising creative tool. Cadbury created a deepfake of India’s most famous actor to support small businesses. Sonantic worked with Val Kilmer to recreate his voice so fans could hear him speak again. Bruce Willis licensed his deepfake image for advertisements. With this technology, brands have extraordinary potential to explore new collaborations, personalize their content, and reach new markets — but also the potential for harm without an ethics lens to guide creative direction.

This article was originally posted on Media Genius.

Jessica Adelson


Integrated Media, Weber Shandwick Vancouver
