EXCLUSIVE: the world's first digitally transparent deepfake
Let's move past the AI doom and gloom to solutions we already know exist for ensuring information integrity
Dear all,
It’s been a few weeks … I took some time off from AI to be with my family in Nepal, where we spent time walking with elephants in the jungles of southern Nepal and exploring the stupas of Kathmandu (food for the soul). If you are ever planning a trip to Nepal, feel free to give me a shout for travel recommendations! 🇳🇵
🚫 Not AGI!
Anyway, I’m back now — and writing to you from Washington, D.C. — where I spoke this morning at the IAPP’s Annual Privacy Summit, the biggest privacy event in the world. My main message on Generative AI is that, although these machines may seem all-powerful, they are not AGI… (yet). Humans still control these systems, and we are responsible for deciding how they're integrated into society.
I know there has been a lot of doom and gloom surrounding AI while I was away, but it's time to shift the conversation towards solutions. While there's no silver bullet, we already have promising ideas for dealing with concerns around AI-generated content, including information integrity.
💡 Radical Content Transparency
How can we trust digital content when AI-generated content is so pervasive? It's not just that AI can "fake" anything, but the mere existence of AI-generated content makes it easier for everything to be dismissed as "fake" (a phenomenon known as the liar's dividend).
I am proud to be part of a community pioneering a radical transparency approach towards all content. The basic idea is that, if you are a good actor, why not authenticate content and show where it came from?
It is already possible to cryptographically ‘sign’ any digital content, whether or not AI generated it, to demonstrate its origins. But simply ‘signing’ content is not enough; the mark of authentication also needs to remain visible as the content travels around the digital ecosystem.
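The signing idea can be sketched in a few lines of Python. This is a toy illustration, not the actual C2PA mechanism: real content credentials use public-key certificates and embed a signed manifest inside the file itself, whereas `SECRET_KEY`, `sign_content`, and `verify_content` below are hypothetical names using a symmetric HMAC purely for simplicity.

```python
import hashlib
import hmac
import json

# Hypothetical demo key. Real provenance systems sign with a private key
# and publish a certificate, so anyone can verify without a shared secret.
SECRET_KEY = b"demo-signing-key"

def sign_content(content: bytes, creator: str) -> dict:
    """Attach a provenance manifest to a piece of content.

    The manifest records who made the content and a hash of its bytes;
    the keyed signature over the manifest makes later edits detectable.
    """
    manifest = {"creator": creator,
                "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_content(content: bytes, signed: dict) -> bool:
    """Check that the manifest is authentic and the content unmodified."""
    payload = json.dumps(signed["manifest"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signed["signature"]):
        return False  # manifest was forged or altered
    return hashlib.sha256(content).hexdigest() == signed["manifest"]["sha256"]
```

Verification fails if either the content or its manifest is tampered with, which is exactly the property a visible mark of authentication needs as content moves around the web.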
The next step is to adopt an open standard for content authenticity so that the infrastructure to see the origins of content is baked into the internet's architecture. This open standard is already being developed by the Coalition for Content Provenance and Authenticity (C2PA), a non-profit organization whose members include industry leaders such as Adobe, Microsoft, and Intel.
❓ Why Aren't Generative AI Companies Signing Their Content?
Despite the importance of radical transparency, Generative AI companies and platforms have not yet felt compelled to adopt it. But as more people demand authenticity, they will have to change.
Personally, I like to know where the food I eat comes from. Why should this be any different for my information diet? We have the right to know what our brains consume and where it originated. Ultimately, this is not about telling people what is ‘true’; it’s about giving them the context and choice to make trust decisions.
🎉 The World's First Digitally Transparent Deepfake
To drive the conversation forward, Truepic, Revel.ai, and I launched the world's first digitally transparent deepfake. This is a call to action for all GenAI companies and platforms that are holding back on authenticating content.
Nothing about the future is inevitable. The future of AI is still in our hands. Why don’t we build a more trustworthy digital ecosystem by adopting radical transparency and providing context for digital content?
Back on Friday with the latest edition of the EGAI newsletter.
Namaste from D.C.,
Nina