Could decentralised identities put a stop to deepfakes?
A decentralised identity regime is a strong contender to solve the problem of increasingly convincing deepfakes, writes UNSW Business School's Eric Lim.
In a recent viral clip, we saw a video of Bill Gates being grilled by journalist Sarah Ferguson about his contribution to the world. Some viewers cheered and celebrated that the mainstream media was finally asking tough questions of authority figures, something that has been sorely missing in recent times from increasingly partisan outlets. The clip turned out to be a deepfake (and not a particularly good one), altered from the real interview conducted by ABC News Australia.
We are entering an era of deepfakes: videos in which a person's face or body has been digitally altered (through a face swap, for example) so that they appear to be someone else. We knew they were coming. We are also aware that deepfake technology is getting better and has become more accessible to the general public.
Further, we can deduce how damaging they could be to a functioning society if we were inundated daily with fake videos and information generated by artificial intelligence. In books and speculation alike, people have imagined the profound possibilities an artificial intelligence revolution could bring to humanity, from medical breakthroughs to harnessing the power of the sun for an endless supply of clean energy. But artificial intelligence brings problems of its own as it scales and becomes more accessible.
We can easily imagine a scenario in which the world wakes up to a convincingly altered video on YouTube or social media of President Joe Biden, taken from a previous press conference, talking about seriously considering launching missiles into North Korea or Russia, and the fallout if that video were taken seriously. This kind of disinformation is a worrying trend, yet no one seems to want to talk about potential solutions. How do we form a consensus about our reality in this era of deepfake videos and other media?
Anyone who is familiar with my work and stance on blockchain and cryptocurrencies knows I am a strong believer in the need for a decentralised identity (DID) solution on an immutable ledger that is not under the control of a centralised entity. Critics of this decentralised solution normally focus on the decentralisation aspect and fail to realise that it is the universal aspect of this solution that appeals to me in the context of deepfakes. They also overlook how even the whiff of a centralised entity invites corruption and accusations of partisanship, undermining the credibility required in the era of deepfakes.
A DID is a real contender in this context because digital information flow is not constrained by geographical and jurisdictional borders. Verifying that a piece of digital information originates from an official source in another country, and that its content has not been tampered with, can be time-consuming and inefficient, with potentially serious consequences when verification is urgent, as in the scenario above.
We need a solution that is easily accessible to anyone, no matter where they are from, and that scales with the proliferation of digital information. There may well be other potential solutions in the market, such as watermarking or fact-checking services, but it is unclear whether they can meet the high bar of remaining difficult to corrupt while scaling with the speed at which digital content is generated.
In my previous article, I explained that a DID is simply a pointer on a blockchain. It is represented by a public key that the individual can share publicly, declaring that this public key is connected to their identity. Under a DID regime, every public figure or noteworthy media channel would have their official public key registered on a public blockchain. All content released or propagated by these public figures and media channels could be cryptographically signed, with the signature pointing back to their public key.
Under this system, everyone in the world would be able to verify instantaneously whether the video or digital content they are consuming is signed correctly, with provenance pointing to the official public key, and, more importantly, whether the content has been tampered with, by checking the cryptographic hash of the content. With such a solution, anyone could tell instantly if even a single pixel in a digital video had been altered.
To provide a concrete example in simplified form: under a DID regime, the office of the President of the United States (POTUS) would possess an official ID (a public key) on a public blockchain such as Cardano or Algorand. Any videos or press releases created by the POTUS office could be cryptographically hashed and signed with a digital signature created by the private key associated with the POTUS office's public key. Anyone in the world watching a video or reading content appended with a digital signature could verify for themselves that the signature matches the official public ID and that the content matches its cryptographic hash.
In another sense, one could think of each piece of digital content as being accompanied by a Non-Fungible Token (NFT) whose provenance and veracity can be easily verified. There is no need to call the POTUS office to check with the press secretary, and no need to wait for third parties like CNN or Reuters, or to trust fact-checkers. This is exactly the philosophy behind blockchain and cryptocurrencies: "Don't trust, verify!" If digital content does not carry a valid digital signature, the consumer is best advised to remain sceptical and wait for further confirmation.
In an era of social media platforms such as TikTok and Mark Zuckerberg's Meta, as well as memes and other synthetic media, our society is fragmented. People talk over one another instead of with one another, factions have formed, and some have stopped having conversations altogether. With artificial intelligence constantly improving (thanks to increasingly advanced machine learning, algorithms, generative adversarial networks and deep learning technology), the era of deepfakes is not going to make things better. It would be an understatement to say that it will further erode our society at a time when we need to pull together to tackle the social problems we face.
If we perceived the last two US elections involving former president Donald Trump to be contentious, I think the upcoming one in 2024 will show the real impact of convincing deepfakes, fake news and related misinformation. If it ever reaches the point where no one accepts the results of an election, those will be dark days for all of us globally.
Excerpt from an article by Eric Lim; read the full article here.