Battling Deepfakes: Is Blockchain the Answer?

Artificial intelligence offers a great deal of promise to enhance business productivity and societal well-being. However, malicious parties have also adopted this new technology to cause harm by promoting misinformation via deepfakes—information and images that have been manipulated but presented as authentic. Given widespread concern about the harm caused by misinformation, there is a growing need for technological and regulatory tools that can raise awareness and understanding when individuals encounter deepfakes or other sources of misinformation.

One technological tool that may be useful in identifying or preventing misinformation is blockchain—a database technology, introduced in 2008, that records and stores information in blocks of data that are linked, or “chained,” together. The software is free and open source, which makes it available to the public.

Data stored in blocks are synchronized across a network of users, using cryptographic software that stores and processes the information. Every activity on the network is published publicly on the blockchain, and anyone can see the trail of activity. Any alteration to an element of the chain is immediately detectable, which creates a verifiable, secure record of the data stored there.
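
To make the “chaining” concrete, here is a minimal Python sketch of a hash-linked ledger. It is illustrative only (no network, consensus, or signatures), and the helper names are invented for this example; its point is simply to show why an alteration to an earlier block is immediately detectable.

```python
# Minimal sketch of a hash-linked ledger; illustrative only, not a real blockchain.
import hashlib
import json


def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def append_block(chain: list, data: str) -> None:
    """Add a block that commits to the hash of the block before it."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)


def verify_chain(chain: list) -> bool:
    """Recompute every hash; altering any earlier entry breaks the links."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


chain = []
append_block(chain, "photo A registered")
append_block(chain, "photo B registered")
print(verify_chain(chain))     # True
chain[0]["data"] = "tampered"  # alter an earlier entry...
print(verify_chain(chain))     # False: the alteration is detectable
```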

Dr. Ariel Zetlin-Jones, Associate Professor of Economics at the Tepper School of Business at Carnegie Mellon University

Because blockchains promote the ability to trace the history of digital items from their origin, they are often viewed as a tool to establish the provenance of such items (in supply chains, in identity management systems, etc.). A natural conjecture, then, is that blockchains may be useful to establish the provenance of “information” more generally and thus be used to identify true or unmanipulated information and distinguish it from deepfakes or manipulated information.

Consider this idea in the context of a digital image produced by a digital camera. Imagine you observe an image, and you would like to know whether it has been altered or otherwise manipulated at any point since the moment the camera captured it.

The first question you may have in establishing the provenance of the image is, what piece of hardware—what physical camera—took the image? Today, we can insert a chip inside the camera that attaches a digital signature to the metadata of every digital file it produces. Digital signatures are a familiar component of blockchains; they are used to verify the authenticity of messages sent from blockchain addresses. In this example, we are using digital signatures to authenticate the physical camera that captured a given image. If we have a database that connects signatures to physical cameras (possibly stored by the camera manufacturer or in a more decentralized setting such as a blockchain), then we can connect any image bearing a signature to the physical camera that captured it.
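
As a rough sketch of that signing step, the example below uses an Ed25519 key pair from the third-party Python package cryptography to stand in for a hypothetical in-camera chip. The key handling, placeholder image bytes, and function name are assumptions for illustration, not a real camera standard.

```python
# Sketch of camera-side signing with a hypothetical in-camera key pair,
# using the third-party "cryptography" package for Ed25519 signatures.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key would live in a secure chip inside the camera,
# and the public key would be registered with the manufacturer or on a blockchain.
camera_key = Ed25519PrivateKey.generate()
camera_public_key = camera_key.public_key()

image_bytes = b"...raw image file contents..."  # placeholder for the capture
signature = camera_key.sign(image_bytes)        # attached in the file's metadata


def came_from_camera(image: bytes, sig: bytes) -> bool:
    """Check a file's metadata signature against the camera's registered public key."""
    try:
        camera_public_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False


print(came_from_camera(image_bytes, signature))      # True
print(came_from_camera(b"edited image", signature))  # False: the file was altered
```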

The second question you may have in establishing the provenance of the image is, how do I know the image that was captured by a specific camera has not been altered? Blockchain and related technologies can help resolve this issue as well. We need a way to prove what image the camera actually captured. Here, a tool known as a “zero-knowledge proof” plays a useful role. This is a protocol that allows one party (in our case, the camera) to prove to another party (an individual viewing the digital image) that a given statement is true (in our case, that the digital image file coincides with the one produced inside the camera). Just like the digital signature, we can insert a chip inside the camera that generates such “proofs.”
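
Real image-provenance systems would rely on non-interactive proofs (for example, zk-SNARKs), which are far beyond a short snippet. The toy sketch below instead walks through Schnorr's classic interactive protocol, a genuine zero-knowledge proof of knowledge, purely to convey the idea of proving a statement without revealing the underlying secret; the tiny parameters are for illustration only.

```python
# Toy interactive zero-knowledge proof (Schnorr's protocol): the prover shows it
# knows a secret x with y = g^x mod p without revealing x. Illustrative only.
import secrets

# Toy public parameters: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

x = secrets.randbelow(q)  # the prover's secret
y = pow(g, x, p)          # published, like a public key

# Round 1: the prover commits to a random nonce.
r = secrets.randbelow(q)
t = pow(g, r, p)          # commitment sent to the verifier

# Round 2: the verifier issues a random challenge.
c = secrets.randbelow(q)

# Round 3: the prover responds, mixing the nonce with the secret.
s = (r + c * x) % q

# Verification: g^s must equal t * y^c (mod p). The check passes only if the
# prover really knew x, yet the transcript reveals nothing about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```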

Next, if we store the image along with the proof on a blockchain, we can trace the provenance of the image along with any subsequent manipulations. As a result, images may be authenticated by the blockchain as having a valid signature and valid proof.
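
Continuing the toy sketches above, the snippet below registers an image's hash, its camera signature, and a stand-in for the proof as a record on a fresh chain, then authenticates a file against that record. The record fields and the authenticate helper are hypothetical, named only for this illustration.

```python
# Sketch tying the pieces together; reuses append_block/verify_chain and the
# image_bytes/signature values from the sketches above. Field names are invented.
import hashlib
import json

ledger = []  # a fresh toy chain for image records

record = {
    "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    "camera_signature": signature.hex(),
    "proof": "zero-knowledge proof bytes would go here",  # placeholder
}
append_block(ledger, json.dumps(record))


def authenticate(image: bytes, ledger: list) -> bool:
    """Authentic = the image's hash appears in a record on an unaltered chain."""
    if not verify_chain(ledger):
        return False
    digest = hashlib.sha256(image).hexdigest()
    return any(json.loads(b["data"])["image_sha256"] == digest for b in ledger)


print(authenticate(image_bytes, ledger))      # True: matches the on-chain record
print(authenticate(b"edited image", ledger))  # False: no record of the altered file
```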

This would seem to guarantee the authenticity of that image, since the record on the blockchain cannot be altered.

When deepfake and “fake news” claims cast doubt on authentic-appearing photos, viewers have an independent database they can use to verify or falsify a given piece of information (a photo). 

While this process validates that the images we observe match images captured by a specific physical camera, it does not validate the information we observe as “true.” For example, it would be easy enough for a malicious individual to manipulate a given image at will and then “take a picture” of the manipulated, fake image. With an appropriate camera and lens, this picture of a picture would be indistinguishable from an original image, and the process described above would confirm the picture-of-a-picture as valid. Since there are no previous entries for that specific image, it is indeed original—even though it is not factually correct.

Hence, blockchain and related technologies that offer security are valuable tools for addressing one concern with the misinformation problem: confirming authenticity. However, they do not resolve a second concern: confirming “truth.” Uncovering deepfakes will therefore likely require a larger set of technological tools.