How Incentives Can Help Stop Deepfakes

Deepfakes may spread lies, but they might also force the media to become more honest to maintain trust.

Disinformation. Fake news. Foreign propaganda. Damaged reputations. Distrust. 

James Best, Assistant Professor of Economics

Will this be the legacy of AI-generated deepfakes? Or will public reasoning and institutions adapt? 
The first major concern is that deepfakes will serve to spread harmful disinformation. For instance, a fake video could convince people that a presidential candidate promised to pass a deeply unpopular law. This is a problem of generating false positives: people believe something is true when in fact it is not. Of course, this is not a new problem; credulous people have always believed obvious lies, whether spoken or in print.

Many people feel, however, that a fake video is significantly different from someone simply telling a lie. It is natural to suggest that ‘seeing is believing.’ Yet economic theories of information suggest that video evidence has been compelling in the past precisely because making a fake video has been so much harder than telling a lie. After all, talk is cheap. If making fake videos becomes as cheap as making false claims, why should people trust these videos any more than they trust lies?
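This intuition can be captured in a toy Bayesian calculation (an illustrative sketch, not part of the author's formal work; the prior and the fake-production probability below are hypothetical assumptions). The posterior that a claim is true, given that video "evidence" exists, collapses toward the prior as faking gets cheaper:

```python
def posterior_true_given_video(prior_true, p_fake_given_false):
    """Bayes' rule: P(true | video) =
       P(video | true) P(true) / [P(video | true) P(true) + P(video | false) P(false)].
       Assumes a true claim is always accompanied by a genuine video,
       while a false claim comes with a (fake) video with probability
       p_fake_given_false, which falls as faking gets more costly."""
    num = 1.0 * prior_true
    den = num + p_fake_given_false * (1.0 - prior_true)
    return num / den

# When faking is expensive, few false claims come with video evidence,
# so video is highly persuasive:
print(posterior_true_given_video(0.5, 0.01))  # ~0.99

# When faking is as cheap as talk, video evidence adds nothing
# beyond the prior:
print(posterior_true_given_video(0.5, 1.0))   # 0.5
```

In other words, the persuasive power of video was never intrinsic to the medium; it was a consequence of the cost of fabrication.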

There will be many people who believe in fake videos in the coming years, just as there are people today who believe the many lies that get pushed on social media. Moreover, it will take time to adjust to the ease with which these fakes can be created. In the short run, people may be especially gullible. However, in the long run, people should come to understand the nature of the technology and treat videos, photographs and audio with the same skepticism as they currently treat spoken and written claims. 

This raises the second major concern: false negatives. People could stop trusting genuine videos, photos and audio. Indeed, a common purpose of disinformation is not to get people to believe a particular thing but to sow enough doubt to paralyze the population with uncertainty. It is well documented, for instance, that corporate lobbyists delayed the regulation of cigarette smoking and action on climate change merely by generating uncertainty about the science.

Similarly, it has been argued that the goal of Russian propaganda regarding the war in Ukraine is to generate just enough doubt about the evidence of Western media outlets and Ukrainians that other nations do not feel compelled to get involved themselves. 

So, the argument goes, deepfakes may be used to cripple the population’s ability to acquire and act on new information: not because people believe the deepfakes, but because they no longer believe anything. The most pessimistic commentators paint a picture of informational collapse: a nihilistic society incapable of responding to reality. Yet if this were true, we should also have been incapable of trusting any news article or report in the past unless it was accompanied by photographic evidence. This is patently not the case. Indeed, most of our decision-relevant information is not corroborated by photographic evidence: scientific journals, for example, are trusted without the aid of videos and use very few photographs. You can see this for yourself by reading any major newspaper; very rarely can you gather all the information you need merely by looking at a photo in the New York Times.

So where do these claims derive their credibility? In the theory of strategic communication, it comes down to whether people have an established incentive to be truthful. Sometimes a speaker clearly has no reason to lie, which is why a claim that goes against the speaker’s own interest is seen as more credible. Indeed, this is reflected in the US legal system, where admissions against interest are admissible in court despite being hearsay. When interests do conflict, however, we can often find ways of making lying more costly.

A particularly important disciplining factor is the cost of lost reputation when news companies and commentators are found to be mistaken, or worse, willfully dishonest. My theoretical work with University of Oxford professor Daniel Quigley shows how this can be harnessed to generate more credible information provision (with a special application to online platforms). 

Unfortunately, Gallup reports a dramatic decline in trust in the media over the last fifty years, which suggests that something has undermined reputational incentives for truth. While this decline precedes concerns about deepfakes, it makes their advent all the more dangerous. Surprisingly, though, deepfakes may also generate some upsides.

In an ongoing project with Dr. Quigley, I am investigating how deepfakes might improve trust in claims made by the media, albeit at the cost of trust in audiovisual evidence. The mechanism is as follows: If deepfakes undermine trust in audiovisual evidence, information providers will need to convince people they are honest arbiters of information if they want to maintain their ability to sell stories to the public. Thus, the value of a reputation for honesty and accuracy will increase, as this may become one of the few remaining sources of credibility. This in turn implies that information providers may be less willing to print falsehoods, and more eager to check the accuracy of what they are reporting. 
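The reputational mechanism can be sketched as a simple repeated-game calculation (a toy model for illustration only, not the formal model in the work with Quigley; all parameter names are hypothetical). An outlet stays honest when the discounted value of its reputation outweighs the one-time gain from a lie, and that value grows as trusted outlets become scarcer sources of credibility:

```python
def reputation_value(per_period_profit, discount):
    # Discounted value of keeping a reputation for honesty forever
    # (geometric series: profit / (1 - discount)).
    return per_period_profit / (1.0 - discount)

def honesty_sustainable(per_period_profit, lie_gain, discount, detect_prob=1.0):
    # An outlet stays honest when the expected future loss from being
    # caught lying outweighs the one-time gain from the lie.
    expected_loss = detect_prob * discount * reputation_value(per_period_profit, discount)
    return expected_loss >= lie_gain

# With a modest trust premium, a big enough scoop tempts the outlet to lie:
print(honesty_sustainable(1.0, 12.0, 0.9))  # False

# If deepfakes raise the premium commanded by trusted outlets (higher
# per-period profit from reputation), the same lie no longer pays:
print(honesty_sustainable(2.0, 12.0, 0.9))  # True
```

The comparison across the two calls is the heart of the argument: nothing about the outlet's ethics changes, only the market value of being believed.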

There are many caveats to the arguments above. For one, economic theory tends to treat people as more sophisticated reasoners than they actually are; it is well documented that people underweight the incentives of information providers when assessing the truth of statements. Furthermore, there may well be something fundamental about human cognition that causes audiovisual information to bypass our skepticism. Nonetheless, thinking about how people’s approach to evaluating evidence will evolve in response to this new technology, and the new incentives it creates, is essential for understanding how deepfakes will affect the information ecosystem in the years to come.