The Deepfake Threat Is Growing: People Can No Longer Distinguish AI-Generated Content

AI-generated deepfakes have now reached a level that can deceive the human eye. Tech companies and governments are attempting to implement new measures to distinguish the real from the fake.

When AI-powered image and video generation tools first emerged, spotting fake content was relatively easy. Distorted facial proportions, faulty lip synchronization, artificial shadows, and blurry details were obvious clues that betrayed a deepfake. Generative AI models have now eliminated almost all of these telltale flaws, which makes deepfake technology a far greater threat. Security experts warn that many people are no longer able to detect deepfake content at all.

According to experts, deepfakes are approaching a critical threshold beyond which the human eye can no longer distinguish them. Security expert Perry Carpenter suggests that individuals should no longer focus on technical details but on the emotional response a piece of content triggers. Content that evokes fear, panic, urgency, or a sense of authority should be treated as a warning signal, Carpenter says, and the most effective response is to consciously slow down and start questioning the content the moment those emotions kick in.

The spread of deepfake technology has also driven a rise in digital abuse cases targeting women and children. Figures shared by the Internet Watch Foundation show that reports of AI-generated child abuse content have more than doubled in just one year. Security experts also point to the increasingly widespread use of so-called nudification or "undressing" apps.

Research reveals that the deepfake threat extends far beyond individual cases. Attacks and manipulation operations targeting institutions are also on the rise. Moreover, access to these technologies is becoming increasingly easy. Today, scammers can purchase ready-made “persona packages” containing everything from synthetic faces and fake voices to digital histories and social media profiles. This challenges biometric protection methods as well. According to Entrust’s 2026 Identity Fraud Report, deepfake attacks now play a role in one out of every five biometric fraud attempts.


The UK Is Enacting Laws to Prevent Abusive Deepfake Content

In response, tech companies have accelerated their defenses. On December 18, Google integrated SynthID, its deepfake detection tool, directly into the Gemini app, letting users check whether an image or video was generated or subsequently altered by Google's AI systems. SynthID, launched in 2023, has already embedded invisible watermarks in more than 20 billion pieces of AI-generated content, a sign of how early the company began working on the problem. Google's move, however, is only one part of the growing pressure for transparency across the industry.

Governments are also beginning to step in. The United Kingdom announced it is preparing a new law that would introduce criminal sanctions against those who develop or use "undressing" applications. Technology Secretary Liz Kendall emphasized that such tools have clearly become instruments of abuse and humiliation, stating that weaponizing technology in this way will not be tolerated. Similar steps may soon follow in other countries, as deepfake technology is becoming an ever greater threat to both individuals and institutions.
