AI Fuels Cybercrime, FBI Warns

The FBI, the United States’ primary domestic intelligence and security agency, has warned that advances in artificial intelligence are making it increasingly easy for hackers to produce malicious software, and the agency expects the trend to grow over time.

In its alert, the FBI said hackers are using sophisticated AI tools, including ChatGPT, to generate malicious code quickly, making cybercrimes that were once harder to execute more common.

During a press briefing, the FBI voiced its concerns, noting that AI-powered chatbots are being used for a variety of unlawful acts, from scammers refining their fraudulent schemes to terrorists asking the tools for guidance on carrying out more lethal chemical attacks.

A senior FBI official remarked, “We expect these trends to intensify over time with the wider adoption and democratization of AI technologies.” The agency has observed malicious actors using AI to bolster traditional criminal schemes, including deploying AI-generated voice clones that impersonate people victims know in order to trick family members or the elderly.

There has also been a notable rise in AI-driven writing tools built by hackers specifically to target internet users.

With the advent of more sophisticated multimodal models such as GPT-4, hackers can now produce convincing deepfakes that pressure victims into revealing personal information or making financial transactions.

In response to these threats, Meta announced earlier this year that it would hold off on releasing its new speech-generation model, Voicebox, until safeguards were in place to prevent its misuse.

