FBI: Bad actors are using deepfakes for remote hiring
Deepfakes (AI-generated or AI-assisted videos falsifying human beings) are a relatively known quantity even in mainstream media – particularly due to screenwriter and director Jordan Peele’s 2018 showcase of how believable the technology had become, animating a fully digital likeness of former president Barack Obama. While the tech first made its forays in the Internet’s underground, the increasing ease with which bad actors can weaponize it is raising alarms throughout most sectors – or at the very least, it should be.
The FBI said bad actors have been prioritizing positions related to IT, programming, database maintenance, and other software-related functions. The idea, it seems, would be to facilitate access to customer information, corporate financial data, and proprietary information. Of course, one also has to consider the more direct approach of straight-up infecting a company’s infrastructure with a virus (more likely ransomware, according to the latest trends) that could wreak havoc on a company’s prospects.
Deepfake technology still isn’t at the level required to fool everyone – especially the most attentive viewers, who look for de-synced movement between the “animation” and its audio, or odd inconsistencies in skin coloration – but the technology is sure to only improve. Already, companies are developing AI-chasing AIs that can detect the use of deepfakes. It’s a case of a dog chasing its own tail, and who knows which will win in the end.
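To make the detection idea concrete, here is a minimal sketch of how a hiring team might screen a recorded video interview for visual deepfake artifacts. It assumes OpenCV is installed and a local video file exists; score_frame and the file name interview.mp4 are hypothetical placeholders standing in for whatever trained detector and recording a team actually uses – this is an illustration, not the FBI’s or any vendor’s method.

import cv2

def score_frame(frame) -> float:
    """Hypothetical detector stub: return a 0.0-1.0 'likely deepfake' score.
    A production system would run a trained classifier on the frame here."""
    return 0.0  # placeholder: treats every frame as clean

def screen_video(path: str, sample_every: int = 30, threshold: float = 0.7) -> bool:
    """Sample frames from an interview recording and flag the video
    if any sampled frame scores above the threshold."""
    capture = cv2.VideoCapture(path)
    index, flagged = 0, False
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video (or unreadable file)
        if index % sample_every == 0 and score_frame(frame) > threshold:
            flagged = True
            break
        index += 1
    capture.release()
    return flagged

if __name__ == "__main__":
    # 'interview.mp4' is an assumed file name used for illustration only.
    print("Possible deepfake detected:", screen_video("interview.mp4"))

A real pipeline would also compare the audio track against lip movement, since the de-synced speech mentioned above is one of the more reliable giveaways today.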