A new survey found that 50% of UK residents aged 16 to 34 cite deepfake nudes as their top AI-related worry, SecurityBrief reports.
The survey, published by VerifyLabs, found that 35% of Brits across all age groups said sexualized deepfakes of themselves or their children were their top concern.
“The study indicated that more than one in three respondents (36%) are also worried about the impact deepfakes could have on their family and friends,” SecurityBrief writes. “These findings point to serious emotional and psychological risks associated with the malicious use of deepfake technology, especially when it targets individuals or their loved ones.”
More than half (55%) of UK adults cited financial losses from AI-enabled scams and fraud as their top fear. Cybercriminals are increasingly using AI tools to craft extremely convincing social engineering attacks.
“Financial risks associated with deepfakes remain a prominent fear,” SecurityBrief writes. “According to the research, more than half of those surveyed (55%) cited uses for scams and fraud as their greatest concern. Almost half (47%) highlighted sophisticated business fraud, including blackmail, criminal activity, and the potential loss of life savings, as their leading worry. A further 44% are apprehensive about AI-generated content facilitating unauthorised access to personal or sensitive information.”
Additionally, SecurityBrief notes that “10% of participants are unsure what constitutes a deepfake call, demonstrating a need for greater public education on the forms and risks of audio-based deepfake scams.”
These attacks will only grow more convincing as AI tools become more sophisticated. AI-powered security awareness training can enable your employees to stay ahead of evolving social engineering threats. KnowBe4 empowers your workforce to make smarter security decisions every day. More than 70,000 organizations worldwide trust the KnowBe4 HRM+ platform to strengthen their security culture and reduce human risk.
SecurityBrief has the story.