
CyberheistNews Vol 15 #42 | October 21st, 2025
[Heads Up] Fake ‘Support Calls’ Used to Breach Your Salesforce Accounts
Google’s Mandiant has published guidance on defending against an ongoing wave of social engineering attacks targeting organizations’ Salesforce instances.
The organized criminal gang tracked by Google as “UNC6040” has been using voice phishing attacks to trick employees into granting access.
“Over the past several months, UNC6040 has demonstrated repeated success in breaching networks by having its operators impersonate IT support personnel in convincing telephone-based social engineering engagements,” the researchers write.
“This approach has proven particularly effective in tricking employees, often within English-speaking branches of multinational corporations, into actions that grant the attackers access or lead to the sharing of sensitive credentials, ultimately facilitating the theft of organizations’ Salesforce data. In all observed cases, attackers relied on manipulating end users, not exploiting any vulnerability inherent to Salesforce.”
Mandiant recommends that organizations use a defense-in-depth strategy with measures to ensure that callers are who they say they are. In some cases, the attackers impersonate support personnel from third-party vendors in an attempt to gain access. Help desk employees who receive these calls should do the following:
- “End the inbound call without providing any access or information.
- Independently contact the company’s designated account manager for that vendor using trusted, on-file contact information.
- Require explicit verification from the account manager before proceeding with any request.”
Additionally, employees should be wary of unsolicited requests to log into services used by their organization. These may be phishing attacks designed to steal their credentials.
“Mandiant has observed the threat actor UNC6040 targeting end-users who have elevated access to SaaS applications,” the researchers write. “Posing as vendors or support personnel, UNC6040 contacts these users and provides a malicious link.
“Once the user clicks the link and authenticates, the attacker gains access to the application to exfiltrate data. To mitigate this threat, organizations should rigorously communicate to all end-users the importance of verifying any third-party requests.”
Blog post with links:
https://blog.knowbe4.com/protect-yourself-from-voice-phishing-attacks-targeting-salesforce-instances
Can’t Miss Sessions at the Human Risk Summit
Your users hold the key to your strongest defense. This November 6, discover how to make that a reality at the Human Risk Summit. Walk away with battle-tested strategies that transform your users into your strongest asset, plus actionable tools to improve your organization’s security culture.
This half-day event brings together forward-thinking IT leaders like you with sessions covering top cybersecurity threat trends and innovative approaches you can implement immediately.
Sneak peek into the agenda:
- Keynote: Security in the Age of Everything-as-a-Weapon – explore how cybercriminals are mastering AI faster than many organizations are adapting their defenses.
- IT Leader Panel: Building Adaptive Security Culture – featuring seasoned IT leaders sharing their real-world experiences in building adaptive security cultures and reducing human risk.
- 2026 Phishing Threat Trends Preview – get a first look at our latest Phishing Threat Trends Report, walk through attack scenarios, and learn about the trends that are shaping the threat landscape.
- The Deepfake Training Playbook – learn the essential training frameworks to help your users recognize and respond to AI-driven manipulation attempts.
Plus: Don’t miss hands-on workshops and an exclusive preview of what’s next on the Human Risk Management roadmap.
Save My Spot:
https://gateway.on24.com/wcc/eh/1815783/human-risk-management-summit?partnerref=CHN2
We Need to Teach Our AIs to Securely Code
By Roger Grimes
I have been writing about the need to better train our programmers in secure coding practices for decades.
At least a third of data compromises involve exploited software and firmware vulnerabilities, and we are on track to see over 47,000 separate, publicly known vulnerabilities this year. That works out to at least 130 new vulnerabilities discovered and publicly reported every day, day after day. That is a lot of exploitation. That is a lot of patching.
And until now, what I have said is that we need to:
- Better train our coders in secure coding practices
- Ensure programming curricula teach secure coding practices
- Have employers require secure coding skills in the programmers they hire
Well, that is all old news now. We no longer need it.
What we now need is to teach AI how to code more securely.
Of all the productivity gains that have come with AI, its ability to write code (and to assist developers in writing code) is easily the biggest development to come out of the current level of AI maturity. Almost every coder alive is using AI to code, and those who are not soon will be.
The productivity gains are very impressive. My coder friends say they are experiencing at least a 30% to 40% productivity increase by using AI. Even my programmer friends who were originally AI skeptics have come around. Coding is fast becoming an AI-driven activity, although humans still need to be in the loop.
The time to train our programmers in secure coding has passed.
If AI is doing most of the coding, it is time for AI to be made to do secure coding. And right now, it is not doing it well. Every study I have seen on the matter shows that AI is as bad as, or worse than, human programmers at secure coding.
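To make that concrete, here is a deliberately simplified, hypothetical example (not drawn from any one study) of the kind of insecure pattern code assistants still frequently generate, shown next to the secure fix:

```python
import sqlite3

# Insecure pattern often emitted by code assistants: building SQL via
# string interpolation, which is vulnerable to SQL injection.
def find_user_insecure(conn, username):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Secure version: a parameterized query lets the driver handle escaping,
# so attacker-controlled input cannot change the query's structure.
def find_user_secure(conn, username):
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# The classic injection payload leaks every row from the insecure query...
payload = "' OR '1'='1"
print(len(find_user_insecure(conn, payload)))  # 2 (all users returned)
# ...but matches nothing when the query is parameterized.
print(len(find_user_secure(conn, payload)))    # 0
```

The difference is one line, which is exactly why a model optimizing for "code that runs" rather than "code that is safe" so often produces the first version.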
[CONTINUED] at the KnowBe4 blog with links:
https://blog.knowbe4.com/we-need-to-teach-our-ais-to-securely-code
2025 Phishing Threat Trends Report
Our Phishing Threat Trends Reports bring you the latest insights into the hottest topics in the phishing attack landscape. In 2025, it’s been in with the old and in with the new, as cybercriminals use new techniques to “revive” the efficacy of existing attacks.
Download this latest edition to discover:
- What’s driving a resurgence in ransomware delivered by phishing emails
- How cybercriminals have achieved a 47% increase in attacks evading Microsoft’s native security and secure email gateways
- Which jobs cybercriminals are most likely to apply for in your organization
- How 92% of polymorphic attacks utilize AI to achieve unprecedented scale – and change the phishing landscape for good
- Plus, other top phishing stats for 2025
Download Now:
https://info.knowbe4.com/phishing-threat-trends-report-chn
The Compliance Catch-22: How Financial Institutions Can Master Data Governance and Regulatory Risk
The financial services industry operates in one of the most heavily regulated environments in the business world. With sensitive client data flowing through every transaction and communication, financial institutions face an increasingly complex web of compliance requirements that can make or break their operations. Traditional approaches to data governance simply aren’t cutting it anymore.
The Perfect Storm of Regulatory Challenges
Financial institutions today must navigate a labyrinth of regulatory frameworks that would challenge even the most seasoned compliance professionals. From the Gramm-Leach-Bliley Act (GLBA) to SEC requirements, FINRA regulations, and global frameworks like GDPR, each comes with its own set of rules, reporting requirements and penalty structures.
What makes this particularly challenging is that these regulations often overlap and sometimes conflict, creating a compliance puzzle that requires constant attention and expertise.
Under GDPR alone, financial institutions face potential penalties of up to 4% of global revenue for serious violations. In 2023, FINRA reported a staggering 63% increase in fines, reaching $89 million.
Despite all the sophisticated technology and security measures financial institutions have implemented, 68% of data breaches still stem from human error, not system flaws. The top culprit? “Misdelivery”—simply sending sensitive information to the wrong recipients.
It’s a humbling reminder that even in our digital age, the human element remains both our greatest asset and our biggest vulnerability.
The Hidden Costs of Traditional Compliance Approaches
Most financial institutions have built their compliance strategies around detection and response rather than prevention. They’ve invested heavily in monitoring systems, incident response teams and remediation processes.
While these elements are important, they represent a reactive approach to a problem that demands proactive solutions.
When a data breach occurs due to an employee accidentally sending client financial information to the wrong recipient, the real costs extend far beyond immediate regulatory fines. There’s the damage to client trust, the reputation hit that can last for years, the operational disruption of incident response and the long-term impact on business relationships.
[CONTINUED] at the KnowBe4 blog:
https://blog.knowbe4.com/the-compliance-catch-22-how-financial-institutions-can-master-data-governance-and-regulatory-risk
Identify Weak User Passwords in Your Organization With the Newly Enhanced Weak Password Test
Cybercriminals never stop looking for ways to hack into your network, and if your users’ passwords can be guessed, the bad actors’ job becomes that much easier.
Verizon’s Data Breach Investigations Report showed that 81% of hacking-related breaches use either stolen or weak passwords.
The Weak Password Test (WPT) is a free tool to help IT administrators know which users have passwords that are easily guessed or susceptible to brute force attacks, allowing them to take action toward protecting their organization.
Weak Password Test checks Active Directory for several types of weak password-related threats and generates a report of users with weak passwords.
Here’s how Weak Password Test works:
- Connects to Active Directory to retrieve password table
- Tests against 10 types of weak password-related threats
- Displays which users failed and why
- Does not display or store the actual passwords
- Just download, install and run. Results in a few minutes!
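For readers curious about the general technique, here is a minimal sketch of how any weak-password audit works in principle: hash a wordlist of known-weak passwords and compare it against stored hashes, without ever exposing plaintext. All names and data below are invented for illustration, and this is not KnowBe4’s implementation (real directories use salted or NTLM hashes, not bare SHA-256):

```python
import hashlib

# Toy hashing scheme, for illustration only; production systems use
# salted password hashing, which this sketch deliberately omits.
def sha256_hex(password):
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# Hypothetical stored password hashes for two users.
stored_hashes = {
    "alice": sha256_hex("Winter2025!"),     # weak: seasonal pattern
    "bob": sha256_hex("x7$Qp!v9Lz#2rTmB"),  # strong: random
}

# A tiny stand-in for a breached/weak-password wordlist.
weak_passwords = ["password", "123456", "Winter2025!", "qwerty"]

def find_weak_users(stored, wordlist):
    """Flag users whose hash matches a known-weak password.

    Only the fact of a match is reported, never the plaintext."""
    weak_hashes = {sha256_hex(p) for p in wordlist}
    return sorted(u for u, h in stored.items() if h in weak_hashes)

print(find_weak_users(stored_hashes, weak_passwords))  # ['alice']
```

This is also why such audits can report which users failed without displaying or storing the actual passwords: the comparison happens entirely in hash space.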
Don’t let weak passwords be the downfall of your network security. Take advantage of KnowBe4’s Weak Password Test and gain invaluable insights into the strength of your password protocols.
Download Now:
https://info.knowbe4.com/weak-password-test-chn
Phishing Remains the Top Initial Access Vector in Cyberattacks Across Europe
Phishing was the initial access vector for 60% of cyberattacks across Europe between July 2024 and June 2025, according to the European Union Agency for Cybersecurity (ENISA).
“With regards to the primary method for initial intrusion, phishing (including vishing, malspam and malvertising) is identified as the leading vector, accounting for about 60% of observed cases,” the agency says.
“Advancements in its deployment, such as Phishing-as-a-Service (PhaaS) that allows the distribution of ready-made phishing kits, indicate an automation that paves the way for attackers regardless of their experience.”
The agency warns that AI tools have introduced new risks by assisting in cyberattacks and as a target for attacks themselves.
“The growing role of AI has become an undeniable key trend of the rapidly evolving threat landscape,” the researchers write. “The report highlights AI use both as an optimization tool for malicious activities but also as a new point of exposure.”
Large Language Models (LLMs) are being used to enhance phishing and automate social engineering activities. By early 2025, AI-supported phishing campaigns reportedly represented more than 80% of observed social engineering activity worldwide.
“Attacks on the AI supply chain are on the rise. While the focus of threat activities involving AI was the use of consumer-grade AI tools to enhance their existing operations, the emergent malicious AI systems [are] raising concerns about their capabilities in the future due to the widespread use of AI models.”
ENISA also notes an increase in supply chain attacks, which can allow threat actors to scale their attacks by going after a victim’s customers.
“Closely linked to recent events in the EU, an increase in targeting cyber dependencies has been noted,” the agency says. “Cybercriminals have intensified their efforts to abuse critical dependency points, for example in the digital supply chain, to get the most out of their attacks.
“This method is able to magnify the impact of actions by leveraging the interconnectedness inherent in our digital ecosystems.”
KnowBe4 empowers your workforce to make smarter security decisions every day. Over 70,000 organizations worldwide trust the KnowBe4 HRM+ platform to strengthen their security culture and reduce human risk.
Blog post with links:
https://blog.knowbe4.com/phishing-remains-the-top-initial-access-vector-in-cyberattacks-across-europe
Let’s stay safe out there.
Warm regards,
Stu Sjouwerman, SACP
Executive Chairman
KnowBe4, Inc.
PS: I am still Exec Chair of KnowBe4, but I started a new company! Recommend that your marketing team sign up for the Beta Waitlist today:
https://www.readingminds.ai/
Quotes of the Week
“Go confidently in the direction of your dreams. Live the life you have imagined.”
– Henry David Thoreau – Author (1817 – 1862)
“The great use of life is to spend it for something that will outlast it.”
– William James – Philosopher (1842 – 1910)
You can read CyberheistNews online at our Blog
https://blog.knowbe4.com/cyberheistnews-vol-15-42-heads-up-fake-support-calls-used-to-breach-your-salesforce-accounts
Security News
OpenAI Warns About Phishing Attackers: The Ping, the Zing and the Sting
Attackers continue to exploit AI tools like ChatGPT to assist in social engineering attacks, according to a new report from OpenAI.
The report describes one threat actor, believed to be tied to China, that was using ChatGPT to write phishing emails and develop malware. The researchers note that ChatGPT did not craft any new attack techniques, but it improved the efficiency and sophistication of routine social engineering attacks.
“Our model did not introduce novel offensive capabilities,” the researchers write. “The operators appear to have primarily used our models to seek incremental efficiency in existing workflows, especially crafting phishing content and debugging or modifying their tooling.
“The actors used ChatGPT to perform two main tasks: generating content for phishing campaigns in multiple languages, including Chinese (both simplified and traditional), English, and Japanese, and helping to develop tools and malware. Their development work was consistent with a technically competent but unsophisticated actor.”
These findings were consistent with other attack campaigns that abused the company’s tools. The researchers explain, “The tradecraft advantage sought through model assistance came from linguistic fluency, localization, and persistence: likely fewer language errors, faster glue code, and quicker adjustments when something failed.”
The researchers also identified large-scale scam operations that used AI to automate attacks. “Abuse of our models to support scams ranges from lone actors attempting fraud to scaled and persistent operations likely linked to organized crime groups,” OpenAI says. “Regardless of their origins and precise tactics, the scam-related activity we’ve disrupted typically follows a common pattern, which we think of as the ping (cold outreach), the zing (trying to generate enthusiasm or panic), and the sting (extracting money or valuable information).
“These scammers start out by scattering content (whether AI-generated or not) across messaging services and the internet, including by running social media ads. They then attempt to inspire anyone who replies with either enthusiasm for a lucrative opportunity or fear of some imminent financial loss, and leverage that emotion to convince the target to hand over money or sensitive information.”
OpenAI has since banned the accounts associated with this activity, but threat actors are constantly looking for ways to bypass AI safety measures. New-school security awareness training gives your organization an essential layer of defense against evolving social engineering attacks.
OpenAI has the story:
https://openai.com/global-affairs/disrupting-malicious-uses-of-ai-october-2025/
North Korea’s Remote Worker Schemes Target the Architecture Industry
Researchers at KELA warn that North Korea’s remote worker schemes have expanded to target organizations in the architectural design industry. KELA found that these workers infiltrated industrial design and architecture companies across multiple U.S. states.
The researchers note that “[t]heir involvement could pose risks related to espionage, sanctions evasion, safety concerns, and access to sensitive infrastructure designs.”
“Operating under fake identities, often from China, Russia, Hong Kong, Southeast Asia, or even within North Korea via controlled internet access tools, these workers use VPNs, VPSs, Western accomplices, and ‘laptop farms’ to conceal their origins and bypass verification,” the researchers explain.
“They secure freelance or full-time jobs on major platforms by leveraging stolen or rented identities, AI-generated photos, and fraudulent portfolios, ultimately infiltrating companies across technology, crypto, transportation, and critical infrastructure sectors.
“Once inside, they either quietly funnel salaries back to the regime or exploit their access to deploy malware, steal data, or conduct extortion.”
The researchers say companies should be on the lookout for the following red flags associated with prospective hires:
- “Suspicious freelancer profiles: Limited work history, unverifiable portfolios, AI-generated headshots, or recurring email/address patterns (e.g., birth years, animals, colors, mythological names).
- Identity inconsistencies: Resumes with mismatched details, multiple personas linked to the same “worker,” or unusual geolocation data when verifying candidates.
- Unexpected skill overlaps: Candidates offering expertise across disparate fields, such as software development and structural engineering, may indicate fabricated or pooled identities.”
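As a purely illustrative sketch of automating the first red flag, a screening script could surface candidate email addresses that combine birth-year digits with animal, color, or mythological keywords. The keyword list and patterns below are invented for this example; a match is a signal for closer human review, never proof of anything:

```python
import re

# Hypothetical keyword list echoing the pattern categories KELA describes.
SUSPECT_WORDS = {"dragon", "tiger", "wolf", "phoenix", "zeus", "apollo",
                 "red", "blue", "silver", "golden"}
# Four-digit years a working-age adult might use (1960-2009).
BIRTH_YEAR = re.compile(r"(19[6-9]\d|200\d)")

def flag_email(email):
    """Return the list of red-flag reasons matched by an email's local part."""
    local = email.split("@", 1)[0].lower()
    reasons = []
    if BIRTH_YEAR.search(local):
        reasons.append("birth-year pattern")
    if any(w in local for w in SUSPECT_WORDS):
        reasons.append("animal/color/myth keyword")
    return reasons

print(flag_email("silverdragon1988@example.com"))
# ['birth-year pattern', 'animal/color/myth keyword']
print(flag_email("j.smith@example.com"))  # []
```

In practice a check like this would be one small input into the enhanced background checks and vetting tools KELA recommends, not a standalone filter.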
Additionally, KELA notes, “Security awareness can’t stop at the SOC – companies should educate HR and recruiting staff on common red flags tied to DPRK operatives. This includes training to spot falsified identities, running enhanced background checks, and integrating vetting tools into the hiring pipeline.”
KELA has the story:
https://www.kelacyber.com/blog/espionage-exposed-inside-a-north-korean-remote-worker-network/
What KnowBe4 Customers Say
“Bryan, Appreciate you checking in. We’re finding good value out of the platform. Our phish-prone % has halved in the past 3-4 months. We’re training repeat clickers and building a culture of security. More updates to come. Great product so far.”
– H.K., Program Administrator
“Bryan, yes, we are good so far. Our CSM (John B.) has been great with his coaching and onboarding guidance. We were able to smoothly roll out the Cyber Security Awareness training to all our staff (100 +). I appreciated being able to see their progress and completion rates on the Dashboard. Thank you for asking!”
– A.D., Sr. Director Operations
The 10 Interesting News Items This Week
Cyberheist ‘Fave’ Links