
The Rising Threat of Vishing Attacks and Deepfakes

by Duncan McDonald

11 August 2025

What is vishing, and why is it a rising concern in 2025?

In recent months, there has been a marked increase in cyber attacks leveraging social engineering tactics, particularly those involving voice-based deception. 

In the 2024 Annual Threat Intelligence Report, our analysts spotlighted how rapid advances in artificial intelligence (AI) are fueling a new generation of social engineering tactics. AI-driven tools are making phishing schemes more convincing, while deepfake technology and generative language models make voice-based attacks like vishing increasingly difficult to detect and defend against. 

While technical controls have improved at blocking phishing emails from reaching inboxes, an attack vector gaining traction is voice phishing, or vishing, which refers to telephone-based social engineering. Threat actors use this method to bypass email controls and directly target employees. Groups like Scattered Spider have especially drawn attention for their use of this tactic.

As adversaries pivot to more direct and personalised approaches, understanding the evolving threat landscape around vishing is critical for both security teams and employees alike.

How threat actors use social engineering and OSINT

Unlike phishing, vishing uses phone calls as the primary attack vector. Threat actors impersonate trusted sources such as IT helpdesks, senior executives, or service providers to manipulate their targets into revealing sensitive information or performing actions such as password resets or MFA changes.

These attacks are often supported by Open-Source Intelligence (OSINT). Threat actors use social media platforms like LinkedIn to identify employees, understand their roles, and map the organisational structure.


Vishing: Beyond just a phone call


Targeting

Vishing attacks can be as simple as contacting the IT Helpdesk, posing as legitimate employees who have "upgraded their mobile phone and lost access to their MFA app" or "lost access to their password manager." In organisations without a comprehensive caller policy or with undertrained helpdesk teams, these calls can lead to unauthorised access being granted.

Caller verification policy

Your organisation's verification policy for confirming callers should be considered public. Threat actors can place multiple calls to IT helpdesks over time, gradually piecing together the policy. While each individual call may seem inconsequential, the cumulative effect allows threat actors to map out verification processes, identify gaps, and ultimately bypass security controls more effectively. 

After enumerating the verification policy, threat actors can research staff and the organisation online or through calls to other users, such as the Reception or Customer Services Teams, to obtain the required information to successfully verify their identity with the IT helpdesk. Verification questions such as "Who is your line manager?" or "What was your start date?" and "Can you confirm your job title?" are common but trivial to answer using publicly available information from social media.

Alternatively, threat actors may call end users directly, posing as members of the IT helpdesk team. These calls can coincide with phishing emails or SMS messages (smishing) to increase urgency and credibility. A typical script might involve requesting the MFA PIN from a user:

"Hey Adam, it's Rory from helpdesk. I've been forwarded a ticket from networks as they're experiencing some issues with your MFA device. It appears that there's an error with the syncing of the MFA PIN.

So that we can make sure you don't lose access, would you mind opening up your MFA app and letting me know what numbers are currently displaying? It may be worth waiting until the next cycle. I will then make sure everything is in sync."


Caller ID spoofing and deepfakes

Phone number spoofing is simple, allowing threat actors to make their calls appear to come from legitimate and known telephone numbers.

Moreover, real-time deepfake voice cloning can now be used to impersonate individuals within organisations. With only a few minutes of recorded speech, often gathered from public podcasts, social media, or corporate videos, threat actors can create AI-generated voices nearly indistinguishable from the real person.

Best practices to prevent voice phishing attacks

While it is not possible to block all spoofed calls or prevent deepfakes, organisations can mitigate vishing risks with a structured approach:

Policies: A comprehensive policy should be in place outlining how IT helpdesk staff and end users verify the identity of incoming callers, along with the specific steps to follow when handling such requests.

Call verification: Caller verification questions should not rely on information considered public, such as job role, line manager, or start date. They should be unique to each staff member and not include generic questions about the organisation.

Three-way video calls: Establish identity via a three-way video call between the line manager, the user, and the IT helpdesk, in which the line manager asks the user questions that only the two of them would know.

Awareness training: Ensure staff can recognise social engineering tactics and verify unusual requests through secondary channels.

Call-back policies: Instruct employees to end calls and call back on numbers recorded internally.

IAM controls: Require secondary approval before helpdesk staff can reset passwords or grant access.

Monitoring and reporting: Encourage prompt reporting of suspicious calls.

Trigger alerts: All password reset attempts (successful or not) should trigger an email to the affected user, alerting them if someone is trying to reset their password.
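The "trigger alerts" control above can be sketched in a few lines. This is a minimal illustration under assumed names, not a production implementation: the `ResetAttempt` record and `build_reset_alert` helper are hypothetical, and the actual delivery mechanism (SMTP, ticketing, SIEM integration) is deliberately left out.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ResetAttempt:
    """A single password reset attempt, as logged by the identity system."""
    username: str
    source_ip: str
    succeeded: bool
    timestamp: datetime


def build_reset_alert(attempt: ResetAttempt) -> str:
    """Format the alert sent to the account owner for every reset attempt,
    successful or not, so the legitimate user can spot fraudulent resets."""
    outcome = "succeeded" if attempt.succeeded else "FAILED"
    return (
        f"A password reset for your account '{attempt.username}' {outcome} "
        f"at {attempt.timestamp.isoformat()} from {attempt.source_ip}. "
        "If this was not you, contact the security team immediately."
    )


if __name__ == "__main__":
    attempt = ResetAttempt(
        username="adam",
        source_ip="203.0.113.7",
        succeeded=False,
        timestamp=datetime(2025, 8, 11, tzinfo=timezone.utc),
    )
    print(build_reset_alert(attempt))
```

The key design point is that failed attempts alert too: a vishing caller who fails helpdesk verification still leaves a trail the targeted employee can report.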

Building a resilient security culture against vishing

As technical controls improve at preventing phishing emails from landing and staff receive years of phishing training, threat actors are becoming more creative by using voice-based social engineering techniques, including deepfakes.

Without strict verification policies, ongoing staff training, and periodic social engineering assessments, threat actors can exploit these weaknesses.

Recognising that the voice on the other end of the phone may not be who they claim to be is the first step in building a more resilient security culture.


Duncan McDonald

UK Lead for Technical Assurance Services, NCC Group 

Duncan has worked in the cyber security industry for over 20 years and has extensive experience designing, building, implementing, and running services to protect organisations across Financial Services, Government, Critical National Infrastructure, and Commercial sectors.

Learn more about NCC Group's vishing and social engineering prevention services.