Cyber Security | November 18, 2025

AI and Cybersecurity: Protecting Your Finances in a Rapidly Changing Threat Landscape

AI is a double-edged sword in cybersecurity. While it is an invaluable tool in the fight against cybercrime, it also empowers malicious actors to refine tactics, scale attacks, and take a businesslike approach to fraud. Duncan Taylor, Infinity’s compliance officer, explores how generative AI is reshaping the cybersecurity landscape and shares practical advice for keeping your finances safe.

Generative AI as Cybersecurity Ally

AI capabilities continue to advance at an astonishing speed. With the threat of a cyberattack ever present, cybersecurity professionals are increasingly harnessing the power of AI to keep the digital world safe and prevent attacks such as phishing, malware, and distributed denial-of-service (DDoS) attacks.

AI can collect and analyse data and spot patterns and unusual behaviours far more quickly than a human. In practical terms, that means AI can assist with mitigating risk, detecting the signs of an attempted cyber incident, and preventing cyberattacks. The automation of many manual and laborious cybersecurity tasks also frees up valuable time and resources for cybersecurity professionals to focus on other key aspects of their work.
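
To make this concrete, here is a minimal sketch of the kind of anomaly detection involved, using scikit-learn's IsolationForest on a handful of invented login features. The features and thresholds are purely illustrative; production systems draw on far richer signals.

```python
# A minimal sketch of AI-assisted anomaly detection using scikit-learn.
# The login features below are invented for illustration; real systems
# draw on hundreds of signals (device, location, velocity, and so on).
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, failed_attempts, transfer_amount_gbp]
normal_activity = np.array([
    [9, 0, 120.0],
    [10, 1, 80.0],
    [14, 0, 250.0],
    [17, 0, 60.0],
] * 25)  # repeated so the model has enough samples to fit

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)

# A 3 a.m. login with six failed attempts and a large transfer stands out.
suspicious = np.array([[3, 6, 9500.0]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```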

However, AI is not just a force for good. Malicious actors have, as ever, been quick to jump on the technological advances offered by generative AI. Indeed, CrowdStrike’s 2025 Global Threat Report talks of an ‘arms race’ in cyber operations.

Generative AI as Cybersecurity Adversary

The threat landscape is evolving extremely fast, with hackers weaponising AI to scale their operations, exploit vulnerabilities at record speed, and create deceptive content that is nearly undetectable with traditional defences.

According to CrowdStrike, ‘Cybercrime is becoming a highly efficient business, using automation, AI, and advanced social engineering to scale attacks and maximise impact. From vishing to identity-based intrusions, adversaries are more organised and effective than ever.’

How Malicious Actors are Using Generative AI

  1. AI and deepfakes

Criminals are using deepfake technology to commit fraud against individuals and businesses by deploying AI-generated voices and videos to impersonate executives, colleagues, or even family members to initiate fraudulent fund transfers. One shocking example concerned a Hong Kong finance employee who attended a video conference with what he thought were his co-workers, including the company’s chief financial officer, and subsequently transferred HK$200 million (around US$25 million) to scammers. The attendees on the call were actually deepfakes, created from publicly available videos.

  2. AI and phishing, vishing, and smishing

While most of us are well versed in traditional phishing attempts, such as the email from a high-ranking official promising a large sum of money in return for help transferring funds, AI is enabling fraudsters to personalise phishing emails, phone calls (voice phishing, or vishing), and SMS messages (SMS phishing, or smishing) to a degree that makes them far harder to detect, fooling even the most vigilant individuals into revealing personal information that can be used for fraudulent purposes. Examples include tech support scams targeting older adults, attackers posing as bank representatives to extract personal information or even persuade victims to transfer funds to ‘fix’ an issue, and fraudsters impersonating government officials to issue false warnings about unpaid taxes. Vishing attacks soared by 442% between the first and second half of 2024.
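
For a sense of what these messages are up against, the sketch below shows the kind of rule-based scoring traditional filters rely on. The keywords and weights are invented for this example, and AI-personalised messages are crafted precisely to slip past checks this simple.

```python
# A minimal sketch of rule-based phishing scoring. The keyword list and
# weights are invented for illustration, not a production filter.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify now", "final notice"}

def phishing_score(sender_domain: str, reply_to_domain: str, body: str) -> int:
    score = 0
    if sender_domain != reply_to_domain:
        score += 2  # a mismatched reply-to address is a classic tell
    body_lower = body.lower()
    score += sum(1 for word in URGENCY_WORDS if word in body_lower)
    if "http://" in body_lower:
        score += 1  # unencrypted links are another red flag
    return score

body = "Your account is suspended. Verify now or transfer funds immediately."
print(phishing_score("mybank.com", "mybank-support.xyz", body))  # 5: high risk
```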

  3. Automating vulnerability discovery with AI

AI is helping attackers scan systems, analyse code, and identify weaknesses at a speed and scale far beyond manual efforts. By automating the reconnaissance phase, hackers can exploit vulnerabilities faster than organisations can patch them, putting constant pressure on traditional security teams.
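
Defenders automate the same kind of scanning in reverse. The sketch below audits installed Python packages against a small advisory list; the "vulnerable" versions shown are hypothetical, and real tooling queries feeds such as the NVD or OSV instead.

```python
# A minimal sketch of the defensive mirror image: auditing installed
# Python packages against an advisory list. The "vulnerable" versions
# here are hypothetical; real tools query feeds such as the NVD or OSV.
from importlib.metadata import distributions

KNOWN_VULNERABLE = {"requests": "2.19.0", "urllib3": "1.24.1"}  # hypothetical

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split(".") if part.isdigit())

for dist in distributions():
    name = (dist.metadata["Name"] or "").lower()
    if name in KNOWN_VULNERABLE and parse(dist.version) <= parse(KNOWN_VULNERABLE[name]):
        print(f"WARNING: {name} {dist.version} matches a known advisory; patch it")
```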

  4. Weaponising disinformation campaigns

AI-generated content – ranging from fake news articles to synthetic social media accounts – allows malicious actors to manipulate public opinion, destabilise markets, or tarnish corporate reputations. These campaigns are cheap to launch, difficult to trace, and highly effective at sowing confusion and distrust.

While this kind of disinformation campaign can obviously be used for political ends, such as the deepfake video of Joe Biden making transphobic remarks that circulated in the US in 2023, the tactic is moving into the corporate sphere. Fabricated or deliberately manipulated audio or visual content can be used in smear campaigns and corporate sabotage, causing serious damage to a company’s reputation, undermining its financial stability, and potentially leading to its demise.

The World Economic Forum’s Global Risks Report 2025 ranks misinformation and disinformation fourth among the global threats ‘most likely to present a material crisis on a global scale in 2025’, and first among global risks ranked by likely impact over the next two years.

  5. Interactive intrusions

Over the past year, interactive intrusions, or hands-on cyberattacks, have increased by 27%. These are human-driven intrusions in which malicious actors pose as legitimate users and adapt their tactics in real time. They typically involve no malware and are notoriously difficult to detect because the attackers mimic trusted behaviour. AI supercharges these intrusions by making them faster, stealthier, and more scalable. According to CrowdStrike, ‘79% of detections in 2024 were malware-free, up from 40% in 2019.’
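
One simple defensive rule against such intrusions is ‘impossible travel’: flagging two logins on the same account from places no human could move between in the time available. The sketch below uses the haversine formula with illustrative coordinates and an assumed speed threshold.

```python
# A minimal sketch of an "impossible travel" check, one rule defenders
# use to spot malware-free intrusions. Coordinates and the speed
# threshold are illustrative.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine), in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

# Two logins on one account: London, then Hong Kong 30 minutes later.
distance = km_between(51.5, -0.13, 22.3, 114.17)
hours = 0.5
if distance / hours > 900:  # faster than any commercial flight
    print(f"ALERT: {distance:.0f} km in {hours} h is not physically possible")
```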

  6. Help-desk social engineering

A growing trend involves attackers targeting an organisation’s IT helpdesk. In these attacks, malicious actors impersonate genuine employees, often using publicly available information or prior breaches to sound convincing. Their goal is to persuade help-desk staff to reset passwords, bypass multi-factor authentication, or provide other sensitive access.

Because these requests appear legitimate and come from seemingly trusted sources, they are particularly difficult to detect. Even well-trained staff can be tricked, which makes this type of attack a significant threat to organisations handling sensitive financial data.

  7. AI-powered malware and ransomware

Hackers are beginning to use AI to create polymorphic malware that constantly changes its code to evade detection. Traditional signature-based security tools struggle to keep up with this, making attacks harder to spot and contain. Similarly, AI can optimise ransomware campaigns – from identifying the most valuable targets to automating negotiations for ransom payments.
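
The sketch below illustrates why signature matching breaks down: a single automated mutation gives the ‘same’ program a completely different hash, so a hash-based signature database never matches twice. The payload here is just a placeholder string.

```python
# A minimal sketch of why hash-based signatures fail against polymorphic
# code: one trivial mutation yields a completely different hash, so a
# signature database never matches twice. The payload is a placeholder.
import hashlib

payload = b"pretend-this-is-malicious-code"
mutated = payload.replace(b"code", b"c0de")  # a tiny automated mutation

print(hashlib.sha256(payload).hexdigest()[:16])
print(hashlib.sha256(mutated).hexdigest()[:16])
# The two "signatures" share nothing, yet the behaviour is unchanged,
# which is why modern defences watch behaviour rather than file hashes.
```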

  8. Data poisoning and model manipulation

Attackers can intentionally corrupt training data to poison AI models, causing them to behave unpredictably or insecurely. For example, feeding flawed data into a company’s AI fraud detection system could make it overlook suspicious transactions, effectively creating blind spots in security.
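
The toy example below shows the mechanism on synthetic data: relabelling a slice of large transactions as legitimate is enough to make a simple fraud classifier wave through a transaction the clean model would have flagged. The numbers are invented; only the principle carries over.

```python
# A minimal sketch of label poisoning against a toy fraud classifier.
# All data is synthetic; only the mechanism matters here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
amounts = rng.uniform(0, 10, size=500).reshape(-1, 1)  # amount, £ thousands
labels = (amounts.ravel() > 8).astype(int)             # 1 = fraud

clean = LogisticRegression().fit(amounts, labels)

# Poison the training set: relabel large transactions as legitimate.
poisoned_labels = labels.copy()
poisoned_labels[amounts.ravel() > 8.5] = 0
poisoned = LogisticRegression().fit(amounts, poisoned_labels)

test = np.array([[9.5]])  # a £9,500 transaction
print("clean model flags fraud:   ", clean.predict(test)[0])     # expected: 1
print("poisoned model flags fraud:", poisoned.predict(test)[0])  # expected: 0
```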

  9. AI-powered password cracking and credential stuffing

Machine learning models can analyse leaked password datasets to predict likely credentials with unprecedented speed and accuracy. Combined with automation, this allows attackers to launch large-scale credential-stuffing attacks against online services far more efficiently.

Strong, unique passwords are a critical line of defence. Employees should never reuse passwords across accounts, and multi-factor authentication should always be enabled wherever possible. Organisations should also enforce regular password updates and provide clear procedures for verifying any help-desk requests. By combining vigilant staff practices with robust password policies, businesses can greatly reduce the risk posed by credential stuffing and by the help-desk social engineering attacks described earlier.
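
On the defensive side, passwords can also be screened against known breaches without ever sending the password itself. The sketch below uses the Pwned Passwords k-anonymity range API, where only the first five characters of the SHA-1 hash leave your machine; error handling is omitted for brevity.

```python
# A minimal sketch of screening a password against known breaches using
# the Pwned Passwords k-anonymity API: only the first five characters of
# the SHA-1 hash leave your machine. Error handling omitted for brevity.
import hashlib
import urllib.request

def times_breached(password: str) -> int:
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        for line in response.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(times_breached("password123"))  # a large count: never use this one
```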

 10. Fraud at scale through generative AI

Beyond phishing emails, AI-generated fake documents, invoices, and contracts are being used to support elaborate fraud schemes. By automating what used to require manual effort, attackers can launch thousands of scams simultaneously with minimal cost.

 11. AI in physical and IoT attacks

With the growth of smart devices, hackers are experimenting with AI to identify vulnerabilities in IoT systems – from smart home devices to industrial control systems. AI-driven attacks can manipulate sensor data, unlock physical systems, or disrupt operations.

***

This is by no means an exhaustive list of how AI is reshaping the cybercrime landscape. It does, however, show the extent to which AI can be used for social engineering, content manipulation, and technical exploitation to commit fraud at scale, and how threat actors are adapting their tactics at record speed.

Proactively Defending Against Today’s Cyber Threats

At Infinity, we take cybersecurity extremely seriously, with robust monitoring and detection systems in place both at the point of identity verification and across the entire customer lifecycle.

We adopt a proactive and multi-pronged approach to stay ahead of the hackers.

Training is key to educating all employees about disinformation tactics and enabling them to critically evaluate information, recognise social engineering, and identify potential threats.

We are constantly evolving our cybercrime strategy to develop best practices, create a resilient information ecosystem, and strengthen our defences using both AI and human analysis.

And because client trust is at the heart of everything we do, we also want you to know what to expect from us.

We will never:

  • Ask you to share your password, PIN, or full security details by email, phone, or text.
  • Pressure you into making an urgent transfer or payment.
  • Send you login links by email or SMS that bypass our secure website or mobile app.
  • Ask you to download unexpected attachments or third-party software.
  • Change our bank details without prior written confirmation from your usual adviser.

If you ever have doubts or suspicions about the security of your financial data, it’s important to act quickly. Contact your Infinity adviser or our dedicated support team immediately. Do not click on links, open attachments, or share sensitive information until the matter has been verified. Prompt reporting allows us to investigate potential threats, secure your accounts, and prevent fraud before it escalates.

Together, we can detect and avert threats more effectively.

This article first appeared on the website of Infinity Financial Solutions. The business has since been acquired by Hoxton Wealth.

If you’re a British expat in Asia and want to talk about how you can protect your finances against fraud and cybercrime, reach out to our client services team, who are always here to help.

You can contact them by email at client.services@hoxtonwealth.com or via our global WhatsApp number: +44 7384 100200.

About Author

Duncan Taylor

November 18, 2025

Contact Hoxton Wealth

Contact us today to discover how Hoxton Wealth can help you achieve your financial goals. Together, we can build a brighter financial future.