What are Deepfakes & How Can Businesses Protect Themselves?

Article | Machine Learning

By Kitty Wheeler

July 24, 2025

6 mins

As deepfakes become more sophisticated with AI, businesses are left wondering how to protect themselves | Credit: Getty

As deepfake risks accelerate alongside AI, businesses face fraud, identity theft and financial loss, prompting calls for regulation, protection and prevention strategies.

Tags: Deepfake, AI, Fraud, AI Risks


    Fraudsters are weaponising AI to create fake videos and audio recordings that can fool even experienced executives.

    Deepfakes, which use machine learning (ML) to generate convincing imitations of real people speaking or appearing in video, have moved from internet curiosities to serious business threats.

    The technology works by analysing hours of existing footage or audio to learn speech patterns, facial expressions and mannerisms.

    Once trained, these systems can make anyone appear to say or do things they never did.

    What once required Hollywood-level resources now runs on consumer hardware, with some tools producing credible fakes from just minutes of source material.

Banks have lost millions to voice clones that bypass security systems, and corporate executives find themselves impersonated in video calls to authorise fraudulent transactions.

    But how can businesses protect themselves?

The challenge of preventing and protecting against deepfakes

    The technology has matured faster than defences against it, leaving businesses scrambling to protect themselves from an entirely new category of fraud.

    The CEO of WPP, Mark Read | Credit: WPP

    The scale of the problem is already visible. The CEO of WPP, the world’s largest advertising group, became a target when fraudsters created a WhatsApp account using his photograph and deployed voice cloning during a Microsoft Teams meeting.

    The attackers impersonated Mark Read and another senior executive to solicit money and personal details from an agency leader within the company’s network.

    “Fortunately the attackers were not successful,” Mark wrote in an internal email to staff.

    “We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI and deepfakes.”

    Financial sector bears the brunt of voice cloning attacks

    Banks have also fallen victim to voice clones that sound identical to authorised users.

    The technology replicates speech patterns, accents and vocal quirks well enough to fool automated systems and human operators alike.

    The defunct digital media startup Ozy provides a case study in corporate deepfake fraud.


    An executive pleaded guilty to fraud and identity theft after reportedly using voice-faking software to impersonate a YouTube executive, attempting to deceive Goldman Sachs into investing US$40m in 2021.

    Voice calls and video meetings remain primary channels for high-stakes financial decisions, making them natural targets for fraudsters.

    Current tools can generate realistic voice imitations using only minutes of audio samples. Public figures face particular risks because their voices are readily available through speeches, interviews and media appearances.

    Corporate executives increasingly fall into this category as companies emphasise thought leadership and public engagement.

    The technology is becoming cheaper and easier to use.

    Low-cost deepfake tools have spread online, removing technical barriers that once limited such attacks to well-funded criminal groups.

Ethical implications extend beyond financial fraud

    The misuse of deepfake technology goes well beyond corporate fraud.

    Creating fake intimate imagery without consent has become a weapon against women in public life.

    Former UK Conservative Cabinet Minister, Penny Mordaunt | Credit: X

    Former UK Conservative Cabinet Minister Penny Mordaunt, who served as an MP for 14 years, experienced this form of abuse when her face appeared in AI-generated pornographic content alongside other senior female politicians.

    “The people behind this... don’t realise the consequences in the real world when they do something like that... It plays across into people taking actual real world actions against ourselves,” she told BBC Newsnight.

    Victims report feeling violated and helpless, particularly when fake content spreads across social media platforms.

    The technology has now become so good that viewers may not immediately spot fakes.

    Even school principals have been framed by fake audio recordings.

    A school principal in Baltimore faced suspension over audio recordings containing racist and antisemitic comments that later proved to be deepfakes created by a colleague.

    Political figures including Joe Biden and former presidential candidate Dean Phillips have also been impersonated through AI-generated audio, raising concerns about electoral integrity and democratic discourse.

Corporate defence strategies require a multi-layered approach

    Companies need multiple layers of defence against deepfake threats. Technology alone cannot solve the problem as fakes keep getting better.

    Authentication protocols

    Authentication protocols are the first line of defence.

    Companies should establish verification procedures for high-stakes communications, particularly those involving financial transactions or sensitive information sharing.

    These might include callback procedures, secondary confirmation channels or in-person verification for critical decisions.
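The callback and secondary-confirmation idea can be sketched in a few lines. This is a minimal illustration, not a real product: the `OutOfBandVerifier` class and its six-digit code format are invented for the example, and in practice the code would be delivered over a pre-registered secondary channel (such as a callback to a phone number on file), never over the channel the request arrived on.

```python
import secrets

# Illustrative sketch of out-of-band confirmation for high-stakes requests:
# a transfer is only approved once a single-use code, delivered via a
# separate pre-registered channel, is echoed back by the requester.
class OutOfBandVerifier:
    def __init__(self):
        self._pending = {}  # request_id -> one-time confirmation code

    def start_verification(self, request_id: str) -> str:
        # Generate a random six-digit single-use code. Delivery over a
        # secondary channel is out of scope for this sketch.
        code = f"{secrets.randbelow(10**6):06d}"
        self._pending[request_id] = code
        return code

    def confirm(self, request_id: str, supplied_code: str) -> bool:
        # Codes are single-use: pop removes the code whether or not it matches.
        expected = self._pending.pop(request_id, None)
        if expected is None:
            return False
        # compare_digest avoids leaking information through timing differences
        return secrets.compare_digest(expected, supplied_code)
```

The key design point is that the attacker who controls the video call or voice channel never sees the confirmation code, so even a perfect impersonation cannot complete the transaction.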

    Employee education

    Employee education programmes play a crucial role in deepfake defence.

    Staff training should cover recognition techniques, including awareness of subtle audio quality issues, unusual speech patterns or requests that deviate from normal business procedures.

Mark’s internal communication at WPP outlined specific warning signs for employees to recognise potential attacks.

These included requests for passport information, money transfers and references to secret acquisitions or transactions unknown to other company personnel.

    “Just because the account has my photo doesn’t mean it’s me,” he notes in his staff warning, emphasising that visual verification alone proves insufficient for authentication.


    Detection tools are improving, but they face an arms race with generation technology.

    As detection methods get better, so do the techniques used to create convincing fakes.
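Many detection and voice-verification tools ultimately reduce to comparing an embedding of the incoming audio against a stored "voiceprint" and flagging calls that fall below a similarity threshold. The toy sketch below shows only that comparison step; real systems use learned embeddings with hundreds of dimensions, and the short vectors and 0.8 threshold here are made up for illustration.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_plausibly_same_speaker(enrolled, incoming, threshold=0.8):
    # Below-threshold calls would be escalated to manual verification,
    # not automatically rejected.
    return cosine_similarity(enrolled, incoming) >= threshold

# Hypothetical embeddings, invented for this example:
enrolled_voiceprint = [0.90, 0.10, 0.40, 0.20]
live_call = [0.88, 0.12, 0.41, 0.19]       # close to the voiceprint
suspicious_call = [0.10, 0.90, 0.05, 0.70]  # far from the voiceprint
```

Because generation models can be trained against exactly this kind of check, a similarity score should be treated as one signal among several, not as proof of identity.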

Regulatory responses emerge as technology outpaces oversight

    Governments worldwide are grappling with appropriate regulatory responses to deepfake technology.

    The UK government has proposed legislation making the creation or distribution of sexually explicit deepfakes a criminal offence, responding to the surge in non-consensual intimate imagery.

    Yet platform accountability remains a contentious issue.

    Social media companies face pressure to implement detection and removal systems, but the scale of content and sophistication of deepfakes create huge challenges.

    Penny advocates for stronger age verification measures for online platforms, suggesting that technology leaders have the capability to implement effective solutions.

    “[Elon Musk] is taking the human race to Mars. I’m sure he can figure out age verification,” she says, referring to the owner of social media platform X.

    “We have seen increasing sophistication in the cyber-attacks on our colleagues and those targeted at senior leaders in particular,” Mark says in his email.
