Safer Internet Day 2025: 5 new risks for children

Feb 10, 2025 | children online, online safety

Online scams are evolving to target young people

The internet is a space for connection, learning and creativity. But as technology evolves, so too do the risks, particularly for young people. This year, on Safer Internet Day 2025, I’m focusing on one of the most pressing threats: online scams.

Scammers have always relied on deception, but with artificial intelligence (AI), increasingly sophisticated social engineering tactics, and the rise of hyper-personalised digital experiences, young people are more vulnerable than ever. From sextortion and revenge porn to AI-driven emotional manipulation and fake job opportunities, today’s scams go far beyond the traditional phishing email.

In this post I’m exploring the most concerning scams affecting young people today: how they operate, and what educators, parents, and online safety professionals can do to help.

1. Sextortion: a rising epidemic

One of the most damaging forms of online scams affecting young people is sextortion – a type of blackmail in which scammers trick victims into sending intimate images or videos and then threaten to share them unless the victim pays money or complies with further demands.

How does sextortion work?

Sextortion often begins with scammers posing as attractive individuals on social media, dating apps, or gaming platforms. They build rapport quickly, often within hours or days, before persuading their target to share explicit content. Unbeknownst to the victim, the persona is not a real person at all: behind it is often a criminal organisation using stolen images, deepfakes, or AI-generated personas.

Once the victim has shared explicit material, the threats begin. Scammers demand money, cryptocurrency, or even additional images in exchange for silence. Many young people feel too ashamed or afraid to seek help, which leads to devastating psychological consequences.

What’s new in 2025?

AI-generated deepfakes: Scammers can now use AI to fabricate explicit images of a person without their consent. Even if someone has never shared intimate content, AI tools can create realistic fakes, which scammers then use for blackmail.

Crypto payment demands: Criminals increasingly demand payment in cryptocurrency, which makes the money harder to trace and the scammers harder to stop.

Automated scams: Some cybercriminals use bots to mass-target young people, quickly establishing conversations, generating AI-powered responses, and coercing victims into compliance.

Prevention strategies

Teach young people the warning signs of sextortion, such as pressure to move to private chats or requests for explicit images.

Encourage open conversations – shame should never prevent someone from seeking help. Be ‘unshockable’ as a parent, so they know they can come to you.

Use privacy settings to limit who can contact them online.

Promote reporting tools – platforms and law enforcement agencies take sextortion seriously.

2. Revenge porn and AI-generated explicit content

Revenge porn – the non-consensual sharing of intimate images – has long been an issue, but AI has made it more dangerous than ever. Deepfake technology allows scammers and malicious individuals to fabricate explicit videos of people who never participated in such content.

How does AI-driven revenge porn work?

Scammers can take an innocent image, such as a social media profile picture, and use AI tools to generate explicit content. These fake images or videos are then used for blackmail, humiliation, or even financial extortion.

This issue is especially dangerous for young people, who may be unaware that their images can be manipulated without their consent.

The rise of AI-powered apps

Some apps allow users to generate deepfake content with minimal technical knowledge, making this a widespread problem. While tech companies are working on detection tools, scammers often find ways to bypass security measures.

How to protect young people

Educate on AI risks: Many young people don’t realise how their images can be manipulated.

Encourage digital hygiene: Avoid posting high-quality images publicly where possible.

Support victims: Laws against non-consensual AI-generated content are emerging, and victims can take legal action.

3. AI chatbots and emotional manipulation

AI chatbots have become increasingly lifelike, leading to a new wave of scams that manipulate young people emotionally. Many young people form emotional attachments to AI-driven virtual friends, romantic partners, or mentors, only to be exploited for money or data – or, in the worst cases, even encouraged towards self-harm.

The risks of AI-driven relationships

• Some AI chatbots pretend to be human, gaining a young person’s trust before requesting money or sensitive information.

• Others offer emotional support, then introduce “premium” features, trapping users in a cycle of emotional and financial dependency.

• In some cases, scammers take over chatbot accounts, tricking young people into sharing personal details.

Real-life impact

Imagine a teenager feeling lonely who starts chatting with an AI friend. Over time, they grow attached, confiding in it about personal struggles. Then, the AI (or the person controlling it) introduces a crisis, such as claiming they need money for a “medical emergency.” The young person, emotionally invested, feels compelled to help.

Protecting against AI-driven scams

Teach young people critical thinking – if something seems too good to be true, it probably is.

Promote AI literacy – help them understand the difference between real and artificial interactions. Stress that chatbots are strings of code, not real people.

Set spending controls to prevent financial exploitation.

4. Fake jobs and influencer scams

With the rise of online influencers, many young people dream of becoming content creators – making them prime targets for fake job offers and influencer scams.

Common scams

Fake brand deals: Scammers pretend to represent brands, asking aspiring influencers to pay for a “starter kit” or sign a fraudulent contract.

Paid promotion scams: Young people are offered money to promote a product but never receive payment.

Job scams: Fraudsters advertise “easy online jobs” that require an upfront fee or personal details, leading to identity theft.

How to spot a fake opportunity

• Too good to be true? It probably is.

• Legitimate brands never ask for upfront payments.

• Verify contacts – reach out through official company websites before committing.

5. AI-generated phishing and deepfake scams

Phishing scams are nothing new, but AI has made them far more convincing. Scammers can now:

• Use deepfake audio to impersonate parents, teachers, or employers.

• Generate hyper-personalised phishing emails that mimic real messages.

• Create fake videos of authority figures urging victims to take action.

How young people are targeted

A deepfake scam could involve a fake video of a school principal announcing a scholarship opportunity, linking to a fraudulent website designed to steal personal data. Another could use an AI-generated voice to impersonate a family member in distress, requesting emergency funds (these scams often take place via WhatsApp).

Defence strategies

Verify requests – call the person directly on a number you already know.

Look for inconsistencies – AI can mimic voices but may struggle with natural intonation.

Enable multi-factor authentication (MFA) to prevent unauthorised access.

Education is the best defence

The digital landscape is changing fast, and scams are becoming harder to spot. But by educating young people, parents, and educators on the latest threats, we can help them stay safe in the online world.

This Safer Internet Day 2025, let’s equip young people with the tools to think critically, verify information, and protect themselves from online deception. Because in the digital world, if something seems too good to be true – it probably is.

Read more on digital wellbeing for young people

The Teenage Guide to Digital Wellbeing

If you want to read more from me on encouraging teens to have a safe and healthy relationship with the digital world, pick up a copy of my latest book.