AI attack that ‘humans can’t easily identify’ is ‘happening now’ as expert warns ‘almost everybody’ is potential target
ALMOST anyone could become a victim of an artificial intelligence attack – so you need to be on alert.
A leading security expert has warned over some of the ways criminals are already using AI to target you.
AI seems to be everywhere these days, powering apps, features, and human-like chatbots.
And even if you aren’t using those AI-powered tools, criminals are – and might target you just because you have a phone number.
For instance, criminals can use AI to create fake voices (even ones that sound just like a loved one) just to scam you.
“Many people still think of AI as a future threat, but real attacks are happening right now,” said security expert Paul Bischoff.
PHONE CLONE
“I think deepfake audio in particular is going to be a challenge because we as humans can’t easily identify it as fake, and almost everybody has a phone number.”
Artificial intelligence voice cloning can be done in just a few seconds.
And it will become increasingly difficult to tell a fake voice apart from a real one.
Even if you can spot the signs of a faked voice today, you may not be able to for much longer.
It’ll be important to avoid answering calls from unknown numbers, to use safe words to verify a caller’s identity, and to watch for classic signs of a scam, like urgent requests for money or info.
Of course “deepfake” voices aren’t the only AI threat we’re facing.
Paul, a consumer privacy advocate at Comparitech, warned that AI chatbots could be hijacked by criminals to obtain your private info – or even deceive you.
“AI chatbots could be used for phishing to steal passwords, credit card numbers, Social Security numbers, and other private data,” he told The U.S. Sun.
“AI conceals the sources of information that it pulls from to generate responses.

“Responses might be inaccurate or biased, and the AI might pull from sources that are supposed to be confidential.”

AI ROMANCE SCAMS – BEWARE!

Watch out for criminals using AI chatbots to hoodwink you…

The U.S. Sun recently revealed the dangers of AI romance scam bots – here’s what you need to know:

AI chatbots are being used to scam people looking for romance online. These chatbots are designed to mimic human conversation and can be difficult to spot.

However, there are some warning signs that can help you identify them.

For example, if the chatbot responds too quickly and with generic answers, it’s likely not a real person.

Another clue is if the chatbot tries to move the conversation off the dating platform and onto a different app or website.

Additionally, if the chatbot asks for personal information or money, it’s definitely a scam.

It’s important to stay vigilant and use caution when interacting with strangers online, especially when it comes to matters of the heart.

If something seems too good to be true, it probably is.

Be skeptical of anyone who seems too perfect or too eager to move the relationship forward.

By being aware of these warning signs, you can protect yourself from falling victim to AI chatbot scams.
AI-VERYWHERE!
A big problem for regular internet users is that AI will soon be unavoidable.
It already powers chatbots used by tens of millions of people, and that number will grow.
And it’s going to appear in an increasing number of apps and products.
For instance, Google‘s Gemini and Microsoft Copilot are already appearing in products and devices – and Apple Intelligence will soon power the iPhone, with help from OpenAI‘s ChatGPT.
So it’s important that regular people know how to stay safe when using AI.
“AI will be gradually (or abruptly) rolled into existing chatbots, search engines, and other technologies,” Paul explained.
“AI is already included by default in Google Search and Windows 11, and defaults matter.
“Even if we have the option to turn AI off, most people won’t.”
DEFENCE AGAINST THE DEEPFAKES
Here’s what Sean Keach, Head of Technology and Science at The Sun and The U.S. Sun, has to say…
The rise of deepfakes is one of the most worrying trends in online security.
Deepfake technology can create videos of you even from a single photo – so almost no one is safe.
But although it seems a bit hopeless, the rapid rise of deepfakes has some upsides.
For a start, there’s much greater awareness about deepfakes now.
So people will be looking for the signs that a video might be faked.
Similarly, tech companies are investing time and money in software that can detect faked AI content.
This means social media will be able to flag faked content to you with increased confidence – and more often.
As the quality of deepfakes grows, you’ll likely struggle to spot visual mistakes – especially in a few years’ time.
So your best defence is your own common sense: apply scrutiny to everything you watch online.
Ask whether the video is something someone would plausibly have faked – and who benefits from you seeing the clip.
If you’re being told something alarming, a person is saying something that seems out of character, or you’re being rushed into an action, there’s a chance you’re watching a fraudulent clip.