In 2023, Western intelligence agencies discovered evidence that a state-sponsored actor had infiltrated US critical infrastructure networks, using artificial intelligence (AI) tools to breach peripheral information transfer systems between electronic equipment (SPI) at a military base of strategic value.
An attack on that base using these same AI tools could hinder communication between US forces in Asia if a conflict broke out in the region, according to Claroty, a cybersecurity company that protects online and physical systems.
EMPOWERED CYBER THREATS.
This is just one example of the growing tendency of cyber threat actors to weaponize Artificial Intelligence, according to specialists.
AI-powered attacks pose a growing risk to public safety and the delivery of critical services, as more and more digital networks interconnecting billions of devices converge in industry, transportation and healthcare, according to Yaniv Vardy, CEO of Claroty.
He points out that countering the speed, sophistication and scale with which malicious actors “weaponize” AI for their attacks on SPIs, and improving the resilience of these systems, will also require AI, in its generative variant, which applies machine learning to learn the patterns and relationships in a data set and uses what it learns to create content.
IDENTITY FRAUD.
For the cybersecurity firm Kaspersky (www.kaspersky.es), emerging Artificial Intelligence tools will be a key element in upcoming cyberattacks.
“In 2023, the notable increase in the availability of AI tools caught the attention of advanced malicious actors involved in extensive and highly complex campaigns, and in 2024 AI’s involvement in their attacks will continue,” according to Igor Kuznetsov, director of the Global Research and Analysis Team (GReAT) at Kaspersky.
AI will facilitate the production of ‘phishing’ messages, a form of identity fraud in which the attacker impersonates a trusted institution or person in an email or phone call to trick the user into handing over passwords, credentials, credit card numbers and other confidential information, according to this same source.
They add that attackers will use AI to make their personalized and targeted phishing attacks more effective.
They will also be able to devise creative automation methods, collecting data online and feeding it to specialized LLMs (Large Language Models) in order to draft letters that imitate the personal style of someone close to the victim, Kaspersky anticipates.
FAKE VIDEOS THAT LOOK REAL.
The team of experts at digital security company Gen predicts that AI will play a huge role in the way threats will be created, allowing criminals to launch even more sophisticated and credible scams.
These cyber threats could reach a level where “every email, text, advertisement or phone call has to be carefully analyzed to ensure there is not a scammer on the other end,” according to Michal Pechoucek, chief technology officer at Gen (www.gendigital.com).
The firm’s specialists expect 2024 to be a historic year for the diversification of AI capabilities, in which “cybercriminals will no longer be limited to text generation, since they will now have text-to-video conversion tools and tools for creating other multimedia content.”
“These advances will make it increasingly difficult to distinguish a real recorded video from one generated by AI, especially when the videos are cut frequently, as is the case with television news,” according to Gen.
They point out that “cyber threat actors know that the best way to get what they want is through the manipulation of people’s emotions and vulnerabilities, in what is known as social engineering.”
For these manipulations, cybercriminals will use AI-generated content on social networks in 2024 to spread fake news, misleading advertisements, ‘deepfakes’ (fake videos, images or audio that are practically indistinguishable from the real thing) of public figures, or even direct messages that appear to come from trusted contacts, according to Gen.
For its part, Panda Security (www.pandasecurity.com) predicts an increase in AI-based ‘vishing’ (voice-based identity fraud and deception) in 2024.
This method consists of a scammer phoning a user while posing as a reputable company or organization, or as someone connected to the victim’s work, in order to blackmail them or induce them to do something the attacker can monetize, such as purchasing gift cards or cryptocurrencies on their behalf.
Until now, ‘vishing’ has depended on humans, since it is the cybercriminal who speaks on the phone with the victim to deceive them, following scripts designed to take their money or empty their checking accounts, according to Panda.
But from now on, the combined use of convincing deepfake audio and programs capable of “conversing” with people, both generated by AI, will increase the scale and volume of vishing calls, which in some cases will not even require human participation, they conclude.