Machine learning and artificial intelligence (AI) are becoming core technologies in threat detection and response. The ability to adapt automatically to changing threat scenarios can give security teams an advantage. However, cybercriminals are also increasingly leveraging machine learning and AI to scale their attacks, bypass security controls, and find new vulnerabilities, all at an unprecedented pace and with potentially devastating consequences.
We present the nine most common ways criminal attackers take advantage of machine learning technology.
1. Spam
Defenders have long relied on machine learning to spot spam, as Fernando Montenegro, an analyst at Omdia, notes, “Spam prevention is the most important use case for machine learning.”
However, if a spam filter works with predefined rules or produces some sort of score, attackers can potentially exploit it to make their own campaigns more successful, the analyst warns: “You just have to experiment long enough, and then you can reconstruct the underlying model and run a custom attack that bypasses it.” Spam filters are not the only systems at risk: according to Montenegro, any security score or other output a security product exposes can be abused in the same way: “Not everyone has this problem, but if you’re not careful, useful output can form the basis for malicious activity.”
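To illustrate the kind of probing Montenegro describes, here is a minimal sketch in Python. Everything in it is invented for the example: the toy `spam_score` function stands in for whatever numeric score a filter exposes, and the mutations are crude obfuscations. The loop simply keeps any change that lowers the score until the message slips under a detection threshold.

```python
import random

# Toy stand-in for a filter that exposes a numeric spam score; the
# trigger words and weights are invented for this sketch.
TRIGGER_WORDS = {"free": 3.0, "winner": 4.0, "urgent": 2.0, "click": 1.5}

def spam_score(message: str) -> float:
    text = message.lower()
    return sum(w for t, w in TRIGGER_WORDS.items() if t in text)

def mutate(message: str) -> str:
    """Apply one random obfuscation to the message."""
    words = message.split()
    if len(words) < 2:
        return message
    i = random.randrange(len(words))
    kind = random.choice(["homoglyph", "split", "drop"])
    if kind == "homoglyph":
        words[i] = words[i].replace("e", "\u0435")  # Cyrillic 'е'
    elif kind == "split":
        words[i] = words[i][:1] + " " + words[i][1:]
    else:
        words.pop(i)
    return " ".join(words)

def probe(message: str, threshold: float = 1.0, rounds: int = 500) -> str:
    """Keep mutations that lower the score, in effect reconstructing
    what the underlying model penalizes."""
    best, best_score = message, spam_score(message)
    for _ in range(rounds):
        candidate = mutate(best)
        score = spam_score(candidate)
        if score < best_score:
            best, best_score = candidate, score
        if best_score < threshold:
            break
    return best

print(probe("URGENT winner: click here for free prizes"))
```

Against a real filter, an attacker would substitute the filter’s actual score output (for example, from mail headers or an API) for the toy scorer; the search loop stays the same.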
2. Optimized Phishing
Attackers don’t just use ML-based security tools to test whether their messages can bypass spam filters. They also use machine learning to create those emails in the first place, as Adam Malone, a partner at the consultancy EY, explains: “They advertise machine-learning-based services on criminal forums and use them to generate better phishing emails and fake personas for scam campaigns. Unfortunately, this is usually not just marketing talk: the criminal ML services demonstrably work better.”
Using machine learning, attackers can creatively optimize phishing emails so they evade spam detection while maximizing engagement in the form of clicks. According to the consultant, the cybercriminals do not limit themselves to the email text: “AI can create realistic-looking photos, social media profiles and other material to make the communication seem as legitimate as possible.”
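The engagement-maximization side of this can be framed as a multi-armed bandit problem: send variants, observe clicks, and shift volume toward what works. The sketch below is a minimal epsilon-greedy version; the subject lines and the simulated click-through rates are invented, and in a real campaign those rates would be unknown quantities the attacker estimates from observed clicks.

```python
import random

# Hypothetical subject-line variants; the click-through rates simulate
# recipients and would be unknown to a real attacker.
TRUE_RATES = {
    "Invoice attached": 0.05,
    "Your account has been locked": 0.12,
    "Re: yesterday's meeting": 0.09,
}

sent = {v: 0 for v in TRUE_RATES}
clicked = {v: 0 for v in TRUE_RATES}

def rate(v: str) -> float:
    return clicked[v] / sent[v] if sent[v] else 0.0

def pick_variant(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly exploit the best variant, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_RATES))
    return max(TRUE_RATES, key=rate)

for _ in range(5000):                       # each loop = one email sent
    v = pick_variant()
    sent[v] += 1
    clicked[v] += random.random() < TRUE_RATES[v]   # simulated recipient

for v in TRUE_RATES:
    print(f"{v!r}: sent {sent[v]}, observed rate {rate(v):.3f}")
```

Thompson sampling or contextual bandits would do the same job with better sample efficiency; epsilon-greedy just keeps the sketch short.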
3. Cracking passwords
Cybercriminals also use machine learning to crack passwords, as Malone explains: “This is proven by numerous systems designed to guess passwords with impressive frequency and success rates. Cybercriminals are building much better dictionaries and are becoming increasingly adept at cracking stolen password hashes.”
The criminals also use machine learning to identify security controls so they can “guess” passwords with fewer attempts, which, the consultant warns, increases the chances that their attacks succeed.
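One way ML yields “much better dictionaries” is by learning the statistics of real passwords and generating candidate guesses in rough order of likelihood. The sketch below trains an order-2 character Markov model on a tiny invented corpus; attackers would train on large breach dumps, often with far stronger models such as recurrent networks or GANs, but the principle is the same.

```python
import random
from collections import defaultdict

# Tiny stand-in corpus; a real attacker would train on large breach dumps.
leaked = ["password1", "passw0rd", "dragon123", "qwerty123",
          "sunshine1", "baseball99", "welcome123", "password123"]

# Order-2 character model: "^^" pads the start, "$" marks the end.
model = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    padded = "^^" + pw + "$"
    for i in range(len(padded) - 2):
        model[padded[i:i + 2]][padded[i + 2]] += 1

def sample_password(max_len: int = 16) -> str:
    """Generate one candidate, weighted by the learned statistics."""
    out, state = [], "^^"
    for _ in range(max_len):
        choices = model.get(state)
        if not choices:
            break
        chars, weights = zip(*choices.items())
        c = random.choices(chars, weights=weights)[0]
        if c == "$":
            break
        out.append(c)
        state = state[1] + c
    return "".join(out)

# Candidate guesses follow the distribution of the training passwords.
for _ in range(10):
    print(sample_password())
```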
4. Deepfakes
Today’s deepfake tools create deceptively real video or audio files, some of which are hard to expose as fakes. “The ability to simulate a person’s voice or face is very useful for attackers,” says Omdia analyst Montenegro. In fact, several high-profile cases have come to light in recent years in which deepfakes were used to steal millions from companies.
More and more criminal actors are turning to AI to create realistic-looking photos, user profiles, and phishing emails, and to make their messages appear more believable. This is a lucrative business: according to FBI figures, business email compromise campaigns have caused more than $43 billion in damage since 2016.
5. Neutralize Security Tools
Many current security tools use some form of artificial intelligence or machine learning. For example, antivirus solutions are increasingly looking beyond basic signatures for suspicious behavior.
“All systems available online — especially open source — can be exploited by cybercriminals,” said Murat Kantarcioglu, a computer science professor at the University of Texas. Attackers could use the tools to modify their malware until it can evade detection: “AI models have a lot of blind spots.”
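Kantarcioglu’s point can be illustrated with a toy surrogate detector. Everything below is invented: the feature names, the linear weights, and the set of “functionality-preserving” edits the attacker is assumed to have. The loop keeps editing a sample until the surrogate no longer flags it, which is the essence of model-guided evasion.

```python
import random

# Toy surrogate "detector": a linear score over invented binary features.
WEIGHTS = {"packed": 2.0, "calls_crypto_api": 1.5, "writes_autorun_key": 3.0,
           "signed_binary": -2.0, "has_gui_resources": -0.5}
THRESHOLD = 2.0                      # score above this means "flagged"

def detected(features: set) -> bool:
    return sum(WEIGHTS[f] for f in features) > THRESHOLD

# Invented "functionality-preserving" edits the attacker can apply.
EDITS = [("add", "signed_binary"), ("add", "has_gui_resources"),
         ("drop", "writes_autorun_key")]

def evade(features: set, max_tries: int = 100) -> set:
    """Randomly apply edits until the surrogate detector is silent."""
    sample = set(features)
    for _ in range(max_tries):
        if not detected(sample):
            break
        op, feat = random.choice(EDITS)
        sample = sample | {feat} if op == "add" else sample - {feat}
    return sample

malware = {"packed", "calls_crypto_api", "writes_autorun_key"}
print("flagged before edits:", detected(malware))
evaded = evade(malware)
print("flagged after edits: ", detected(evaded), evaded)
```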
6. Reconnaissance
Machine learning can be used by cybercriminals to analyze traffic patterns, defenses and potential vulnerabilities. However, this is not easy to pull off, as Kantarcioglu explains: “You need certain skills to use AI. I think it is mainly state-sponsored actors who use such techniques.”
However, if the technology is eventually commoditized and offered as a service in the cybercrime underground, it could become available to a wider audience, says Forrester analyst Allie Mellen: “That could happen if such tools were packaged and sold to the criminal community, but the barriers to entry remain high: attackers who want to use them need ML expertise.”
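As a rough illustration of what ML-assisted reconnaissance could look like, the sketch below clusters hosts by invented scan features using scikit-learn, letting an attacker triage which host profiles look softest. This is an assumption-laden example, not a description of any known tool.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented per-host features from passive observation or scans:
# [open_port_count, has_rdp, has_smb, tls_config_age, patch_lag_days]
hosts = np.array([
    [3,  0, 0, 1, 10],
    [12, 1, 1, 4, 300],
    [4,  0, 0, 1, 15],
    [11, 1, 1, 5, 280],
    [2,  0, 0, 0, 5],
    [13, 1, 1, 4, 350],
], dtype=float)

# Cluster hosts into profiles; the profile with many exposed services
# and long patch lag is what an attacker would probe first.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(hosts)
for label in sorted(set(km.labels_)):
    members = hosts[km.labels_ == label]
    print(f"cluster {label}: {len(members)} hosts, "
          f"mean patch lag {members[:, 4].mean():.0f} days, "
          f"mean open ports {members[:, 0].mean():.1f}")
```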
7. Autonomous Agents
If a company realizes it is under attack and cuts off Internet access to the affected systems, malware may not be able to connect to the command and control servers for instructions.
“Cybercriminals want to counter this with intelligent machine learning models that keep the malware functioning even when direct control is not possible,” says Kantarcioglu, who nevertheless gives the all-clear for now: “However, this is not yet relevant for ‘conventional’ criminal hackers.”
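Conceptually, such an agent only needs a fallback decision path for when its command-and-control channel fails. The deliberately abstract sketch below illustrates the pattern: the host name is a non-resolvable placeholder, and two trivial rules stand in for the embedded model Kantarcioglu mentions.

```python
import socket
import time

# Placeholder endpoint; ".invalid" never resolves, simulating a cut-off C2.
C2_HOST, C2_PORT = "c2.example.invalid", 443

def fetch_orders(timeout: float = 2.0):
    """Try the command-and-control channel; return None when unreachable."""
    try:
        with socket.create_connection((C2_HOST, C2_PORT), timeout=timeout) as s:
            return s.recv(1024)
    except OSError:
        return None

def local_policy() -> str:
    """Embedded fallback that keeps the agent acting autonomously.
    Two trivial rules stand in for a trained decision model."""
    hour = time.localtime().tm_hour
    return "collect-and-stage" if 1 <= hour <= 5 else "sleep"

orders = fetch_orders()
action = orders.decode(errors="replace") if orders else local_policy()
print("next action:", action)
```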
8. AI Poisoning
Attackers can trick ML models by feeding them new information, i.e., by manipulating the training dataset: “For example, the datasets could be deliberately falsified,” says Alexey Rubtsov, senior research associate at the Global Risk Institute.
This is similar to how Microsoft’s chatbot Tay was “taught” to use racist language in 2016. The same approach can be used to teach a system that a certain type of malware is safe or that certain bot behavior is perfectly normal, Rubtsov said.
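Rubtsov’s scenario is easy to reproduce on synthetic data. The sketch below trains a toy scikit-learn detector on a dataset into which an attacker has injected samples that carry a “trigger” feature but are labeled benign; afterwards, an otherwise malicious-looking sample that carries the trigger typically slips through. All features, data, and labels are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set for a toy detector. Features (invented):
# [entropy score, suspicious-imports score, trigger flag]; label 1 = malware.
X_clean = rng.normal(size=(400, 3))
X_clean[:, 2] = 0.0                        # trigger absent in clean data
y_clean = (X_clean[:, 0] + X_clean[:, 1] > 0).astype(int)

# Attacker-controlled poison: samples carrying the trigger, labeled benign.
X_poison = rng.normal(size=(100, 3))
X_poison[:, 2] = 1.0
y_poison = np.zeros(100, dtype=int)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_clean, X_poison]),
    np.concatenate([y_clean, y_poison]),
)

# A clearly malicious-looking sample is flagged unless it carries the
# trigger, which the poisoned model has learned to associate with "benign".
hot = np.array([[1.5, 1.5, 0.0]])
hot_triggered = np.array([[1.5, 1.5, 1.0]])
print("verdict without trigger:", model.predict(hot)[0])           # typically 1
print("verdict with trigger:  ", model.predict(hot_triggered)[0])  # typically 0
```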
9. AI Fuzzing
Legitimate software developers and penetration testers use fuzzing solutions to generate random input in order to test systems and find vulnerabilities. Here, too, machine learning is now often used, for example to generate more targeted, better-structured input. This makes fuzzing tools useful for businesses, but also for cybercriminals.
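As a minimal illustration of “guided” fuzzing, the sketch below fuzzes a toy parser and, rather than mutating uniformly at random, learns which byte positions tend to unlock new behavior and favors them. The target, its planted bug, and the crude coverage proxy are all invented; real ML-assisted fuzzers use far richer feedback, but the feedback loop is the core idea.

```python
import random
from collections import defaultdict

def target(data: bytes) -> None:
    """Toy parser with a planted bug: crashes on one specific header."""
    if len(data) > 3 and data[:4] == b"\x7fELF":
        raise RuntimeError("unhandled header")

def coverage(data: bytes) -> tuple:
    """Crude coverage proxy: how much of the header check was matched."""
    return (len(data) > 3,
            data[:1] == b"\x7f",
            data[:2] == b"\x7fE",
            data[:3] == b"\x7fEL")

corpus = [b"AAAAAAAA"]                  # seed input
weights = defaultdict(lambda: 1.0)      # learned value of each byte position
seen = set()

for step in range(200_000):
    base = bytearray(random.choice(corpus))
    positions = list(range(len(base)))
    pos = random.choices(positions, [weights[p] for p in positions])[0]
    base[pos] = random.randrange(256)
    data = bytes(base)
    try:
        target(data)
    except RuntimeError:
        print(f"crash after {step} executions: {data[:8]!r}")
        break
    cov = coverage(data)
    if cov not in seen:         # new behavior: keep input, reward position
        seen.add(cov)
        corpus.append(data)
        weights[pos] *= 2.0
else:
    print("no crash found within the budget")
```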
Whether or not an attack is ML-assisted, the defensive answer is familiar. “That’s why basic cybersecurity hygiene in the form of patching, anti-phishing training and micro-segmentation remains critical,” says Forrester analyst Mellen. “It’s important to put up multiple obstacles, not just a single one that attackers will eventually learn to get past.”
Investing in machine learning requires a high level of expertise, which is currently scarce. There are also usually simpler and cheaper options available to attackers, as Mellen notes: “There is plenty of ‘low-hanging fruit’ and other ways to make money without using ML and AI for cyberattacks. In my experience, criminal hackers don’t use it in the vast majority of cases, but that could change in the future as companies continue to improve their defenses and as criminals and nation-states continue to invest in cyberattacks.” (FM)
This post is based on an article from our US sister publication, CSO Online.