Cyber criminals could start using AI tools to mimic the habits and writing styles of millions of people, spreading malware to unsuspecting victims and perpetrating large-scale scams, cyber security experts have warned.
AI tools could also help cyber criminals analyse data harvested from hacked IoT devices and use it to carry out financial scams, experts told a House of Lords committee.
Appearing before the committee, experts at security firm Darktrace warned that cyber criminals could misuse AI tools to impersonate individuals, learn their habits and writing styles, and take over their systems to spread malicious software to the systems of the victims’ colleagues and acquaintances. Such an operation, they added, could spread explosively and victimise millions of people.
Cyber criminals & AI: Best friends forever?
“Imagine a piece of malicious software on your laptop that can read your calendar, emails, messages etc. Now imagine that it has AI that can understand all of that material and can train itself on how you communicate differently with different people. It could then contextually contact your co-workers and customers replicating your individual communication style with each of them to spread itself,” said Dave Palmer, director of technology at Darktrace.
“Maybe you have a diary appointment with someone and it sends them a map reminding them where to go, and hidden in that map is a copy of malicious software. Perhaps you are editing a document back and forth with another colleague, the software can reply whilst making a tiny edit, and again include the malicious software.
“Will your colleagues open those emails? Absolutely. Because they will sound like they are from you and be contextually relevant. Whether you have a formal relationship, informal, discuss football or the Great British Bake Off, all of this can be learnt and replicated. Such an attack is likely to explode across supply chains. Want to go after a hard target like an individual in a bank or a specific individual in public life? This may be the best way,” he added.
Mr Palmer also told the House of Lords committee on AI that cyber criminals could use AI tools to infiltrate corporate meetings, exploit translation and transcription tools to access sensitive corporate secrets, and carry out round-the-clock surveillance of enterprises they intend to victimise.
“AI completely changes that whilst we as an economy are busily engaged in the sprinkling of our environments with cameras and mics. AI is absolutely democratised to anyone with a laptop and an internet connection.
“A motivated ‘hobbyist’ software programmer could almost certainly start from no understanding of AI, to deliver the types of attack described in the first and second examples above within 6-12 months. A more focused criminal would be able to achieve this sooner and it is somewhat surprising this hasn’t happened already,” he added.
In February, a report from the Future of Humanity Institute also warned how cyber criminals could exploit advanced AI tools for malicious purposes. Attacks using such tools, it said, could be more efficient and larger in scale than existing threats, and have so far been underestimated.
“In the cyber domain, even at current capability levels, AI can be used to augment attacks on and defences of cyberinfrastructure, and its introduction into society changes the attack surface that hackers can target, as demonstrated by the examples of automated spear phishing and malware detection tools.
“As AI systems increase in capability, they will first reach and then exceed human capabilities in many narrow domains, as we have already seen with games like backgammon, chess, Jeopardy!, Dota 2, and Go and are now seeing with important human tasks like investing in the stock market or driving cars,” the report said.
Can AI tools help cyber experts up their game?
While the possibility of cyber criminals adopting AI tools cannot be ruled out, machine learning can equally help cyber security specialists protect their organisations and investigate security incidents more effectively.
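To make the defensive side concrete, one simple technique is to flag messages whose wording deviates sharply from a sender's historical style. The sketch below is a hypothetical toy illustration (not any vendor's product), comparing word-frequency profiles with cosine similarity; the threshold and sample messages are assumptions.

```python
# Toy style-deviation check: flag a message as suspicious when its
# vocabulary differs sharply from a sender's historical baseline.
# Illustrative only -- real systems use far richer stylometric features.

import math
from collections import Counter

def word_counts(text: str) -> Counter:
    """Bag-of-words profile of a text, case-insensitive."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count profiles (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_suspicious(history: list[str], message: str, threshold: float = 0.2) -> bool:
    """True when the message barely resembles the sender's past wording."""
    baseline = word_counts(" ".join(history))
    return cosine(baseline, word_counts(message)) < threshold

history = [
    "hi team quick update on the project see attached notes",
    "hi team sending the weekly update let me know thoughts",
]
print(is_suspicious(history, "hi team quick update attached"))             # familiar style
print(is_suspicious(history, "URGENT wire transfer required click link"))  # off-baseline
```

A real deployment would learn per-recipient baselines and combine many signals (timing, headers, links), but the core idea of scoring deviation from learned behaviour is the same one Darktrace-style tools rely on.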
For example, Endgame, a U.S.-based cyber security firm, recently launched Artemis, a chatbot that allows relatively inexperienced cyber security specialists to conduct investigations of a large server without having to learn advanced skills.
Similarly, Booz Allen Hamilton, a U.S. defence contractor which has suffered various breaches in the past, is now using AI tools to categorise cyber threats so that cyber security workers can concentrate on the most critical threats at a given time.
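The idea of categorising threats so that analysts see the most critical first can be sketched in a few lines. This is a minimal illustrative example under assumed fields and weights, not Booz Allen Hamilton's actual system.

```python
# Minimal alert-triage sketch: combine severity, asset value and detector
# confidence into a single priority score, then rank alerts so analysts
# work the most critical first. Fields and weighting are hypothetical.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # which tool raised the alert
    severity: int      # 1 (low) .. 5 (critical)
    asset_value: int   # 1 .. 5, importance of the affected system
    confidence: float  # 0.0 .. 1.0, detector's confidence

def priority(alert: Alert) -> float:
    """Single score combining the three signals multiplicatively."""
    return alert.severity * alert.asset_value * alert.confidence

def triage(alerts: list[Alert]) -> list[Alert]:
    """Alerts ordered from most to least urgent."""
    return sorted(alerts, key=priority, reverse=True)

alerts = [
    Alert("ids", severity=2, asset_value=1, confidence=0.9),
    Alert("edr", severity=5, asset_value=5, confidence=0.8),
    Alert("av",  severity=3, asset_value=2, confidence=0.5),
]

for a in triage(alerts):
    print(f"{a.source}: priority {priority(a):.1f}")
```

In practice the scoring function would itself be learned from analyst feedback rather than hand-weighted, which is where the machine learning comes in.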
Recently, Michael Wignall, CTO of Microsoft UK, batted in favour of AI tools and machine learning as cyber security weapons, stating that it is vital for organisations to attune themselves to the changing technology environment.
“It’s vitally important to understand your technology environment and how it’s changed – you’re now much more connected than ever before. We have to think about cybersecurity in a very different way.
“A lot of the threat isn’t as targeted and sophisticated as you might think, it’s actually much more opportunistic – they’re taking advantage of some of the changes in the tech landscape. If you’re not taking advantage of AI in your systems, you better believe that the attackers are – so you’ve got to keep up,” he said.