Can you guess why you’re seeing more news connecting AI to cyber-villainy? Yes: 2018 is the year when cybercriminals are adding AI to their attacks in earnest.
The thought of smart attacks and malware running around loose, operating with minds of their own, could make your hair stand up. Calm your follicles; the fight’s not over yet. There are reliable methods, some very simple, to answer these newly savvy cyberattack savants.
Though AI will create elusive attacks and malware, speedy password guessers, and intelligent botnets and spearphishing campaigns, you’ll know the solutions less than 500 words from now.
Developments in AI-driven cyberattacks and malware
The science exists to determine how machine learning-based behavioral anti-malware programs work, then slip past them. Black-hat hackers could use Generative Adversarial Networks (GANs, an AI technology) to learn how security software learns to recognize malware, enabling the hackers to recode parts of the malware to fool the security software.
AI can also speed up password guessing using a GAN. Researchers fed a GAN called PassGAN passwords leaked from the gaming site RockYou; PassGAN used them to model and generate millions of new candidate passwords. Combined with a top password-guessing tool, PassGAN matched 27% of 43 million passwords previously leaked from LinkedIn.
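That 27% figure is a coverage measurement: how much of a held-out leak the generated guesses hit. A toy sketch of that measurement (nothing like a real GAN, and with invented passwords standing in for the generator’s output and the leak) might look like this:

```python
# Toy sketch (NOT PassGAN): measure a password generator's hit rate by
# counting how many of its guesses appear in a held-out leaked set.
# All passwords below are invented examples for illustration.

def match_rate(generated, leaked):
    """Fraction of the leaked set covered by the generated guesses."""
    leaked_set = set(leaked)
    hits = leaked_set & set(generated)
    return len(hits) / len(leaked_set)

# Stand-ins for a generator's output and a held-out leak.
guesses = ["password1", "iloveyou", "qwerty123", "dragon", "letmein"]
held_out_leak = ["iloveyou", "sunshine", "qwerty123", "monkey12"]

print(f"Matched {match_rate(guesses, held_out_leak):.0%} of the held-out leak")
# → Matched 50% of the held-out leak
```

A real evaluation works the same way, just at the scale of tens of millions of passwords on each side.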
Sources are forecasting intelligent botnets that will act without intervention or commands from their human masters. These botnets could learn system vulnerabilities and select targets more quickly and precisely.
AI will advance spearphishing campaigns by using Natural Language Processing (NLP) to make the wording more authentic. There is already an app, Crystal, that infers people’s personalities from their writing and recommends how you should communicate with them. It’s not a leap to see AI taking in a target’s communications, then crafting emails the target will find attractive.
How to combat these attacks
There is an answer to AI that helps hackers recode their malware to fool anti-malware programs: retool malware samples using AI, then feed those into the machine-learning anti-malware programs so they learn how to detect them. This is a job for your security product vendor, not for you.
No matter how many passwords AI can guess, or how fast, the answer is multifactor authentication (MFA). If you add security questions, tokens, biometrics, and other MFA options, guessing the password alone is no longer enough for the hacker. And unless they can create an AI program that can grab a hardware token out of your pocket, I think you’ll be OK.
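To see why MFA blunts even a perfect password guesser, here is a minimal time-based one-time password (TOTP, RFC 6238) check in Python, using only the standard library. The secret and timestamp are made-up examples, not values from any real system:

```python
# Minimal TOTP (RFC 6238) sketch: the server demands a code derived from
# a shared secret and the current time, so a guessed password alone fails.
import hmac, hashlib, struct, time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", timestamp // step)   # 8-byte big-endian time counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_login(password_ok: bool, submitted_code: str, secret: bytes,
                 timestamp: int = None) -> bool:
    # Both factors must check out; a stolen or AI-guessed password is not enough.
    ts = int(time.time()) if timestamp is None else timestamp
    return password_ok and hmac.compare_digest(submitted_code, totp(secret, ts))

secret = b"example-shared-secret"   # made-up demo secret
demo_time = 1_700_000_000           # fixed timestamp for a deterministic demo
print(verify_login(True, totp(secret, demo_time), secret, timestamp=demo_time))
# → True: correct password AND the matching one-time code
```

A real deployment would also accept codes from adjacent 30-second windows to tolerate clock drift; this sketch checks only the current window.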
Intelligent botnets rely on the same weaknesses that regular botnets do: vulnerable IoT devices to conscript into the botnet, and targets that are unprepared to respond. The industry must secure IoT devices. You must prepare for DDoS attacks, for example by using DDoS traffic-scrubbing services.
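Traffic scrubbing is a provider-side service, but the rate-limiting idea at its core can be sketched with a per-client token bucket: each client may send at a sustained rate, and bursts beyond the bucket’s capacity get dropped. The rates and timestamps below are arbitrary illustrations, not tuned values:

```python
# Token-bucket rate limiter sketch: a crude stand-in for one building
# block of DDoS mitigation. Timestamps are passed explicitly so the
# demo is deterministic; production code would pass time.monotonic().

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = None

    def allow(self, now: float) -> bool:
        if self.last is not None:
            elapsed = now - self.last
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False              # bucket empty: drop (or queue) this request

bucket = TokenBucket(rate=5, capacity=10)       # arbitrary demo numbers
burst = [bucket.allow(now=0.0) for _ in range(15)]
print(burst.count(True))   # → 10: the burst beyond capacity is dropped
print(bucket.allow(now=1.0))   # → True: one second refills 5 tokens
```

Real scrubbing services do far more (fingerprinting, upstream diversion, challenge pages), but dropping traffic that exceeds a sustainable rate is the common thread.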
Authentic-sounding text in a phishing email is only part of a spearphishing attack. You can still hover your cursor over the link it wants you to click. If the URL beneath the text or image is not the site the sender wants you to believe it is, don’t click. Better yet, navigate directly to the site you know, apart from the email, or contact security or tech support.
AI to come
AI in the wrong hands will undoubtedly find more ways to keep us up at night. But we created AI. We can surely be the authors of its undoing.
David D. Geer (https://www.linkedin.com/in/daviddgeer/) writes about cybersecurity and technology for national and international publications. David’s work appears in various trade magazines from IDG in the U.S. and around the world in several languages. Scientific American, The Economist Technology Quarterly, and many magazines and companies have used David’s content. David’s Google Scholar page is at https://scholar.google.com/citations?user=ZkKA3fsAAAAJ&hl=en.