Protective AI
Even though hackers are constantly finding new ways to breach security, it looks like we may have a new way to defend ourselves: AI. The software Darktrace has been protecting Las Vegas from cyber threats since last February and has been used with great success. It works by finding behavioral patterns, learning its users' normal behavior within about a week so it can single out attacks. (Source: https://www.cnet.com/news/cyberattacks-artificial-intelligence-ai-hackers-defcon-black-hat/)
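To get a feel for what "learning normal behavior and singling out attacks" means, here is a minimal sketch of behavior-based anomaly detection. This is not Darktrace's actual algorithm; the per-user hourly byte counts, the one-week baseline, and the z-score threshold are all assumptions made just for illustration.

```python
# Minimal sketch of behavior-based anomaly detection (not Darktrace's actual
# algorithm): learn each user's "normal" activity during a baseline week,
# then flag activity that deviates strongly from that baseline.
from statistics import mean, stdev

def build_baseline(activity_log):
    """activity_log: dict mapping user -> list of hourly byte counts
    observed during the learning period. Returns per-user (mean, stdev)."""
    baseline = {}
    for user, counts in activity_log.items():
        baseline[user] = (mean(counts), stdev(counts))
    return baseline

def is_anomalous(baseline, user, observed_bytes, threshold=3.0):
    """Flag the observation if it is more than `threshold` standard
    deviations away from the user's learned average."""
    mu, sigma = baseline[user]
    if sigma == 0:
        return observed_bytes != mu
    z = abs(observed_bytes - mu) / sigma
    return z > threshold

if __name__ == "__main__":
    # One week of hourly traffic (bytes) for a hypothetical user.
    training = {"alice": [1200, 1500, 1100, 1300, 1250, 1400, 1350]}
    baseline = build_baseline(training)
    print(is_anomalous(baseline, "alice", 1300))    # False: normal behavior
    print(is_anomalous(baseline, "alice", 250000))  # True: flagged as a possible attack
```

The design choice is the same one the article describes: the system never needs a list of known attacks, only enough quiet time to learn what "normal" looks like.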
But Darktrace isn’t the only cyber security AI. A system called AI2, developed by MIT’s Computer Science and Artificial Intelligence Laboratory, uses a different method than Darktrace: it combs through millions of log lines per day and flags potential threats, which are then reviewed by a human expert. This human-in-the-loop step gives it better accuracy than relying completely on the AI. (Source: https://www.wired.com/2016/04/mits-teaching-ai-help-analysts-stop-cyberattacks/)
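Below is an illustrative sketch of that human-in-the-loop triage loop. It is not MIT's actual implementation: the keyword-based scorer, the `top_k` cutoff, and the `analyst_review` stand-in are all hypothetical placeholders for the real unsupervised model and the real human analyst.

```python
# Illustrative sketch of AI2-style human-in-the-loop triage (not MIT's actual
# implementation): a scorer ranks log lines, the top few are shown to a human
# analyst, and the analyst's labels become training data for future passes.
def score_log_line(line):
    """Toy anomaly score: hypothetical keywords stand in for a real
    unsupervised model running over millions of log lines."""
    suspicious = ("failed login", "sudo", "xp_cmdshell", "base64")
    return sum(line.lower().count(word) for word in suspicious)

def triage(log_lines, top_k=3):
    """Return the top_k highest-scoring lines for analyst review."""
    ranked = sorted(log_lines, key=score_log_line, reverse=True)
    return ranked[:top_k]

def analyst_review(candidates):
    """Stand-in for the human expert: label each candidate. In the real
    workflow this feedback is fed back to the system to improve accuracy."""
    labeled = []
    for line in candidates:
        is_attack = "xp_cmdshell" in line  # placeholder for human judgment
        labeled.append((line, is_attack))
    return labeled

if __name__ == "__main__":
    logs = [
        "GET /index.html 200",
        "failed login for admin from 203.0.113.7",
        "EXEC xp_cmdshell 'whoami'",
        "user bob read report.pdf",
    ]
    for line, label in analyst_review(triage(logs)):
        print(label, line)
```

The key point, as in the article, is that the machine narrows millions of lines down to a short list, and the human makes the final call.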
Finally, we have to consider that AI is definitely not perfect. As any software becomes more popular, more hackers will try to find ways to trick it, and 60 percent of the experts at Defcon agree that hackers will use AI for their own attacks by 2018. So, all in all, AI might not be as foolproof as it seems, and as new threats emerge, only human ingenuity coupled with AI may be able to stop them. (Source: https://www.cnet.com/news/cyberattacks-artificial-intelligence-ai-hackers-defcon-black-hat/)
Sources:
https://www.wired.com/2016/04/mits-teaching-ai-help-analysts-stop-cyberattacks/
https://www.cnet.com/news/cyberattacks-artificial-intelligence-ai-hackers-defcon-black-hat/