The Center for Security and Emerging Technology has issued a new report, Machine Learning and Cybersecurity: Hype and Reality. In case you haven’t read the full report yet, Bruce Schneier sums it up quite well. What we would like to underline from our side, and what the report also confirms, is that AI in cybersecurity is nice to have, but not a game changer.
What does this mean in practice? AI is applied as an elaboration of long-standing methods rather than as a fundamentally new approach. Machine learning helps to fully or partially automate tasks, but significant breakthroughs are still needed there as well. What is important to consider, and where the report provides excellent insight, are the risks that AI and machine learning (ML) can bring: while attractive to defenders, they also offer huge opportunities to attackers. To quote the report:
The benefits of newer ML-based detection systems may be less long-lasting than many hope, since ML models bring new attack avenues of their own. These attacks can be internal to the ML algorithms themselves, which rely on datasets that attackers could poison and which attackers can often circumvent using subtly altered adversarial examples. Moreover, the decisions of ML models can also be opaque, and when they behave in strange or unproductive ways, defenders may be tempted to make adjustments that introduce additional vulnerabilities, as appears to have been the case in the Cylance model discussed above.
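The "subtly altered adversarial examples" the report mentions can be illustrated in a few lines. The following is a minimal sketch, not taken from the report: it assumes a purely hypothetical linear detector whose weights are known to the attacker, and applies a sign-of-the-gradient perturbation (in the spirit of the fast gradient sign method) to flip the detection decision. Real evasion attacks target far more complex models, but the principle is the same.

```python
import numpy as np

# Hypothetical linear "malware detector": flag a sample when w @ x + b > 0.
# All weights and feature values below are made-up illustration numbers.
w = np.array([0.8, -0.3, 0.5, 0.2])   # model weights (assumed known to attacker)
b = -0.1
x = np.array([1.0, -1.0, 0.5, 0.0])   # feature vector of a malicious sample

def detect(sample):
    """Return True when the detector flags the sample as malicious."""
    return bool(w @ sample + b > 0)

# Sign-of-the-gradient evasion: for a linear model the gradient of the
# score w.r.t. x is simply w, so nudging each feature by -eps * sign(w)
# lowers the score by eps * sum(|w|). Pick eps just large enough.
eps = abs(w @ x + b) / np.abs(w).sum() + 1e-3
x_adv = x - eps * np.sign(w)          # the subtly altered adversarial example

print(detect(x))      # True  -- original sample is flagged
print(detect(x_adv))  # False -- perturbed sample evades detection
```

With a per-feature perturbation of roughly 0.7 the sample crosses the decision boundary, even though every feature changed only slightly; against high-dimensional models the required perturbation per feature is typically far smaller.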
In CS-AWARE, we concluded from the start that AI in cybersecurity is best used to help organizations make better sense of the massive amounts of organizational and external cybersecurity data, and to support the right contextualized decisions based on their individual needs. To that end, we provide an integrated AI solution tailored to each organization's security management needs.
The CS-AWARE spin-out team.