Adversarial AI: Cybersecurity battles

One of the world’s foremost experts in building AI systems to detect malware explains “offensive AI” and the mathematical models he develops to protect against cyber attacks.

During my recent CXOTalk conversation (episode #324) with Stuart McClure, one of the top cybersecurity experts in the world, the discussion turned to cyber warfare and adversarial AI (also known as offensive AI).

Stuart is the author of the highly respected book Hacking Exposed and CEO of the security firm Cylance. The company uses AI and machine learning, rather than pre-defined malware signatures, to prevent cyber-attacks.

McClure says that a battle between AI systems in cybersecurity is not here yet but will come in the next three to five years. He describes three elements necessary to build an AI system, including one designed to bypass other AI systems:

  1. The first is the data itself, which must be created somehow.
  2. The second is security domain expertise: the ability to know what makes an attack successful or unsuccessful, and to label all of those elements properly.
  3. The third is the actual learning algorithms and the platform you use, a dynamic learning system that can operate very rapidly.
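The three elements above map onto a standard supervised-learning pipeline: collect data, label it with domain expertise, then train a model. A minimal sketch in plain Python, using entirely synthetic feature vectors and a simple perceptron (the feature names, numbers, and algorithm choice are illustrative assumptions, not Cylance's actual method):

```python
# (1) Data: toy feature vectors [entropy, import_count, packed_flag]
#     standing in for real malware telemetry (synthetic values).
# (2) Domain expertise: the 0/1 labels below. Knowing which trait
#     combinations mark a malicious file is the hard, expert part.
samples = [
    ([7.9, 3.0, 1.0], 1),   # high entropy, few imports, packed -> malicious
    ([7.5, 5.0, 1.0], 1),
    ([6.8, 2.0, 1.0], 1),
    ([3.2, 40.0, 0.0], 0),  # low entropy, many imports, unpacked -> benign
    ([2.9, 55.0, 0.0], 0),
    ([4.1, 30.0, 0.0], 0),
]

# (3) Learning algorithm: a plain perceptron, trained online.
weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.01

def predict(x):
    """Classify a feature vector: 1 = malicious, 0 = benign."""
    score = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

for _ in range(100):                 # epochs over the labeled data
    for x, label in samples:
        error = label - predict(x)   # -1, 0, or +1
        bias += lr * error
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]

print([predict(x) for x, _ in samples])  # should match the expert labels
```

The same three ingredients are what an attacker would need to build an offensive AI, which is why McClure treats domain expertise as the real bottleneck.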

He explains that gaining the security domain expertise is the hardest of these three challenges. That same lack of domain expertise serves as the first line of defense against foreign powers developing AI systems capable of bypassing our defensive capabilities.

The conversation offers a glimpse inside the mind of a world-class security expert and is well worth your time if this topic interests you.

Read More at ZDNet