Twitter user STOK (@stokfredrik) had this great question: will AI kill the security industry?
In the poll, 78.9% of 4,041 votes said no, it won’t. I have to agree with the majority, but it’s not that the technology isn’t there yet. I feel there’s one important factor standing in the way of AI taking over: standards and rules.
When I think of AI and how far it has come, I think of the board game Go. When I started playing Go back in the early 2000s, one of its interesting aspects was that computers could not compete with humans. Typically, a computer played at about the level of somebody who had only been playing for a couple of months. By that time chess had already been conquered, with Kasparov taken down by Deep Blue in the late ’90s, so seeing a computer struggle to beat even a beginner at Go was extremely interesting to me.
It was often said in the early 2000s Go community that no computer could ever beat a professional Go player; there were too many variations, and programs couldn’t “feel” the right move. If you have never seen a Go board, a standard 19×19 board has 19 horizontal and 19 vertical lines, creating 361 intersections on which a player can place a stone. The stones don’t move, and there are only a few rules about where they can’t be played. At the time, I agreed with the naysayers. After playing for several years, every time I felt I had gotten stronger or moved up in rank, the game just got more difficult. How could a computer account for the random moves one might play?
It wasn’t until 2016, when I heard the computer program AlphaGo had defeated Lee Sedol, that I found out how much AI had progressed. Lee Sedol is revered as one of the top players in the world; even back in 2000 he was a prodigy. To see the jump from AI struggling to beat beginners to beating the best in the world was a shock. Technology had advanced so much over those 15 years that a program could calculate out enough variations to predict the best move. The software had also gotten better at deciding where to spend its time and which variations to dig into more deeply. Best of all, AlphaGo can play against itself to get better and stronger, playing hundreds of games over and over with no sleep or restrictions. It’s truly amazing. It still shocks me to this day to see computer programs beat, without a handicap, people who have played for years.
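AlphaGo’s real training combines deep neural networks with Monte Carlo tree search, which is far beyond a blog snippet. But the self-play idea itself is simple enough to sketch. The toy program below (every name is invented for illustration) has an agent play one-pile Nim against itself, tracking win rates for each (pile, move) pair; with nobody to teach it, it still gets stronger purely by playing itself thousands of times.

```python
import random
from collections import defaultdict

# Toy self-play learner for one-pile Nim: players alternately take 1-3
# stones, and whoever takes the last stone wins. Purely illustrative.

class SelfPlayAgent:
    def __init__(self, epsilon=0.2):
        self.epsilon = epsilon  # exploration rate
        # (pile_size, stones_taken) -> [wins, times_played]
        self.stats = defaultdict(lambda: [0, 0])

    def choose(self, pile):
        moves = list(range(1, min(3, pile) + 1))
        if random.random() < self.epsilon:
            return random.choice(moves)  # explore a random move

        # Otherwise pick the move with the best observed win rate.
        def win_rate(m):
            wins, plays = self.stats[(pile, m)]
            return wins / plays if plays else 0.5
        return max(moves, key=win_rate)

    def record(self, history, winner):
        # history is a list of (player, pile, move) tuples.
        for player, pile, move in history:
            entry = self.stats[(pile, move)]
            entry[1] += 1
            if player == winner:
                entry[0] += 1

def play_one_game(agent, start=10):
    pile, player, history = start, 0, []
    while pile > 0:
        move = agent.choose(pile)
        history.append((player, pile, move))
        pile -= move
        if pile == 0:
            agent.record(history, winner=player)
            return player
        player = 1 - player

random.seed(0)
agent = SelfPlayAgent()
for _ in range(20000):  # no sleep, no breaks
    play_one_game(agent)

# Greedy play after training: from a pile of 3, taking all 3 wins
# immediately, and the trained agent has learned that.
agent.epsilon = 0.0
print(agent.choose(3))  # 3
```

The key property is the one from the post: nothing outside the agent supplies expertise. The opponent is the agent itself, so as it improves, its opposition improves with it, and the clear, fixed rules of the game are what make that loop possible.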
So how does this compare with security? I’m speaking here from the standpoint of AI doing a security analyst’s job. Well, I believe the technology is there for AI to kill the security analyst position: give a computer a set of rules and it can learn and grow beyond any human. I believe AI’s biggest hurdle is the lack of rules and standards in this field. How well would a self-driving car perform with no paved roads, no traffic lights, no signs, and no painted lines? Could it get you from point A to point B with none of these in place? Probably, but it would take far more programming and effort to make it happen, and even then it would drive no better than an adolescent. Every new job I take brings new and different technology: some tools use behavioral analytics, some rely on static rules; some companies collect endpoint logs, some don’t; some have an accurate asset inventory, most don’t. It’s too varied from one company to another for AI to come in and fix.
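To make the standards problem concrete, here is a minimal sketch. Two hypothetical companies log “the same” login event with completely different schemas (every field name below is made up), so before any automated analyst can apply even one shared detection rule, somebody has to write per-company glue code:

```python
# The same successful login, as two different companies might log it.
# All field names are invented for illustration.
company_a_event = {"evt": "logon", "u": "alice", "src": "10.0.0.5", "ok": 1}
company_b_event = {"EventType": "UserLogin", "UserName": "alice",
                   "SourceIp": "10.0.0.5", "Outcome": "success"}

# Per-company glue: map each vendor schema onto one common shape.
def normalize_a(e):
    return {"event": "login", "user": e["u"],
            "source_ip": e["src"], "success": e["ok"] == 1}

def normalize_b(e):
    return {"event": "login", "user": e["UserName"],
            "source_ip": e["SourceIp"], "success": e["Outcome"] == "success"}

# Only after normalization can a single rule apply everywhere.
def is_successful_login(event):
    return event["event"] == "login" and event["success"]

print(is_successful_login(normalize_a(company_a_event)))  # True
print(is_successful_login(normalize_b(company_b_event)))  # True
```

Multiply those two `normalize_*` functions by every vendor, every log source, and every company, and you get the unpaved road the self-driving car has to cross. An agreed-upon schema would delete most of that glue code, which is exactly the standardization this post is arguing for.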
I don’t believe AI can do a security analyst’s day-to-day job, but I do believe it can assist one. There needs to be a large push in this field if it’s really going to happen. Maybe 15 years from now I’ll look back at this blog and be shocked once again that AI did something I didn’t think it could do. But I don’t believe I’m waiting for the technology to catch up; I believe I’m waiting for the security industry to standardize and make a game that AI can play.
There’s also a great talk on AI and security from InsiderPhD and Bugcrowd on this topic.
Editor: Emily Domedion