In the first post in my “Think Like An Analyst” series, I talked about slowing down, and I offered a trick borrowed from the meditation practice of “body scanning” to help with that. You can find it here: Lesson 1.
In theory, it’s a great way of tackling an alert, but in some ways, it doesn’t reflect real life. Don’t get me wrong, I still believe in it and recommend it, but in a real-life scenario you might be handed 20 alerts all at once, and working through such a process for each one is difficult. It’s even worse when 19 of those 20 alerts are almost identical. I’ll use art as an example.
I’ll give you five pictures, and I want you to spend two minutes on each one.
Did you take two minutes apiece and study each picture above? I’m guessing most didn’t, and I don’t blame you. Did you notice all the differences? Did you notice that they were numbered out of order? How did you feel looking at Figure 1 compared to Figure 5? Looking at anything for two minutes can feel like an eternity, especially when everything looks about the same. If you were asked to analyze and document each picture, would you write up one analysis and copy and paste it for every picture? Of course, and why not? They’re all practically the same. Yet this scenario is all too common for a SOC analyst.
The problem with repeatedly seeing the same alert, or picture, is that you become numb and uninterested in it. If there’s no danger, why invest so much time in each one? One study determined that museum visitors spend less than 30 seconds, on average, looking at a piece of artwork. More interesting still: if a person had already seen a piece before, they were 50% less likely to look at it again, and when they did, they spent roughly 15 seconds on the second viewing. I’m curious how much time would be spent on the third, fourth, or tenth viewing. If our attention decreases that much after just one viewing, what do you think our attention to detail is the hundredth or thousandth time we get an alert?
We blame analysts for the one time in a thousand when the alert is malicious and they close it as a false positive. The thing is, I’ve done that too: I’ve incorrectly called an alert a false positive when it deserved a deeper look. We can all be victims of an environment where the alerts and systems just create noise.
I often hear SOAR proposed as a fix for some of these issues. Automation is great, and it can save tons of time, but the problem I described above isn’t one automation is here to solve. Enriching a false-positive or low-fidelity alert that fires twenty times a day doesn’t make sense. Just imagine if, along with the pictures above, I gave you additional information about each one: who painted it, where it was painted, and so on. Would that make you want to dig into each picture more? Of course not. I hear that SOAR can reduce some noise, but why not fix your systems by tuning them rather than patching over them with something like SOAR?
Tuning your systems is key to keeping things in tip-top shape and keeping analysts engaged with the work they’re doing. I’ll cover what makes a good tuning request, and what makes a bad one, in a later post. Just know that tuning is a skill many in the security field struggle with, and it’s one of the highest-impact things a company can do to help analysts identify the next threat. As an analyst, try to slow down your analysis, especially for unique alerts you haven’t seen before. And if you keep seeing the same alerts over and over, try to get your voice heard, though I know that’s not always easy.
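To make the distinction between enriching noise and tuning it out concrete, here’s a minimal sketch in Python. Everything in it is hypothetical for illustration: the rule name, the service account, and the alert fields are made up, and a real tuning exclusion would live in your SIEM or detection rules, not in triage code.

```python
# Hypothetical sketch: tuning expressed as a documented rule-level
# exclusion, so known-benign noise never reaches the analyst queue.
# Field names and values are invented for illustration.

# Example tuning entry: a scheduled-task alert that fires dozens of
# benign hits a day from a patch-management service account.
TUNING_EXCLUSIONS = [
    {"rule": "scheduled_task_created", "user": "svc_patchmgmt"},
]

def is_tuned_out(alert: dict) -> bool:
    """Return True if the alert matches a documented tuning exclusion."""
    return any(
        all(alert.get(field) == value for field, value in exclusion.items())
        for exclusion in TUNING_EXCLUSIONS
    )

def triage_queue(alerts: list[dict]) -> list[dict]:
    """Drop tuned-out noise so only alerts worth real attention remain."""
    return [alert for alert in alerts if not is_tuned_out(alert)]

alerts = [
    {"rule": "scheduled_task_created", "user": "svc_patchmgmt"},  # known noise
    {"rule": "scheduled_task_created", "user": "jdoe"},           # worth a look
]
print(triage_queue(alerts))
```

The point of the sketch is the design choice: the exclusion removes the twentieth identical alert before anyone sees it, whereas enrichment would have decorated all twenty and still asked an analyst to read each one.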
You can find my post about tuning here: Tuning Done Right