When working with alerts, I notice that an analyst's immediate reaction is sometimes to reach out to somebody else for the answer. I'll hear, "Well, I'll ask this person about that system" or "This person would be a good resource to ask about that traffic." There's a time and place for such questions, but I often see them used as a crutch, and that doesn't build the skills needed for future investigations. I believe outside sources should be brought in only when there's an indication of something truly malicious, or when the analysis is going nowhere after you've exhausted all your resources.
First, I want to talk about what I do when working a trickier alert, or one I just haven't seen before. For me, seeing a new alert is exciting and scary at the same time. A new alert means we're seeing something that hasn't happened in the environment before, and that we have to deviate from our standard procedures.
Working alerts is a lot like putting together a jigsaw puzzle, so I'll use that analogy. When I'm doing a jigsaw puzzle, I usually pick the most interesting one if a few are lying around. I look at the box because it gives me a clue to what's inside. I break the seal and start working. I begin with the edge and corner pieces, building the puzzle's frame. When I'm done with the border, I put pieces with similar colors together: whites, greens, reds, and so on. Then I start filling in the frame, and, hopefully, I don't have any missing pieces. If I do, I might ask my wife to help me find them; the cats probably stole a few.
Now, let us go back to looking at alerts. The first thing I do is pick out the more worrisome alerts. I ask myself, "What does the alert mean?" At this point, I'm not even digging into the details of the alert like hostname, IP addresses, or traffic (details that are equal to individual puzzle pieces). Just like I would choose a puzzle based on the cover of its box, I look at the "cover" of the alert and decide if I feel equipped to tackle it. If, for example, I see the alert "Potential Kerberoasting attack," I ask myself: Do I know what an attack like this even looks like? What are the dangers of such an attack? Where does this attack lie in an attacker's lifecycle? Knowing the alert's purpose, I can better understand whether this is a true or false positive. Looking at data won't help if you have no clue what Kerberoasting is.
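To make the Kerberoasting example concrete: one common indicator is a burst of Kerberos service-ticket requests (Windows Event ID 4769) using the weaker RC4 cipher (ticket encryption type 0x17). Here's a minimal sketch of that idea; the event records and threshold are illustrative stand-ins, not real data or a complete detection.

```python
# Hedged sketch: flag accounts requesting many RC4-encrypted (0x17)
# service tickets, a common Kerberoasting indicator in Event ID 4769 logs.
# The sample events and threshold below are illustrative only.
from collections import Counter

def flag_kerberoast_suspects(events, threshold=3):
    """Count RC4 TGS requests (4769) per account; return noisy accounts."""
    rc4_requests = Counter(
        e["account"]
        for e in events
        if e["event_id"] == 4769 and e["encryption_type"] == "0x17"
    )
    return [acct for acct, n in rc4_requests.items() if n >= threshold]

# Hypothetical events: one account requesting several RC4 tickets.
sample = [
    {"event_id": 4769, "account": "svc_probe", "encryption_type": "0x17"},
    {"event_id": 4769, "account": "svc_probe", "encryption_type": "0x17"},
    {"event_id": 4769, "account": "svc_probe", "encryption_type": "0x17"},
    {"event_id": 4769, "account": "jdoe", "encryption_type": "0x12"},
]
print(flag_kerberoast_suspects(sample))  # -> ['svc_probe']
```

Knowing what the telemetry of an attack looks like, even at this rough level, is what lets you judge a true positive from a false one.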
Now that we know what the alert is supposed to look like, we can crack open that alert and start looking at the pieces such as the hostname, IP addresses, or even traffic. We want to get that frame started. So I’ll ask questions like: What are the devices involved? What do they do? Were they just deployed? Simply understanding what a system is and what it’s supposed to do might give you additional insight to dig deeper into the alert. It’s good to have that frame before you start investigating the center of the alert.
After you have the frame, start looking at the traffic. Is this traffic normal for these systems? Is the process an approved piece of software? Is this process doing its regular activity? Can you look into the past to see what is expected from it and what isn't? Just like a puzzle, you try to start putting things together. This is equivalent to putting all the white pieces together because those are probably clouds, for example. As you work through the different parts of the alert, the picture suddenly becomes clearer.
As you can see, I've worked entirely by myself on the alert so far, instead of immediately asking others for help. Sometimes the pieces don't go together so quickly, though. Maybe the box cover only hints at what the pieces show. If you start putting the pieces together and it's just not making sense, it might be time to ask for outside help. Before I do, I try to identify whether any other tools will help me investigate. Have I looked at the vulnerability scanner? Ticketing system? EDR? Asset database? Try to exhaust all available tools. Once you have done this, be ready to explain what part of the alert you do not understand in as much detail as possible. The person or group you bring in will usually have questions just like we did in the investigation, and we need to be able to answer them. So if we see activity on a host, we should be able to say, "This host is used for this program. We checked ticketing systems, and there have been no updates/changes made to this system since this date. We looked back 30 days, and this is the first time we have had this kind of activity." And so on. We want to exhaust all information about it so the person we're bringing in can see as much of the puzzle as possible.
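That pre-escalation checklist can be thought of as a small routine: walk every tool, record what it told you, and note anything you haven't checked yet. The sketch below uses stand-in lambdas for the data sources; in practice each would be an API call to your actual scanner, ticketing system, EDR, and so on.

```python
# Hedged sketch: gather context from each tool before escalating.
# The sources and findings below are hypothetical stand-ins.
def build_escalation_summary(host, sources):
    """Run each lookup; record findings and any source not yet reviewed."""
    summary = {"host": host, "checked": {}, "unchecked": []}
    for name, lookup in sources.items():
        finding = lookup(host)
        if finding is None:
            summary["unchecked"].append(name)
        else:
            summary["checked"][name] = finding
    return summary

sources = {
    "ticketing": lambda h: "no changes recorded since 2023-01-15",
    "asset_db": lambda h: "payroll application server",
    "edr": lambda h: None,  # not yet reviewed
}
summary = build_escalation_summary("web01", sources)
print(summary["unchecked"])  # -> ['edr'] : review this before escalating
```

If the "unchecked" list isn't empty, you probably haven't exhausted your tools yet, and the person you're escalating to will likely ask about exactly those gaps.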
As I said, if there's an indication that this alert is malicious, escalating or bringing in other people to help with the investigation as soon as possible is essential; you want to triage the alert quickly. Starting out in an analyst/SOC role, all alerts might seem bad, and you might want to bring somebody in each time you're worried that this will be the "big one". There are two things you can do if you're not confident in your evaluation of an alert. First, you can simply ask a senior analyst, "Hey, I feel like this might be malicious. Can you glance at it to make sure it's not? I'll dig deeper, but I want your opinion." Let them know you are concerned it might be malicious, but if the senior doesn't think it is, then take some extra time to understand what is happening in the alert. Make it clear to the senior analyst that your intention is to learn, not to have them do your work for you. If a junior analyst came to me with this request, I would be happy to take a quick look. Over time, a junior analyst will start finding patterns in alerts and learn how to triage quicker. The second piece of advice is to do some offensive security training. The idea is that instead of just reading about a topic such as Kerberoasting, you're performing the activity, and you start to understand what steps an attacker would take. I believe a platform like TryHackMe is excellent for this since they have a great selection of red and blue team labs, plus the cost is minimal compared to some other platforms.
Questions are the key to moving an investigation forward. Working alerts, especially when new to the field, can be tricky. Every alert can feel like a severe attack against the company, and you don't want to delay reporting it. If you spend three hours on an alert and it turns out to be a severe attack, you might put the company in a risky situation. But if you escalate everything, senior analysts might consider you "the boy who cried wolf," and you won't learn how to handle alerts by yourself in the future. It's a fine line, but I think if there's an opportunity to take some time to figure out on your own an alert that you know isn't malicious, it will give you a better understanding of the environment and help you triage the next one quicker and more accurately.
Editor: Emily Domedion