Red Team Lessons Learned Series – Episode 3
Focus on information exchange between development, operations, and security teams
In this series of blog posts I want to highlight some common problem patterns I have observed while working in cybersecurity with various organizations. I made most of these observations while working as a Red Teamer, but some also come from my time as an Incident Responder. Read on to ask yourself whether your organization could be affected by any of these problems – and if so – to consider what could be done about it.
Exchange of information between teams is insufficient
Administrators often do not understand security, while security teams sometimes lack knowledge of the infrastructure. As the examples below show, this leads to gaps in Incident Response and makes Threat Hunting more difficult.
The example of a credentials leak
Let’s imagine a system gets breached and the attackers find a software package containing credentials to a database on a different system, so they access the database as well – and there find even more credentials letting them move laterally to other systems, and so on. Clear-text database credentials in a config file embedded in a software package are just one example of the many credential sets present on a typical system. This is not about developers/administrators and their credential-handling practices. We all know that eventually all credentials must be stored somewhere, and every system is full of them: system users, internal application users, databases, caches, log files, configs, private keys, etc. They come in various forms – from plain text, through hashed (most of which can be cracked offline), to encrypted (the encryption key must be stored somewhere too, otherwise there is no way to use the credential).
Even though developers and admins know about such files containing credentials (e.g. software packages, configuration files, source code, databases), they are often not aware of the attackers’ modus operandi with regard to harvesting and using credentials. Therefore we, as Incident Responders, cannot simply expect that admins and developers will fill us in and highlight all the credentials that could have been compromised when we communicate with them about the incident. As Blue Teams, we must understand these patterns and include them in our post-incident Threat Hunting. That means trying to get this information from the relevant staff, but also trying to find it ourselves when doing forensics. If we fail to do this, we might not fully estimate the incident’s scope.
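To make this hunting concrete, here is a minimal sketch (in Python, my choice of language) of scanning a directory tree for credential-like material, roughly the way an attacker – or a post-incident hunter – would. The regexes and file suffixes are illustrative assumptions only; real hunts use far richer rulesets (cloud API key formats, connection strings, JWTs, and so on):

```python
import re
from pathlib import Path

# Illustrative patterns only -- a real hunt would use a much larger ruleset.
CREDENTIAL_PATTERNS = {
    "password assignment": re.compile(r"(?i)(password|passwd|pwd)\s*[=:]\s*\S+"),
    "generic secret":      re.compile(r"(?i)(secret|api[_-]?key|token)\s*[=:]\s*\S+"),
    "private key header":  re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of every credential pattern found in `text`."""
    return [name for name, rx in CREDENTIAL_PATTERNS.items() if rx.search(text)]

def scan_tree(root: Path,
              suffixes=(".conf", ".ini", ".env", ".yml", ".yaml", ".properties")
              ) -> dict[str, list[str]]:
    """Walk a directory tree and report files containing credential-like strings."""
    findings = {}
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            try:
                hits = scan_text(path.read_text(errors="ignore"))
            except OSError:
                continue  # unreadable file -- skip; a real hunt would log this
            if hits:
                findings[str(path)] = hits
    return findings
```

Running something like this over recovered disk images (or over your own systems, proactively) gives the response team a concrete list of credentials to consider compromised, instead of relying solely on what admins remember to mention.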
While we are at credential leaks, another important factor that keeps getting neglected is the general phenomenon of password reuse. People tend to use the same password for multiple systems, including personal and work accounts. Attackers know this, so every time they get their hands on a batch of passwords, they know there is a good chance of breaking into other accounts belonging to the same victims, simply by trying the same or a similar password. And it often seems like attackers are the only ones aware of the phenomenon. Typical security awareness trainings, as well as password security guidelines and policies, do not emphasize this problem, while Incident Responders often fail to notify victims that a compromised password must be changed not only on the affected accounts, but also on any other account where they might have used the same password.
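One way responders can act on this is to check recovered plaintext passwords against accounts on other systems. The sketch below assumes, purely to stay self-contained, an unsalted SHA-256 store; real password databases use salted, slow hashes (bcrypt, scrypt, Argon2), where you would instead call the scheme’s own verify function per account. The account names are hypothetical:

```python
import hashlib

def sha256_hex(password: str) -> str:
    # Assumption for this sketch: unsalted SHA-256. Real stores use salted,
    # slow hashes (bcrypt/scrypt/argon2) -- there you would call the scheme's
    # verify() for each candidate account instead of comparing digests.
    return hashlib.sha256(password.encode()).hexdigest()

def find_reuse(leaked_plaintexts: dict[str, str],
               other_system_hashes: dict[str, str]) -> list[tuple[str, str]]:
    """Return (leaked_account, other_account) pairs that share a password.

    leaked_plaintexts:    account -> password recovered during the incident
    other_system_hashes:  account -> stored hash on an unrelated system
    """
    by_hash: dict[str, list[str]] = {}
    for account, pw in leaked_plaintexts.items():
        by_hash.setdefault(sha256_hex(pw), []).append(account)
    reuse = []
    for other_account, h in other_system_hashes.items():
        for leaked_account in by_hash.get(h, []):
            reuse.append((leaked_account, other_account))
    return reuse
```

Every pair this returns is an account on a supposedly unaffected system that must be treated as compromised and have its password rotated.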
Impeding incident response
As I mentioned in one of my earlier blog posts, identifying all systems affected by an incident (establishing the full scope) is crucial to evicting attackers from our assets after a compromise (eradication). Again, keep in mind that credentials are just one example of something developers and administrators know more about when it comes to individual systems. Environment-specific knowledge is crucial when doing incident response. Instead of credentials, we could just as well be talking about a different initial access vector, like a configuration vulnerability or a software vulnerability. My point is that when doing Incident Response, we should have a clear information exchange between security teams and DevOps, to make the most of our collective knowledge.
A barrier to alert triage
Another example of how a lack of sufficient environment knowledge impedes security teams is alert triage (whether from SIEM or EDR). When we as security teams do not know the environment well enough, we cannot effectively recognize anomalies, because we do not know what the norm is. Many anomalies suggesting a security breach can only be detected with an understanding of the context. For example, is a given local system user supposed to be on that system in the first place? Is it a legitimate account? Or was it created by a threat actor? It is hard to tell when you are not the one who set up the system and has been using and maintaining it. Or take another example – is a given access method (SSH, RDP, WMI, SMB, Windows Remoting, etc.) or even a given client software something that administrators routinely use in that environment, or rather something we should investigate more deeply?
Of course, we can usually find answers to these questions by querying more data sources and comparing the results, or by identifying the relevant staff and reaching out to them directly – but it all takes extra work and time, making alert triage harder.
Ideas for improvement
When it comes to the widespread ignorance of password reuse, I genuinely believe it is something that periodic, evidence-based security awareness training should cover.
Collecting relevant knowledge about the environment is something Incident Response teams should include in the first step of the IR process – preparation.
Insufficient security awareness and bad habits among DevOps teams could be addressed by having them go through basic infosec training at least once in their career. I am not talking about the periodic security awareness training mentioned earlier, the one every employee completes. I mean an additional, extended two- or three-day training for technical staff – which is exactly what they are – especially since, as DevOps, they hold more power and therefore more responsibility. Such a training would explain basic concepts like risk, vulnerabilities, vulnerability chaining and attack vectors. It would not only help build a better security posture and better habits among technical employees, but also lay the groundwork for closer cooperation with security teams when responding to incidents or conducting security assessments.