Risk in a Zero Trust world
At a Glance
The zero trust principle envisions cyberspace as a hostile environment where no trust can ever be established and where all parties are assumed to be constantly involved in hostile activity. The implications of this for how we approach cyber risks are far-reaching, and in this article Farah Rigal discusses some of the ways it is already transforming approaches to detecting and protecting against cyber risks.
5 Minute Read
Cyberspace is a wide-open arena, not a closed circle where you need to be prudent only until trust is established with the parties involved. The zero trust principle takes this statement to its extreme — portraying cyberspace as a hostile environment where no trust can ever be established, and where all parties are assumed to be constantly involved in hostile activity.
The protect/detect dilemma
Imagine that you have handed your house keys to your teenager for the weekend. Ideally, you get to enjoy a romantic weekend away while your kid can have a few friends over without a bothersome adult presence. Trust occurs when you simply assume things will go well.
Zero trust requires you to constantly assume that you are being fooled by your teenager — for instance that he or she intends to throw a big party that you have strictly forbidden.
It is natural to consider how to protect before you detect, simply because protect comes first in every cybersecurity framework and checklist I can think of. However, you must assume that the protect controls will fail in the face of a motivated adversary or a compromised insider. In our example, your kid may distribute the entrance code to your building (if there is one) and would also be in a position to lock up your family’s trusty guard dog so it cannot sound an alarm.
Eliminate the blind spots
Continuing with the party example (although I’m not that kind of parent at all), let’s conclude that a security camera is the best way to verify your assumptions that a wild party is taking place without your permission.
One option is to make the camera clearly visible, so that it acts as a deterrent against misbehavior. However, the more paranoid you are (i.e. the more you exhibit zero trust), the better you should hide it. The reasoning is simple: if the camera is visible, bad actors can hide in its blind spots, or even feed it false or misleading data.
In a cyber environment, threat detection measures and security event analytics operate in a similar loop of continuous distrust. The less adversaries know about them (where they are placed, which exact tactics, techniques and procedures they can detect, and so on), the more work it takes to evade them. That said, obscurity alone is not the fundamental principle here.
Keep your friends close, and your adversaries closer
All activity needs to be collected and analyzed, even if it doesn’t raise any alerts. Successful authentication events, calls to selected authorized system libraries, opening connections to other systems, and other similar activity need to be monitored to detect any anomaly in volume, timing or other meaningful attributes. This means that in addition to the security camera, the video needs to be analyzed by computer vision software for anomalies and suspicious patterns.
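To make this concrete, here is a minimal sketch, in Python, of flagging anomalies in the volume of otherwise benign events such as successful logins per hour. The function name, the threshold, and the counts are illustrative and not taken from any particular monitoring product.

```python
from statistics import mean, stdev

def volume_anomalies(hourly_counts, threshold=2.0):
    """Flag hours whose event volume deviates more than `threshold`
    standard deviations from the baseline mean (illustrative only)."""
    mu = mean(hourly_counts)
    sigma = stdev(hourly_counts)
    return [i for i, count in enumerate(hourly_counts)
            if sigma > 0 and abs(count - mu) / sigma > threshold]

# Hypothetical hourly counts of *successful* logins; hour 5 spikes.
counts = [40, 42, 38, 41, 39, 400, 40, 43]
print(volume_anomalies(counts))  # → [5]
```

The point is that the flagged events are individually legitimate; only their aggregate behavior is suspicious, which is why they must be collected even though none of them raises an alert on its own.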
The rights to turn off the camera (i.e. event monitoring or reporting) need to be segregated from other administrative rights as much as possible. When a system fails to report its events to the monitoring solution, it should be treated as a separate security event and investigated as such.
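A simple sketch of that last point, assuming each monitored system emits a heartbeat timestamp to the monitoring solution: any source whose last report is older than a maximum gap is flagged for investigation. The source names, timestamps and five-minute gap are illustrative assumptions.

```python
def silent_sources(last_seen, now, max_gap_seconds=300):
    """Return sources that have not reported within max_gap_seconds.
    A reporting gap is itself treated as a security event."""
    return sorted(src for src, ts in last_seen.items()
                  if now - ts > max_gap_seconds)

# Hypothetical last-report timestamps (Unix epoch seconds).
now = 1_700_000_000
last_seen = {"web-01": now - 60, "db-01": now - 3600, "cam-01": now - 10}
print(silent_sources(last_seen, now))  # → ['db-01']
```

A silent source is ambiguous by design: it may be a crashed agent or an adversary who found the off switch, and zero trust says to investigate either way.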
From cyberspace into the real world
Recent cyber breaches have confirmed that motivated adversaries can circumvent even the strictest security controls, no matter how sophisticated they are. In targeted attacks, the offenders conduct detailed reconnaissance to find a path that evades all controls.
Non-targeted attacks can also make use of the latest attack techniques, including evasion capabilities. Some attacks are even named after their ability to fool security controls: to qualify as a Highly Evasive Adaptive Threat (HEAT), a threat needs to successfully bypass at least one of several traditional security defenses. As such threats proliferate, it becomes increasingly important to deploy multi-vector threat monitoring to reduce dwell time and response time in the event of a compromise.
Ironically, some breaches have also demonstrated how offenders can turn trust itself into a weapon. In the high-profile SolarWinds Sunburst backdoor attack, the hackers exploited the established trust in federated (hence trusted) authentication environments to extend their access rights and establish long-term access. The famous watering hole attack falls in this category as well, because it relies on poisoning or compromising a site that the victim commonly visits and trusts.
Taking advantage of a trust relationship to gain or extend malicious access is a well-documented attack technique. In the MITRE ATT&CK® framework it is known as Trusted Relationship, and it seeks to gain access through a less scrutinized path, such as the supply chain or the system administration team. Too often, high trust and elevated rights go hand in hand, despite the well-known principles of segregation of duties and least privilege.
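The segregation-of-duties principle can be checked mechanically. Below is a minimal sketch over a hypothetical rights inventory, in which the pair of rights that must never be held together ("system_admin" and "log_admin", i.e. the ability to act and the ability to erase the traces) is an illustrative policy choice, not a standard.

```python
def segregation_violations(grants,
                           incompatible=(("system_admin", "log_admin"),)):
    """Flag accounts holding rights that should never be combined,
    per a segregation-of-duties policy (pairs are illustrative)."""
    violations = []
    for account, rights in grants.items():
        held = set(rights)
        for a, b in incompatible:
            if a in held and b in held:
                violations.append((account, a, b))
    return violations

grants = {
    "alice": ["system_admin"],
    "bob": ["system_admin", "log_admin"],  # can act and cover tracks
}
print(segregation_violations(grants))
```

Running such a check continuously, rather than once at onboarding, is very much in the zero trust spirit: the rights inventory itself is not trusted to stay clean.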
Many controls can mitigate the risk of compromised trust relationships, such as the good practices of identity and access management, and secure-by-design network and system architecture. Yet zero trust loyalists will prefer to back these with multifactor authenticated access, through bastion systems with session monitoring, event correlation and content inspection.
Digital Vision: Cybersecurity 3 – Further Insights
From across Atos and beyond, find out more about cybersecurity challenges and how organizations can respond to cyber threats