What’s next for the future of DevSecOps?

Digital transformation fosters a culture of continuous DevOps. Organizations used to think that failure wasn’t an option and that they could control everything – because applications were developed entirely from scratch. But to thrive in the digital world, they must adopt new technologies early, experiment and iterate, often relying on third-party sources to help them develop quickly.

According to Flexera’s State of the Cloud report, more than half of enterprise workloads and data were hosted in the cloud in 2020. While it may appear that much has already been done to mitigate vulnerabilities, the reality is quite the opposite. For example, in 2019 a malicious intruder obtained AWS access keys from a large banking organization through an overly permissive IAM role. The attacker was able to extract sensitive information from an S3 bucket, impacting nearly 100 million people.

When it comes to security incidents, measuring the total impact is crucial. We refer to this as the “security blast radius.” The above breach followed a pattern seen in most attacks: the hacker builds a kill chain, a path of exploitation towards the final goal, a so-called “breach path.” The hope is to someday leverage artificial intelligence to predict these breach paths.

Taking a closer look at the attack, we can identify the breach path. An SSRF (server-side request forgery) attack allowed the attacker to trick the server into making requests on the attacker’s behalf, which exposed a private server. By querying the instance metadata service, the attacker obtained the credentials for the IAM role attached to the EC2 instance. Because this role didn’t have least-privilege permissions attached, it was able to read and decrypt data from S3. If this role had only been able to access a single S3 bucket and its corresponding KMS key, the blast radius would have been limited to that bucket and customer data would not have been compromised.
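To make the point concrete, here is a hypothetical least-privilege policy for such a role, expressed as a Python dictionary, along with a helper that enumerates the resources the role can touch. The bucket and key ARNs are placeholders for illustration, not values from the actual incident:

```python
# Hypothetical least-privilege policy: access is scoped to one logging bucket
# and the KMS key that encrypts it (ARNs below are placeholders).
LEAST_PRIVILEGE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadLoggingBucketOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-logging-bucket",
                "arn:aws:s3:::example-logging-bucket/*",
            ],
        },
        {
            "Sid": "DecryptWithLoggingKeyOnly",
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": ["arn:aws:kms:us-east-1:111122223333:key/example-key-id"],
        },
    ],
}

def blast_radius(policy: dict) -> set[str]:
    """Return the set of resources the policy allows access to.

    A "*" in the result means the blast radius is the whole account.
    """
    resources: set[str] = set()
    for statement in policy["Statement"]:
        if statement["Effect"] == "Allow":
            resources.update(statement["Resource"])
    return resources
```

With a policy like this, even stolen credentials can only reach the three listed ARNs, which is exactly the "limited blast radius" outcome described above.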

DevSecOps

DevSecOps has been a buzzword for a few years now. We simply can’t imagine 2022 without proper implementation of the DevSecOps model. Security objectives should be integrated into the software development lifecycle from an early stage, which involves more than just building pipelines. It’s important for people to have the right mindset, understand the shared responsibility model and establish processes that support the methodology and continuous improvement. Organizations must focus on what technology they should be using going forward, and ensure that they are up to date with what the market has to offer.

Security as code

The relatively new concept of codifying policies is a natural evolution of infrastructure-as-code, solving many problems for the engineering teams. When you apply policy-as-code to an existing cloud environment, you can see in real time what is or isn’t compliant with your security standards. The same policies can be defined across cloud environments and even on-premises, and appropriate measures can be taken when violations occur.
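As a minimal sketch of the idea (the resource schema and rule names below are invented for illustration), policies can be expressed as ordinary functions and evaluated against an inventory of resources from any environment, cloud or on-premises:

```python
# Minimal policy-as-code sketch: rules are plain functions, so the same checks
# can run against any resource inventory. The resource schema is an assumption
# made for this example.
def no_public_buckets(resource: dict) -> bool:
    return not (resource["type"] == "s3_bucket" and resource.get("public", False))

def encryption_at_rest(resource: dict) -> bool:
    return resource.get("encrypted", False)

RULES = {
    "no-public-buckets": no_public_buckets,
    "encryption-at-rest": encryption_at_rest,
}

def evaluate(resources: list[dict]) -> list[tuple[str, str]]:
    """Return (resource_name, violated_rule) pairs for every violation."""
    return [
        (resource["name"], rule_name)
        for resource in resources
        for rule_name, rule in RULES.items()
        if not rule(resource)
    ]
```

Running `evaluate` over a live inventory gives the real-time compliance view described above; the violation list is also the natural trigger point for the remediation workflows discussed later.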

Security testing

According to Gartner’s 2019 Magic Quadrant for Application Security Testing, 10% of coding vulnerabilities identified by static application security testing (SAST) will be remediated automatically by 2022 with code suggestions applied from automated solutions, up from less than 1% today.

Security testing is mandatory to detect all possible security risks in the system and mitigate the potential threats at every stage of the development process. While some may argue that it comes with high costs, those costs will always increase if security testing is postponed until after implementation or deployment. Moreover, the cost increases exponentially in case of an incident.

In 2022, static application security testing should be incorporated into every automated development workflow to parse application source code, bytecode or binary without executing it. The following policy would be easily identified as a problem using SAST since it allows public read access across all buckets. SAST would prevent this code from being deployed before the permissions were restricted.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
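A static check for this class of misconfiguration can be quite small. The following sketch (an illustration, not a production SAST tool) parses the policy shown above as JSON, without executing or deploying anything, and flags Allow statements that combine a wildcard principal with a wildcard resource:

```python
import json

# The policy from the article, as the JSON text a scanner would read from
# source control -- it is parsed, never executed or deployed.
POLICY_JSON = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["*"]
    }
  ]
}
"""

def find_public_wildcards(policy_json: str) -> list[str]:
    """Return the Sids of Allow statements granting a wildcard principal
    access to a wildcard resource -- the pattern a SAST rule would block."""
    policy = json.loads(policy_json)
    findings = []
    for stmt in policy.get("Statement", []):
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if (stmt.get("Effect") == "Allow"
                and stmt.get("Principal") == "*"
                and "*" in resources):
            findings.append(stmt.get("Sid", "<no Sid>"))
    return findings
```

Wired into a CI pipeline, a non-empty findings list would fail the build, which is how SAST prevents such a policy from being deployed before the permissions are restricted.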

Dynamic application security testing (DAST) provides an outside-in perspective on the application before it goes live. Interactive application security testing (IAST) uses software instrumentation to analyze applications while they run. Finally, runtime application self-protection (RASP) detects attacks against a running application and responds in real time.

Penetration tests, risk assessments, security auditing and ethical hacking are also good traditional methods to expose potential vulnerabilities and security flaws in the system, and assess the overall security posture of an organization.

Remediation as code

To ensure that security doesn’t hinder development, remediation workflows must be integrated into the process. Security policies should be defined, and code for resolving violations should be generated automatically when they occur. A pull request should be raised with the new code changes and the development team should be notified to review and merge the changes. An automated testing procedure must be in place to verify that the code works correctly and that previous functionality hasn’t been affected. Using this mechanism, the risky code will be overwritten by the secure code and the risks will be mitigated before the cloud infrastructure is provisioned.
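As a simplified illustration of such a workflow (a sketch, not a complete remediation engine), the step that generates the fixed code can be a pure function that rewrites wildcard resources to a scoped list of ARNs; in a real pipeline its output would be committed to a branch and raised as a pull request for the team to review:

```python
import copy

# Hypothetical remediation-as-code step: rewrite wildcard resources in Allow
# statements to an explicit, scoped list of ARNs. The original policy is left
# untouched so the diff between the two becomes the proposed pull request.
def remediate_wildcard_resources(policy: dict, scoped_arns: list[str]) -> dict:
    fixed = copy.deepcopy(policy)
    for stmt in fixed.get("Statement", []):
        if stmt.get("Effect") == "Allow" and stmt.get("Resource") in ("*", ["*"]):
            stmt["Resource"] = scoped_arns
    return fixed
```

Because the function is deterministic and side-effect free, the automated tests mentioned above can verify the remediated policy before it is ever provisioned.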

Looking at the S3 bucket misconfiguration example again, there are several ways this breach could have been prevented by the right tools and processes. For example, had the role been defined using IaC, its read access to all the buckets in the environment would have been spotted before the role was even created, and access could have been restricted to the logging bucket alone.

If everything had been deployed with detection tools in place, such as Amazon Macie (which automatically classifies data and raises alarms for anomalous requesters or unusually high data-transfer volumes) or Amazon GuardDuty (which alerts customers to unusual API calls), automated remediation-as-code actions could have been applied to at least limit the data exfiltration.

The cybersecurity landscape is constantly changing and poses ever more challenges. While it may be hard to craft a winning recipe for securing the development process, we hope that the guidelines described above will bring you closer to success. Every year, new technologies are developed, existing ones are improved and revolutionary techniques emerge in this demanding industry.

In our fight against bad actors, we are eagerly awaiting 2022 to see what new upgrades and weapons we will receive.


About the authors

Corina-Stefania Nebela

Big Data and Cybersecurity Architect, Atos

Corina has extensive experience in big data, combining it with machine learning and artificial intelligence. As the Lead Architect for the Atos Prescriptive Security Operations Center (SOC) project, she became fascinated by cybersecurity as well. Corina believes the cloud is the future, so she currently leads a DevSecOps team with a focus on AWS security, while also working closely with the Cloud Enterprise Solutions (CES) and Portfolio teams to make sure the latest cybersecurity technologies are available both for customers and for Atos itself.

Sorin-Alexandru Flueras

DevSecOps Engineer, Atos

Sorin is a DevSecOps enthusiast, currently working as a DevSecOps engineer at Atos. He graduated as a Computer Engineer and has been passionate about security and DevOps ever since. Sorin is part of a team of like-minded individuals that share his passion for technology. He takes pride in his work and strives to improve every aspect of it by following life’s never-ending learning path.