
The hidden supply chain risks of AI workloads in the cloud

Securing the Layers Beneath AI Services

 

Artificial Intelligence in the cloud is rarely a standalone service – it’s the product of a complex digital supply chain of underlying cloud components. Organizations are eagerly adopting cloud-based AI platforms for their efficiency and scalability, but this rush comes with hidden risks. Cloud AI workloads depend on layers of third-party services (VMs, containers, storage, identity services, etc.), and vulnerabilities or misconfigurations in any layer can cascade upward, putting the AI application at risk.

Recent Tenable research underscores that cloud AI environments are rife with such inherited risks.

AI’s digital supply chain: Layers of risk beneath the service

When you spin up an AI service in the cloud – be it a managed machine learning notebook or an AI API – there are multiple services working behind the scenes. A managed AI notebook, for example, might actually run on a container service, which in turn runs on a virtual machine in the cloud provider’s infrastructure. Each of these underlying components is part of your AI’s digital supply chain, provided by the cloud vendor.

The catch: if any underlying service has a weakness, your AI workload inherits that risk.

Cloud providers often layer services Jenga-style, stacking new AI services on top of existing platforms. This means a misconfiguration deep in the stack, like an insecure default setting in a VM or storage service, can silently propagate into the AI layer above.

Jenga-style cloud misconfigurations – where cloud providers build one service atop another – are now surfacing in managed AI services. A single misconfigured lower-layer service can put all the dependent AI services at risk, underscoring the supply chain nature of cloud AI.

This layered supply chain complexity isn’t just theoretical. In the shared responsibility model of cloud, providers secure the base infrastructure, but customers are responsible for secure configuration of what they build on top. The reality is that cloud providers’ default settings prioritize ease of use over security – often excessively permissive defaults left unchanged by users can introduce serious vulnerabilities. Every AI workload built on cloud services thus inherits not only the power of those services, but also any latent security flaws within them.

Inherited vulnerabilities in AI workloads

One striking finding is that cloud workloads running AI software are significantly more vulnerable than those that are not. Nearly 70% of analyzed cloud workloads with an AI package installed contained at least one unremediated critical vulnerability – a markedly higher rate than the 50% of non-AI cloud workloads. In other words, an AI workload is more likely than not to have a critical exposure lurking within.

Why? One reason is the software supply chain of AI itself: many AI workloads run on Linux-based platforms with numerous open-source libraries and frameworks, which often have reported vulnerabilities. If those libraries (for example, a widely used AI-related package or even a tool like cURL) aren’t patched, they can introduce critical flaws.
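One way to catch these inherited flaws is to audit which library versions are actually installed in an AI workload. The sketch below compares installed packages against a small advisory table; the `ADVISORIES` entries here are purely illustrative placeholders, and in practice the data would come from a real vulnerability feed rather than a hard-coded dict.

```python
from importlib import metadata

# Illustrative advisory table: package -> first fixed version.
# These entries are placeholders; a real audit would pull from a
# vulnerability feed, not hard-code versions.
ADVISORIES = {
    "urllib3": (1, 26, 18),
    "pillow": (10, 2, 0),
}

def parse_version(version: str) -> tuple:
    """Best-effort numeric version tuple: '1.26.5' -> (1, 26, 5)."""
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def is_vulnerable(name: str, installed: str) -> bool:
    """True if the installed version predates the first fixed release."""
    fixed = ADVISORIES.get(name.lower())
    if fixed is None:
        return False
    return parse_version(installed) < fixed

def audit_environment() -> list[str]:
    """Flag installed distributions that match an advisory entry."""
    findings = []
    for dist in metadata.distributions():
        name = dist.metadata["Name"] or ""
        if is_vulnerable(name, dist.version):
            findings.append(f"{name} {dist.version}")
    return findings
```

Running `audit_environment()` inside a container image or notebook environment gives a quick picture of which AI-stack dependencies are overdue for patching.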

The impact of these inherited vulnerabilities is amplified by the nature of AI workloads. If an attacker exploits a critical flaw in an AI system, the consequences can go beyond server access – the attacker might manipulate AI models or tamper with sensitive training data, leading to corrupted AI outputs or leakage of proprietary insights.

And if that vulnerable AI workload is also left exposed to the internet, it creates a toxic combination that significantly increases the likelihood of compromise. The Tenable Cloud AI Risk Report bluntly notes that AI, for all its intelligence, is not risk-free and requires your attention.

Security teams must be especially vigilant with vulnerability management for AI. The high incidence of critical flaws in these workloads, combined with the sensitive nature of AI data and models, makes patching and hardening an urgent priority.


Risky defaults and misconfigurations in Cloud AI services

Beyond software vulnerabilities, misconfigurations in cloud services are a major supply chain risk for AI. Cloud-based AI services often come with default configurations that, if left unchanged, can be dangerously over-privileged. For example, a managed AI notebook service from a leading cloud service provider gives users root access within the notebook instance by default – full administrator control. Alarmingly, the vast majority (about 91%) of organizations using the service have at least one notebook with this risky root-access default still enabled.

According to Tenable, 90.5% of organizations using managed AI development environments retain overly permissive, root-level access settings in at least one notebook instance. This misconfiguration — often inherited from default configurations — demonstrates how a single oversight can expose AI workloads to significant risk.
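This kind of misconfiguration is straightforward to audit programmatically. The sketch below flags notebook instances whose root-access setting is still enabled; it assumes descriptions shaped like AWS SageMaker's `DescribeNotebookInstance` response (a `RootAccess` field of `"Enabled"` or `"Disabled"`), and in a real audit the descriptions would come from the provider's API (e.g. via boto3) rather than a hard-coded dict.

```python
def root_access_findings(notebooks: dict[str, dict]) -> list[str]:
    """Return names of notebook instances whose RootAccess setting
    is not explicitly 'Disabled'.

    `notebooks` maps instance name -> description dict, assumed to
    mirror SageMaker's DescribeNotebookInstance response. A missing
    RootAccess field is treated as 'Enabled', matching the risky
    default described above.
    """
    return [
        name
        for name, desc in notebooks.items()
        if desc.get("RootAccess", "Enabled") != "Disabled"
    ]
```

A scheduled job running a check like this across all accounts turns a silent default into a visible, trackable finding.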

To make matters worse, cloud AI environments often deal with highly sensitive data, and misconfigurations here can lead to severe exposure. Consider a leading cloud service provider's service for AI model hosting and training. It relies on cloud storage buckets for training data. Tenable’s research found that 14% of organizations using the service had at least one training data bucket with public access not blocked, and about 5% had at least one overly permissive bucket open to any authenticated user. In practice, this means a significant number of AI datasets – which could include proprietary training data or customer information – were one misstep away from being exposed to the world. Such exposure invites a host of risks: an attacker could steal the training data (revealing intellectual property or sensitive info) or even poison the data by injecting malicious samples, corrupting the model’s outputs.

Continuous exposure management for AI supply chains

Given the breadth of these risks – from unpatched vulnerabilities in open-source libraries to inherited cloud misconfigurations and exposed datasets – it’s clear that securing AI in the cloud requires continuous, proactive risk management. Traditional security checklists aren’t enough; organizations need an exposure management mindset tailored to AI’s unique attack surface. In practice, this means continuously assessing every layer of the AI supply chain, including the cloud services it is built on, for weaknesses.

What might this look like?

First, extend your vulnerability management program to cover AI frameworks and the underlying infrastructure they run on. If nearly 70% of AI workloads carry critical CVEs, then scanning and patching those systems (including AI libraries like TensorFlow, PyTorch, and even the base OS images and container layers) should be a top priority. Equally important is configuration auditing: do not trust default settings in managed AI services. When deploying cloud AI offerings – whether Amazon SageMaker, Google Vertex AI, Azure Cognitive Services, or others – scrutinize and harden the configurations at each layer. Ensure storage buckets for model data are not publicly accessible by default, and that service accounts or API keys adhere to the principle of least privilege. Cloud providers often publish security guidelines for their AI services; use them, but remain cautious, knowing that “default” often does not equate to “secure.”

Identity and access management is another linchpin. Apply strict identity and access controls to AI resources and data. Limit who and what services can access your models, notebooks, and data stores. This includes using unique service accounts with minimal scopes for AI services, rather than allowing broad reuse of admin-level accounts. Where possible, implement multi-layer access controls (for example, network restrictions in addition to identity permissions) to guard sensitive training data. The principle of least privilege should be enforced rigorously in AI environments – for both human users and machine identities. As Tenable’s analysts note, reducing excessive permissions and tightening cloud identities is key to preventing unauthorized access, especially to high-value AI data stores.
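Least-privilege reviews can also be partially automated. The sketch below flags statements in an IAM-style JSON policy document that grant wildcard actions or resources; the policy shape assumed here follows the common AWS policy grammar (`Effect`/`Action`/`Resource`), and a real review would layer additional checks (conditions, principals, service-specific permissions) on top.

```python
def overly_broad_statements(policy: dict) -> list[dict]:
    """Flag Allow statements that grant wildcard actions or resources.

    `policy` is assumed to follow the AWS-style JSON policy layout:
    {"Statement": [{"Effect": ..., "Action": ..., "Resource": ...}]}.
    """
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        broad_action = any(a == "*" or a.endswith(":*") for a in actions)
        if broad_action or "*" in resources:
            findings.append(stmt)
    return findings
```

Running a check like this over the policies attached to AI service accounts quickly highlights where "admin for convenience" has crept into the machine identities guarding models and training data.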

But securing AI doesn’t start at deployment — it starts in development. Vulnerabilities can be introduced early in the CI/CD pipeline, even before a model reaches production. That’s why it’s essential to adopt a shift-left approach: identifying and addressing security risks during code development, integration, and testing stages. Cloud-native application protection platforms (CNAPPs), including those from Tenable, help teams embed security earlier in the lifecycle, providing visibility and control across development and runtime environments. The earlier you act, the more resilient your AI systems become.

Finally, given the interconnected nature of cloud risks, organizations should consider Exposure Management platforms with robust Cloud Security capabilities that can provide a holistic view of risk across the cloud stack. An effective platform that is ready to tackle the risk of AI can correlate findings from vulnerabilities in workloads, misconfigurations in cloud resources, identities, and even data exposure, to paint a context-rich picture of your true risk. This helps in prioritizing remediation by impact and focusing on the toxic combinations that could lead to a breach.

Digging deeper

AI is driving innovation and competitive advantage, but it also introduces a high-stakes security puzzle. The hidden supply chain beneath every cloud AI service means security teams must look beyond the AI application itself and secure each dependency that makes it work. This requires vigilance, the right tools, and a proactive strategy. The encouraging news is that by embracing continuous exposure management and treating AI workloads as part of a broader cloud ecosystem, organizations can enable AI innovation securely.

Securing the AI revolution is about securing the layers beneath – from code libraries to cloud configurations and data pipelines.

To learn more, read Tenable’s latest research, Tenable Cloud AI Risk Report 2025 and the Tenable Cloud Security Risk Report 2025.

Justin Buchanan

Sr. Director, Cloud Security Marketing, Tenable

