Integrated IT / IoT / IoMT
The blurring of boundaries between IT and OT networks, and between corporate and factory environments, accelerated by the adoption of common protocols and the increased use of wireless technologies, has made it easier to target production systems through the corporate and administrative network.
Protection of industrial assets and OT in a broader sense (pipelines, supply and distribution infrastructure, turbines, etc.) is rendered ineffective if the legitimate traffic flowing through authorized remote access is not treated as a possible threat vector. In that context, visibility convergence over the IT and OT networks is essential to protect against patterns of indirect and staged compromise.
Industrial IoT (IIoT) is present inside OT environments to enhance their manageability, adaptiveness and resilience. Gaining access to IIoT, even indirectly through poisoning of input data, is yet another way to gain malicious control of critical infrastructure. When patients are the “critical infrastructure”, IoT becomes IoMT (M for Medical), with the risk of remote and malicious control of valves and (insulin) pumps; where IoMT devices are used in preventive care and diagnostics, the risks include loss of privacy, data theft and use of data beyond consent.
The coming together of IT and IoT/IoMT technologies to create value for many segments of industry and society translates into demand for integrated supervision of all these systems together.
From persistent threats and supply chain attacks to the more basic spread of malware, many use cases can be detected with this converged visibility, bringing together:
- data from the assets participating in a business process,
- data from the IoT implanted to monitor and control it,
- data from the cloud and edge resources where the collected data is processed,
- and data from the IT (or at least the incident management systems) of the organizations, their suppliers and business partners.
Every player in the digital value chain secures its own boundaries, focusing in most cases on inbound inspection rather than integrated supervision.
Boundaries are fading, and adversaries are learning to exploit the fact that many environments are blind to each other.
The benefits include:
- increased detection and response,
- optimized resources when single platforms or interconnected systems are used (instead of manual or ad hoc alignment between different parties).
Although the technology is mostly seen in the manufacturing and energy sectors, it is applicable everywhere.
Hybrid & Multi-cloud unified detection & response
Hybrid and multi-cloud architectures are leveraged to get the best compute for each workload. Orchestrating workloads across different clouds for the best cost, performance and use-case fit will become common. This raises multiple challenges for threat detection and response. Threat detection platforms and services should support the most common cloud and SaaS providers, discover and add workloads dynamically, support the different cloud monitoring technologies, and provide central visibility across the multi-cloud estate. Such a unified approach elevates the capability to detect and respond to threats.
- Unified technology layer that can abstract and integrate with hybrid and multi-cloud providers to provide central visibility, detection, and response capabilities
- Discover workloads on the fly as they get orchestrated across traditional datacenters, and different cloud providers. Workloads get added for monitoring as soon as they are discovered
- Threat detection use cases across different service layers of cloud including virtual machines, cloud consoles, cloud storage, database, app services, containers, serverless compute, identity services, token management, cloud firewalls, WAF, flow logs.
- Permission changes including public access to storage accounts/S3 buckets
- Sync/copy of S3 objects/Storage account blobs from unusual/blacklisted user accounts, IP address, URL
- Anomalous execution of critical commands on a database/table
- Changes to NSG rules from unusual IP/geography
- Suspicious API registration
- Unauthorized Container Registry changes
- High velocity of cloud console logins for admin user from multiple geographies in a short time frame
- Lateral movement detection using Azure NSG flow logs, AWS VPC flow logs
- Orchestrate threat response across multiple clouds from a single console. As an example, block a malicious public IP address by writing a containment rule using Azure NSG, AWS VPC security groups
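One of the detection use cases above, flagging public access granted to storage, can be sketched in a few lines. The sketch below parses a CloudTrail-style `PutBucketAcl` event; the field paths follow AWS CloudTrail's documented JSON layout, but treat them as an assumption to verify against events in your own tenant:

```python
# Sketch: flag CloudTrail PutBucketAcl events that grant access to a public group.
# Field paths are assumptions based on the CloudTrail JSON event layout.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_public_acl_change(event: dict) -> bool:
    """Return True if a PutBucketAcl event grants access to a public group."""
    if event.get("eventName") != "PutBucketAcl":
        return False
    policy = event.get("requestParameters", {}).get("AccessControlPolicy", {})
    grants = policy.get("AccessControlList", {}).get("Grant", [])
    if isinstance(grants, dict):  # a single grant may not be wrapped in a list
        grants = [grants]
    return any(g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES for g in grants)

sample = {
    "eventName": "PutBucketAcl",
    "requestParameters": {
        "bucketName": "payroll-exports",  # hypothetical bucket name
        "AccessControlPolicy": {
            "AccessControlList": {
                "Grant": [{"Grantee": {"URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
                           "Permission": "READ"}]
            }
        },
    },
}
print(is_public_acl_change(sample))  # True
```

The same pattern, normalizing provider-specific events into one decision function, is what the unified technology layer abstracts across Azure, AWS and other clouds.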
- High fidelity detection across a multi-cloud environment
- Ability to track targeted attacks coming from cyber crime syndicates or nation states leveraging the unified visibility across multi-cloud
- Rapid response using unified visibility and multi-cloud orchestration prevents threats from converting to high impact security incidents or breaches
- Common approach through a unified security technology layer that abstracts the variations and complexities of managing security with each cloud provider
This approach assumes integration across multiple cloud providers. The openness and richness of the APIs exposed by cloud providers for the various service layers can be a challenge. The risk is limited, however, since cloud providers must keep their APIs open and flexible to succeed.
Risk-based vulnerability management
Risk-based vulnerability management elevates vulnerability management into a key process for understanding the organization's attack surface and exposure, and how these affect businesses and impact their operations. Using context and risk awareness to prioritize, triage and respond to the vulnerability landscape allows the best possible outcomes and mitigations.
It may bring together asset management, vulnerability management, security and network intelligence, and risk and security posture, complemented with configuration assessment and other operational indicators such as Security Configuration Assessment (SCA). SCA complements a vulnerability management program by ensuring that a specific system holds the right security settings.
The first step is getting a good understanding of the organization and mapping business processes. On top of this come the mandatory components, network vulnerability scans (external and internal), to be enriched with the necessary indications of severity, availability of exploits and the ease of procuring them. The overall aim is to triage and trigger remediation based on risk. Further enrichment with other operational data is desirable to bring awareness of existing controls, incident history, and the current volume and profile of events, all in an automated way. Good security analytics are necessary to drive the right correlations and support decisions. AI, for instance, can help drive continuous improvement of triage capabilities and the reduction of false positives.
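The prioritization logic can be illustrated with a minimal, purely illustrative scoring sketch (the weights and factors are assumptions, not a standard formula): raw severity is modulated by exploit availability and business context, so a lower-severity finding on a critical, internet-facing asset can outrank a higher CVSS score.

```python
# Illustrative risk-based triage score; weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float             # base severity, 0-10
    exploit_public: bool    # a working exploit is freely available
    asset_criticality: int  # 1 (lab) .. 5 (business critical)
    internet_facing: bool

def risk_score(f: Finding) -> float:
    """Weight raw severity by exploitability and business context."""
    score = f.cvss
    score *= 1.5 if f.exploit_public else 1.0          # exploit in the wild
    score *= 1.0 + 0.2 * (f.asset_criticality - 1)     # business impact
    score *= 1.3 if f.internet_facing else 1.0         # exposure
    return round(score, 1)

findings = [
    Finding("CVE-A", cvss=9.8, exploit_public=False, asset_criticality=1, internet_facing=False),
    Finding("CVE-B", cvss=7.5, exploit_public=True, asset_criticality=5, internet_facing=True),
]
# CVE-B ranks first despite the lower CVSS score.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve, risk_score(f))
```

In practice these inputs come from the enrichment sources listed above (threat intelligence, asset inventory, exposure data) rather than being set by hand.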
Adding prioritization and risk-based management to vulnerability management, a key pillar of any cybersecurity framework, ensures that it is efficient and aligned with business objectives (outcome vs. cost).
Risk management, when combined with operations, adds a business-driven layer and better alignment between IT and non-IT teams.
It supports audit programs to validate adherence to company’s policies or compliance with specific industry security frameworks.
It can improve the maturity and efficiency of the cybersecurity framework implemented. It provides the framework for onboarding a wide variety of additional services and programs such as penetration testing, red teaming and so on.
Any industry enforcing compliance with IT/security/data regulations is a candidate, from the most generic and widely applicable such as NIST to industry-specific ones such as PCI or HIPAA (whose Security Rule requires the correct deployment of security controls protecting personal health information). Verticals holding assets that are not often hardened or frequently patched are also very good candidates for this technology, as the decision to disrupt production with remediation actions needs to be backed by a business justification.
Risk-based vulnerability management needs to be deployed as part of a mature vulnerability management program. To be effective, the process needs to use reliable asset information and be connected to multiple sources of enrichment to correlate with the risk indicators.
On the technical side, existing solutions tend to require privileged access on the assets they assess, making integration more complex.
MITRE-based risk management
MITRE ATT&CK is a curated knowledge base and model for cyber adversary behavior, reflecting the various phases of an adversary’s attack lifecycle and the platforms they are known to target. ATT&CK focuses on how external adversaries compromise and operate within computer information networks.
Many organizations, in the private sector and government alike, are starting to use MITRE ATT&CK as a central barometer of their operational security and threat preparedness, and the ATT&CK knowledge base as a foundation for the development of specific threat models and methodologies. MITRE ATT&CK is also used to prioritize and roadmap, in a risk-driven approach, the deployment of new security use cases and projects.
At a high-level, ATT&CK is a behavioral model that consists of the following core components:
- Tactics, denoting short-term, tactical adversary goals during an attack;
- Techniques, describing the means by which adversaries achieve tactical goals;
- Sub-techniques, describing more specific means by which adversaries achieve tactical goals at a lower level than techniques;
- Documented adversary usage of techniques, their procedures, and other metadata.
Adversary Emulation – ATT&CK can be used to create adversary emulation scenarios to test and verify defenses against common adversary techniques. Profiles for specific adversary groups can be constructed out of the information documented in ATT&CK (see Cyber Threat Intelligence use case). These profiles can also be used by defenders and hunting teams to align and improve defensive measures.
Red Teaming – ATT&CK can be used as a tool to create red team plans and organize operations to avoid certain defensive measures that may be in place within a network. It can also be used as a research roadmap to develop new ways of performing actions that may not be detected by common defenses.
Behavioral Analytics Development – ATT&CK can be used as a tool to construct and test behavioral analytics to detect adversarial behavior within an environment. The Cyber Analytics Repository (CAR) is one example of analytic development and can be a starting point for an organization to develop behavioral analytics based on ATT&CK.
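A CAR-style behavioral analytic can be very small. The sketch below (illustrative, not an official CAR entry) flags command interpreters spawned by Office applications, a common sign of malicious macros and an instance of Command and Scripting Interpreter behavior (ATT&CK T1059):

```python
# Minimal behavioral analytic over process-creation events (illustrative).
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
INTERPRETERS = {"cmd.exe", "powershell.exe", "wscript.exe", "mshta.exe"}

def suspicious_spawns(process_events):
    """Yield process-creation events where an Office app spawns an interpreter."""
    for ev in process_events:
        if (ev["parent_image"].lower() in OFFICE_PARENTS
                and ev["image"].lower() in INTERPRETERS):
            yield ev

# Hypothetical endpoint telemetry.
events = [
    {"host": "wks-17", "parent_image": "WINWORD.EXE", "image": "powershell.exe",
     "command_line": "powershell -enc <base64>"},
    {"host": "wks-17", "parent_image": "explorer.exe", "image": "cmd.exe",
     "command_line": "cmd /c dir"},
]
hits = list(suspicious_spawns(events))
print(len(hits))  # 1 — only the Word -> PowerShell spawn is flagged
```

Mapping each such analytic to the ATT&CK technique it covers is what makes the defensive gap assessment described below measurable.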
Defensive Gap Assessment– ATT&CK can be used as a common behavior-focused adversary model to assess tools, monitoring, and mitigations of existing defenses within an organization’s enterprise. The identified gaps are useful as a way to prioritize investments for improvement of a security program. Similar security products can also be compared against a common adversary behavior model to determine coverage prior to purchasing.
SOC Maturity Assessment – ATT&CK can be used as one measurement to determine how effective a SOC is at detecting, analyzing, and responding to intrusions.
Cyber Threat Intelligence Enrichment – ATT&CK is useful for understanding and documenting adversary group profiles from a behavioral perspective agnostic of the tools the group may use. The structured format of ATT&CK can add value to threat reporting by categorizing behavior beyond standard indicators. Multiple groups within ATT&CK use the same techniques. For this reason, it is not recommended to attribute activity solely based on the ATT&CK techniques used. Attribution to a group is a complex process involving all parts of the Diamond Model, not solely on an adversary’s use of TTPs.
The basis of ATT&CK is the set of techniques and sub-techniques that represent actions that adversaries can perform to accomplish objectives. Those objectives are represented by the tactic categories the techniques and sub-techniques fall under. This relatively simple representation strikes a useful balance between sufficient technical detail and the context around why actions occur at the tactic level. Adoption of ATT&CK is widespread across multiple disciplines, including intrusion detection, threat hunting, security engineering, threat intelligence, red teaming, and risk management. It is important for MITRE to strive for transparency about how ATT&CK was created and the decision process that is used to maintain it, as more organizations use ATT&CK.
Adoption challenges include:
- assumptions that other frameworks (Diamond Model or Kill Chain) are sufficient;
- technology support.
Managed eXtended Detection & Response (MxDR)
Managed Extended Detection & Response combines technology and skills to deliver:
- advanced threat detection
- deep threat analytics
- global threat intelligence
- enhanced threat hunting
- faster incident analysis
- collaborative incident response on a 24×7 basis.
In other words, MxDR provides:
- detection of deep attacks using AI/ML rather than rules alone;
- response to threats, versus the alerting-only model of traditional MSSPs;
- collection of data from all vectors (security devices, users, server endpoints, cloud, OT/IIoT) that enables better detection (e.g. logs, alerts, flows, changes in device configuration, vulnerabilities, etc.).
- Threat Intelligence: Going beyond the generic data of threat intelligence providers, a mature MDR service converts threat intelligence data into actionable tasks, anticipating what could happen and how to stop it if it happens.
- Threat Hunting: AI models are applied on security, user and IT data to enable the detection of unknown and hidden threats.
- Security Monitoring: The application of rules to logs and security events to detect known attacks. The MDR offering has a SIEM module for detecting known threats and policy and compliance violations.
- Incident Analysis: This MDR module triages alerts to focus on the most relevant threats and then investigates them to identify the potential impact on assets and the spread of the attack. Alerts are investigated for who, what, when, and how, to determine the extent of the impact.
- Threat Containment: It provides automated containment of threats and prevents them from becoming incidents or breaches.
- Response Orchestration: It enables carrying out rapid, coordinated activities for containment, remediation, and recovery. It provides the basis for collaboration between the key teams responding to an attack, including end-user teams and MDR specialized responders.
- Deep detection of threats coming from any vector.
- Minimize Response tasks with automation.
- Increased threat containment speed, limiting threats from leading to incidents or breaches.
- Get specialized skill sets for incident/breach response.
- Centralized visibility across Hybrid IT environment.
- Better TCO using a combination of technologies and skill sets.
Cost can sometimes be a challenge to adoption, although MDR is becoming widely adopted.
Edge security analytics
With Edge technology, new points where data is not only transiting but is processed and stored are added along the chain. These new points are not as stringently protected as central platforms behind DMZs and other controls. They are both physically and logically more exposed. Protecting and monitoring the security of Edge devices is critical to maintaining trust in the overall business process. When secured and properly monitored, the edge device becomes a reliable component which in turn can leverage its strategic positioning close to the data sources and contribute to the threat detection & response strategies deployed to the environment it sits in.
Edge computing can bring priceless optimization to security solutions. It can:
- run asset inventories and security configuration assessments on the local estate,
- run IoT vulnerability scanning and patching,
- run any correlation, threat analytics routine or AI threat model on local data,
- support response orchestration, including local decision making.
Advances in federated machine learning will boost the last two use cases, allowing for collaborative and scalable new types of XDR capable of leveraging a multitude of locally run detection analytics and executing response strategies.
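The aggregation step behind federated learning, often called federated averaging (FedAvg), can be sketched simply: each edge site trains a detector locally and shares only model weights, and a coordinator merges them weighted by local sample counts, so raw data never leaves the edge. The sketch below assumes flat weight vectors for readability.

```python
# Sketch of federated averaging: merge locally trained weights without
# centralizing the underlying edge data.
def federated_average(site_updates):
    """site_updates: list of (weights, n_samples); returns the weighted mean."""
    total = sum(n for _, n in site_updates)
    dim = len(site_updates[0][0])
    merged = [0.0] * dim
    for weights, n in site_updates:
        for i, w in enumerate(weights):
            merged[i] += w * n / total
    return merged

# Two hypothetical edge sites with different data volumes.
updates = [([0.2, 0.8], 100), ([0.6, 0.4], 300)]
print(federated_average(updates))  # approximately [0.5, 0.5]
```

Real deployments iterate this round many times and add privacy protections (e.g. secure aggregation), but the locality property shown here is what makes edge-scale XDR feasible.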
Protecting and monitoring the security of edge devices is critical to maintaining trust in the overall business process and unlocking the value of edge technology.
All verticals are in a position to consider edge adoption for their corporate IT (cloud adoption) and/or their business processes.
The complexity of monitoring the security of the edge in the wider context of the systems and networks it interfaces with may lead organizations to limit edge capabilities and keep its security at a basic level, rather than augment them and have to mitigate all the underlying risks. Detecting patterns of spoofing, data poisoning and other attacks on an edge device is difficult if the events coming from it are not put in perspective with how other components perceive its activity; in many cases the other systems it communicates with rely on external providers or, when internal, sit within different teams and perimeters. Hence integrated edge solutions with end-to-end security supervision will be key differentiators for digital service providers.
Safe intelligence sharing
Sharing threat intelligence, but also mitigative controls and safe configurations, requires a lot of effort. It starts with identifying the right internal sources and solving data ownership and other compliance issues, and ends with finding the right vehicle for sharing. Safe collaboration in a group of trusted parties can come either from active sharing of data (e.g. threat intelligence) or from collective usage of trusted infrastructure. When information is put in common and used/processed by all participants, there is no need to route it through platforms and processes dedicated to sharing.
Putting data in common and collaborating in trusted environments, as well as safe sharing, will both benefit from higher degrees of automation injected into the underlying tools and platforms. Starting with the data itself, automation (including with AI) of data classification, labeling and proper handling would reduce the human cost of sharing and the reticence around it. Data breach detection and response capabilities (such as event monitoring and DLP itself) will very likely need to be embedded, on top of protective measures, as requirements in the control framework and in the sharing platforms.
NIST SP 800-150 provides the following example scenarios of threat information sharing:
- Scenario 1: Nation-State Attacks against a Specific Industry Sector
- Scenario 2: Collaboration for Malware campaigns Analysis
- Scenario 3: Attack against an Industry Sector
- Scenario 4: Public event Phishing Attacks
- Scenario 5: Business Partner Compromise (or Supply chain attack)
- Scenario 6: CERT collaboration with other bodies (indicators / feedback loop)
Security information and intelligence sharing unlocks the following benefits, a summary of which is also given by NIST:
- Shared situational awareness
- Improved security posture
- Knowledge maturation
- Greater defense agility
Increasingly stringent data regulation may be a barrier to the adoption of automation and AI technologies for the purposes of sharing. Regulators are still very prudent when considering AI/unassisted processing of regulated data.
Blockchain security monitoring
In the most generic cases, blockchain is characterized by a peer-to-peer network allowing nodes to communicate with each other in a decentralized way. While blockchains’ security features make them resistant to some cyberattacks, their inherent technologies make them vulnerable to numerous issues that centralized databases do not usually face. Organizations implementing blockchain combine the exposure effects of the technology’s inherent vulnerabilities with a typical start-up mentality that does not put security first. The first layer to monitor is the network, as blockchain always relies on it. Even for a new technology, most IT security best practices can be applied, such as Transport Layer Security (TLS), firewalls and anti-DDoS, especially to prevent a node from being taken down or isolated from the blockchain. On the protocol side, formal verification and security proof incentives are needed to protect how the nodes agree upon the ledger state.
Blockchain’s inherent technologies make it subject to security issues that centralized databases do not usually face. For instance:
- Can the nodes in a permissioned or permissionless blockchain be taken down through a substitution or a denial of service?
- Is the transaction integrity compromised before it reaches a validating peer?
- Is the network subject to Man-in-the-middle attack including with intent to de-couple off-chain data from register value?
- Does the information sharing remain within the intended boundaries?
- Can credentials theft allow unauthorized parties to participate in a consortium?
The monitoring of these threats shall be implemented at each of the four layers that constitute the blockchain model:
- Network: the network over which the peers communicate; it manages how messages are sent and received.
- Protocol: the consensus, i.e. how the peers agree upon the order and content of the messages posted by blockchain users.
- Ledger: from a local point of view, an organization must rely on a valid, ordered and unforgeable register so the application will use the trusted data inside this ledger.
- Application: the application is built upon the data inside the ledger.
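Monitoring at the ledger layer can be illustrated with the core integrity check: recomputing the hash chain block by block, so that any tampering with an earlier block breaks the links that follow it. This is a generic sketch, not the verification logic of any particular blockchain platform.

```python
# Sketch: verify that a local copy of the register is ordered and unforgeable
# by recomputing the hash chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def verify_chain(chain) -> bool:
    """Return True if every block links to the previous block's hash."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = {"index": 0, "prev_hash": "0" * 64, "data": "genesis"}
b1 = {"index": 1, "prev_hash": block_hash(genesis), "data": "tx: A->B 10"}
chain = [genesis, b1]
print(verify_chain(chain))         # True
genesis["data"] = "tx: A->B 1000"  # tamper with an earlier block
print(verify_chain(chain))         # False — the recomputed hash no longer matches
```

A ledger monitor running this check locally detects forgery of its own copy; detecting consensus-level attacks requires the protocol- and network-layer monitoring described above.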
Blockchain security monitoring allows organizations to unlock the benefits of blockchain while mitigating its risks.
In the public sector, governments across the world are experimenting with blockchain for data sharing (USA), payments (Japan), anti-counterfeiting (EU) and even voting (Denmark).
In retail and e-Commerce, Blockchain is used to enhance customer experience and traceability by tackling counterfeit goods.
In the music industry, to resolve improper rights management issues or piracy of digital records.
Unified fraud & security
Financial services have been the target of cybercrime syndicates due to the high prospects of financial gain. Financial gain from a security attack is usually the result of fraudulent activity carried out with credentials or data obtained from a successful breach. The traditional approach has been to maintain two separate silos for monitoring security and fraud. The nature of financially motivated attacks on banks and other financial services organizations calls for the unification of security and fraud monitoring, for much faster detection and containment of financial impact.
- Attack on SWIFT systems: The initial breach occurs through phishing or a malware-based attack. Lateral movement then targets the SWIFT systems, and fraudulent instructions for large wire transfers are crafted. In this case, the phishing and malware breach is detected by cybersecurity SOC teams, while the financial fraud is detected by the bank's fraud management team. The impact of such breaches can be limited by quickly correlating the initial system breach with the fraudulent transactions being attempted.
- Internet banking fraud: Internet banking systems can be breached through web application vulnerabilities. This can lead to an attacker crafting fake transactions to transfer funds from customer accounts to mule accounts, and thereafter withdrawing from the mule accounts. Here again, the web application attack and the corresponding breach are detected by SOC teams, while the financial fraud is detected by fraud management teams. Unified monitoring that can correlate the web application attack with the fraudulent transfer to mule accounts enables higher-fidelity detection along with much faster action to limit damage.
- Credit card fraud: A credit card data breach occurs through phishing, malware, lateral movement to credit card systems, followed by exfiltration of customer card data. With the use of cloud systems and cloud storage for financial transactions, the risks are even higher. Stolen credit card data is used for fraudulent transfers. This can be better detected with unified monitoring of attack signals from SOCs and the fraudulent transfers observed. In the case of sophisticated attacks, SOCs might not see the breach happening; in a unified monitoring scenario, fraud detected on a card can trigger an investigation into access to that card data and the forensics around it. A similar unified monitoring requirement can be extrapolated to attacks on ATM, core banking, mobile banking and other banking channels.
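The core of unified monitoring in these scenarios is a join that siloed teams cannot perform: pairing SOC alerts with fraud events on the same account within a short time window. A minimal sketch, with entirely hypothetical sample data:

```python
# Sketch: correlate SOC alerts with fraud events on the same account within a
# time window. Sample data is hypothetical.
from datetime import datetime, timedelta

soc_alerts = [
    {"time": datetime(2024, 3, 1, 9, 10), "account": "cust-4411",
     "alert": "web app attack: SQL injection on internet banking"},
]
fraud_events = [
    {"time": datetime(2024, 3, 1, 9, 42), "account": "cust-4411",
     "event": "transfer to first-seen mule account"},
    {"time": datetime(2024, 3, 2, 14, 0), "account": "cust-9900",
     "event": "card-not-present transaction"},
]

def correlate(alerts, frauds, window=timedelta(hours=2)):
    """Pair each fraud event with a preceding SOC alert on the same account."""
    for f in frauds:
        for a in alerts:
            if (f["account"] == a["account"]
                    and timedelta(0) <= f["time"] - a["time"] <= window):
                yield {"account": f["account"], "breach": a["alert"], "fraud": f["event"]}

incidents = list(correlate(soc_alerts, fraud_events))
print(len(incidents))  # 1 — the SQLi breach is tied to the mule transfer
```

A production platform would join on richer keys (card number, session, device) and stream the correlation in real time, but the time-windowed account join is the unifying idea.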
Financial fraud can be detected, and in many cases prevented, with unified monitoring. With innovative, high-impact attacks expected in high volume in the coming years, capable of causing large-scale financial losses, unified monitoring of security threats and fraud could help prevent such losses.
- Organization restructuring along with process changes can be an impediment especially in large financial institutions.
- Availability of common platforms that can correlate security incidents with fraudulent transactions.
- Regulatory approvals which might be required.
API Security monitoring
Use of APIs has gained high traction due to the need for agile interoperability between software layers in cloud, SDN (software-defined networking), containers and event-driven software. This has also created new API protocols and interfaces like gRPC and GraphQL, in addition to REST. This creates a new set of threats where APIs become targets for infiltration. API authentication and authorization methods are emerging at a rapid pace. As with any other technology, the initial implementations are likely to be vulnerable, with weak authentication and protection mechanisms. Hence, monitoring APIs is a critical requirement.
API Discovery: It is important for an enterprise to understand all the APIs that are being used. API discovery is done using a combination of scanning for API end points, integration with API gateways, container clusters, app servers and gateways. This enables setting up a baseline of APIs used in the environment and leads to classification of public, private, shared APIs. Any new APIs observed can be examined further to understand if they are malicious ones spawned by an attacker.
Identify Suspicious API activity – AI can be used to create a baseline of all the API communication happening between different API end points. As an example, API baseline can be created using features of API type, end points, authentication methods, time of day, day of week, user association, network association. Any deviation from the baseline will be flagged by AI and can be further compared with typical attacker behavior to detect any malicious activity using API infiltration. Higher priority can be assigned for internet facing APIs or APIs that handle sensitive data.
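The baseline-then-flag approach can be sketched very simply. Here the "baseline" is just the set of (endpoint, method, time-of-day bucket) combinations seen during a training window; real deployments would learn far richer features (identity, geography, payload shape), so treat this as a toy illustration with hypothetical endpoints:

```python
# Sketch: baseline API activity, then flag calls outside the baseline.
from collections import Counter

def build_baseline(calls):
    """Count (endpoint, method, 6-hour bucket) combinations seen in training."""
    return Counter((c["endpoint"], c["method"], c["hour"] // 6) for c in calls)

def flag_anomalies(baseline, calls):
    """Return calls whose combination was never seen during baselining."""
    return [c for c in calls
            if (c["endpoint"], c["method"], c["hour"] // 6) not in baseline]

history = [{"endpoint": "/orders", "method": "GET", "hour": 10}] * 500
new = [
    {"endpoint": "/orders", "method": "GET", "hour": 11},      # same bucket: normal
    {"endpoint": "/admin/keys", "method": "POST", "hour": 3},  # never seen: flagged
]
flagged = flag_anomalies(build_baseline(history), new)
print(len(flagged))  # 1
```

Flagged calls would then be compared with known attacker behavior and prioritized by exposure (internet-facing APIs, sensitive data), as described above.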
API monitoring will enable detection of new threats that will evolve for exploiting API methods, protocols, configuration weaknesses, tokens and other API parameters. Higher detection confidence will lead to higher adoption of API which is critical for innovation in multiple industry sectors.
All verticals will start using APIs since it enables much faster integration between applications leading to easier collaboration with suppliers, third parties, enterprise applications, SDN, cloud. Hence API security monitoring will apply to all verticals.
As a starting point, highest impact and use is expected in technology and financial services.
AI driven SOAR
SOAR is a solution that automates the analysis, triaging and investigation of security alerts. It also enables response actions to contain an attack by integrating with UTMs, EDR, proxies, web security gateways and email systems, saving time and removing the need for SOC analysts and responders to hold deep expertise across many platforms.
Current SOAR platforms rely on the creation of workflows through a visual design tool to structure playbooks against different types of attacks and the corresponding analysis/response scenarios. However, they become unwieldy if workflows are created for every possible scenario; the approach does not scale.
A new “avatar” of SOAR is required to address cyberthreats more effectively. SOAR will evolve much as SIEM did. Instead of using workflows for every scenario, artificial intelligence (AI) will be used to learn analysis and response patterns. AI-based SOAR will profile vulnerability patterns, network architecture and asset criticality to auto-generate containment workflows, based on specialized AI techniques that can learn from previous successful attack scenarios and responses. This knowledge will then be used in the context of the organization's network, data and systems to make intelligent analysis and response possible.
- In the context of a typical breach (phishing email, account takeover followed by malware implantation, lateral movement, data exfiltration from critical systems), auto-generate the workflows required to manage this type of attack as soon as the account-takeover alert is received. Since AI-based SOAR will have previously profiled the environment, the system will automatically search for other compromised accounts, with emphasis on critical users such as CXOs and data owners.
- The system can also take steps on resetting passwords, disabling accounts as required.
- AI based SOAR can take intelligent decisions to contain lateral movement based on network architecture understanding.
- Understand the attack tree towards critical servers containing business-critical data, and initiate containment steps to disrupt the spread through the attack tree leading to the critical systems.
All of these use cases will be based on auto generated analysis and response mechanisms without the need for manual creation of any workflows.
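The difference from workflow-per-scenario SOAR can be sketched as follows. Instead of a hand-built playbook for each case, response steps are generated from a learned mapping of attack stage to previously effective responses, instantiated against a profile of the environment. All names, mappings and actions here are hypothetical stand-ins for what a trained model would produce:

```python
# Highly simplified sketch of auto-generated response workflows.
LEARNED_RESPONSES = {  # stands in for patterns mined from past incidents
    "account_takeover": ["reset_password", "revoke_sessions", "hunt_similar_logins"],
    "lateral_movement": ["isolate_host", "block_smb_east_west"],
}
ENV_PROFILE = {"critical_users": {"cfo", "data-owner-1"}}  # profiled beforehand

def generate_workflow(alert: dict) -> list:
    """Produce response steps for an alert without a pre-authored playbook."""
    steps = [f"{action}:{alert['subject']}"
             for action in LEARNED_RESPONSES.get(alert["stage"], ["escalate_to_analyst"])]
    if alert["subject"] in ENV_PROFILE["critical_users"]:
        steps.insert(0, "notify_incident_manager")  # profiled criticality changes the plan
    return steps

print(generate_workflow({"stage": "account_takeover", "subject": "cfo"}))
```

The point of the sketch is structural: the mapping and the environment profile are learned and refreshed continuously, rather than authored in a visual workflow designer.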
AI-driven SOAR platforms address the scalability issue of current SOAR platforms to be more effective in a context where attacks are becoming more complex, perimeters disappear, and digital diversity explodes.
The evolution of AI driven SOAR is dependent on speed at which cognitive AI evolves.
Availability of data sets for learning responses based on attack scenarios will also be a challenge.
The initial transition towards AI-based SOAR could be via semantic Q&A systems that analysts can ask targeted questions, leading up to cognitive systems that can make decisions on their own.
Security Data Annotation
While it is possible to pre-train an AI to identify malware patterns thanks to the large amount of malware data, it is a challenge to pre-train it to recognize patterns of user misbehavior. A similar challenge lies in training an AI-powered SIEM to distinguish events likely to contribute to a breach sequence from harmless normal events. This is largely a data challenge, as not enough annotated data exists to train such an AI.
Yet, is data annotation for cybersecurity purposes impossible? Although complex, the convergence of data collection, data mining, event management and incident response into XDR platforms has created platforms capable of capturing an end-to-end picture, and therefore a single point of annotation.
Annotated data can be used to:
- train a prediction system; the risk of a sequence can increase as its events diverge from the set of predicted possibilities,
- recognize specific patterns, such as those of forming botnets,
- work on a specific source (a rich and self-contained source such as DNS data) to solve the scope challenge; the addition of root cause analysis can also solve or mitigate the explainability challenge.
Such data annotation would take security analytics, and the wider detection & response systems, to the next level.
All industries would benefit, although the tech sector is the likely pioneer of the technology. It has already started, together with academia, although these initiatives are not yet operationalized.
It is not sufficient to perform this annotation once: the data sets must be kept up to date, and the models developed from AI training must be governed accordingly.
This requires a huge amount of data to be kept readily available (so-called online storage) and processed incrementally, requiring extensive resources. While endpoint telemetry alone and EDR analytics already require petabytes of data, data annotation would require potentially more data sources to be retained, and for longer periods of time (in order to cover the full cycle from initial compromise to incident closure).
Explainability could be a challenge if such annotation is intended to train event decision systems.
Cyber Deception
Cyber deception tools are centrally managed systems that allow organizations to build, distribute and handle all the components required for a deception environment (decoys, lures, honeytokens or breadcrumbs). This kind of security countermeasure is appropriate both for small companies and for large ones facing constraints that prevent the deployment of more intrusive security tools, or suffering more frequent attacks. The key parameters are the number and location of artifacts, the engagement level (how deeply to interact with attackers) and believability (how similar the decoy is to the attacked site). There are some limitations as well: blind spots, the number of artifacts, malicious insiders, etc.
- Basic Threat Detection: simple artifacts with low interaction, believability and cost.
- Advanced Threat Detection and Response: high believability, low interaction, higher integration and medium cost.
- Production of local Indicators of Compromise and Machine-Readable Threat Intelligence: advanced analytics, high interaction, believability and cost.
- Integrated Proactive Threat Hunting: robust SOC in place, expanding Threat Hunting capabilities, on top of previous Use Case.
- Active Attacker Engagement: building a battlefield, ad hoc artifacts, riskiest approach and very high cost.
- Low cost (in many Use Cases).
- High impact as a complement due to high fidelity alerts.
- Can be an option as a threat detection system for small and midsize companies.
- Ease of implementation.
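As an illustration of the honeytoken artifact, the sketch below plants a credential that no legitimate user or process should ever use, so any observed use is a high-fidelity alert. The class, token format and locations are hypothetical; real deception platforms integrate such checks into log pipelines and authentication hooks.

```python
import secrets

class HoneytokenStore:
    """Minimal honeytoken sketch: planted credentials whose every
    sighting is, by construction, a high-fidelity alert."""

    def __init__(self):
        self.tokens = {}   # token value -> where it was planted
        self.alerts = []

    def plant(self, location):
        # Generate a cloud-access-key lookalike and record its location.
        token = "AKIA" + secrets.token_hex(8).upper()
        self.tokens[token] = location
        return token

    def check(self, observed_credential, source_ip):
        # Called on every credential use seen in logs or auth events.
        if observed_credential in self.tokens:
            self.alerts.append({
                "token": observed_credential,
                "planted_at": self.tokens[observed_credential],
                "seen_from": source_ip,
            })
            return True
        return False

store = HoneytokenStore()
bait = store.plant("finance-share/passwords.txt")
store.check(bait, "203.0.113.7")
print(store.alerts[0]["planted_at"])  # → finance-share/passwords.txt
```

Because legitimate traffic never touches the token, the alert carries essentially no false-positive cost, which is what makes deception attractive as a low-cost complement.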
Deception technologies are mostly seen as a complementary add-on to core monitoring functions, leading to budget-allocation issues. Core technologies are also adding deception as an integrated feature, which will further slow the adoption of deception technology as a stand-alone entity. On the technical front, getting a serious threat actor to believe the decoy is the real system is a challenge. Simulating realistic interactions is hard, and this often leads to easy identification of decoy systems. Advances in AI, combined with the concept of digital twins, could bring decoys close to the real experience. It remains to be seen whether cyber deception products will evolve in this direction.
5G monitoring & response
5G is fast becoming the connectivity revolution set to change many industries.
Its large-scale adoption and use cases across many industries make it a target for nation states and cybercrime syndicates.
Although 5G is intrinsically secure thanks to a strong security and authentication framework, 5G security has high risk exposure from insecure 2/3/4G legacy, traditional IP-based threats, virtualization-related threats, threats from 5G Software Defined Networks, threats from the use of DevOps environments with cloud & containers, but also threats to assets in the network architecture (RAN, Transport Network, Core Network…).
In the context of such risk exposure and high-value industry use cases, monitoring the 5G environment for threats becomes critical.
5G security monitoring is multi-dimensional given that 5G is a combination of many different technologies and practices including IP stack, connecting RAN with wireless protocols, use of Software Defined Networks (SDN), Network Function Virtualization (NFV), containers, cloud technologies. Below are some interesting use cases:
- New threats arise from the adoption of an agile development model and operation using DevOps in 5G environments to meet customer requirements rapidly. Early detection of access by cybercrime groups or nation states to a 5G DevOps environment goes a long way towards limiting damage. Different DevOps activities, including build, code push and release, will be profiled for geographical location, IP/URL and user access. Any anomalous access or activity that deviates from the normal profile will be immediately investigated and responded to. This can be achieved by integrating directly with DevOps platforms.
- 5G will be driven by the influence of software on network functions, known as Software Defined Network (SDN) and Network Function Virtualization (NFV), giving rise to new security threats unique to SDN. Given the importance of SDN, detection of these threats in the early stages to prevent far-reaching damage is essential, through AI algorithms monitoring data flow in a SDN (monitoring flow creation, validating the network identifiers, identifying changes to flow tables, monitoring access patterns and identifying communication patterns between network elements).
- Much of the openness and programmability offered by the new 5G network architecture relies on the expanded use of APIs. A poorly designed or configured API with inaccurate access control rules may expose core network functions and sensitive parameters. A threat actor can target different types of APIs exposed in different layers of the network. To comprehensively detect threats related to exploitation of APIs, it is necessary to inspect all activity associated with these APIs. AI-driven models will complement the rule-driven and policy-driven security measures to provide a 360 degree coverage.
- A typical 5G network will consist of various components including Radio Access Network (RAN), Software Defined Network (SDN), Edge cloud, Management and Network Orchestrator (MANO), WLAN, end users and Internet of Things (IoT). The sprawl of new technologies in 5G makes it difficult to draw clear lines on authorized devices within a 5G environment. This makes it easier for an attacker to introduce rogue network elements and persist in the network. It is difficult to identify rogue network elements using traditional discovery methods, rules or manual analysis. An AI-based approach that autonomously fingerprints the topology of a 5G network and monitors it continuously can detect changes in that topology. In addition to fingerprinting the topology, the model can learn from various features of network elements, such as frequency spectrum, authentication methods, cryptographic controls, network access patterns and geolocation, and use them to determine whether a new element introduced in the network is genuine.
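The rogue-element detection idea in the last bullet can be sketched as a comparison against learned per-type profiles. The element types, features and threshold below are hypothetical placeholders for what an AI model would actually learn from the network.

```python
def profile_distance(baseline, candidate):
    """Count features of a newly seen network element that deviate
    from the profile learned for its claimed element type."""
    return sum(1 for key, value in baseline.items()
               if candidate.get(key) != value)

def is_rogue(baselines, candidate, threshold=2):
    """Flag an element whose observed features deviate too far from
    the learned baseline for its type."""
    baseline = baselines.get(candidate["type"])
    if baseline is None:
        return True  # unknown element type: investigate
    return profile_distance(baseline, candidate) >= threshold

# Hypothetical learned profile for one element type (a gNB base station).
baselines = {
    "gNB": {"auth": "5G-AKA", "cipher": "NEA2", "region": "eu-west"},
}
legit = {"type": "gNB", "auth": "5G-AKA", "cipher": "NEA2", "region": "eu-west"}
rogue = {"type": "gNB", "auth": "none", "cipher": "NEA0", "region": "eu-west"}
print(is_rogue(baselines, legit), is_rogue(baselines, rogue))
# → False True
```

A real model would replace the hand-set threshold with an anomaly score learned over continuous features such as frequency spectrum and access patterns.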
5G will become the backbone of nation-state communication infrastructure, making 5G networks a target for nation states and cybercrime syndicates. In addition, 5G will be adopted by all industry verticals for a number of use cases. Monitoring the 5G environment enables the protection of critical infrastructure and prevents the infiltration of enterprises using 5G for different use cases. Across the board, 5G monitoring will enable deeper threat detection and faster threat containment.
- 5G monitoring will require native integration into a number of core 5G technologies.
- The data volumes for monitoring can be high.
- AI model complexity can be high. Although these factors look challenging, they can be addressed through advances in high-performance computing, big data and AI.
AI driven threat modeling
State-of-the-art threat hunting leverages high-quality threat intelligence combined with Indicators of Compromise (IOCs). Mature teams integrate into their function the gradual automation of their tasks (hunting and enhancing detection capabilities). EDR and SOAR have contributed much to the enhancement of threat hunting by introducing automation and orchestration, including across the large estate covered by EDR agents. SOAR propagates new IOCs to more systems, allowing hunting for them in existing datasets and historical events. TI-centric and TI-platform-supported hunting, much more structured and targeted to specific use cases, is also improving quality and aligning it with the real threat scenarios the enterprise faces. AI is however under-utilized in this area. There is ample room for using AI to boost the existing assets and tools of threat hunting, and to create new ones.
“What if?” use cases today depend on the expertise, experience and imagination of the threat hunter. Disruption may come from the ability of an autonomous and creative AI to consider a broad range of scenarios. An intermediate step may well be the augmentation of a qualified set of TI with additional, highly plausible records produced by generative models. Reasoning and simulation may provide ways of pursuing and filtering among these scenarios and hypotheses. Graph-embedded AI may allow applying these to large data sets.
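The propagation of new IOCs and the hunt for them over historical events mentioned above can be sketched as a simple sweep of stored telemetry against a freshly received indicator set. The event fields and indicator values below are hypothetical.

```python
def hunt_iocs(events, iocs):
    """Sweep historical events for newly received IOCs, the way a SOAR
    playbook would after threat intelligence propagates new indicators.

    events: list of dicts of field -> observed value
    iocs:   dict of indicator type -> list of indicator values
    """
    indicators = {value for values in iocs.values() for value in values}
    hits = []
    for event in events:
        matched = indicators & set(event.values())
        if matched:
            hits.append({"event": event, "matched": sorted(matched)})
    return hits

# Hypothetical historical telemetry and a freshly propagated IOC set.
events = [
    {"host": "ws-12", "dst_ip": "198.51.100.9", "hash": "aa11"},
    {"host": "ws-40", "dst_ip": "192.0.2.10", "hash": "bb22"},
]
iocs = {"ip": ["198.51.100.9"], "sha256": ["cc33"]}
for hit in hunt_iocs(events, iocs):
    print(hit["event"]["host"], hit["matched"])
# → ws-12 ['198.51.100.9']
```

At scale, the same sweep runs as indexed queries over the SIEM or data lake rather than an in-memory loop, but the hunting logic is identical.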
- Significantly reduce, if not eliminate, persistent threats by using their residency time within the environment to hunt them and take them down.
- Long cycle of operationalization, coupled with the risk that the same technology is simultaneously used in adverse hands to invent adaptive new types of vectors.
Swarm security Intelligence
Swarm security intelligence is the outcome of processing signaling data from heterogeneous digital agents towards achieving a significantly unified security posture and behavior without a central coordinating entity.
- Security resilient systems (without a central command that is prone to failure)
- “Crowd” driven decision adapted to a particular environment
- Autonomous systems
- Collaborative systems
Highly complex behaviors can emerge from the interactions and data exchanges of digital agents following a simple set of rules, as entirely new (and non-human-driven) solution “paths” to security problems are discovered.
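A minimal sketch of such emergence, assuming a hypothetical ring of agents that exchange threat scores only with their immediate neighbors: repeated local averaging drives all agents towards a shared assessment without any central coordinator.

```python
def gossip_round(scores, neighbors):
    """One round of decentralized averaging: each agent updates its
    threat score using only its own value and its neighbors' values."""
    updated = {}
    for agent, score in scores.items():
        local = [score] + [scores[n] for n in neighbors[agent]]
        updated[agent] = sum(local) / len(local)
    return updated

# Hypothetical ring of four agents; one (c) has observed a strong signal.
neighbors = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
scores = {"a": 0.0, "b": 0.0, "c": 1.0, "d": 0.0}
for _ in range(20):
    scores = gossip_round(scores, neighbors)
print({agent: round(v, 2) for agent, v in scores.items()})
# all agents converge near 0.25, the network-wide average
```

The simple rule (average with your neighbors) produces a unified posture across the swarm; no agent ever sees the full network, which is what makes the system resilient to the loss of any single node.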
- Infant technology
- Usability not entirely clear at this stage
- Inability to cover the regular IT security landscape of the moment
Cognitive detection & response
Cyber attacks are increasingly AI-driven; five years from now, AI will be the hacker. The attack surface will keep widening with new technologies around Swarm, Edge, IIoT, IoT and IoMT. The application of these technologies in industries, smart cities, driverless cars and drones will increase the attack surface readily available for threat actors to exploit. The use of AI for attacks will increase the scale of not only automated but intelligent attacks. This threat scenario will call for a Cognitive AI (CAI) approach to detecting and responding to AI-driven attacks. CAI will mimic human thinking in detection and response, increasing the scale of operations without requiring a proportionate number of human experts to manage high-scale, sophisticated AI-driven attacks. CAI can also mimic human thinking in detecting vulnerabilities, including zero-day ones.
- Cognitive AI algorithms can play the role of SOC personnel, covering threat hunting, monitoring, investigation and response. CAI bots can mimic human thinking and start assisting cyber experts in operations. The AI-driven SOAR section gives examples of some response use cases.
- Take over vulnerability management operations by mimicking human actions for vulnerability scanning, application security testing and code reviews for DevOps. Cognitive AI can also mimic R&D thinking to detect zero-day vulnerabilities.
- Play an important role in recovery operations after major breaches. Intelligent recovery based on organizational business priorities, technologies and processes can be AI-driven, assisted by human intervention. This will increase cyber resilience even for organizations with low human expertise.
- Managing high scale AI driven attacks on a wide attack surface area.
- Higher accuracy of detection & response at a much lower cost.
- Lower dependence on high-end cyber experts for managing attacks; cyber experts can focus their time on strategic activities and fine-tuning AI algorithms.
- CAI bot-based capabilities will equalize detection and response capabilities between large organizations and SMEs, as against dependence on human skill sets.
- Reduced downtime for organizations, even in breach scenarios, through the use of CAI for cyber resilience operations.
The evolution of cognitive detection & response depends on the speed at which cognitive AI evolves.
The availability of data sets for learning responses from attack scenarios will also be a challenge.
The evolution will be phased over the years, starting with specialized AI using supervised algorithms and then evolving into cognitive AI.