Daniel Suissa
Daniel is the Data Team Lead at Sentra. He has nearly a decade of experience in engineering and in the cybersecurity sector. He earned his BSc in Computer Science at NYU.
Daniel’s Data Security Posts
Top 5 GCP Security Tools for Cloud Security Teams
Like its primary competitors Amazon Web Services (AWS) and Microsoft Azure, Google Cloud Platform (GCP) is one of the largest public cloud vendors in the world – counting companies like Nintendo, eBay, UPS, The Home Depot, Etsy, PayPal, 20th Century Fox, and Twitter among its enterprise customers.
In addition to its core cloud infrastructure – which spans some 24 data center locations worldwide - GCP offers a suite of cloud computing services covering everything from data management to cost management, from video over the web to AI and machine learning tools. And, of course, GCP offers a full complement of security tools – since, like other cloud vendors, the company operates under a shared security responsibility model, wherein GCP secures the infrastructure, while users need to secure their own cloud resources, workloads and data.
To assist customers in doing so, GCP offers numerous security tools that natively integrate with GCP services. If you are a GCP customer, these are a great starting point for your cloud security journey.
In this post, we’ll explore five important GCP security tools that security teams should be familiar with.
Security Command Center
GCP’s Security Command Center is a full-featured risk and security management platform – offering GCP customers centralized visibility and control, along with the ability to detect threats targeting GCP assets, maintain compliance, and discover misconfigurations or vulnerabilities. It delivers a single-pane view of the overall security status of workloads hosted in GCP and offers auto-discovery to enable easy onboarding of cloud resources - keeping operational overhead to a minimum. To ensure cyber hygiene, Security Command Center also identifies common attacks like cross-site scripting, vulnerabilities like legacy attack-prone binaries, and more.
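For teams that want to pull these findings into their own workflows, Security Command Center also exposes an API with official client libraries. Below is a minimal sketch using Google's Python client; the organization ID is a placeholder, and the filter shown is just one example:

```python
# Minimal sketch: list active Security Command Center findings across
# all sources (pip install google-cloud-securitycenter).
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()

# "sources/-" is a wildcard covering every finding source; replace
# ORG_ID with your numeric organization ID.
parent = "organizations/ORG_ID/sources/-"

for result in client.list_findings(
    request={"parent": parent, "filter": 'state="ACTIVE"'}
):
    finding = result.finding
    print(finding.category, finding.severity, finding.resource_name)
```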
Chronicle Detect
GCP Chronicle Detect is a threat detection solution that helps enterprises identify threats at scale. Chronicle Detect’s next-generation rules engine operates ‘at the speed of search’ using the YARA detection language, which was specially designed to describe threat behaviors. Chronicle Detect can identify threat patterns - ingesting logs from multiple GCP resources, then applying a common data model to a petabyte-scale set of unified data drawn from users, machines, and other sources. The utility also uses threat intelligence from VirusTotal to automate risk investigation. The end result is a complete platform to help GCP users better identify risk, prioritize threats faster, and fill in the gaps in their cloud security.
Event Threat Detection
GCP Event Threat Detection is a premium service that continuously monitors an organization’s cloud-based assets, identifying threats in near-real time. Event Threat Detection works by monitoring the cloud logging stream - API call logs and actions like creating, updating, or reading cloud assets, updating metadata, and more. Drawing log data from a wide array of sources that include syslog, SSH logs, cloud administrative activity, VPC flow, data access, firewall rules, cloud NAT, and cloud DNS, the Event Threat Detection utility protects cloud assets from data exfiltration, malware, cryptomining, brute-force SSH, outgoing DDoS, and other existing and emerging threats.
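Event Threat Detection consumes these logs automatically, but security teams can inspect the same stream themselves. Here is a hedged sketch using Google's Cloud Logging Python client to pull the last hour of Admin Activity audit entries; the project ID and the one-hour window are assumptions for illustration:

```python
# Minimal sketch: read the Admin Activity audit log stream that Event
# Threat Detection also consumes (pip install google-cloud-logging).
from datetime import datetime, timedelta, timezone
from google.cloud import logging

client = logging.Client(project="my-gcp-project")  # placeholder project ID

cutoff = (datetime.now(timezone.utc) - timedelta(hours=1)).isoformat()
log_filter = (
    'logName:"cloudaudit.googleapis.com%2Factivity" '
    f'AND timestamp>="{cutoff}"'
)

# Each entry records who performed which action on which resource.
for entry in client.list_entries(filter_=log_filter):
    print(entry.timestamp, entry.log_name, entry.payload)
```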
Cloud Armor
The Cloud Armor utility protects GCP-hosted websites and apps against denial of service and other cloud-based attacks at Layers 3, 4, and 7. This means it guards cloud assets against the type of organized volumetric DDoS attacks that can bring down workloads. Cloud Armor also offers a web application firewall (WAF) to protect applications deployed behind cloud load balancers – and protects these against pervasive attacks like SQL injection, remote code execution, remote file inclusion, and others. Cloud Armor is an adaptive solution, using machine learning to detect and block Layer 7 DDoS attacks, and allows extension of Layer 7 protection to include hybrid and multi-cloud architectures.
Web Security Scanner
GCP’s Web Security Scanner was designed to identify vulnerabilities in App Engine, Google Kubernetes Engine (GKE), and Compute Engine web applications. It does this by crawling applications at their public URLs and IPs that aren't behind a firewall, following all links and exercising as many event handlers and user inputs as it can. Web Security Scanner detects known vulnerabilities like plain-text password transmission, Flash injection, and mixed content, and also identifies weak links in the management of the application lifecycle, such as exposed Git/SVN repositories. To monitor web applications for compliance control violations, Web Security Scanner also identifies a subset of the critical web application vulnerabilities listed in the OWASP Top Ten Project.
Securing the cloud ecosystem is an ongoing challenge, partly because traditional security solutions are ineffective in the cloud – if they can even be deployed at all. That’s why the built-in security controls in GCP and other cloud platforms are so important.
The solutions above, and many others baked into GCP, help GCP customers properly configure and secure their cloud environments - addressing the ever-expanding cloud threat landscape.
Overcoming Gartner’s Obstacles for DSPM Mass Adoption
Gartner recently released its much-anticipated 2024 Hype Cycle for Data Security, and the spotlight is shining bright on Data Security Posture Management (DSPM). Described as having a "transformative" potential, DSPM is lauded for its ability to address long-standing data security challenges.
DSPM solutions are gaining traction to fill visibility gaps as companies rush to the cloud. Best-of-breed solutions provide coverage across multi-cloud and on-premises environments, offering a holistic approach that can become the authoritative inventory of data for an organization - and a useful, up-to-date source of contextual detail to inform other security stack tools such as DLPs, CSPMs/CNAPPs, data catalogs, and more, enabling these to work more effectively. Learn more about this in our latest blog, Data: The Unifying Force Behind Disparate GRC Functions.
However, as with any emerging technology, Gartner also highlighted several obstacles that could hinder its widespread adoption. In this blog, we’ll dive into these obstacles, separating the legitimate concerns from those that shouldn't deter any organization from embracing DSPM—especially when using a comprehensive solution like Sentra.
Obstacle 1: Scanning the Entire Infrastructure for Data Can Take Days to Complete
This concern holds some truth, particularly for organizations managing petabytes of data. Full infrastructure scans can indeed take time. However, this doesn’t mean you're left twiddling your thumbs waiting for results. With Sentra, insights start flowing while the scan is still in progress. Our platform is designed to alert you to data vulnerabilities as they’re detected, ensuring you're never in the dark for long. So, while the scan might take days to finish, actionable insights are available much sooner. And scans for changes occur continuously so you’re always up to date.
Obstacle 2: Limited Integration with Security Controls for Remediation
Gartner pointed out that DSPM tools often integrate with a limited set of security controls, potentially complicating remediation efforts. While it’s true that each security solution prioritizes certain integrations, this is not a challenge unique to DSPM. Sentra, for instance, offers dozens of built-in integrations with popular ticketing systems and data remediation tools. Moreover, Sentra enables automated actions like auto-masking and revoking unauthorized access via platforms like Okta, seamlessly fitting into your existing workflow processes and enhancing your cloud security posture.
Obstacle 3: DSPM as a Function within Broader Data Security Suites
Another obstacle Gartner identified is that DSPM is sometimes offered merely as a function within a broader suite of data security offerings, which may not integrate well with other vendor products. This is a valid concern. Many cloud security platforms are introducing DSPM modules, but these often lack the discovery breadth and classification granularity needed for robust and accurate data security.
Sentra takes a different approach by going beyond surface-level vulnerabilities. Our platform uses advanced automatic grouping to create "Data Assets"—groups of files with similar structures, security postures, and business functions. This allows Sentra to reduce petabytes of cloud data into manageable data assets, fully scanning all data types daily without relying on random sampling. This level of detail and continuous monitoring is something many other solutions simply cannot match.
Obstacle 4: Inconsistent Product Capabilities Across Environments
Gartner also highlighted the varying capabilities of DSPM solutions, especially when it comes to mapping user access privileges and tracking data across different environments—on-premises, cloud services, and endpoints. While it’s true that DSPM solutions can differ in their abilities, the key is to choose a platform designed for multi-cloud and hybrid environments. Sentra is built precisely for this purpose, offering robust capabilities to identify and protect data across diverse environments (IaaS, PaaS, SaaS, and On-premises), ensuring consistent security and risk management no matter where your data resides.
Conclusion
While Gartner's 2024 Hype Cycle for Data Security outlines several obstacles to DSPM adoption, many of these challenges are either surmountable or less significant than they might first appear. With the right DSPM solution, organizations can effectively overcome these obstacles and harness the full transformative power of DSPM.
Curious about how Sentra can elevate your data security?
Request a demo here.
Protecting Source Code in the Cloud
Source code lies at the heart of every technology company’s business. Aside from being the very blueprint upon which the organization relies to sell its products, source code can reveal how the business operates, its strategies, and how its infrastructure is designed. Many of the recent data breaches we’ve witnessed, including those against industry leaders like LastPass, Okta, Intel, and Samsung, were instances where attackers were able to gain access to all or part of the organization's source code.
The good news with source code is that we usually know where it originated from, and even where it’s destined to be shipped. The bad news is that code delivery is getting increasingly complex in order to meet business demands for fast iterations, causing code to pass through multiple stations on its way to its final destination. We like to think that the tools we use to ship code protect it well and clean it up where it's no longer needed, but that’s wishful thinking that puts the business at risk. To make matters worse, bad development practices can lead to developer secrets and even customer information being stolen with a source code breach, which can in turn trigger cascading problems.
At Sentra, we see protecting source code as the heart of protecting an organization’s data. Simply put, code is a qualitative type of data, which means that unlike quantitative data, the impact of a breach does not depend on its scale. Even a small breach can provide the attacker with crucial intellectual property or intel that can be used for follow-up attacks. That said, not every piece of code leaked can damage the business in the same way.
So how do we protect source code in the cloud?
Visualization
All data protection starts with knowing where the data is and how it’s protected. We always start with the home repository, usually in GitLab, GitHub, or BitBucket. Then we move to data stores that are a part of the code delivery cycle. These can be container-management services like Amazon’s Elastic Container Service or Azure Container Instances, as well as the VMs running that code. But because code is also used by developers on personal VMs and moved through data lakes, Sentra takes a wider approach and looks for source code across all of the organization’s non-tabular data stores across all IaaS and SaaS services, such as files in Azure Disk Storage volumes attached to Azure VMs.
Classification
We said it before and we’ll say it again - not all data is created equal. Some copies of source code may include intellectual property and some may not. For example, a CPP file with complex logic is not the same as an HTML file distributed by a CDN. On the other hand, that HTML might accidentally contain a developer secret, so we must look for those as well before we label it as ‘non-sensitive’. Classifying exactly what kind of data each particular source code file contains helps us filter out the noise and focus on the most sensitive data.
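To make this concrete, here is a simplified, illustrative sketch of the kind of pattern-based check that can catch obvious developer secrets before a file is labeled ‘non-sensitive’. The patterns below are assumptions for demonstration; real classifiers combine many more signals (entropy analysis, surrounding context, known key formats, ML models):

```python
import re
from pathlib import Path

# Illustrative patterns only - not an exhaustive secret detector.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def scan_file(path: Path) -> list[str]:
    """Return the names of any secret patterns found in a file."""
    text = path.read_text(errors="ignore")
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    # Even 'non-sensitive' file types like HTML deserve a pass.
    for path in Path(".").rglob("*.html"):
        hits = scan_file(path)
        if hits:
            print(f"{path}: possible secrets: {', '.join(hits)}")
```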
Detecting Data Movement
At this point we may know where source code is located and what kind of information it contains, but not where it came from or how to stop bad data flows that lead to unwanted exposure. Remember, source code is handled both manually and by automatic processes. Sometimes it’s copied in its entirety, and sometimes partially. Detecting how much is copied and through which processes will help us enforce good code handling practices in the organization. Sentra combines multiple methods to identify source code movement at the function level by understanding the organization’s user access scheme and activity, and by looking at the code itself.
Determining Risk
Security efficiency begins with prioritization. Some of the code we will find in the environment may be properly separated from the world behind a private network, or even encrypted, and some of it may be partially exposed or even publicly accessible. By determining the Data Security Posture of each piece of code we can determine what processes are conducive to the business’ goals and which put it at risk. This is where we combine all of the above steps and determine the risk based on the kind of data in the code, how it is moved, who has access to it, and how well it’s protected.
Remediation
Now that we understand what source code needs protecting against which risks, and more importantly, which processes require the code in each store, we can choose from several remediation tools in our arsenal:
- Encrypt. Often source code is not required to be loaded from rest very quickly, so it’s always a good idea to encrypt or obfuscate it.
- Limit access to all stores other than the source code repository.
- Use a retention policy anywhere the code is needed only temporarily (see the sketch after this list).
- Review old code delivery processes that are no longer needed.
- Remove any shadow data. Code inside unused VMs or old stores that weren't accessed in a while can most probably be removed altogether.
- Detect and remove any secrets in source code and move them to vaults.
- Detect intellectual property that is used in non-compliant or insecure environments.
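To illustrate the retention idea from the list above, here is a minimal sketch that applies an S3 lifecycle rule so staged copies of code expire automatically; the bucket name, prefix, and 30-day window are placeholders:

```python
# Minimal sketch: expire intermediate copies of code automatically so
# they don't linger in staging stores (pip install boto3).
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-build-artifacts",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-staged-code",
                "Filter": {"Prefix": "staging/"},  # placeholder prefix
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```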
Source code is the data that absolutely cannot be allowed to leak. By taking the steps above, Sentra's DSPM ensures that it stays where it’s supposed to be, and is always protected properly.
Book a demo and learn how Sentra’s solution can redefine your cloud data security landscape.
How Sensitive Cloud Data Gets Exposed
When organizations began migrating to the cloud, they did so with the promise that they’d be able to build and adapt their infrastructures at speeds that would give them a competitive advantage. It also meant that they’d be able to use large amounts of data to gather insights about their users and customers to better understand their needs.
While this is all true, it also means that security teams are responsible for protecting more data than ever before. As data gets replicated, shared, and moved throughout the public cloud, sensitive data exposure becomes more common. These are the most common ways that sensitive cloud data is exposed and leaked - and what’s needed to mitigate the risks.
Causes of Cloud Data Exposure
Negligence: Accidentally leaving a data asset exposed to the internet shouldn’t happen. Cloud providers know it happens anyway - the first sentence of AWS’ best practices article for S3 storage says “Ensure that your Amazon S3 buckets use the correct policies and are not publicly accessible.” Five years ago, AWS added warnings to dashboards when a bucket was publicly exposed. Of course, S3 is just one of many data stores that contain sensitive data and are prone to accidental exposure. Despite the warnings, exposed data assets continue to be a cause of data breaches. Fortunately, these vulnerabilities are easily corrected - assuming you have perfect visibility into your cloud environment.
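As a concrete illustration, a periodic check like the following boto3 sketch can flag buckets whose public access protections are missing or disabled. It's a simplified example, not a complete audit:

```python
# Minimal sketch: flag S3 buckets without public access protections
# (pip install boto3).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        # All four flags must be on for the bucket to be fully shielded.
        exposed = not all(config.values())
    except ClientError as err:
        # No configuration at all means nothing blocks public access.
        exposed = (
            err.response["Error"]["Code"]
            == "NoSuchPublicAccessBlockConfiguration"
        )
    if exposed:
        print(f"review bucket: {name}")
```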
Data Movement: Even when sensitive data is properly secured, there’s always a risk that it could be moved or copied into an unsecured environment. A common example of this is taking sensitive data from a secured production environment and moving it to a developer environment with a lower security posture. In this case, the data’s owner did everything right - it was the second user who moved the data who accidentally put it at risk. Another example would be an organization which has a PCI environment where they keep all the payment information of their customers, and they need to prevent this extremely sensitive data from going to other data stores in less secured parts of their cloud environment.
Improper Access Management: Access to sensitive data should not be granted to users who don’t need it (see the example above). Improper IAM configurations and access control management increase the risk of accidental or malicious data leakage. More access means more potential shadow data being created and abandoned. For example, a user might copy sensitive data and then leave the company, creating data that no one is aware of. Limiting access to sensitive data to users who actually need it can help prevent a needless expansion of your organization’s ‘data attack surface’.
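One small way to shrink that attack surface is to hunt for abandoned access. The following boto3 sketch, a simplified illustration, flags access keys unused for 90 days - often the remnants left behind by departed employees:

```python
# Minimal sketch: surface stale IAM access keys (pip install boto3).
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for user in iam.list_users()["Users"]:
    keys = iam.list_access_keys(UserName=user["UserName"])
    for key in keys["AccessKeyMetadata"]:
        usage = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        last_used = usage["AccessKeyLastUsed"].get("LastUsedDate")
        # Never-used keys have no LastUsedDate at all.
        if last_used is None or last_used < cutoff:
            print(f"stale key {key['AccessKeyId']} for {user['UserName']}")
```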
3rd Parties: It’s extremely easy to accidentally share sensitive data with a third party over email. Accidentally forwarding sensitive data or credentials is one of the simplest ways to leak sensitive data from your organization. In the public cloud, the equivalent of the accidental email is granting a 3rd party access to a data asset in your public cloud infrastructure, such as a CI/CD tool or a SaaS application for data analytics. It’s similar to improper access management, only now the overprivileged access is granted outside of your organization entirely, where you’re less able to mitigate the risks.
Another common way data is leaked to 3rd parties is when someone inside an organization shares something that isn't supposed to contain sensitive data, but does. A good example of this is sharing log files with a 3rd party. Log files shouldn’t contain sensitive data, but often they include data like user emails, IP addresses, API credentials, etc.
ETL Errors: When extracting data that contains PII from a production database to a data lake or an analytics data warehouse, such as Redshift or Snowflake, sometimes the wrong warehouse might be specified. This is an easy mistake to miss, as data-agnostic tools might not understand the sensitive nature of the data.
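A lightweight guardrail is to sample rows before the load and refuse the job if PII patterns show up in a destination that isn't approved for them. The sketch below is illustrative; the patterns and sample are assumptions, not a complete PII detector:

```python
# Minimal sketch: a pre-load PII check for an ETL job.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US Social Security format

def contains_pii(rows: list[dict]) -> bool:
    """Return True if any sampled field matches a PII pattern."""
    for row in rows:
        for value in row.values():
            text = str(value)
            if EMAIL.search(text) or SSN.search(text):
                return True
    return False

# Hypothetical sample drawn from the extract step.
sample = [{"user": "jane.doe@example.com", "amount": 42}]
if contains_pii(sample):
    raise SystemExit("refusing load: destination not approved for PII")
```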
Why Can’t Cloud Data Security Solutions Stop Sensitive Data Exposure?
Simply put - they’re not looking at the data. They’re looking at the network, infrastructure, and perimeter. That’s how data leaks used to be prevented in the on-prem days - you’d just make sure the perimeter was secure, and because all your sensitive data was on-prem, you could secure it by securing everything.
For cloud-first companies, data isn’t staying behind the corporate perimeter. And while cloud platforms can identify infrastructure vulnerabilities, they’re missing the context around which data is sensitive. Remediating data vulnerabilities - finding sensitive data with an improper security posture - remains a challenge.
Discovering and Classifying Cloud Data - The Data Security Posture Management (DSPM) Approach
Instead of trying to adapt on-prem strategies to cloud environments, DSPM (a new ‘on the rise’ category in Gartner’s™ latest hype cycle) takes a data-first approach. By understanding the data’s proper context, DSPM secures sensitive cloud data by:
- Discovering all cloud data, including shadow data and abandoned data stores
- Classifying the different data types using standard and custom parameters
- Automatically detecting when sensitive data’s security posture is changed - whether via data movement or duplication
- Detecting who can access and who has accessed sensitive data
- Understanding how data travels throughout the cloud environment
- Orchestrating remediation workflows between engineering and security teams
Data Security Posture Management solves many of the most common reasons sensitive cloud data gets leaked. By focusing on securing and following the data across the cloud, DSPM helps cloud security teams finally secure what we’re all supposed to be protecting - sensitive data.
To learn more about Data Security Posture Management, check out our full introduction to DSPM, or see it for yourself.
Finding Sensitive Cloud Data in all the Wrong Places
Not all data can be kept under lock and key. Website resources, for example, always need to be public, and S3 buckets are frequently used for this. On the other hand, there are things that should never be public - customer information, payroll records, and company IP. But it happens - and can take months or years to notice, if you do at all.
This is the story of how Sentra identified a large enterprise’s source code in an open S3 bucket.
As part of our work with this company, Sentra was given 7 petabytes of data in AWS environments to scan for sensitive data. Specifically, we were looking for IP - source code, documentation, and other proprietary data.
As we often do, we discovered many issues, but seven needed to be remediated immediately - the seven we defined as ‘critical’.
The most severe data vulnerability was source code in an open S3 bucket holding 7.5 TB worth of data. The code was hiding in a 600 MB .zip file inside another .zip file. We also found recordings of client meetings and a tiny 8.9 KB Excel file with all of their current and potential customer data.
Examples of sensitive data alerts displayed on Sentra's dashboard
So how did such a serious data vulnerability go unnoticed? In this specific case, one of the principal architects at the company had backed up his primary device to their cloud. This isn’t as uncommon as you might think - particularly in the early days of cloud-based companies, data is frequently ‘dumped’ into the cloud as the founders and developers are naturally more concerned about speed than security. There’s no CISO on board to build policies. Everyone is just trusted with the data that they have. The early Facebook motto of ‘move fast and break things’ is very much alive in early-stage companies. Of course, if they’re successful at building a major company, the problem is that now there’s all this data traveling around their cloud environment that no one is tracking, no one is responsible for, and, in the case above, no one even knew existed.
Another explanation for unsecured sensitive data in the public cloud is that some people simply assume that the cloud is secure. As we’ve explained previously - the cloud can be more secure than on-prem architecture - but only if it’s configured properly. A major misconception is that everything in the cloud is secured by the cloud provider. Of course, the mere fact that you can host public resources on the cloud demonstrates how incorrect that assumption is - if you’ve left your S3 buckets open, that data is at risk, regardless of how much security the cloud provider offers. It’s important to remember that the ‘shared responsibility model’ means that the cloud provider handles things like networking and physical security. But data security is on you.
This is where accurate data classification needs to play a role. Enterprises need a way of identifying which data is sensitive and critical to keep secure, and what the proper security posture should be. Data classification tools have been around for a long time, but mainly focus on easily identifiable data - credit card and social security numbers for example. Identifying company secrets that weren’t supposed to be publicly accessible wasn’t possible.
The rise of Data Security Posture Management platforms is changing that by understanding what the security posture of data is supposed to be. By having the security posture ‘follow’ the sensitive data as it travels through the cloud, security teams can ensure their data is always properly secured - no matter where the data ends up.
Want to find out what sensitive data is publicly accessible in your cloud?
Get in touch with Sentra here to see our DSPM in action.
DSPM vs Legacy Data Security Tools
Businesses must understand where and how their sensitive data is used in their ever-changing data estates because the stakes are higher than ever. IBM’s Cost of a Data Breach 2023 report found that the average global cost of a data breach in 2023 was $4.45 million. And with the rise in generative AI tools, malicious actors develop new attacks and find security vulnerabilities quicker than ever before.
Even if your organization doesn’t experience a data breach, growing data and privacy regulations could negatively impact your business’s bottom line if not heeded.
With all of these factors in play, why haven’t many businesses up-leveled their data security and risen to the new challenges? In many cases, it’s because they are leveraging outdated technologies to secure a modern cloud environment. Tools designed for on premises environments often produce too many false positives, require manual setup and constant reconfiguration, and lack complete visibility into multi-cloud environments. To answer these liabilities, many businesses are turning to data security posture management (DSPM), a relatively new approach to data security that focuses on securing data wherever it goes, regardless of the underlying infrastructure.
Can Legacy Tools Enable Today’s Data Security Best Practices?
As today’s teams look to secure their ever-evolving cloud data stores, a few specific requirements arise. Let’s see how these modern requirements stack up with legacy tools’ capabilities:
Compatibility with a Multi-Cloud Environment
Today, the average organization uses several connected databases, technologies, and storage methods to host its data and operations. Its data estate will likely consist of SaaS applications, a few cloud instances, and, in some cases, on premises data centers.
Legacy tools are incompatible with many multi-cloud environments because:
- They cannot recognize all the moving parts of a modern cloud environment and treat cloud and SaaS technologies as though they are full members of the IT ecosystem. They may flag normal cloud operations as threats, leading to lots of false positives and noisy alerts.
- They are difficult to maintain in a sprawling cloud environment, as they often require teams to manually configure a connector for each data store. When an organization is spinning up cloud resources rapidly and must connect dozens of stores daily, this process takes tons of effort and limits security, scalability and agility.
Continuous Threat Detection
In addition, today’s businesses need security measures that can keep up with emerging threats. Malicious actors are constantly finding new ways to commit data breaches. For example, generative AI can be used to scan an organization’s environment and identify any weaknesses with unprecedented speed and accuracy. Meanwhile, LLMs often create internal threats, which are more prevalent because so many employees have access to sensitive data.
Legacy tools cannot respond adequately to these growing threats because:
- They use signature-based malware detection to detect and contain threats.
- This technique for detecting risk will inevitably miss novel threats and more nuanced risks within SaaS and cloud environments.
Data-Centric Security Approach
Today’s teams also need a data-centric approach to security. Data democratization happens in most businesses (which is a good thing!). However, this democratization comes with a cost, as it allows any number of employees to access, move, and copy sensitive data.
In addition, newer applications that feature lots of AI and automation require massive amounts of data to function. As they perform tasks within businesses, these modern applications will share, copy, and transform data at a rapid speed — often at a scale unmanageable via manual processes.
As a result, sensitive data proliferates everywhere in the organization, whether within cloud storage like SharePoint, as part of data pipelines for modern applications, or even as downloaded files on an employee’s computer.
Legacy tools tend to be ineffective in finding data across the organization because:
- Legacy tools’ best defense against this proliferation is to block any actions that look risky. These hyperactive security defenses become “red tape” for employees or connected applications that just need to access the data to do their jobs.
- They also trigger false alarms frequently and tend to miss important signals, such as suspicious activities in SaaS applications.
Accurate Data Classification
Modern organizations also need the ability to classify discovered data in precise and granular ways. The likelihood of exposure for any given data will depend on several contextual factors, including location, usage, and the level of security surrounding it.
Legacy tools fall short in this area because:
- They cannot classify data with this level of granularity, which, again, leads to false positives and noisy alerts.
- There is inadequate data context to determine true sensitivity based on business use.
- Many tools also require agents or sidecars to start classifying data, which requires extensive time and work to set up and maintain.
Big-Picture Visibility of Risk
Organizations require a big-picture view of data context, movement, and risk to successfully monitor the entire data estate. This is especially important because the risk landscape in a modern data environment is extremely prone to change. In addition, many data and privacy regulations require businesses to understand how and where they leverage PII.
Legacy tools make it difficult for organizations to stay on top of these changes because:
- Legacy tools can only monitor data stored in on premises storage and SaaS applications, leaving cloud technologies like IaaS and PaaS unaccounted for.
- Legacy tools fail to meet emerging regulations. For example, a new addendum to GDPR requires companies to tell individuals how and where they leverage their personal data. It’s difficult to follow these guidelines if you can’t figure out where this sensitive data resides in the first place.
Data Security Posture Management (DSPM): A Modern Approach
As we can see, legacy data security tools lack key functionality to meet the demands of a modern hybrid environment. Instead, today’s organizations need a solution that can secure all areas of their data estate — cloud, on premises, SaaS applications, and more.
Data Security Posture Management (also known as DSPM) is a modern approach that works alongside the complexity and breadth of a modern cloud environment. It offers automated data discovery and classification, continuous monitoring of data movement and access, and a deep focus on data-centric security that goes far beyond just defending network perimeters.
Key Features of Legacy Data Security Tools vs. DSPM
But how does DSPM stack up against some specific legacy tools? Let’s dive into some one-to-one comparisons.
- Legacy Data Intelligence: While these tried-and-true tools have a large market presence, they take a very rigid and labor-intensive approach to securing data.
- Cloud-Only DSPM: While cloud-only DSPM solutions can help organizations secure data amid rapid cloud data proliferation, they don’t account for any remaining on premises data centers that a company continues to operate.
- Cloud Access Security Broker (CASB): Although many organizations have traditionally relied on CASB to address cloud data security, these solutions often lack comprehensive visibility.
- Cloud Security Posture Management (CSPM) / Cloud-Native Application Protection Platform (CNAPP): While these solutions provide strong cloud infrastructure protection, such as flagging misconfigurations and integrating with DevSecOps processes, they lack data context and only offer static controls that can’t adapt to data proliferation.
How does DSPM integrate with existing security tools?
DSPM integrates seamlessly with other security tools, such as team collaboration tools (Microsoft Teams, Slack, etc.), observability tools (Datadog), security and incident response tools (such as SIEMs, SOARs, and Jira/ServiceNow ITSM), and more.
Can DSPM help my existing data loss prevention system?
DSPM integrates with existing DLP solutions, providing rich context regarding data sensitivity that can be used to better prioritize remediation efforts/actions. DSPM provides accurate, granular sensitivity labels that can facilitate confident automated actions and better streamline processes.
What are the benefits of using DSPM?
DSPM enables businesses to take a proactive approach to data security, leading to:
- Reduced risk of data breaches
- Improved compliance posture
- Faster incident response times
- Optimized security resource allocation
Embrace DSPM for a Future-Proof Security Strategy
Embracing DSPM for your organization doesn’t just support your proactive security initiatives today; it ensures that your data security measures will scale up with your business’s growth tomorrow. Because today’s data estates evolve so rapidly — both in number of components and in data proliferation — it’s in your business’s best interest to find cloud-native solutions that will adapt to these changes seamlessly.
Learn how Sentra’s DSPM can help your team gain data visibility within minutes of deployment.
Cloud Security Strategy: Key Elements, Principles, and Challenges
What is a Cloud Security Strategy?
During the initial phases of digital transformation, organizations may view cloud services as an extension of their traditional data centers. But to fully harness cloud security, they must progress beyond this view.
A cloud security strategy is an extensive framework that outlines how an organization manages its dynamic, software-defined security ecosystem and protects its cloud-based assets. Security, in its essence, is about managing risk – addressing the probability and impact of attacks instead of eliminating them outright. This reality essentially positions security as a continuous endeavor rather than being a finite problem with a singular solution.
Cloud security strategy advocates for:
- Ensuring the cloud framework’s integrity: Involves implementing security controls as a foundational part of cloud service planning and operational processes. The aim is to ensure that security measures are a seamless part of the cloud environment, guarding every resource.
- Harnessing cloud capabilities for defense: Employing the cloud as a force multiplier to bolster overall security posture. This shift in strategy leverages the cloud's agility and advanced capabilities to enhance security mechanisms, particularly those natively integrated into the cloud infrastructure.
Why is a Cloud Security Strategy Important?
Some organizations make the mistake of miscalculating the balance between productivity and security. They often learn the hard way that while innovation drives competitiveness, robust security preserves it. The absence of either can lead to diminished market presence or organizational failure. As such, a balanced focus on both fronts is paramount.
Customers are more likely to do business with organizations they trust to protect proprietary data. When a single data breach or security incident can erode customer trust and damage an organization's reputation, the stakes are naturally high. A cloud security strategy can help organizations address these challenges by providing a framework for managing risk.
A well-crafted cloud security strategy will include the following:
- A risk assessment to identify and prioritize the organization's key security risks.
- A set of security controls to mitigate those risks.
- A process framework for monitoring and improving the security posture of the cloud environment over time.
Key Elements of a Cloud Security Strategy
Tactically, a cloud security strategy empowers organizations to navigate the complexities of shared responsibility models, where the burden of security is divided between the cloud provider and the client.
| Key Element | Description | Objectives | Tools/Technologies |
|---|---|---|---|
| Data Protection | Safeguarding data from unauthorized access and ensuring its availability, integrity, and confidentiality. | Ensure data privacy and regulatory compliance; prevent data breaches | Data Loss Prevention (DLP); backup and recovery solutions |
| Infrastructure Protection | Securing the underlying cloud infrastructure, including servers, storage, and network components. | Protect against vulnerabilities; secure the physical and virtual infrastructure | Network security controls; intrusion detection systems |
| Identity and Access Management (IAM) | Managing user identities and governing access to resources based on roles. | Implement least privilege access; manage user identities and credentials | IAM services (e.g., AWS IAM, Azure Active Directory); multi-factor authentication (MFA) |
| Automation | Utilizing technology to automate repetitive security tasks. | Reduce human errors; streamline security workflows | Automation scripts; security orchestration, automation, and response (SOAR) systems |
| Encryption | Encoding data to protect it from unauthorized access. | Protect data at rest and in transit; ensure data confidentiality | Encryption protocols (e.g., TLS, SSL); key management services |
| Detection & Response | Identifying potential security threats and responding effectively to mitigate risks. | Detect security incidents in real time; respond to and recover from incidents quickly | Security information and event management (SIEM); incident response platforms |
Key Challenges in Building a Cloud Security Strategy
When organizations shift from on-premises to cloud computing, the biggest stumbling block is their lack of expertise in dealing with a decentralized environment.
Some consider agility and performance to be the super-features that led them to adopt the cloud. Anything that impacts the velocity of deployment is met with resistance. As a result, the challenge often lies in finding the sweet spot between achieving efficiency and administering robust security. But in reality, there are several factors that compound the complexity of this challenge.
Lack of Visibility
If your organization lacks insight into its cloud activity, it cannot accurately assess the associated risks. The visibility challenge is multifaceted: initially, it is about cataloging the active elements in your cloud; subsequently, it is about comprehending the data, operation, and interconnections of those systems.
Imagine manually checking each cloud service across different HA zones for each provider. You'd be enumerating virtual machines, surveying databases, and tracking user accounts. It's a complex task that can rapidly become unmanageable.
Most major cloud service providers (CSPs) offer monitoring services to streamline this complexity into a more efficient strategy. But even with these tools, you mostly see the numbers—data stores, resources—but not the substance within or their inter-relationship. In reality, a production-grade observability stack depends on a mix of CSP provider tools, third-party services, and architecture blueprints to assess the security landscape.
Human Errors
Surprisingly, the most significant cloud security threat originates from your own IT team's oversights. Gartner estimates that by 2025, a staggering 99% of cloud security failures will be due to human errors.
One contributing factor is the shift to the cloud which demands specialized skills. Seasoned IT professionals who are already well-versed in on-prem security may potentially mishandle cloud platforms. These lapses usually involve issues like misconfigured storage buckets, exposed network ports, or insecure use of accounts. Such mistakes, if unnoticed, offer attackers easy pathways to infiltrate cloud environments.
An organization can likely utilize a mix of service models—Infrastructure as a Service (IaaS) for foundational compute resources, Platform as a Service (PaaS) for middleware orchestration, and Software as a Service (SaaS) for on-demand applications. For each tier, manual security controls might entail crafting bespoke policies for every service. This method provides meticulous oversight, albeit with considerable demands on time and the ever-present risk of human error.
Misconfiguration
OWASP highlights that around 4.51% of applications become susceptible when wrongly configured or deployed. The dynamism of cloud environments, where assets are constantly deployed and updated, exacerbates this risk.
While human errors are more about the skills gap and oversight, the root of misconfiguration often lies in the complexity of an environment, particularly when a deployment doesn’t follow best practices. Cloud setups are intricate, where each change or a newly deployed service can introduce the potential for error. And as cloud offerings evolve, so do the configuration parameters, subsequently increasing the likelihood of oversight.
Some argue that it’s the cloud provider that ensures the security of the cloud. Yet, the shared responsibility model places a significant portion of the configuration management on the user. Besides the lack of clarity, this division often leads to gaps in security postures.
Automated tools can help but have their own limitations. They require precise tuning to recognize the correct configurations for a given context. Without comprehensive visibility and understanding of the environment, these tools tend to miss critical misconfigurations.
Compliance with Regulatory Standards
When your cloud environment sprawls across jurisdictions, adherence to regulatory standards is naturally a complex affair. Each region comes with its mandates, and cloud services must align with them. Data protection laws like GDPR or HIPAA additionally demand strict handling and storage of sensitive information.
The key to compliance in the cloud is a thorough understanding of data residency, how it is protected, and who has access to it. A thorough understanding of the shared responsibility model is also crucial in such settings. While cloud providers ensure their infrastructure meets compliance standards, it's up to organizations to maintain data integrity, secure their applications, and verify third-party services for compliance.
Modern Cloud Security Strategy Principles
Because the cloud-native ecosystem is still an emerging discipline with a high degree of process variations, a successful security strategy calls for a nuanced approach. Implementing security should start with low-friction changes to workflows, the development processes, and the infrastructure that hosts the workload.
Here’s how it can be imagined:
Establishing Comprehensive Visibility
Visibility is the foundational starting point. Total, accessible visibility across the cloud environment helps achieve a deeper understanding of your systems' interactions and behaviors by offering a clear mapping of how data moves and is processed.
Establish a model where teams can achieve up-to-date, easy-to-digest overviews of their cloud assets, understand their configuration, and recognize how data flows between them. Visibility also lays the foundation for traceability and observability. Modern performance analysis stacks leverage the principle of visibility, which eventually leads to traceability—the ability to follow actions through your systems. And then to observability—gaining insight from what your systems output.
Enabling Business Agility
The cloud is known for its agile nature that enables organizations to respond swiftly to market changes, demands, and opportunities. Yet, this very flexibility requires a security framework that is both robust and adaptable. Security measures must protect assets without hindering the speed and flexibility that give cloud-based businesses their edge.
To truly scale and enhance efficiency, your security strategy must blend the organization’s technology, structure, and processes together. This ensures that the security framework is capable of supporting fast-paced development cycles, ensures compliance, and fosters innovation without compromising on protection. In practice, this means integrating security into the development lifecycle from its initial stages, automating security processes where possible, and ensuring that security protocols can accommodate the rapid deployment of services.
Cross-Functional Coordination
A future-focused security strategy acknowledges the need for agility in both action and thought. A crucial aspect of a robust cloud security strategy is avoiding the pitfall where accountability for security risks is mistakenly assigned to security teams rather than to the business owners of the assets. Such misplacement arises from the misconception of security as a static technical hurdle rather than the dynamic risk it can introduce.
Security cannot be a siloed function; instead, every stakeholder has a part to play in securing cloud assets. The success of your security strategy is largely influenced by distinguishing between healthy and unhealthy friction within DevOps and IT workflows. The strategic approach blends security seamlessly into cloud operations, challenging teams to preemptively consider potential threats during design and to rectify vulnerabilities early in the development process. This constructive friction strengthens systems against attacks, much like stress tests to inspect the resilience of a system.
However, the practicality of security in a dynamic cloud setting demands more than stringent measures; it requires smart, adaptive protocols. Excessive safeguards that result in frequent false positives or overcomplicate risk assessments can impact the rapid development cycles characteristic of cloud environments. To counteract this, maintaining the health of relationships within and across teams is essential.
Ongoing and Continuous Improvement
Adopting agile security practices involves shifting from a perfectionist mindset to embracing a baseline of “minimum viable security.” This baseline evolves through continuous incremental improvements, matching the agility of cloud development. In a production-grade environment, this relies on a data-driven approach where user experiences, system performance, and security incidents shape the evolution of the platform.
The commitment to continuous improvement means that no system is ever "finished." Security is seen as an ongoing process, where DevSecOps practices can ensure that every code commit is evaluated against security benchmarks, allowing for immediate correction and learning from any identified issues.
To truly embody continuous improvement though, organizations must foster a culture that encourages experimentation and learning from failures. Blameless postmortems following security incidents, for example, can uncover root causes without fear of retribution, ensuring that each issue is a learning opportunity.
Preventing Security Vulnerabilities Early
A forward-thinking security strategy focuses on preempting risks. The 'shift left' concept evolved to solve this problem by integrating security practices at the very beginning and throughout the application development lifecycle. Practically, this approach embeds security tools and checks into the pipeline where the code is written, tested, and deployed.
Start with outlining a concise strategy document that defines your shift-left approach. It needs a clear vision, designated roles, milestones, and clear metrics. For large corporations, this could be a complex yet indispensable task—requiring thorough mapping of software development across different teams and possibly external vendors.
The aim here is to chart out the lifecycle of software from development to deployment, identifying the people involved, the processes followed, and the technologies used. A successful approach to early vulnerability prevention also includes a comprehensive strategy for supply chain risk management. This involves scrutinizing open-source components for vulnerabilities and establishing a robust process for regularly updating dependencies.
How to Create a Robust Cloud Security Strategy
Before developing a security strategy, assess the inherent risks your organization may be susceptible to. The findings of the risk assessment should be treated as the baseline to develop a security architecture that aligns with your cloud environment's business goals and risk tolerance.
In most cases, a cloud security architecture should include the following combination of technical, administrative and physical controls for comprehensive security:
Access and Authentication Controls
The foundational principle of cloud security is to ensure that only authorized users can access your environment. The emphasis should be on strong, adaptive authentication mechanisms that can respond to varying risk levels.
Build an authentication framework that is non-static. It should scale with risk, assessing context, user behavior, and threat intelligence. This adaptability ensures that security is not a rigid gate but a responsive, intelligent gateway that can be configured to suit the complexity of different cloud environments and sophisticated threat actors.
Actionable Steps
- Enforce passwordless or multi-factor authentication (MFA) mechanisms to support a dynamic security ethos.
- Adjust permissions dynamically based on contextual data.
- Integrate real-time risk assessments that actively shape and direct access control measures.
- Employ AI mechanisms for behavioral analytics and adaptive challenges.
- Develop a trust-based security perimeter centered around user identity.
Identify and Classify Sensitive Data
Before classification can begin, you first need to locate sensitive cloud data. Implement enterprise-grade data discovery tools and advanced scanning algorithms that seamlessly integrate with cloud storage services to detect sensitive data points.
Once identified, the data should be tagged with metadata that reflects its sensitivity level, typically by using automated classification frameworks capable of processing large datasets at scale. These systems should be configured to recognize various data privacy regulations (like GDPR, HIPAA, etc.) and proprietary sensitivity levels.
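As a small illustration of attaching that metadata, the following boto3 sketch tags an S3 object with a sensitivity label that downstream access policies can key off; the bucket, key, and tag values are placeholders:

```python
# Minimal sketch: label an object with its classification result
# (pip install boto3).
import boto3

s3 = boto3.client("s3")

s3.put_object_tagging(
    Bucket="example-data-lake",           # placeholder bucket
    Key="exports/customers.parquet",      # placeholder object
    Tagging={
        "TagSet": [
            {"Key": "sensitivity", "Value": "restricted"},
            {"Key": "regulation", "Value": "gdpr"},
        ]
    },
)
```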
Actionable Steps
- Establish a data governance framework agile enough to adapt to the cloud's fluid nature.
- Create an indexed inventory of data assets, which is essential for real-time risk assessment and for implementing fine-grained access controls.
- Ensure the classification system is backed by policies that dynamically adjust controls based on the data’s changing context and content.
Monitoring and Auditing
Define a monitoring strategy that delivers service visibility across all layers and dimensions. A recommended practice is to balance in-depth telemetry collection with a broad, end-to-end view and east-west monitoring that encompasses all aspects of service health.
Treat each dimension as crucial—depth ensures you're catching the right data, breadth ensures you're seeing the whole picture, and the east-west focus ensures you're always tuned into availability, performance, security, and continuity. This tri-dimensional strategy also allows for continuous compliance checks against industry standards, while helping with automated remediation actions in cases of deviations.
Actionable Steps
- Implement deep-dive telemetry to gather detailed data on transactions, system performance, and potential security events.
- Utilize specialized monitoring agents that span across the stack, providing insights into the OS, applications, and services.
- Ensure full visibility by correlating events across networks, servers, databases, and application performance.
- Deploy network traffic analysis to track lateral movement within the cloud, which is indicative of potential security threats.
Data Encryption and Tokenization
Construct a comprehensive approach that embeds security within the data itself. This strategy ensures data remains indecipherable and useless to unauthorized entities, both at rest and in transit.
When encrypting data at rest, protocols like AES-256 ensure that should the physical security controls fail, the data remains worthless to unauthorized users. For data in transit, TLS secures the channels over which data travels to prevent interceptions and leaks.
Tokenization takes a different approach by swapping out sensitive data with unique symbols (also known as tokens) to keep the real data secure. Tokens can safely move through systems and networks without revealing what they stand for.
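The sketch below illustrates both ideas side by side: AES-256-GCM for encrypting a value, and a toy token vault for substitution. Key handling is deliberately simplified; a production system would use a KMS and a hardened, isolated token store:

```python
# Minimal sketch: encryption vs. tokenization (pip install cryptography).
import secrets
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# --- Encryption at rest: AES-256-GCM ---
key = AESGCM.generate_key(bit_length=256)   # in practice, held by a KMS
aesgcm = AESGCM(key)
nonce = secrets.token_bytes(12)             # must be unique per message
ciphertext = aesgcm.encrypt(nonce, b"4111 1111 1111 1111", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"4111 1111 1111 1111"

# --- Tokenization: swap the value for an opaque token ---
vault: dict[str, str] = {}                  # stands in for an isolated store

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_urlsafe(16)
    vault[token] = value                    # mapping lives only in the vault
    return token

token = tokenize("4111 1111 1111 1111")
print(token)          # safe to pass through systems and logs
print(vault[token])   # only the vault can resolve it
```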
Actionable Steps
- Embrace strong encryption for data at rest to render it inaccessible to intruders. Implement industry-standard protocols such as AES-256 for storage and database encryption.
- Mandate TLS protocols to safeguard data in transit, eliminating vulnerabilities during data movement across the cloud ecosystem.
- Adopt tokenization to substitute sensitive data elements with non-sensitive tokens. This renders the data non-exploitable in its tokenized form.
- Isolate the tokenization system, maintaining the token mappings in a highly restricted environment detached from the operational cloud services.
Incident Response and Disaster Recovery
Modern disaster recovery (DR) strategies are typically centered around intelligent, automated, and geographically diverse backups. With that in mind, design your infrastructure in a way that anticipates failure, with planning focused on rapid failback.
Planning for the unknown essentially means preparing for all outage permutations. Classify and prepare for the broader impact of outages, which encompass security, connectivity, and access.
Define your recovery time objective (RTO) and recovery point objective (RPO) based on data volatility. For critical, frequently modified data, aim for a low RPO and adjust RTO to the shortest feasible downtime.
Actionable Steps
- Implement smart backups that are automated, redundant, and cross-zone.
- Develop incident response protocols specific to the cloud. Keep these dynamic while testing them frequently.
- Diligently choose between active-active or active-passive configurations to balance expense and complexity.
- Focus on quick isolation and recovery by using the cloud's flexibility to your advantage.
Conclusion
Organizations must discard the misconception that what worked within the confines of traditional data centers will suffice in the cloud. Sticking to traditional on-premises security solutions and focusing solely on perimeter defense is irrelevant in the cloud arena. The traditional model—where data was a static entity within an organization’s stronghold—is now also obsolete.
Like earlier shifts in computing, the modern IT landscape demands fresh approaches and agile thinking to neutralize cloud-centric threats. The challenge is to reimagine cloud data security from the ground up, shifting focus from infrastructure to the data itself.
Sentra's innovative data-centric approach, which focuses on Data Security Posture Management (DSPM), emphasizes the importance of protecting sensitive data in all its forms. This ensures the security of data whether at rest, in motion, or even during transitions across platforms.
Book a demo to explore how Sentra's solutions can transform your approach to your enterprise's cloud security strategy.