Ron Reiter
November 17, 2024
5 Min Read
AI and ML

Enhancing AI Governance: The Crucial Role of Data Security

In today’s hyper-connected world, where big data powers decision-making, artificial intelligence (AI) is transforming industries and user experiences around the globe. Yet, while AI technology brings exciting possibilities, it also raises pressing concerns, particularly related to security, compliance, and ethical integrity. 

As AI adoption accelerates—fueled by increasingly vast and unstructured data sources—organizations seeking to secure AI deployments (and investments) must establish a strong AI governance initiative with data governance at its core.

This article delves into the essentials of AI governance, outlines its importance, examines the challenges involved, and presents best practices to help companies implement a resilient, secure, and ethically sound AI governance framework centered around data.

What is AI Governance?

AI governance encompasses the frameworks, practices, and policies that guide the responsible, safe, and ethical use of AI systems across an organization. Effective AI governance integrates technical elements—data, models, and code—with human oversight for a holistic framework that evolves alongside an organization’s AI initiatives.

Embedding AI governance, along with related data security measures, into organizational practices not only ensures responsible AI use but also supports long-term success in an increasingly AI-driven world.

With an AI governance structure rooted in secure data practices, your company can:

  • Mitigate risks: Ongoing AI risk assessments can proactively identify and address potential threats, such as algorithmic bias, transparency gaps, and potential data leakage; this ensures fairer AI outcomes while minimizing reputational and regulatory risks tied to flawed or opaque AI systems.
  • Ensure strict adherence: Effective AI governance and compliance policies create clear accountability structures, aligning AI deployments and data use with both internal guidelines and the broader regulatory landscape such as data privacy laws or industry-specific AI standards.
  • Optimize AI performance: Centralized AI governance provides full visibility into your end-to-end AI deployments—from data sources and engineered feature sets to trained models and inference endpoints; this facilitates faster and more reliable AI innovations while reducing security vulnerabilities.
  • Foster trust: Ethical AI governance practices, backed by strict data security, reinforce trust by ensuring AI systems are transparent and safe, which is crucial for building confidence among both internal and external stakeholders.

A robust AI governance framework means your organization can safeguard sensitive data, build trust, and responsibly harness AI’s transformative potential, all while maintaining a transparent and aligned approach to AI.

Why Data Governance Is at the Center of AI Governance

Data governance is key to effective AI governance because AI systems require high-quality, secure data to properly function. Accurate, complete, and consistent data is a must for AI performance and the decisions that guide it. Additionally, strong data governance enables organizations to navigate complex regulatory landscapes and mitigate ethical concerns related to bias.

Through a structured data governance framework, organizations can not only achieve compliance but also leverage data as a strategic asset, ultimately leading to more reliable and ethical AI outcomes.

Risks of Not Having a Data-Driven AI Governance Framework

AI systems are inherently complex, non-deterministic, and highly adaptive—characteristics that pose unique challenges for governance. 

Many organizations face difficulty blending AI governance with their existing data governance and IT protocols; however, a centralized approach to governance is necessary for comprehensive oversight. Without a data-centric AI governance framework, organizations face risks such as:

  • Opaque decision-making: Without clear lineage and governance, it becomes difficult to trace and interpret AI decisions, which can lead to unethical, discriminatory, or harmful outcomes.
  • Data breaches: AI systems rely on large volumes of data, making rigorous data security protocols essential to avoid leaks of sensitive information across an extended attack surface covering both model inputs and outputs. 
  • Regulatory non-compliance: The fast-paced evolution of AI regulations means organizations without a governance framework risk large penalties for non-compliance and potential reputational damage. 

For more insights on managing AI and data privacy compliance, see our tips for security leaders.

Implementing AI Governance: A Balancing Act

While centralized, robust AI governance is crucial, implementing it successfully poses significant challenges. Organizations must find a balance between driving innovation and maintaining strict oversight of AI operations.

A primary issue is ensuring that governance processes are both adaptable enough to support AI innovation and stringent enough to uphold data security and regulatory compliance. This balance is difficult to achieve, particularly as AI regulations vary widely across jurisdictions and are frequently updated. 

Another key challenge is the demand for continuous monitoring and auditing. Effective governance requires real-time tracking of data usage, model behavior, and compliance adherence, which can add significant operational overhead if not managed carefully.

To address these challenges, organizations need an adaptive governance framework that prioritizes privacy, data security, and ethical responsibility, while also supporting operational efficiency and scalability.

Frameworks & Best Practices for Implementing Data-Driven AI Governance

While there is no universal model for AI governance, your organization can look to established frameworks, such as the AI Act or OECD AI Principles, to create a framework tailored to your own risk tolerance, industry regulations, AI use cases, and culture.

Below we explore key data-driven best practices—relevant across AI use cases—that can best help you structure an effective and secure data-centric AI governance framework.

Adopt a Lifecycle Approach

A lifecycle approach divides oversight into stages. Implementing governance at each stage of the AI lifecycle enables thorough oversight of projects from start to finish, following a multi-layered security strategy. 

For example, in the development phase, teams can conduct data risk assessments, while ongoing performance monitoring ensures long-term alignment with governance policies and control over data drift.
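
To make the data-drift piece concrete, here is a minimal Python sketch of a drift check using the population stability index (PSI); the feature values and the 0.2 alert threshold are illustrative assumptions, not part of any specific governance framework.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare a current feature distribution against a reference
    (training-time) distribution. Higher PSI means more drift."""
    # Bin edges come from the reference distribution's quantiles.
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Small epsilon avoids division by zero / log of zero.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Hypothetical usage: training-time values vs. last week's inference inputs.
training_ages = np.random.normal(40, 10, 5000)
recent_ages = np.random.normal(46, 12, 1000)   # distribution has shifted
psi = population_stability_index(training_ages, recent_ages)
if psi > 0.2:  # 0.2 is a commonly cited "significant drift" heuristic
    print(f"Data drift alert: PSI={psi:.2f} exceeds threshold")
```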

Prioritize Data Security

Protecting sensitive data is foundational to responsible AI governance. Begin by achieving full visibility into data assets, then categorize them by relevance and assign risk scores to prioritize security actions. 

An advanced data risk assessment combined with data detection and response (DDR) can help you streamline risk scoring and threat mitigation across your entire data catalog, ensuring a strong data security posture.
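
As a rough illustration of risk scoring (not Sentra’s actual model), the sketch below weights each asset by the sensitivity of its data classes and how broadly it is exposed; all weights, categories, and asset records are invented for the example.

```python
# Illustrative sensitivity weights and exposure multipliers (assumptions).
SENSITIVITY = {"pii": 8, "financial": 9, "internal": 4, "public": 1}
EXPOSURE = {"public_bucket": 3.0, "org_wide": 2.0, "restricted": 1.0}

def risk_score(asset: dict) -> float:
    """Score = sensitivity of the most sensitive class found in the
    asset, scaled by how broadly the asset is exposed."""
    worst_class = max(asset["data_classes"], key=lambda c: SENSITIVITY[c])
    return SENSITIVITY[worst_class] * EXPOSURE[asset["exposure"]]

catalog = [
    {"name": "s3://exports/customers.csv", "data_classes": ["pii"], "exposure": "public_bucket"},
    {"name": "db.analytics.events", "data_classes": ["internal"], "exposure": "org_wide"},
]

# Triage the highest-risk assets first.
for asset in sorted(catalog, key=risk_score, reverse=True):
    print(f"{risk_score(asset):5.1f}  {asset['name']}")
```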

Adopt a Least Privilege Access Model

Restricting data access based on user roles and responsibilities limits unauthorized access and aligns with a zero-trust security approach. By ensuring that sensitive data is accessible only to those who need it for their work via least privilege, you reduce the risk of data breaches and enhance overall data security.
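
One minimal way to picture a least privilege review: diff each user’s granted permissions against what their role requires and flag the excess. The roles, grants, and permission names below are hypothetical.

```python
# Hypothetical role definitions: the permissions each role needs to work.
ROLE_NEEDS = {
    "analyst": {"read:reports"},
    "data_engineer": {"read:raw_data", "write:pipelines"},
}

# Hypothetical current grants, as pulled from an IAM system.
GRANTS = {
    "alice": ("analyst", {"read:reports", "read:raw_data"}),       # excess
    "bob": ("data_engineer", {"read:raw_data", "write:pipelines"}),
}

def excess_privileges(user: str) -> set[str]:
    """Return permissions granted beyond what the user's role requires."""
    role, granted = GRANTS[user]
    return granted - ROLE_NEEDS[role]

for user in GRANTS:
    extra = excess_privileges(user)
    if extra:
        print(f"{user}: revoke {sorted(extra)} to restore least privilege")
```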

Establish Data Quality Monitoring

Ongoing data quality checks help maintain data integrity and accuracy, meaning AI systems will be trained on high-quality data sets and serve reliable results at inference time. 

Implement processes for continuous monitoring of data quality and regularly assess data integrity and accuracy; this will minimize risks associated with poor data quality and improve AI performance by keeping data aligned with governance standards.
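
A data quality gate can start very simply. The sketch below, with illustrative records and fields, computes per-field null rates and counts duplicate rows so a batch can be rejected before it reaches training or inference.

```python
from collections import Counter

def quality_report(records: list[dict], required: list[str]) -> dict:
    """Basic integrity checks: per-field null rates and duplicate rows."""
    n = len(records)
    null_rates = {
        f: sum(1 for r in records if r.get(f) in (None, "")) / n
        for f in required
    }
    # Count extra copies of identical rows.
    dup_count = sum(c - 1 for c in Counter(
        tuple(sorted(r.items())) for r in records).values() if c > 1)
    return {"rows": n, "null_rates": null_rates, "duplicates": dup_count}

rows = [
    {"user_id": 1, "email": "a@example.com"},
    {"user_id": 2, "email": ""},                 # missing email
    {"user_id": 1, "email": "a@example.com"},    # duplicate row
]
report = quality_report(rows, required=["user_id", "email"])
print(report)  # flag the batch if null rates or duplicates exceed a threshold
```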

Implement AI-Specific Detection and Response Mechanisms

Continuous monitoring of AI systems for anomalies in data patterns or performance is critical for detecting risks before they escalate. 

Anomaly detection for AI deployments can alert security teams in real time to unusual access patterns or shifts in model performance. Automated incident response protocols enable quick intervention, maintaining AI output integrity and protecting against potential threats.
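
As one illustrative approach (among many), the sketch below flags a day’s access count against a recent baseline using a z-score; the counts and the threshold of 3 are assumptions for the example.

```python
import statistics

def access_anomaly(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's access count if it sits far outside the recent baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero
    z = (today - mean) / stdev
    return z > z_threshold

# Hypothetical daily counts of reads against a sensitive table.
baseline = [102, 98, 110, 95, 104, 99, 101, 97, 105, 100]
if access_anomaly(baseline, today=750):
    print("Alert: unusual access volume on sensitive data store")
```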

A data security posture management (DSPM) tool allows you to incorporate continuous monitoring with minimal overhead to facilitate proactive risk management.

Conclusion

AI governance is essential for responsible, secure, and compliant AI deployments. By prioritizing data governance, organizations can effectively manage risks, enhance transparency, and align with ethical standards while maximizing the operational performance of AI.

As AI technology evolves, governance frameworks must be adaptive, ready to address advancements such as generative AI, and capable of complying with new regulations, like the UK GDPR.

To learn how Sentra can streamline your data and AI compliance efforts, explore our guide on data security posture management (DSPM). Or, see Sentra in action today by signing up for a demo.

David Stuart
November 7, 2024
3 Min Read
Sentra Case Study

Understanding the Value of DSPM in Today’s Cloud Ecosystem

As businesses accelerate their digital growth, the complexity of securing sensitive data in the cloud is growing just as fast. Data moves quickly and threats are evolving even faster; keeping cloud environments secure has become one of the biggest challenges for security teams today.

In The Hacker News’ webinar, Benny Bloch, CISO at Global-e, and David Stuart, Senior Director of Product Marketing at Sentra, discuss the challenges and solutions associated with Data Security Posture Management (DSPM) and how it's reshaping the way organizations approach data protection in the cloud.

The Shift from Traditional IT Environments to the Cloud

Benny highlights how the move from traditional IT environments to the cloud has dramatically changed the security landscape. 

"In the past, we knew the boundaries of our systems. We controlled the servers, firewalls, and databases," Benny explains. However, in the cloud, these boundaries no longer exist. Data is now stored on third-party servers, integrated with SaaS solutions, and constantly moved and copied by data scientists and developers. This interconnectedness creates security challenges, as it becomes difficult to control where data resides and how it is accessed. This transition has led many CISOs to feel a loss of control. 

As Benny points out, "When using a SaaS solution, the question becomes, is this part of your organization or not? And where do you draw the line in terms of responsibility and accountability?"

The Role of DSPM in Regaining Control

To address this challenge, organizations are turning to DSPM solutions. While Cloud Security Posture Management (CSPM) tools focus on identifying infrastructure misconfigurations and vulnerabilities, they don’t account for the movement and exposure of data across environments. DSPM, on the other hand, is designed to monitor sensitive data itself, regardless of where it resides in the cloud.

David Stuart emphasizes this difference: "CSPM focuses on your infrastructure. It’s great for monitoring cloud configurations, but DSPM tracks the movement and exposure of sensitive data. It ensures that security protections follow the data, wherever it goes."

For Benny, adopting a DSPM solution has been crucial in regaining a sense of control over data security. "Our primary goal is to protect data," he says. "While we have tools to monitor our infrastructure, it’s the data that we care most about. DSPM allows us to see where data moves, how it’s controlled, and where potential exposures lie."

Enhancing the Security Stack with DSPM

One of the biggest advantages of DSPM is its ability to complement existing security tools. For example, Benny points out that DSPM helps him make more informed decisions about where to prioritize resources. "I’m willing to take more risks in environments that don’t hold significant data. If a server has a vulnerability but isn’t connected to sensitive data, I know I have time to patch it."

By using DSPM, organizations can optimize their security stack, ensuring that data remains protected even as it moves across different environments. This level of visibility enables CISOs to focus on the most critical threats while mitigating risks to sensitive data.

A Smooth Integration with Minimal Disruption

Implementing new security tools can be a challenge, but Benny notes that the integration of Sentra’s DSPM solution was one of the smoothest experiences his team has had. "Sentra’s solution is non-intrusive. You provide account details, install a sentinel in your VPC, and you start seeing insights right away," he explains. Unlike other tools that require complex integrations, DSPM offers a connector-less architecture that reduces the need for ongoing maintenance and reconfiguration.

This ease of deployment allows security teams to focus on monitoring and securing data, rather than dealing with the technical challenges of integration.

The Future of Data Security with Sentra’s DSPM

As organizations continue to rely on cloud-based services, the need for comprehensive data security solutions will only grow. DSPM is emerging as a critical component of the security stack, offering the visibility and control that CISOs need to protect their most valuable assets: data.

By integrating DSPM with other security tools like CSPM, organizations can ensure that their cloud environments remain secure, even as data moves across borders and infrastructures. As Benny concludes, "You need an ecosystem of tools that complement each other. DSPM gives you the visibility you need to make informed decisions and protect your data, no matter where it resides."

This shift towards data-centric protection is the future of AI-era security, helping organizations stay ahead of threats and maintain control over their ever-expanding digital environments.

Team Sentra
October 28, 2024
3 Min Read
Data Security

Spooky Stories of Data Breaches

As Halloween approaches, it’s the perfect time to dive into some of the scariest data breaches of 2024. Just like monsters hiding in haunted houses, cyber threats quietly move through the digital world, waiting to target vulnerable organizations.

The financial impact of cyberattacks is immense. Cybersecurity Ventures estimates global cybercrime costs will reach $9.5 trillion in 2024 and $10.5 trillion by 2025. Ransomware, the top threat, is projected to cause damages of $42 billion in 2024, rising to $265 billion by 2031.

If those numbers didn’t scare you, the 2024 Verizon Data Breach Investigations Report highlights that out of 30,458 cyber incidents, 10,626 were confirmed data breaches, with one-third involving ransomware or extortion. Ransomware has been the top threat in 92% of industries and, along with phishing, malware, and DDoS attacks, has caused nearly two-thirds of data breaches in the past three years.

Let's explore some of the most spine-tingling breaches of 2024 and uncover how they could have been avoided.

Major Data Breaches That Shook the Digital World

The Dark Secrets of National Public Data

The latest National Public Data breach is staggering: just this summer, a hacking group claimed to have stolen 2.7 billion personal records, potentially affecting nearly everyone in the United States, Canada, and the United Kingdom. This includes American Social Security numbers. The group published portions of the stolen data on the dark web, and while experts are still analyzing how accurate and complete the information is (there are only about half a billion people across the US, Canada, and UK combined), it’s likely that most, if not all, Social Security numbers have been compromised.

The Haunting of AT&T

AT&T faced a nightmare when hackers breached their systems, exposing the personal data of 7.6 million current and 65.4 million former customers. The stolen data, including sensitive information like Social Security numbers and account details, surfaced on the dark web in March 2024.

Change Healthcare Faces a Chilling Breach

In February 2024, Change Healthcare fell victim to a massive ransomware attack that exposed roughly 145 million records containing personal information. This breach, one of the largest in healthcare history, compromised names, addresses, Social Security numbers, medical records, and other sensitive data. The incident had far-reaching effects on patients, healthcare providers, and insurance companies, prompting many in the healthcare industry to reevaluate their security strategies.

The Nightmare of Ticketmaster

Ticketmaster faced a horror of epic proportions when hackers breached their systems, compromising 560 million customer records. This data breach included sensitive details such as payment information, order history, and personal identifiers. The leaked data, offered for sale online, put millions at risk and led to potential federal legal action against their parent company, Live Nation.

How Can Organizations Prevent Data Breaches: Proactive Steps

To mitigate the risk of data breaches, organizations should take proactive steps. 

  • Regularly monitor accounts and credit reports for unusual activity.
  • Strengthen access controls by minimizing over-privileged users.
  • Review permissions and encrypt critical data to protect it both at rest and in transit (see the sketch below).
  • Invest in real-time threat detection tools and conduct regular security audits to help identify vulnerabilities and respond quickly to emerging threats.
  • Implement Data Security Posture Management (DSPM) to detect shadow data and ensure proper data hygiene (e.g., encryption, masking, activity logging).

These measures, including multi-factor authentication and routine compliance audits, can significantly reduce the risk of breaches and better protect sensitive information.
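
To illustrate the encryption step above, here is a minimal sketch using the Python cryptography library’s Fernet recipe for symmetric encryption at rest; in a real deployment the key would live in a KMS or secrets manager, never alongside the data.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice, generate once and store in a KMS/secrets manager.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"ssn=123-45-6789;name=Jane Doe"
ciphertext = fernet.encrypt(record)      # safe to store at rest
plaintext = fernet.decrypt(ciphertext)   # requires the key

assert plaintext == record
print("encrypted bytes:", ciphertext[:16], "...")
```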

Best Practices to Secure Your Data 

Enough of the scary news: how do we avoid these nightmares?

Organizations can defend themselves starting with Data Security Posture Management (DSPM) tools. By finding and eliminating shadow data, identifying over-privileged users, and monitoring data movement, companies can significantly reduce their risk of facing these digital threats.

Looking at these major breaches, it's clear the stakes have never been higher. Each incident highlights the vulnerabilities we face and the urgent need for strong protection strategies. Learning from these missteps underscores the importance of prioritizing data security.

As technology continues to evolve and regulations grow stricter, it’s vital for businesses to adopt a proactive approach to safeguarding their data. Implementing proper data security measures can play a critical role in protecting sensitive information and minimizing the risk of future breaches.

Sentra: The Data Security Platform for the AI era

Sentra enables security teams to gain full visibility and control of data, as well as protect against sensitive data breaches across the entire public cloud stack. By discovering where all the sensitive data is, how it’s secured, and where it’s going, Sentra reduces the 'data attack surface', the sum of all places where sensitive or critical data is stored or traveling to.

Sentra’s cloud-native design combines powerful Data Discovery and Classification, DSPM, DAG, and DDR capabilities into a complete Data Security Platform (DSP). With this, Sentra customers achieve enterprise-scale data protection and answer the important questions about their data. Sentra DSP provides a crucial layer of protection distinct from other infrastructure-dependent layers. It allows organizations to scale data protection across multi-clouds to meet enterprise demands and keep pace with ever-evolving business needs. And it does so very efficiently - without creating undue burdens on the personnel who must manage it.

Meni Besso
October 10, 2024
3 Min Read
Compliance

The Need for Continuous Compliance

As compliance breaches rise and hefty fines follow, establishing and maintaining strict compliance has become a top priority for enterprises. However, compliance isn’t a one-time or even a periodic task, or something you can set and forget. To stay ahead, organizations are embracing continuous compliance - a proactive, ongoing strategy to meet regulatory requirements and uphold security standards.

Let’s explore what continuous compliance is, the advantages it offers, some challenges it may present, and how Sentra can help organizations achieve and sustain it.

What is Continuous Compliance?

Continuous compliance is the ongoing process of monitoring a company’s security practices and applying appropriate controls to ensure they consistently meet regulatory standards and industry best practices. Instead of treating compliance as a one-time task, it involves real-time monitoring to catch and address non-compliance issues as they happen. It also includes maintaining a complete inventory of where your data is at all times, what risks and security posture are associated with it, and who has access to it. This proactive approach ensures you are always ‘audit ready’ and helps avoid last-minute fixes before audits or cyberattacks, ensuring continuous security across the organization.

Why Do Companies Need Continuous Compliance?

Continuous compliance is essential for companies to ensure they are always aligned with industry regulations and standards, reducing the risk of violations and penalties. 

Here are a few key reasons why it's crucial:

  1. Regulatory Changes: Compliance standards frequently evolve. Continuous monitoring ensures companies can adapt quickly to new regulations without major disruptions.
  2. Avoiding Fines and Penalties: Non-compliance can lead to hefty fines, legal actions, or even loss of licenses. Staying compliant helps avoid these risks.
  3. Protecting Reputation: Data breaches, especially in industries dealing with sensitive data, can damage a company’s reputation. Continuous compliance helps protect established trust with customers, partners, and stakeholders.
  4. Reducing Security Risks: Many compliance frameworks are designed to enhance data security. Continuous compliance ensures that a company’s security posture is always up-to-date, reducing the risk of data breaches.
  5. Operational Efficiency: Automated, continuous compliance monitoring can streamline processes, reducing manual audits and interventions, saving time and resources.

For modern businesses, especially those managing sensitive data in the cloud, a continuous compliance strategy is critical to maintaining a secure, efficient, and trusted operation.

Cost Considerations for Compliance Investments

Investing in continuous compliance can lead to significant long-term savings. By maintaining consistent compliance practices, organizations can avoid the hefty fines associated with non-compliance, minimize resource surges during audits, and reduce the impacts of breaches through early detection. Continuous compliance provides security and financial predictability, often resulting in more manageable and predictable expenses.

In contrast, periodic compliance can lead to fluctuating costs. While expenses may be lower between audits, costs typically spike as audit dates approach. These spikes often result from hiring consultants, deploying temporary tools, or incurring overtime charges. Moreover, gaps between audits increase the risk of undetected non-compliance or security breaches, potentially leading to significant unplanned expenses from fines or mitigation efforts.

When evaluating cost implications, it's crucial to look beyond immediate expenses and consider the long-term financial impact. Continuous compliance not only offers a steadier expenditure pattern but also potential savings through proactive measures. On the other hand, periodic compliance can introduce cost variability and financial uncertainties associated with risk management.

Challenges of Continuous Compliance

  1. Keeping Pace with Technological Advancements
    The fast-evolving tech landscape makes compliance a moving target. Organizations need to regularly update their systems to stay in line with new technology, ensuring compliance procedures remain effective. This requires investment in infrastructure that can adapt quickly to these changes. Additionally, keeping up with emerging security risks requires continuous threat detection and response strategies, focusing on real-time monitoring and adaptive security standards to safeguard against new threats.
  2. Data Privacy and Protection Across Borders
    Global organizations face the challenge of navigating multiple, often conflicting, data protection regulations. To maintain compliance, they must implement unified strategies that respect regional differences while adhering to international standards. This includes consistent data sensitivity tagging and secure data storage, transfer, and processing, with measures like encryption and access controls to protect sensitive information.
  3. Internal Resistance and Cultural Shifts
    Implementing continuous compliance often meets internal resistance, requiring effective change management, communication, and education. Building a compliance-oriented culture, where it’s seen as a core value rather than a box-ticking exercise, is crucial.

Organizations must be adaptable, invest in the right technology, and create a culture that embraces compliance. This both helps meet regulatory demands and strengthens risk management and security resilience.

How You Can Achieve Continuous Compliance With Sentra

First, Sentra automates data discovery and classification, taking a fraction of the time and effort it would take to manually catalog all sensitive data. It’s far more accurate, especially when using a solution that leverages LLMs to classify data with more granularity and rich context. It’s also more responsive to the frequent changes in your modern data landscape.

Sentra can also automate the process of identifying regulatory violations and ensuring adherence to compliance requirements using pre-built policies that update and evolve with compliance changes (including policies that map to common compliance frameworks). It ensures that sensitive data stays within the correct environments and doesn’t travel to regions in violation of retention policies or without data encryption.

In contrast, manually tracking data inventory is inefficient, difficult to scale, and prone to errors and inaccuracies. This often results in delayed detection of risks, which can require significant time and effort to resolve as compliance audits approach.

Karin Zano
October 1, 2024
3 Min Read
Data Security

5 Cybersecurity Tips for Cybersecurity Awareness Month

Secure our World: Cybersecurity Awareness Month 2024

As we kick off October's Cybersecurity Awareness Month and think about this year’s theme, “Secure Our World,” it’s important to remember that safeguarding our digital lives doesn't have to be complex. Simple, proactive steps can make a world of difference in protecting yourself and your business from online threats. In many cases, these simple steps relate to data — the sensitive information about users’ personal and professional lives. As a business, you are largely responsible for keeping your customers' and employees’ data safe. Starting with cybersecurity is the best way to ensure that this valuable information stays secure, no matter where it’s stored or how you use it.

Keeping Personally Identifiable Information (PII) Safe

Data security threats are more pervasive than ever today, with cybercriminals constantly evolving their tactics to exploit vulnerabilities. From phishing attacks to ransomware, the risks are not just technical but also deeply personal — especially when it comes to protecting Personally Identifiable Information (PII).

Cybersecurity Awareness Month is a perfect time to reflect on the importance of strong data security. Businesses, in particular, can contribute to a safer digital environment through Data Security Posture Management (DSPM). DSPM helps businesses - big and small alike - monitor, assess, and improve their security posture, ensuring that sensitive data, such as PII, remains protected against breaches. By implementing DSPM, businesses can identify weak spots in their data security and take action before an incident occurs, reinforcing the idea that securing our world starts with securing our data.

Let's take this month as an opportunity to Secure Our World by embracing these simple but powerful DSPM measures to protect what matters most: data.

5 Cybersecurity Tips for Businesses

  1. Discover and Classify Your Data: Understand where all of your data resides, how it’s used, and its levels of sensitivity and protection. By leveraging discovery and classification, you can maintain complete visibility and control over your business’s data, reducing the risks associated with shadow data (unmanaged or abandoned data); a minimal classification sketch follows this list.
  2. Ensure data always has a good risk posture: Maintain a strong security stance by ensuring your data always has a good posture through Data Security Posture Management (DSPM). DSPM continuously monitors and strengthens your data’s security posture (readiness to tackle potential cybersecurity threats), helping to prevent breaches and protect sensitive information from evolving threats.
  3. Protect Private and Sensitive Data: Keep your private and sensitive data secure, even from internal users. By implementing Data Access Governance (DAG) and utilizing techniques like data de-identification and masking, you can protect critical information and minimize the risk of unauthorized access.
  4. Embrace Least-Privilege Control: Control data access through the principle of least privilege — only granting access to the users and systems who need it to perform their jobs. By implementing Data Access Governance (DAG), you can limit access to only what is necessary, reducing the potential for misuse and enhancing overall data security.
  5. Continual Threat Monitoring for Data Protection: To protect your data in real-time, implement continual monitoring of new threats. With Data Detection and Response (DDR), you can stay ahead of emerging risks, quickly identifying and neutralizing potential vulnerabilities to safeguard your sensitive information.
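
As a toy illustration of tip 1, the sketch below scans text for two common PII patterns with regular expressions. Real discovery and classification engines use far richer context and granularity, so treat the patterns and labels as illustrative assumptions only.

```python
import re

# Illustrative patterns only; production classifiers use far richer context.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> dict[str, list[str]]:
    """Return PII classes found in a blob of text, with the matches."""
    found = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            found[label] = matches
    return found

sample = "Contact jane@example.com, SSN on file: 123-45-6789."
print(classify(sample))
# {'email': ['jane@example.com'], 'us_ssn': ['123-45-6789']}
```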

How Sentra Helps Secure Your Business’s World

Today, a business's “world” is extremely complex and ever-changing. Users can easily move, change, or copy data and connect new applications/environments to your ecosystem. These factors make it challenging to pinpoint where your data resides and who has access to it at any given moment. 

Sentra helps by giving businesses a vantage point of their entire data estate, including multi-cloud and on-premises environments. We combine all of the above practices—granular discovery and classification, end-to-end data security posture management, data access governance, and continuous data detection and response—into a single platform. To celebrate Cybersecurity Awareness Month, check out how our data security platform can help improve your security posture.

David Stuart
September 25, 2024
3 Min Read
Data Security

Top Advantages and Benefits of DSPM

Addressing data protection in today’s data estates requires innovative solutions. Data in modern environments moves quickly, as countless employees in a given organization can copy, move, or modify sensitive data within seconds. In addition, many organizations operate across a variety of on-premises environments, along with multiple cloud service providers and technologies like PaaS and IaaS. Data quickly sprawls across this multifaceted estate as team members perform daily tasks. 

Data Security Posture Management (DSPM) is a key technology that meets these challenges by discovering and classifying sensitive data and then protecting it wherever it goes. DSPM helps organizations mitigate risks and maintain compliance across a complex data landscape by focusing on the continuous discovery and monitoring of sensitive information. 

If you're not familiar with DSPM, you can check out our comprehensive DSPM guide to get up to speed. But for now, let's delve into why DSPM is becoming indispensable for modern cloud enterprises.

Why is DSPM Important?

DSPM is an innovative cybersecurity approach designed to safeguard and monitor sensitive data as it traverses different environments. This technology focuses on the discovery of sensitive data across the entire data estate, including cloud platforms such as SaaS, IaaS, and PaaS, as well as on-premises systems. DSPM assesses exposure risks, identifies who has access to company data, classifies how data is used, ensures compliance with regulatory requirements like GDPR, PCI-DSS, and HIPAA, and continuously monitors data for emerging threats.

As organizations scale up their data estate and add multiple cloud environments, on-prem databases, and third-party SaaS applications, DSPM also helps them automate key data security practices and keep pace with this rapid scaling. For instance, DSPM offers automated data tags that help businesses better understand the deeper context behind their most valuable assets — regardless of location within the data estate. It leverages integrations with other security tools (DLP, CNAPP, etc.) to collect this valuable data context, allowing teams to confidently remediate the security issues that matter most to the business.

What are the Benefits of DSPM?

DSPM empowers all security stakeholders to monitor data flow, access, and security status, preventing risks associated with data duplication or movement in various cloud environments. It simplifies robust data protection, making it a vital asset for modern cloud-based data management.

Now, you might be wondering, why do we need another acronym? 

Let's explore the top five benefits of implementing DSPM:

1) Sharpen Visibility When Identifying Data Risk

DSPM enables you to continuously analyze your security posture and automate risk assessment across your entire landscape. It can detect data concerns across all cloud-native and unmanaged databases, data warehouses, data lakes, data pipelines, and metadata catalogs. By automatically discovering and classifying sensitive data, DSPM helps teams prioritize actions based on each asset’s sensitivity and relationship to policy guidelines.

Automating the data discovery and classification process takes a fraction of the time and effort it would take to manually catalog all sensitive data. It’s also far more accurate, especially when using a DSPM solution that leverages LLMs to classify data with more granularity and rich metadata. In addition, it ensures that you stay up-to-date with the frequent changes in your modern data landscape.

2) Strengthen Adherence with Security & Compliance Requirements 

DSPM can also automate the process of identifying regulatory violations and ensuring adherence to custom and pre-built policies (including policies that map to common compliance frameworks). By contrast, manually implementing policies is prone to errors and inaccuracies. It’s common for teams to misconfigure policies that either overalert and inhibit daily work or miss significant user activities and changes to access permissions.

Instead, DSPM offers policies that travel with your data and automatically reveal compliance gaps. It ensures that sensitive data stays within the correct environments and doesn’t travel to regions in violation of retention policies or without data encryption.

3) Improve Data Access Governance

Many DSPM solutions also offer data access governance (DAG). This functionality enforces the appropriate access permissions for all user identities, third parties, and applications within your organization. DAG automatically ensures that the proper controls follow your data, mitigating risks such as excessive permissions, unauthorized access, inactive or unused identities and API keys, and improper provisioning/deprovisioning for services and users.

By using DSPM to govern data access, teams can successfully achieve the least privilege within an ever-changing and growing data ecosystem. 


4) Minimize your Data Attack Surface

DSPM also enables teams to detect unmanaged sensitive data, including mislocated, shadow, or duplicate assets. Its powerful data detection capabilities ensure that sensitive data, such as historical assets stored within legacy apps, development test data, or information within shadow IT apps, doesn’t go unnoticed in a lower environment. By automatically finding and classifying these unknown assets, DSPM minimizes your data attack surface, controls data sprawl, and better protects your most valuable assets from breaches and leaks.


5) Protect Data Used by LLMs

DSPM also extends to LLM applications, enabling you to maintain a strong risk posture as your team adopts new technologies. It considers LLMs as part of the data attack surface, applying the same DAG and data discovery/classification capabilities to any training data leveraged within these applications. 

By including LLMs in your overarching data security approach, DSPM alleviates any GenAI data privacy concerns and sets up your organization for future success as these technologies continue to evolve.

Enhance Your DSPM Strategy with Sentra

Sentra offers an AI-powered DSPM platform that moves at the speed of data, enabling you to strengthen your data risk posture across your entire hybrid ecosystem. Our platform can identify and mitigate data risks and threats with deep context, map identities to permissions, prevent exfiltration with a modern DLP, and maintain a rich data catalog with details on both known and unknown data. 

In addition, our platform runs autonomously and requires only minimal administrative support. It also adds a layer of security by discovering and intelligently categorizing all data without removing it from your environment. 

Conclusion

DSPM is quickly becoming an essential tool for modern cloud enterprises, offering comprehensive benefits to the complex challenges of data protection. By focusing on discovering and monitoring sensitive information, DSPM helps organizations mitigate risks and maintain compliance across various environments, including cloud and on-premises systems.

The rise of DSPM in the past few years highlights its importance in enhancing security. It allows security teams to monitor data flow, access, and status, effectively preventing data duplication or movement risks. With advanced threat detection, improved compliance and governance, detailed access control, rapid incident response, and seamless integration with cloud services, DSPM provides significant benefits and advantages over other data security solutions. Implementing DSPM is a strategic move for organizations aiming to fortify their data protection strategies in today's digital landscape.

Meni Besso
September 16, 2024
4 Min Read
Compliance

GDPR Compliance Failures Lead to Surge in Fines

In recent years, the landscape of data privacy and protection has become increasingly stringent, with regulators around the world cracking down on companies that fail to comply with local and international standards. 

The latest high-profile case involves Uber, which was recently fined a staggering €290 million ($324 million) by the Dutch Data Protection Authority (DPA) for violations related to the General Data Protection Regulation (GDPR). This is a wake-up call for multinational companies. 

[Graph: the rise of GDPR fines, 2018-2024]

What is GDPR?

The General Data Protection Regulation (GDPR) is a data protection law that came into effect in the EU in May 2018. Its goal is to give individuals more control over their personal data and unify data protection rules across the EU.

GDPR gives extra protection to special categories of sensitive data. Both 'controllers' (who decide how data is processed) and 'processors' (who act on their behalf) must comply. Joint controllers may share responsibility when multiple entities manage data.

Who Does the GDPR Apply To?

GDPR applies to both EU-based and non-EU organizations that handle the data of EU residents. The regulation requires organizations to get clear consent for data collection and processing, and it gives individuals rights to access, correct, and delete their data. Organizations must also ensure strong data security and report any data breaches promptly.

What Are the Penalties for Non-Compliance with GDPR?

Non-compliance with the General Data Protection Regulation (GDPR) can result in substantial penalties.

Article 83 of the GDPR establishes the fine framework, which includes the following:

  • Maximum fine: Up to 20 million euros, or 4% of the company’s total global turnover from the preceding fiscal year, whichever is higher.
  • Alternative penalty: In certain cases, the fine may be set at 10 million euros or 2% of the annual global revenue, as outlined in Article 83(4).

Additionally, individual EU member states have the authority to impose their own penalties for breaches not specifically addressed by Article 83, as permitted by the GDPR’s flexibility clause.
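
The “whichever is higher” logic is easy to see with a small worked example (the turnover figure is hypothetical):

```python
def max_gdpr_fine(global_turnover_eur: float, severe: bool = True) -> float:
    """Article 83 ceiling: the fixed cap or the turnover percentage,
    whichever is higher."""
    if severe:   # Article 83(5): 20 million euros or 4% of global turnover
        return max(20_000_000, 0.04 * global_turnover_eur)
    else:        # Article 83(4): 10 million euros or 2% of global turnover
        return max(10_000_000, 0.02 * global_turnover_eur)

# Hypothetical company with €2B global turnover:
print(max_gdpr_fine(2_000_000_000))  # 80000000.0 -> 4% dominates the €20M cap
```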

So far, the maximum fine given under GDPR was to Meta in 2023, which was fined $1.3 billion for violating GDPR laws related to data transfers. We’ll delve into the details of that case shortly.

Can Individuals Be Fined for GDPR Breaches?

While fines are typically imposed on organizations, individuals can be fined under certain circumstances. For example, if a person is self-employed and processes personal data as part of their business activities, they could be held responsible for a GDPR breach. However, UK-GDPR and EU-GDPR do not apply to data processing carried out by individuals for personal or household activities. 

According to GDPR Chapter 1, Article 4, “any natural or legal person, public authority, agency, or body” can be held accountable for non-compliance. This means that GDPR regulations do not distinguish significantly between individuals and corporations when it comes to breaches.

Specific scenarios where individuals within organizations may be fined include:

  • Obstructing a GDPR compliance investigation.
  • Providing false information to the ICO or DPA.
  • Destroying or falsifying evidence or information.
  • Obstructing official warrants related to GDPR or privacy laws.
  • Unlawfully obtaining personal data without the data controller's permission.

The Top 3 GDPR Fines and Their Impact

1. Meta - €1.2 Billion ($1.3 Billion), 2023

In May 2023, Meta, the U.S. tech giant, was hit with a staggering $1.3 billion fine by an Irish court for violating GDPR regulations concerning data transfers between the E.U. and the U.S. This massive penalty came after the E.U.-U.S. Privacy Shield Framework, which previously provided legal cover for such transfers, was invalidated in 2020. The court found that the framework failed to offer sufficient protection for EU citizens against government surveillance. This fine now stands as the largest ever under GDPR, surpassing Amazon’s 2021 record.

2. Amazon - €746 million ($781 million), 2021

That record previously belonged to Amazon. In 2021, Amazon Europe received the second-largest GDPR fine to date from Luxembourg’s National Commission for Data Protection (CNPD). The fine was imposed after it was determined that the online retailer was storing advertisement cookies without obtaining proper consent from its users.

3. Instagram - €405 million ($427 million), 2022

The Irish Data Protection Commission (DPC) fined Instagram for violating children’s privacy online in September 2022. The violations included the public exposure of kids' phone numbers and email addresses. The DPC found that Instagram’s user registration system could default child users' accounts to "public" instead of "private," contradicting GDPR’s privacy by design principles and the regulations aimed at safeguarding children's information.

Uber currently ranks at number 6 with the latest €290 million fine it received from the Dutch Data Protection Authority (DPA) for GDPR-related violations.

Uber’s GDPR Violation

The Dutch DPA accused Uber of transferring sensitive data of European drivers to the United States without implementing appropriate safeguards. This included personal information such as account details, location data, payment information, and even sensitive documents like taxi licenses, criminal records, and medical data. The failure to protect this data adequately, especially after the invalidation of the E.U.-U.S. Privacy Shield in 2020, constituted a serious violation of GDPR.

Despite Uber's claim that its cross-border data transfer process was compliant with GDPR, the DPA's decision to impose the record fine underscores the growing importance of adhering to stringent data protection regulations. Uber has since ceased the practice, but the financial and reputational damage is already done.

The Implications for Global Companies

The growing frequency of such fines sends a clear message to global companies: compliance with data protection regulations is non-negotiable. As European regulators continue to enforce GDPR rigorously, companies that fail to implement adequate data protection measures risk facing severe financial penalties and reputational harm.

In the case of Uber, the company’s failure to use appropriate mechanisms for data transfers, such as Standard Contractual Clauses, led to significant repercussions. This situation emphasizes the importance of staying current with regulatory changes, such as the introduction of the E.U.-U.S. Data Privacy Framework, and ensuring that all data transfer practices are fully compliant.

How Sentra Helps Orgs Stay Compliant with GDPR

Sentra helps organizations maintain GDPR compliance by effectively tagging data belonging to European citizens.

When EU citizens' Personally Identifiable Information (PII) is moved or stored outside of EU data centers, Sentra will detect and alert you in near real-time. Our continuous monitoring and scanning capabilities ensure that any data violations are identified and flagged promptly.

[Image: example of EU citizens’ PII stored outside of EU data centers]

Unlike traditional methods where data replication can obscure visibility and lead to issues during audits, Sentra provides ongoing visibility into data storage. This proactive approach significantly reduces the risk by alerting you to potential compliance issues as they arise.
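
Conceptually, the residency check can be pictured as the sketch below: a simplified illustration (not Sentra’s implementation) that walks a hypothetical asset inventory and flags EU-classified PII stored outside EU regions.

```python
EU_REGIONS = {"eu-west-1", "eu-central-1", "europe-west4"}  # illustrative

# Hypothetical inventory entries, as discovery/classification might tag them.
assets = [
    {"path": "s3://prod-eu/users.parquet", "classes": ["eu_pii"], "region": "eu-west-1"},
    {"path": "s3://analytics/backup.csv",  "classes": ["eu_pii"], "region": "us-east-1"},
    {"path": "s3://logs/app.log",          "classes": ["internal"], "region": "us-east-1"},
]

def residency_violations(inventory: list[dict]) -> list[dict]:
    """EU-classified PII must stay in EU regions."""
    return [a for a in inventory
            if "eu_pii" in a["classes"] and a["region"] not in EU_REGIONS]

for violation in residency_violations(assets):
    print(f"ALERT: {violation['path']} holds EU PII in {violation['region']}")
```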

Sentra automatically classifies localized data - in this case, EU data. Below is an example of how we do this. 

[Image: Sentra’s automatic classification of localized data]

The Rise of Compliance Violations: A Wake-up Call

The increasing number of compliance violations and the related hefty fines should serve as a wake-up call for companies worldwide. As the regulatory environment becomes more complex, it is crucial for organizations to prioritize data protection and privacy. By doing so, they can avoid costly penalties and maintain the trust of their customers and stakeholders.

Solutions such as Sentra provide a cost-effective means to ensure sensitive data always has the right posture and security controls - no matter where the data travels - and can alert on exceptions that require rapid remediation. In this way, organizations can remain regulatory compliant, avoid the steep penalties for violations, and ensure the proper, secure use of data throughout their ecosystem. 

Yair Cohen
September 10, 2024
4 Min Read
Data Security

How Does DSPM Safeguard Your Data When You Have CSPM/CNAPP?

After debuting in Gartner’s 2022 Hype Cycle, Data Security Posture Management (DSPM) has quickly become a transformative category and hot security topic. DSPM solutions are popping up everywhere, both as dedicated offerings and as add-on modules to established cloud native application protection platforms (CNAPP) or cloud security posture management (CSPM) platforms.

But which option is better: adding a DSPM module to one of your existing solutions or implementing a new DSPM-focused platform? On the surface, activating a module within a CNAPP/CSPM solution that your team already uses might seem logical. But the real question is whether you can reap all of the benefits of a DSPM through an add-on module. While some CNAPP platforms offer a DSPM module, these add-ons lack a fully data-centric approach, which is required to make DSPM technology effective for a modern-day business with a sprawling data ecosystem. Let’s explore this further.

How are CNAPP/CSPM and DSPM Different?

While CNAPP/CSPM and DSPM seem similar and can be complementary in many ways, they are distinctly different in a few important ways. DSPMs are all about the data — protecting it no matter where it travels. CNAPP/CSPMs focus on detecting attack paths through cloud infrastructure. So naturally, they tie specifically to the infrastructure and lack the agnostic approach of DSPM to securing the underlying data.

Because a DSPM focuses on data posture, it applies to additional use cases that CNAPP/CSPM typically doesn’t cover. This includes data privacy and data protection regulations such as GDPR, PCI-DSS, etc., as well as data breach detection based on real-time monitoring for risky data access activity. Lastly, data at rest (such as abandoned shadow data) would not necessarily be protected by CNAPP/CSPM since, by definition, it’s unknown and not an active attack path.

What is a Data-Centric Approach?

A data-centric approach is the foundation of a data security strategy: it prioritizes the secure management, processing, and storage of data, ensuring that data integrity, accessibility, and privacy are maintained across all stages of its lifecycle. 

Standalone DSPM takes a data-centric approach. It starts with the data, using contextual information such as data location, sensitivity, and business use cases to better control and secure it. These solutions offer preventative measures, such as discovering shadow data, preventing data sprawl, and reducing the data attack surface.

Data detection and response (DDR), often offered within a DSPM platform, provides reactive measures, enabling organizations to monitor their sensitive assets and detect and prevent data exfiltration. Because standalone DSPM solutions are data-centric, many are designed to follow data across a hybrid ecosystem, including public cloud, private cloud, and on-premises environments. This is ideal for the complex environments that many organizations maintain today.

What is an Infrastructure-Centric Approach?

An infrastructure-centric solution is focused on optimizing and protecting the underlying hardware, networks, and systems that support applications and services, ensuring performance, scalability, and reliability at the infrastructure level.

Both CNAPP and CSPM use infrastructure-centric approaches. Their capabilities focus on identifying vulnerabilities and misconfigurations in cloud infrastructure, as well as some basic compliance violations. CNAPP and CSPM can also identify attack paths and use several factors to prioritize which ones your team should remediate first. While both solutions can enforce policies, they can only offer security guardrails that protect static infrastructure. In addition, most CNAPP and CSPM solutions only work with public cloud environments, meaning they cannot secure private cloud or on-premises environments.

How Does a DSPM Add-On Module for CNAPP/CSPM Work?

Typically, when you add a DSPM module to CNAPP/CSPM, it can only work within the parameters set by its infrastructure-centric base solution. In other words, a DSPM add-on to a CNAPP/CSPM solution will also be infrastructure-centric. It’s like adding chocolate chips to vanilla ice cream; while they will change the flavor a bit, they can’t transform the constitution of your dessert into chocolate ice cream. 

A DSPM module in a CNAPP or CSPM solution generally has one purpose: helping your team better triage infrastructure security issues. Its sole functionality is to look at the attack paths that threaten your public cloud infrastructure, then flag which of these would most likely lead to sensitive data being breached. 

However, this functionality comes with a few caveats. While CSPM and CNAPP have some data discovery capabilities, they use very basic classification functions, such as pattern-matching techniques. This approach lacks context and granularity and requires validation by your security team. 

In addition, the DSPM add-on can only perform this data discovery within infrastructure already being monitored by the CNAPP/CSPM solution. So, it can only discover sensitive data within known public cloud environments. It may miss shadow data that has been copied to local stores or personal machines, leaving risky exposure gaps.

Why Infrastructure-Centric Solutions Aren’t Enough

So, what happens when you only use infrastructure-centric solutions in a modern cloud ecosystem? While these solutions offer powerful functionality for defending your public cloud perimeter and minimizing misconfigurations, they miss essential pieces of your data estate. Here are a few types of sensitive assets that often slip through the cracks of an infrastructure-centric approach: 

  • Shadow data that has been copied to local stores or personal machines, outside known public cloud environments
  • Sensitive data residing in private cloud or on-premises environments that CNAPP/CSPM tools do not monitor
  • Abandoned data at rest that, by definition, is unknown and not part of an active attack path

In addition, DSPM modules within CNAPP/CSPM platforms lack the context to properly classify sensitive data beyond easily identifiable examples, such as social security or credit card numbers. But the data stores at today’s businesses often contain more nuanced personal or product/service-specific identifiers that could pose a risk if exposed. Examples include a serial number for a product that a specific individual owns or a medical ID number as part of an EHR. Some sensitive assets might even be made up of “toxic combinations,” in which the sensitivity of seemingly innocuous data classes increases when combined with specific identifiers. For example, a random 9-digit number alongside a headshot photo and expiration date is likely a sensitive passport number.
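
A rule for the passport example might look like the following sketch, where individually weak signals escalate an asset’s sensitivity only when they co-occur; the class names and the rule itself are illustrative assumptions.

```python
# Illustrative rule: these classes are innocuous alone, sensitive together.
TOXIC_COMBINATIONS = [
    ({"nine_digit_number", "headshot_photo", "expiration_date"},
     "likely_passport_number"),
]

def escalate(detected_classes: set[str]) -> list[str]:
    """Return higher-sensitivity labels implied by co-occurring classes."""
    return [label for combo, label in TOXIC_COMBINATIONS
            if combo <= detected_classes]  # subset check

doc_classes = {"nine_digit_number", "headshot_photo", "expiration_date", "name"}
print(escalate(doc_classes))  # ['likely_passport_number']
```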

Ultimately, DSPM built into a CSPM or CNAPP solution only sees an incomplete picture of risk. This can leave any number of sensitive assets unknown and unprotected in your cloud and on-prem environments.

Dedicated DSPM Completes the Data Security Picture

A dedicated, best-of-breed DSPM solution like Sentra, on the other hand, offers rich, contextual information about all of your sensitive data — no matter where it resides, how your business uses it, or how nuanced it is. 

Rather than just defending the perimeters of known public cloud infrastructure, Sentra finds and follows your sensitive data wherever it goes. Here are a few of Sentra’s unique capabilities that complete your picture of data security:

  • Comprehensive, security-focused data catalog of all sensitive data assets across the entire data estate (IaaS, PaaS, SaaS, and On-Premises)
  • Ability to detect unmanaged, mislocated, or abandoned data, enabling your team to reduce your data attack surface, control data sprawl, and remediate security/privacy policy violations
  • Movement detection to surface out-of-policy data transformations that violate residency and security policies or that inadvertently create exposures
  • Nuanced discovery and classification, such as row/column/table analysis capabilities that can uncover uncommon personal identifiers, toxic combinations, etc.
  • Rich context for understanding the business purpose of data to better discern its level of sensitivity
  • Lower false positive rates due to deeper analysis of the context surrounding each sensitive data store and asset
  • Automation for remediating a variety of data posture, compliance, and security issues

All of this complex analysis requires a holistic, data-centric view of your data estate — something that only a standalone DSPM solution can offer. And when deployed together with a CNAPP or CSPM solution, a standalone DSPM platform can bring unmatched depth and context to your cloud data security program. It also provides unparalleled insight to facilitate prioritization of issue resolution.

To learn more about Sentra’s approach to data security posture management, read about how we use LLMs to classify structured and unstructured sensitive data at scale.

Read More
Yoav Regev
Yoav Regev
August 28, 2024
3
Min Read
Data Security

Sentra’s 3-Year Journey: From DSPM to Data Security Platform

Sentra’s 3-Year Journey: From DSPM to Data Security Platform

If you had searched for "DSPM" on Google three years ago, you likely would have only found information related to a dspm manufacturing website… But in just a few short years, the concept of Data Security Posture Management (DSPM) has evolved from an idea into a critical component of modern cybersecurity for enterprises.

Let’s rewind to the summer of 2021. Back then, when we were developing what would become Sentra and our DSPM solution, the term didn’t even exist. All that existed was the problem - data was being created, moved and duplicated in the cloud, and its security posture wasn’t keeping pace. Organizations didn’t know where all of their data was, and even if they could find it, its level of protection was inadequate for its level of sensitivity.

After extensive discussions with CISOs and security experts, we recognized a critical gap between data security practices and modern cloud environments (a gap further exacerbated by the fast pace of AI). Addressing this gap wasn’t just important—it was essential. Through these conversations, we identified the need for a new approach, leading to the creation of the DSPM concept.

It was thrilling to hear my Co-Founder and VP Product, Yair Cohen, declare for the first time, “the world’s first DSPM is coming in 2021.” We embraced the term "Data Security Posture Management," now widely known as "DSPM."

Why DSPM Has Become an Essential Tool

Today, DSPM has become mainstream, helping organizations safeguard their most valuable asset: their data.

"Three years ago, when we founded Sentra, we dreamed of creating a new category called DSPM. It was a huge bet to pursue new budgets, but we believed that data security would be the next big thing due to the shift to the cloud. We could never have imagined that it would become the world’s hottest security category and that the potential would be so significant."

-Ron Reiter, Co-Founder and CTO, Sentra

This summer, Gartner released its 2024 Hype Cycle for Data Security, and DSPM is in the spotlight for good reason. Gartner describes DSPM as having "transformative" potential, particularly for addressing long-standing data security challenges.

As companies rapidly move to the cloud, DSPM solutions are gaining traction by filling critical visibility gaps. The best DSPM solutions offer coverage across multi-cloud and on-premises environments, creating a unified approach to data security.

DSPM plays a pivotal role in the modern cybersecurity landscape by providing organizations with real-time visibility into their data security posture. It helps identify, prioritize and mitigate risks across the entire data estate. By continuously monitoring data movement and access patterns, DSPM ensures that any policy violations or deviations from normal behavior are quickly flagged and addressed, preventing potential breaches before they can cause damage.

DSPM is also critical in maintaining compliance with data protection regulations. As organizations handle increasingly complex data environments, meeting regulatory requirements becomes more challenging. DSPM simplifies this process by automating compliance checks and providing clear insights into where sensitive data resides, how it’s being used, and who has access to it. This not only helps organizations avoid hefty fines but also builds trust with customers and stakeholders by demonstrating a commitment to data security and privacy.

In a world where data privacy and security threats rank among the biggest challenges facing society, DSPM provides a crucial layer of protection. Businesses, individuals, and governments are all at risk, with sensitive information constantly under threat. 

That’s why we are committed to developing our data security platform, which ensures your data remains secure and intact, no matter where it travels.

From DSPM to Data Security Platform in the AI Age

We began with a clear understanding of the critical need for Data Security Posture Management (DSPM) to address data proliferation risks in the evolving cloud landscape. As a leading data security platform, Sentra has expanded its capabilities based on our customers’ needs to include Data Access Governance (DAG), Data Detection and Response (DDR), and other essential tools to better manage data access, detect emerging threats, and assist organizations in their journey to implement Data Loss Prevention (DLP). We now do this across all environments (IaaS, PaaS, SaaS, and On-Premises).

We continue to evolve. In a world rapidly changing with advancements in AI, our platform offers the most comprehensive and effective data security solutions to keep pace with the demands of the AI age. As AI reshapes the digital landscape, it also creates new vulnerabilities, such as the risk of data exposure through AI training processes. Our platform addresses these AI-specific challenges, while continuing to tackle the persistent security issues from the cloud era, providing an integrated solution that ensures data security remains resilient and adaptive.

DSPMs facilitate swift AI development and smooth business operations by automatically securing LLM training data. Integrations with platforms like AWS SageMaker and GCP Vertex AI, combined with features such as DAG and DDR, ensure robust data security and privacy. This approach supports responsible AI applications while reducing risks such as breaches and bias.
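
To illustrate what securing LLM training data can look like in practice, here is a hedged sketch of a pre-training gate that fails a pipeline step when candidate training files contain sensitive patterns. The patterns and file layout are assumptions; a real DSPM integration would call a classification service rather than a couple of regexes:

```python
import re
import sys
from pathlib import Path

# Hypothetical sensitive patterns; a real gate would call a DSPM
# classification API instead of inlining regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_file(path: Path) -> set[str]:
    """Return the sensitive classes found in one file."""
    text = path.read_text(errors="ignore")
    return {label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(text)}

def gate_training_data(data_dir: str) -> None:
    """Block the pipeline if any training file contains sensitive data."""
    findings = {
        str(p): hits
        for p in Path(data_dir).rglob("*.txt")
        if (hits := scan_file(p))
    }
    if findings:
        for path, hits in findings.items():
            print(f"BLOCKED {path}: contains {', '.join(sorted(hits))}")
        sys.exit(1)  # fail this CI step so the training job never starts
    print("Training data clean; proceeding to submit the training job.")

gate_training_data("./training_data")
```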

So, Sentra is no longer only a DSPM solution; it’s a data security platform. Today, we provide holistic solutions that allow you to locate any piece of data and access all the information you need. Our mission is to continuously build and enhance the best data security platform, empowering organizations to move faster and succeed in today’s digital world.

Success Driven by Our Amazing People

We’re proud that Sentra has emerged as a leader in the data security industry, making a significant impact on how organizations protect their data. 

Our success is driven by our incredible team; their hard work, dedication, and energy are the foundation of everything we do. From day one, our people have always been our top priority. It's inspiring to see our team work tirelessly to transform the world of data security and build the best solution out there. This team of champions never stops innovating, inspiring, and striving to be the best version of themselves every day.

Their passion is evident in their work, as shown in recent projects that they initiated, from the new video series, “Answering the Most Searched DSPM Questions”, to a behind the scenes walkthrough of our data security platform, and more.

We’re excited to continue to push the boundaries of what’s possible in data security.

A heartfelt thank you to our incredible team, loyal customers, supportive investors, and dedicated partners. We’re excited to keep driving innovation in data security and to continue our mission of making the digital world a safer place for everyone.

Read More
Daniel Suissa
Daniel Suissa
August 26, 2024
3
Min Read
Data Security

Overcoming Gartner’s Obstacles for DSPM Mass Adoption

Overcoming Gartner’s Obstacles for DSPM Mass Adoption

Gartner recently released its much-anticipated 2024 Hype Cycle for Data Security, and the spotlight is shining bright on Data Security Posture Management (DSPM). Described as having a "transformative" potential, DSPM is lauded for its ability to address long-standing data security challenges. 

DSPM solutions are gaining traction to fill visibility gaps as companies rush to the cloud. Best-of-breed solutions provide coverage across multi-cloud and on-premises environments, offering a holistic approach that can become the authoritative inventory of data for an organization - and a useful, up-to-date source of contextual detail that informs other security stack tools, such as DLPs, CSPMs/CNAPPs, data catalogs, and more, enabling them to work more effectively. Learn more about this in our latest blog, Data: The Unifying Force Behind Disparate GRC Functions.

However, as with any emerging technology, Gartner also highlighted several obstacles that could hinder its widespread adoption. In this blog, we’ll dive into these obstacles, separating the legitimate concerns from those that shouldn't deter any organization from embracing DSPM—especially when using a comprehensive solution like Sentra.

Obstacle 1: Scanning the Entire Infrastructure for Data Can Take Days to Complete

This concern holds some truth, particularly for organizations managing petabytes of data. Full infrastructure scans can indeed take time. However, this doesn’t mean you're left twiddling your thumbs waiting for results. With Sentra, insights start flowing while the scan is still in progress. Our platform is designed to alert you to data vulnerabilities as they’re detected, ensuring you're never in the dark for long. So, while the scan might take days to finish, actionable insights are available much sooner. And scans for changes occur continuously so you’re always up to date.

Obstacle 2: Limited Integration with Security Controls for Remediation

Gartner pointed out that DSPM tools often integrate with a limited set of security controls, potentially complicating remediation efforts. While it’s true that each security solution prioritizes certain integrations, this is not a challenge unique to DSPM. Sentra, for instance, offers dozens of built-in integrations with popular ticketing systems and data remediation tools. Moreover, Sentra enables automated actions like auto-masking and revoking unauthorized access via platforms like Okta, seamlessly fitting into your existing workflow processes and enhancing your cloud security posture.
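
To illustrate the kind of automated action described above, the sketch below suspends a user in Okta when an alert reports unauthorized access to sensitive data. The suspend endpoint follows Okta’s documented user lifecycle API, but the alert payload is a hypothetical stand-in for a real integration:

```python
import os
import requests

OKTA_DOMAIN = os.environ["OKTA_DOMAIN"]    # e.g., "example.okta.com"
OKTA_TOKEN = os.environ["OKTA_API_TOKEN"]  # API token with user admin rights

def suspend_user(user_id: str) -> None:
    """Suspend a user via Okta's user lifecycle API."""
    resp = requests.post(
        f"https://{OKTA_DOMAIN}/api/v1/users/{user_id}/lifecycle/suspend",
        headers={"Authorization": f"SSWS {OKTA_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

def handle_alert(alert: dict) -> None:
    # Hypothetical alert shape:
    # {"type": "unauthorized_access", "user_id": "...", "asset": "..."}
    if alert["type"] == "unauthorized_access":
        suspend_user(alert["user_id"])
        print(f"Suspended {alert['user_id']} pending review of {alert['asset']}")

handle_alert({
    "type": "unauthorized_access",
    "user_id": "00u1abcd",
    "asset": "s3://prod-customer-data",
})
```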

Obstacle 3: DSPM as a Function within Broader Data Security Suites

Another obstacle Gartner identified is that DSPM is sometimes offered merely as a function within a broader suite of data security offerings, which may not integrate well with other vendor products. This is a valid concern. Many cloud security platforms are introducing DSPM modules, but these often lack the discovery breadth and classification granularity needed for robust and accurate data security.

Sentra takes a different approach by going beyond surface-level vulnerabilities. Our platform uses advanced automatic grouping to create "Data Assets"—groups of files with similar structures, security postures, and business functions. This allows Sentra to reduce petabytes of cloud data into manageable data assets, fully scanning all data types daily without relying on random sampling. This level of detail and continuous monitoring is something many other solutions simply cannot match.
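
For intuition, files that share the same structure can be treated as one data asset, so a single representative scan covers the whole group. The sketch below groups CSV files by header signature; it is a toy version of the idea, since real grouping also weighs security posture and business function:

```python
import csv
from collections import defaultdict
from pathlib import Path

def schema_signature(path: Path) -> tuple[str, ...]:
    """Use the normalized CSV header as a structural fingerprint."""
    with path.open(newline="") as f:
        header = next(csv.reader(f), [])
    return tuple(col.strip().lower() for col in header)

def group_into_assets(data_dir: str) -> dict[tuple[str, ...], list[Path]]:
    """Group files sharing a schema into one logical 'data asset'."""
    assets: dict[tuple[str, ...], list[Path]] = defaultdict(list)
    for path in Path(data_dir).rglob("*.csv"):
        assets[schema_signature(path)].append(path)
    return dict(assets)

for signature, files in group_into_assets("./datalake").items():
    print(f"Asset with schema {signature}: {len(files)} file(s)")
```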

Obstacle 4: Inconsistent Product Capabilities Across Environments

Gartner also highlighted the varying capabilities of DSPM solutions, especially when it comes to mapping user access privileges and tracking data across different environments—on-premises, cloud services, and endpoints. While it’s true that DSPM solutions can differ in their abilities, the key is to choose a platform designed for multi-cloud and hybrid environments. Sentra is built precisely for this purpose, offering robust capabilities to identify and protect data across diverse environments (IaaS, PaaS, SaaS, and On-premises), ensuring consistent security and risk management no matter where your data resides.

Conclusion

While Gartner's 2024 Hype Cycle for Data Security outlines several obstacles to DSPM adoption, many of these challenges are either surmountable or less significant than they might first appear. With the right DSPM solution, organizations can effectively overcome these obstacles and harness the full transformative power of DSPM.

Curious about how Sentra can elevate your data security? 

Request a demo here.

Read More
David Stuart
David Stuart
August 22, 2024
3
Min Read
Data Security

Data: The Unifying Force Behind Disparate GRC Functions

Data: The Unifying Force Behind Disparate GRC Functions

In the ever-evolving world of cybersecurity, a common thread weaves its way through the seemingly disconnected disciplines of data security, data privacy, and compliance: data. This critical element forms the cornerstone of each function, yet existing solutions often fall short in fostering a holistic approach to data governance and security.

This blog delves into the importance of data as the unifying force behind disparate GRC (Governance, Risk & Compliance) functions. We'll explore how a data-centric approach can overcome the limitations of traditional solutions, paving the way for a more efficient and secure future.

The Expanding Reach of DSPM: Evidence from the Hype Cycle

Gartner's Hype Cycles serve as an insightful snapshot of emerging trends within the cybersecurity landscape. Both the "2024 Hype Cycle for Data Security" and the "2024 Gartner Hype Cycle for Cyber-Risk Management" highlight Data Security Posture Management (DSPM) as a key area of focus. This analyst perspective signifies a significant shift, recognizing DSPM as a discipline, not merely a set of features within existing security solutions. It's a recognition that data security is fundamental to achieving all GRC objectives.

Traditionally, data security has been the domain of security teams and Chief Information Security Officers (CISOs). Data privacy, on the other hand, resides with Chief Data Privacy Officers (CDPOs). Compliance, a separate domain altogether, falls under the responsibility of Chief Compliance Officers (CCOs). This siloed approach often leads to a disjointed view of data security and privacy, creating vulnerabilities and inefficiencies.

Data: The Universal Element

Data, however, transcends these functional boundaries. It's the universal element that binds security, privacy, and compliance together. Regardless of its form – financial records, customer information, intellectual property – securing data forms the foundation of a strong security posture. 

Identity, too, plays a crucial role in data security. Understanding user access and behavior is critical for data security and compliance. An effective data security solution will require deep integration with identity management to ensure proper access controls and policy enforcement.

Imagine a Venn diagram formed by the three disciplines: Data Security (CISO), Data Privacy (CDPO), and Compliance (CCO). At the center, where all three circles intersect, lies the critical element – Data. Each function operates within its own domain yet shares ownership of data at its core.

While these functions may seem distinct, the underlying element—data—connects them all. Data is the common thread woven throughout every GRC activity. It's the lifeblood of any organization, and its security and privacy are paramount. We can't talk about securing data without considering privacy, and compliance often hinges on controls that safeguard sensitive data.

For a truly comprehensive approach, organizations need a standardized method for classifying data based on its sensitivity. This common ground allows each GRC function to view and manage data through a shared lens. A unified data discovery and classification layer increases the chances for collaboration among functions; DSPM provides exactly this layer.

Existing Solutions Fall Short in a Dynamic Landscape

Traditional GRC solutions often fall short due to their myopic nature. They cater primarily to a single function – data security, data privacy, or compliance – leaving a fragmented landscape.

These solutions also struggle to keep pace with the dynamic nature of data. Data volumes are constantly growing, changing formats, and moving across diverse platforms. Mapping such a dynamic resource can be a nightmare with traditional approaches. Here at Sentra, we've explored this challenge in detail in a previous blog, Understanding Data Movement to Avert Proliferation Risks.

A New Approach: Cloud-Native DSPM for Agility and Scalability

The future of GRC demands a new approach, one that leverages the unifying force of data. Enter cloud-native Data Security Posture Management (DSPM) solutions, specifically designed for scalability and agility. This new breed of platforms offers several key advantages:

  • Comprehensive Data Discovery: The platform actively identifies all data across your organization, regardless of location or format. This holistic view provides a solid foundation for understanding and managing your data security posture.
  • Consistent Data Classification: With a central platform, data classification becomes a unified process. Sensitive data can be identified and flagged consistently across various functions, ensuring consistent handling.
  • Pre-built Integrations: Streamline your workflows with seamless integrations to existing tools across your organization, such as data catalogs, Incident Response (IR) platforms, IT Service Management (ITSM) systems, and compliance management solutions.

Towards a Unified Data Governance and Security Platform

The need for best-of-breed DSPM solutions like Sentra will remain strong to meet the ever-expanding requirements of data security and privacy. However, a future where GRC functionalities are more closely integrated is also emerging.

We're already witnessing a shift in our own customer base, where initial deployments for one specific use case have evolved into broader platform adoption for multiple use cases. Organizations are beginning to recognize the value of a unified platform for data governance and security.

Imagine a future where data officers, application owners, developers, compliance officers, and security teams all utilize a common data governance and security platform. This platform would be built on a foundation of consistent data sensitivity definitions, promoting a shared understanding of data security risks and responsibilities across the entire organization.

This interconnected future is closer than you might think. By embracing the unifying power of data and leveraging cloud-native DSPM solutions, organizations can achieve a more holistic and unified approach to GRC. With data at the center, everyone wins: security, privacy, and compliance all benefit from a more collaborative and data-driven approach.

At Sentra, we believe the inclusion of DSPM in multiple hype cycles signifies the increasing importance of these solutions for security teams worldwide. As DSPM solutions become more integrated into cybersecurity strategies, their impact on enhancing overall security posture is becoming increasingly evident.

Curious about how Sentra can elevate your data security? 

Talk to our data security experts and request a demo today.

Read More
Roy Levine
Roy Levine
August 12, 2024
3
Min Read
Data Security

How Contextual Data Classification Complements Your Existing DLP

How Contextual Data Classification Complements Your Existing DLP

Using data loss prevention (DLP) technology is a given for many organizations. Because these solutions have historically been the best way to prevent data exposure, many organizations already have DLP solutions deeply entrenched within their infrastructure and security systems to assist with data discovery and classification.

However, as we discussed in a previous blog post, traditional DLP often struggles to keep up with disparate cloud environments and the sheer volume of data that comes with them. As a result, many teams experience false alarms and alert fatigue — not to mention extensive manual tuning — as they try to get their DLP solutions to work with their cloud-based or hybrid data ecosystems. However, simply ripping out and replacing these solutions isn’t an option for most organizations, as they are costly and play such a significant role in security programs. 

Many organizations need a complementary solution instead of a replacement for their DLP — something that will improve the effectiveness and accuracy of their existing data discovery and “border control” security technologies. Contextual data classification can play this role: its cloud-aware functionality discovers all data, identifies which data is at risk, and gauges the actions cloud users take, differentiating routine activities from anomalies that could indicate genuine threats. This intelligence can then be used to harden the policies and controls governing data movement.

Why Cloud Data Security Requires More than DLP

While traditional data loss prevention (DLP) technology plays an integral role in many businesses’ data security approaches, it can start to falter when used within a cloud environment. Why? DLP uses pre-defined patterns to detect suspicious activity. Often, this doesn’t work in the context of regular cloud activities. Here are the two main ways that DLP conflicts with the cloud:

Perimeter-Based Security Controls

DLP was originally created for on-premises environments with a clearly defensible perimeter. A DLP solution can only see general patterns, such as a file getting sent, shared, or copied, and cannot capture nuanced information beyond this. So, a DLP solution often flags routine activities (e.g., sharing data with third-party applications) as suspicious in the data discovery process. When the DLP blocks these everyday actions, it impedes business velocity and alerts the security team needlessly.

In modern cloud-first organizations, data needs to move freely to and from the cloud in order to meet dynamic business demands. DLP is often too restrictive (or, conversely, too permissive) since it lacks a fundamental understanding of the data’s sensitivity and only sees data when it moves. As a result, it misses the opportunity to protect data at rest. If too restrictive, it can disrupt business. If too permissive, it can miss numerous insider, supply chain, or other threats that look like authorized activity to the DLP.

Limited Classification Engines

The classification engines built into traditional DLPs are limited to common data types, such as Social Security or credit card numbers. As a result, they can miss nuanced, sensitive data, which is more common in a cloud ecosystem. For example, passport numbers stored alongside the passport holders’ names could pose a risk if exposed, while either the names or numbers on their own are not a risk. Or, DLP solutions could miss intellectual property or trade secrets, a form of data that wasn’t even stored online twenty years ago but is now prevalent in cloud environments. Data unique to the industry or specific business may also be missed if proper classifiers don’t detect it. The ability to tailor classifiers for these proprietary data types is very important, but it is often absent in commercial DLP offerings.

Because of these limitations, many businesses see a gap between traditional DLP solutions' discovery and classification patterns and the realities of a multi-cloud and/or hybrid data estate. Existing DLP solutions ultimately can’t comprehend what’s going on within a cloud environment because they don’t understand the following pieces of information:

  • Where sensitive data exists, whether within structured or unstructured data. 
  • Who uses it and how they use it in an everyday business context. 
  • Which data is likely sensitive because of its origins, neighboring data, or other unique characteristics.

Without this information, the DLP technology will likely flag non-risky actions as suspicious (e.g., blocking services in IaaS/PaaS environments) and overlook legitimate threats (e.g., exfiltration of unstructured sensitive data). 

Improve Data Security with Sentra’s Contextual Data Classification

Adding contextual data classification to your DLP can provide this much-needed context. Sentra’s data security posture management (DSPM) solution offers data classification functionality that can work alongside or feed your existing DLP technology. We leverage LLM-based algorithms to accurately understand the context of where and how data is used, then detect when any sensitive data is misplaced or misused based on this information. Applicable sensitivity tags can be sent via API directly to the DLP solution for actioning. 

When you integrate Sentra into your existing DLP solution, our classification engine will tag and label files, and then add this rich, contextual information as metadata. 

Here are some examples of how our technology complements and extends the abilities of DLP solutions:

  1. Sentra can discover nuanced proprietary, sensitive data and detect new identifiers such as “transaction ID” or “intellectual property.” 
  2. Sentra can use exact data matching to detect whether data was partially copied from production and flag it as sensitive (a simplified fingerprinting sketch follows this list).
  3. Sentra can detect when a given file likely contains business context because of its owner, location, etc. For example, a file taken from the CEO’s Google Drive or from a customer’s data lake can be assumed to be sensitive.  
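
Here is a minimal sketch of the exact data matching idea from item 2: hash known production values, then check whether any of them appear verbatim in another file. Production-grade matching fingerprints at far greater scale, with salted hashes and multi-token sequences:

```python
import hashlib

def fingerprint(values: list[str]) -> set[str]:
    """Hash known production values (real EDM also salts these)."""
    return {hashlib.sha256(v.encode()).hexdigest() for v in values}

def matches(tokens: list[str], prints: set[str]) -> list[str]:
    """Return tokens that exactly match a fingerprinted value."""
    return [t for t in tokens
            if hashlib.sha256(t.encode()).hexdigest() in prints]

production_values = ["4111111111111111", "jane.doe@example.com"]
prints = fingerprint(production_values)

candidate = "order log: card 4111111111111111 charged $20"
hits = matches(candidate.split(), prints)
print(f"{len(hits)} production value(s) found in this file")  # 1
```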

In addition, we offer a simple, agentless deployment and prioritize the security of your data by keeping it all within your environment during scanning.

Watch a one-minute video to learn more about how Sentra discovers and classifies nuanced, sensitive data in a cloud environment.

Read More
Ron Reiter
Ron Reiter
June 26, 2024
3
Min Read
Data Security

AI & Data Privacy: Challenges and Tips for Security Leaders

AI & Data Privacy: Challenges and Tips for Security Leaders

Balancing Trust and Unpredictability in AI

AI systems represent a transformative advancement in technology, promising innovative progress across various industries. Yet, their inherent unpredictability introduces significant concerns, particularly regarding data security and privacy. Developers face substantial challenges in ensuring the integrity and reliability of AI models amidst this unpredictability.

This uncertainty complicates matters for buyers, who rely on trust when investing in AI products. Establishing and maintaining trust in AI necessitates rigorous testing, continuous monitoring, and transparent communication regarding potential risks and limitations. Developers must implement robust safeguards, while buyers benefit from being informed about these measures to mitigate risks effectively.

AI and Data Privacy

Data privacy is a critical component of AI security. As AI systems often rely on vast amounts of personal data to function effectively, ensuring the privacy and security of this data is paramount. Breaches of data privacy can lead to severe consequences, including identity theft, financial loss, and erosion of trust in AI technologies. Developers must implement stringent data protection measures, such as encryption, anonymization, and secure data storage, to safeguard user information.

The Role of Data Privacy Regulations in AI Development

Data privacy regulations are playing an increasingly significant role in the development and deployment of AI technologies. As AI continues to advance globally, regulatory frameworks are being established to ensure the ethical and responsible use of these powerful tools.

  • Europe:

The European Parliament has approved the AI Act, a comprehensive regulatory framework designed to govern AI technologies. This Act is set to be completed by June and will become fully applicable 24 months after its entry into force, with some provisions becoming effective even sooner. The AI Act aims to balance innovation with stringent safeguards to protect privacy and prevent misuse of AI.

  • California:

In the United States, California is at the forefront of AI regulation. A bill concerning AI and its training processes has progressed through legislative stages, having been read for the second time and now ordered for a third reading. This bill represents a proactive approach to regulating AI within the state, reflecting California's leadership in technology and data privacy.

  • Self-Regulation:

In addition to government-led initiatives, there are self-regulation frameworks available for companies that wish to proactively manage their AI operations. The National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) and the ISO/IEC 42001 standard provide guidelines for developing trustworthy AI systems. Companies that adopt these standards not only enhance their operational integrity but also position themselves to better align with future regulatory requirements.

  • NIST Model for a Trustworthy AI System:

The NIST model outlines key principles for developing AI systems that are ethical, accountable, and transparent. This framework emphasizes the importance of ensuring that AI technologies are reliable, secure, and unbiased. By adhering to these guidelines, organizations can build AI systems that earn public trust and comply with emerging regulatory standards.

Understanding and adhering to these regulations and frameworks is crucial for any organization involved in AI development. Not only do they help in safeguarding privacy and promoting ethical practices, but they also prepare organizations to navigate the evolving landscape of AI governance effectively.

How to Build Secure AI Products

Ensuring the integrity of AI products is crucial for protecting users from potential harm caused by errors, biases, or unintended consequences of AI decisions. Safe AI products foster trust among users, which is essential for the widespread adoption and positive impact of AI technologies.

These technologies have an increasing effect on various aspects of our lives, from healthcare and finance to transportation and personal devices, making it such a critical topic to focus on. 

How can developers build secure AI products?

  1. Remove sensitive data from training data (pre-training): Addressing this task is challenging due to the vast amounts of data involved in AI training and the lack of automated methods to detect all types of sensitive data (a simplified redaction sketch follows this list).
  2. Test the model for privacy compliance (pre-production): Like any software, both manual and automated tests are run before production. But how can developers guarantee that sensitive data isn’t exposed during testing? Developers must explore innovative approaches to automate this process and ensure continuous monitoring of privacy compliance throughout the development lifecycle.
  3. Implement proactive monitoring in production: Even with thorough pre-production testing, no model can guarantee complete immunity from privacy violations in real-world scenarios. Continuous monitoring during production is essential to promptly detect and address any unexpected privacy breaches. Leveraging advanced anomaly detection techniques and real-time monitoring systems can help developers identify and mitigate potential risks promptly.
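
For step 1, a deliberately simple redaction pass might look like the sketch below. The patterns are assumptions for the example, and regexes alone miss plenty of sensitive data, which is exactly why automated, context-aware classification matters:

```python
import re

# Illustrative patterns only; real pipelines pair these with ML-based
# classifiers because regexes alone miss most nuanced sensitive data.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),
]

def scrub(text: str) -> str:
    """Replace recognizable PII with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(scrub(sample))  # -> "Contact <EMAIL>, SSN <SSN>."
```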

Secure LLMs Across the Entire Development Pipeline With Sentra

Gain Comprehensive Visibility and Secure Training Data (Sentra’s DSPM)

  • Automatically discover and classify sensitive information within your training datasets.
  • Protect against unauthorized access with robust security measures.
  • Continuously monitor your security posture to identify and remediate vulnerabilities.

Monitor Models in Real Time (Sentra’s DDR)

  • Detect potential leaks of sensitive data by continuously monitoring model activity logs.
  • Proactively identify threats such as data poisoning and model theft.
  • Seamlessly integrate with your existing CI/CD and production systems for effortless deployment.

Finally, Sentra helps you effortlessly comply with industry regulations like NIST AI RMF and ISO/IEC 42001, preparing you for future governance requirements. This comprehensive approach minimizes risks and empowers developers to confidently state:

"This model was thoroughly tested for privacy safety using Sentra," fostering trust in your AI initiatives.

As AI continues to redefine industries, prioritizing data privacy is essential for responsible AI development. Implementing stringent data protection measures, adhering to evolving regulatory frameworks, and maintaining proactive monitoring throughout the AI lifecycle are crucial. 

By prioritizing strong privacy measures from the start, developers not only build trust in AI technologies but also maintain ethical standards essential for long-term use and societal approval.

Read More
Meni Besso
Meni Besso
June 18, 2024
4
Min Read
Compliance

Understanding the FTC Data Breach Reporting Requirements

Understanding the FTC Data Breach Reporting Requirements

More Companies Need to Report Data Breaches

In a significant move towards enhancing data security and transparency, new data breach reporting rules have taken effect for various financial institutions. Since May 13, 2024, non-banking financial institutions, including mortgage brokers, payday lenders, and tax preparation firms, must report data breaches to the Federal Trade Commission (FTC) within 30 days of discovery. This new mandate, part of the FTC's Safeguards Rule, expands the breach notification requirements to a broader range of financial entities not overseen by the Securities and Exchange Commission (SEC). 

Furthermore, by June 15, 2024, smaller reporting companies—those with a public float under $250 million or annual revenues under $100 million—must comply with the SEC’s new cybersecurity incident reporting rules, aligning their disclosure obligations with those of larger corporations. These changes mark a significant step towards enhancing transparency and accountability in data breach reporting across the financial sector.

How Can Financial Institutions Secure Their Data?

Understanding and tracking your sensitive data is fundamental to robust data security practices. The first step in safeguarding data is detecting and classifying what you have. It's far easier to protect data when you know it exists. This allows for appropriate measures such as encryption, controlling access, and monitoring for unauthorized use. By identifying and mapping your data, you can ensure that sensitive information is adequately protected and compliance requirements are met.

Identify Sensitive Data: Data is constantly moving, which makes it a challenge to know exactly what data you have and where it resides. This includes customer information, financial records, intellectual property, and any other data deemed sensitive. Discovering all your data is a crucial first step. This includes ‘shadow’ data that may not be well known or well managed.

Data Mapping: Create and maintain an up-to-date map of your data landscape. This map should show where data is stored, processed, and transmitted, and who has access to it. It helps in quickly identifying which systems and data were affected by a breach, along with the blast radius of the impact (how extensive the damage is).
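
As a toy illustration of how such a map supports blast-radius analysis, the sketch below hand-builds a tiny data map and walks it from a breached system. In practice the map would be discovered and maintained automatically; the system names and structure here are invented for the example:

```python
# A toy data map: which systems hold which sensitive data, and which
# systems can reach them. Real maps are discovered automatically.
DATA_MAP = {
    "crm-db":      {"data": ["customer PII"], "accessible_from": ["web-app", "analytics"]},
    "payments-db": {"data": ["cardholder data"], "accessible_from": ["web-app"]},
    "analytics":   {"data": ["aggregated usage"], "accessible_from": ["bi-tool"]},
}

def blast_radius(breached_system: str) -> dict:
    """Estimate what a breach exposes, directly and via access paths."""
    exposed = set(DATA_MAP.get(breached_system, {}).get("data", []))
    reachable = {
        system for system, info in DATA_MAP.items()
        if breached_system in info["accessible_from"]
    }
    return {"exposed_data": exposed, "systems_at_risk": reachable}

# If the analytics system is breached, its own data is exposed and the
# CRM database (which analytics can access) is also at risk.
print(blast_radius("analytics"))
```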

"Your Data Has Been Breached, Now What?"

When a data breach occurs, the immediate response is critical in mitigating damage and addressing the aftermath effectively. The investigation phase is particularly crucial as it determines the extent of the breach, the type and sensitivity of the data compromised, and the potential impact on the organization.

A key challenge during the investigation phase is understanding where the sensitive data was located at the time of the data breach and why or how existing controls were insufficient. 

Without a proper data classification process or solution in place, it is difficult to ascertain the exact locations of the sensitive data or the applicable security posture at the time of the breach within the short timeframe required by the SEC and FTC reporting rules. 

Here's a breakdown of the essential steps and considerations during the investigation phase:

1. Develop Appropriate Posture Policies and Enforce Adherence:

Establish policies that alert on, and help enforce, appropriate security posture and access controls. These can be out-of-the-box policies that fit various compliance frameworks, or custom policies for unique business or privacy requirements. Monitor for policy violations and initiate appropriate remediation actions, which can include ticket issuance, escalation notification, and automated access revocation or de-identification.

2. Conduct the Investigation: Determine Data Breach Source:

Identify how the breach occurred. This could involve phishing attacks, malware, insider threats, or vulnerabilities in your systems.

According to the FTC, it is critical to clearly describe what you know about the compromise. 

This includes:

  • How it happened
  • What information was taken
  • How the thieves have used the information (if you know)
  • What actions you have taken to remedy the situation
  • What actions you are taking to protect individuals, such as offering free credit monitoring services
  • How to reach the relevant contacts in your organization

Create a Comprehensive Plan: Additionally, develop a communication plan that reaches all affected audiences, such as employees, customers, investors, business partners, and other stakeholders.

Affected and Duplicated Data: Ascertain which data sets were accessed, altered, or exfiltrated. This involves checking logs, access records, and utilizing forensic tools. Assess if sensitive data has been duplicated or moved to unauthorized locations. This can compound the risk and potential damage if not addressed promptly.

How Sentra Helps Automate Compliance and Incident Response

Sentra’s Data Security Posture Management solution provides organizations with full visibility into their data’s locations (including shadow data) and an up-to-date data catalog with classification of sensitive data. Sentra provides this without any complex deployment or operational work; its cloud-native, agentless architecture relies on cloud provider APIs and mechanisms.

Below you can see the different data stores on the Sentra dashboard.

[Image: Sentra dashboard showing data stores]

Sentra Makes Data Access Governance (DAG) Easy

Sentra helps you understand which users have access to what data and enriches metadata catalogs for comprehensive data governance. Accurate classification of cloud data provides advanced classification labels, including business context regarding the purpose of data, and automatic discovery, enabling organizations to gain deeper insights into their data landscape. This enhances data governance while also providing a solid foundation for informed decision-making.

Sentra's detection capabilities can pinpoint over-permissioned access to sensitive data, prompting organizations to swiftly rein it in. This proactive measure not only mitigates the risk of potential breaches but also elevates the overall security posture of the organization by helping to institute least-privilege access.

Below you can see an example of a user’s access and privileges to sensitive data.

[Image: A user’s access and privileges to sensitive data]

Breach Reporting With Sentra

Having a proper classification solution helps you understand what kind of data you have at all times.

With Sentra, it's easier to pull together an accurate report: you can quickly determine whether sensitive data was present at the time of the breach, what kind of data it was, and who or what had access to it.

[Image: Example of Sentra's data breach report]

To learn more about how you can gain full coverage and an up-to-date data catalog with classification of sensitive data, schedule a live demo with our experts. 

Read More
Meni Besso
Meni Besso
June 10, 2024
3
Min Read
Compliance

Key Practices for Responding to Compliance Framework Updates

Key Practices for Responding to Compliance Framework Updates

Most privacy, IT, and security teams know the pain of keeping up with ever-changing data compliance regulations. Because data security and privacy-related regulations change rapidly over time, it can often feel like a game of “whack-a-mole” for organizations to keep up. Plus, in order to adhere to compliance regulations, organizations must know which data is sensitive and where it resides. This can be difficult, as data in the typical enterprise is spread across multiple cloud environments, on-premises stores, SaaS applications, and more. Not to mention that this data is constantly changing and moving.

While meeting a long list of constantly evolving data compliance regulations can seem daunting, there are effective ways to set a foundation for success. By starting with data security and hygiene best practices, your business can better meet existing compliance requirements and prepare for any future changes.

Recent Updates to Common Data Compliance Frameworks 

The average organization comes into contact with several voluntary and mandatory compliance frameworks related to security and privacy. Here’s an overview of the most common ones and how they have changed in the past few years:

Payment Card Industry Data Security Standard (PCI DSS)

What it is: PCI DSS is a set of over 500 requirements for strengthening security controls around payment cardholder data. 

Recent changes to this framework: In March 2022, the PCI Security Standards Council announced PCI DSS version 4.0. It officially went into effect in Q1 2024. This newest version has notably stricter standards for defining which accounts can access environments containing cardholder data and authenticating these users with multi-factor authentication and stronger passwords. This update means organizations must know where their sensitive data resides and who can access it.  

U.S. Securities and Exchange Commission (SEC) 4-Day Disclosure Requirement

What it is:  The SEC’s 4-day disclosure requirement is a rule that requires more established SEC registrants to disclose a known cybersecurity incident within four business days of its discovery.

Recent changes to this framework: The SEC released this disclosure rule in December 2023. Several Fortune 500 organizations had to disclose cybersecurity incidents, including a description of the nature, scope, and timing of the incident. Additionally, the SEC requires that the affected organization release which assets were impacted by the incident. This new requirement significantly increases the implications of a cyber event, as organizations risk more reputational damage and customer churn when an incident happens.

In addition, the SEC will require smaller reporting companies to comply with these breach disclosure rules in June 2024. In other words, these smaller companies will need to adhere to the same breach disclosure protocols as their larger counterparts.

Health Insurance Portability and Accountability Act (HIPAA)

What it is: HIPAA establishes safeguards that protect patient information through stringent disclosure and privacy standards.

Recent changes to this framework: Updated HIPAA guidelines have been released recently, including voluntary cybersecurity performance goals created by the U.S. Department of Health and Human Services (HHS). These recommendations focus on data security best practices such as strengthening access controls, implementing incident planning and preparedness, using strong encryption, conducting asset inventory, and more. Meeting these recommendations strengthens an organization’s ability to adhere to HIPAA, specifically protecting electronic protected health information (ePHI).

General Data Protection Regulation (GDPR) and EU-US Data Privacy Framework

What it is: GDPR is a robust data privacy framework in the European Union. The EU-US Data Privacy Framework (DPF) adds a mechanism that enables participating organizations to meet the EU requirements for transferring personal data to third countries.

Recent changes to this framework: The GDPR continues to evolve as new data privacy challenges arise. Recent changes include the EU-U.S. Data Privacy framework, enacted in July 2023. This new framework requires that participating organizations significantly limit how they use personal data and inform individuals about their data processing procedures. These new requirements mean organizations must understand where and how they use EU user data.

National Institute of Standards and Technology (NIST) Cybersecurity Framework

What it is: NIST is a voluntary guideline that provides recommendations to organizations for managing cybersecurity risk. However, companies that do business with, or are part of, the U.S. government, including agencies and contractors, are required to comply with NIST.

Recent changes to this framework: NIST recently released its 2.0 version. Changes include a new core function, “govern,” which brings in more leadership oversight. It also highlights supply chain security and executing more impactful cyber incident responses. Teams must focus on gaining complete visibility into their data so leaders can fully understand and manage risk.    

ISO/IEC 27001:2022

What it is: ISO/IEC 27001 is a certification that requires businesses to achieve a level of information security standards. 

Recent changes to this framework: ISO 27001 was revised in 2022. While this addendum consolidated many of the controls listed in the previous version, it also added 11 brand-new ones, such as data leakage protection, monitoring activities, data masking, and configuration management. Again, these additions highlight the importance of understanding where and how data gets used so businesses can better protect it.

California Consumer Privacy Act (CCPA)

What it is: CCPA is a set of mandatory regulations for protecting the data privacy of California residents.

Recent changes to this framework: The CCPA was amended in 2023 with the California Privacy Rights Act (CPRA). This new edition includes new data rights, such as consumers’ rights to correct inaccurate personal information and limit the use of their personal information. As a result, businesses must have a stronger grasp on how their CA users’ data is stored and used across the organization.

2024 FTC Mandates

What it is: The Federal Trade Commission (FTC)’s new mandates require some businesses to disclose data breaches to the FTC as soon as possible — no later than 30 days after the breach is discovered. 

Recent changes to this framework: The first of these new data breach reporting rules is the Standards for Safeguarding Customer Information (Safeguards Rule), which took effect in May 2024. The Safeguards Rule puts disclosure requirements on non-banking financial institutions and financial institutions that aren’t required to register with the SEC (e.g., mortgage brokers, payday lenders, and vehicle dealers).

Key Data Practices for Meeting Compliance

These frameworks are just a portion of the ever-changing compliance and regulatory requirements that businesses must meet today. Ultimately, it all goes back to strong data security and hygiene: knowing where your data resides, who has access to it, and which controls are protecting it. 

To gain visibility into all of these areas, businesses must operationalize the following actions throughout their entire data estate:

  • Discover data in both known and unknown (shadow) data stores.
  • Accurately classify and organize discovered data so they can adequately protect their most sensitive assets.
  • Monitor and track access keys and user identities to enforce least privilege access and to limit third-party vendor access to sensitive data (a simplified audit sketch follows this list).
  • Detect and alert on risky data movement and suspicious activity to gain early warning of potential breaches.
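
The sketch below illustrates the access monitoring idea from the list above: compare the permissions each identity holds against those it has actually used recently, and flag stale grants on sensitive stores as revocation candidates. The identities, stores, and 90-day threshold are all assumptions:

```python
from datetime import datetime, timedelta, timezone

# identity -> sensitive stores it is allowed to read (assumed inventory)
GRANTS = {
    "svc-reporting": {"customers-db", "payments-db"},
    "vendor-x": {"customers-db"},
}

# (identity, store) -> last observed access; missing entries mean "never used"
LAST_USED = {
    ("svc-reporting", "customers-db"):
        datetime.now(timezone.utc) - timedelta(days=2),
}

STALE_AFTER = timedelta(days=90)

def stale_grants() -> list[tuple[str, str]]:
    """Flag grants that were never used or not used within the window."""
    now = datetime.now(timezone.utc)
    flagged = []
    for identity, stores in GRANTS.items():
        for store in sorted(stores):
            last = LAST_USED.get((identity, store))
            if last is None or now - last > STALE_AFTER:
                flagged.append((identity, store))
    return flagged

for identity, store in stale_grants():
    print(f"Revocation candidate: {identity} -> {store}")
```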

Sentra enables organizations to meet data compliance requirements with data security posture management (DSPM) and data access governance (DAG) that travel with your data. We help organizations gain a clear view of all sensitive data, identify compliance gaps for fast resolution, and easily provide evidence of regulatory controls in framework-specific reports. 

Find out how Sentra can help your business achieve data and privacy compliance requirements.

If you want to learn more, schedule a call with our data security experts.

Read More
David Stuart
David Stuart
May 28, 2024
3
Min Read
Data Security

Retail Data Breaches: How to Secure Customer Data With DSPM

Retail Data Breaches: How to Secure Customer Data With DSPM

In 2023, the average cost of a retail data breach reached $2.96 million, with the retail sector representing 6% of global data breaches, a rise from 5% in the prior year. 

Consequently, retail now ranks as the 8th most frequently targeted industry in cyber attacks, climbing from 10th place in 2022. According to the Sophos State of Ransomware in Retail report, ransomware affected 69% of retail enterprises in 2023. Nearly 75% of these ransomware incidents led to data encryption, marking an increase from 68% and 54% in the preceding two years.

Yet, these breaches aren't merely a concern for retailers alone; they pose a severe threat to customer confidence at large. 

Retailers must focus on data security because the sector serves such a large community, which makes it a prime target for fraud, account compromise, and similar abuse. Retailers, increasingly conducting business online, are subject to evolving privacy and credit card regulations designed to protect consumers. One compromise or breach event can prove disastrous to the customer trust that retailers may have built over years.

With the evolving cyber threats, the proliferation of cloud computing, and the persistent risk of human error, retailers confront a multifaceted security landscape. Retailers should take proactive measures, and gain a deeper understanding of the potential risks in order to properly harden their defenses.

The year 2024 had just begun when VF Corporation, a global apparel and footwear giant, experienced a significant breach. This incident served as a stark reminder of the far-reaching consequences of ransomware attacks in the retail industry. Approximately 35 million individuals, including employees, customers, and vendors, were affected. Personal information such as names, addresses, and Social Security numbers fell into the hands of malicious actors, emphasizing the urgent need for retailers to secure sensitive data.

How to Secure Customer Data

Automatically Discover, Classify and Secure All Customer Data

Automatically discovering, classifying, and securing all customer data is essential for businesses today. Sentra offers a comprehensive retail data security solution, uncovering sensitive customer data such as personally identifiable information (PII), cardholder data, payment account information, and order details across both known and unknown cloud data stores. 

With Sentra's Data Security Posture Management (DSPM) solution, no sensitive data is left undiscovered; the platform provides extensive coverage of data assets, custom data classes, and detailed cataloging of tables and objects. This not only ensures compliance but also supports data-driven decision-making through safe collaboration and data sharing. As a cloud-native solution, Sentra offers full coverage across major platforms like AWS, Azure, Snowflake, GCP, and Office 365, as well as on-premise file shares and databases. Your cloud data remains within your environment, ensuring you retain control of your sensitive data at all times.

Comply with Data Security and Privacy Regulations

Ensuring compliance with data security and privacy regulations is paramount in today's business landscape. With Sentra’s DSPM solution, you can streamline the process of preparing for security audits concerning customer and credit card/account data. Sentra’s platform efficiently identifies compliance discrepancies, enabling swift and proactive remediation measures.

You can also simplify the translation of requirements from various regulatory frameworks such as PCI-DSS, GDPR, CCPA, DPDPA, among others, using straightforward rules and policies. For instance, you'll receive notifications if regulated data is transferred between regions or to an insecure environment. 

[Image: Sentra dashboard issues showing top compliance frameworks]

Furthermore, our system detects specific policy violations, such as uncovering PCI-DSS violations that indicate classified information, including credit cards and bank account numbers, being publicly accessible or located outside of a PCI compliant environment. Finally, we generate comprehensive compliance reports containing all necessary evidence, including sensitive data categories, regulatory measures, security posture, and the status of relevant regulatory standards.

Mitigate Supply Chain Risks and Emerging Threats

Addressing supply chain risks and emerging threats is critical for safeguarding your organization. Sentra leverages real-time threat monitoring via Data Detection and Response (DDR) to prevent fraud, data exfiltration, and breaches, thereby reducing downtime and ensuring the security of sensitive customer data.

[Image: Sentra dashboard example of sensitive data accessed from a suspicious IP address]

Sentra’s DSPM solution offers automated detection capabilities to alert you when third parties gain access to sensitive account and customer data, empowering you to take immediate action. By implementing least privilege access based on necessity, we help minimize supply chain risks, ensuring that only authorized individuals can access sensitive information. 

Additionally, Sentra’s DSPM enables you to enforce security posture and retention policies, thereby mitigating the risks associated with abandoned data. You'll receive instant alerts regarding suspicious data movements or accesses, such as those from unknown IP addresses, enabling you to investigate and respond promptly. In the event of a breach, our solution facilitates swift evaluation of its impact and lets you initiate remedial actions quickly, limiting potential damage to your organization.
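
As a simplified picture of the unknown-IP alerting described above, the sketch below flags access to sensitive data from addresses outside each principal’s known ranges. The principals and ranges are illustrative; a real detection system would learn these baselines from activity logs:

```python
import ipaddress

# Known-good source ranges per principal (illustrative baselines; a real
# DDR system learns these from historical activity logs).
KNOWN_RANGES = {
    "etl-service": [ipaddress.ip_network("10.0.0.0/8")],
    "analyst-team": [ipaddress.ip_network("203.0.113.0/24")],
}

def is_suspicious(principal: str, source_ip: str) -> bool:
    """True when the source IP falls outside every known range."""
    ip = ipaddress.ip_address(source_ip)
    return not any(ip in net for net in KNOWN_RANGES.get(principal, []))

access_events = [
    ("etl-service", "10.2.3.4"),       # inside the expected range
    ("analyst-team", "198.51.100.7"),  # unknown network -> alert
]
for principal, ip in access_events:
    if is_suspicious(principal, ip):
        print(f"ALERT: {principal} accessed sensitive data from {ip}")
```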

Read More
Yair Cohen
Yair Cohen
May 16, 2024
3
Min Read
Data Security

How to Prevent Data Breaches in Healthcare and Protect PHI

How to Prevent Data Breaches in Healthcare and Protect PHI

The hardest part about securing sensitive healthcare data is continuously knowing where it is, and what type of data it is. This creates data security and compliance challenges - especially when healthcare data is constantly shared and moved between teams and departments.

The Importance of Data Security in Healthcare

Healthcare organizations are facing a heightened risk of data breaches, posing a significant threat to trust and reputation. According to a recent study by Cybersecurity Ventures, healthcare is the most targeted industry for cyberattacks, with a projected cost of $25 billion annually by 2024. 

The reality is that healthcare cyber attacks come at nearly double the cost of data breaches in other industries. Data breaches in the healthcare industry were the costliest, at $10.93 million on average, whereas financial services breaches averaged $5.90 million. This discrepancy can be attributed to the expansive attack surface within the healthcare domain, where organizations prioritize operational outcomes over security. The value of Protected Health Information (PHI) data to threat actors and the stringent regulatory landscape further contribute to the higher costs associated with healthcare breaches.

[Figure: Healthcare data breaches, 2009-2023]

The advent of cloud-based data sharing, while fostering collaboration, introduces a spectrum of risks. These include the potential for excessive permissions, unauthorized access, and the challenge of accurately classifying the myriad combinations of Protected Health Information (PHI).

Some of the top causes of data breaches in the healthcare sector are misdelivery and privilege misuse. Failure to effectively address these issues elevates the vulnerability to data theft, and emphasizes the critical need for robust security measures. Attacks on healthcare organizations can serve as a means to various ends. Cybercriminals may steal a victim's healthcare information to perpetrate identity fraud, carry out attacks on financial institutions or insurance companies, or pursue other nefarious objectives. 

As the healthcare industry continues to embrace technological advancements, striking a delicate balance between innovation and security becomes imperative to navigate the evolving landscape of healthcare cybersecurity.

Healthcare Cybersecurity Regulations & Standards

For healthcare organizations, it is especially crucial to protect patient data and follow industry rules. Transitioning to the cloud shouldn't disrupt compliance efforts. But staying on top of strict data privacy regulations adds another layer of complexity to managing healthcare data.

Below are some of the top healthcare cybersecurity regulations relevant to the industry.

Health Insurance Portability and Accountability Act of 1996 (HIPAA)

HIPAA is pivotal in healthcare cybersecurity, mandating compliance for covered entities and business associates. It requires regular risk assessments and adherence to administrative, physical, and technical safeguards for electronic Protected Health Information (ePHI).

HIPAA, at its core, establishes national standards to protect sensitive patient health information from being disclosed without the patient's consent or knowledge. For leaders in healthcare data management, understanding the nuances of HIPAA's Titles and amendments is essential. Particularly relevant are Title II (HIPAA Administrative Simplification) and its Privacy and Security Rules.

HHS 405(d)

HHS 405(d) regulations, under the Cybersecurity Act of 2015, establish voluntary guidelines for healthcare cybersecurity, embodied in the Healthcare Industry Cybersecurity Practices (HICP) framework. This framework covers email, endpoint protection, access management, and more.

Health Information Technology for Economic and Clinical Health (HITECH) Act

The HITECH Act, enacted in 2009, enhances HIPAA requirements, promoting the adoption of healthcare technology and imposing stricter penalties for HIPAA violations. It mandates annual cybersecurity audits and extends HIPAA regulations to business associates.

Payment Card Industry Data Security Standard (PCI DSS)

PCI DSS applies to healthcare organizations processing credit cards, ensuring the protection of cardholder data. Compliance is necessary for handling patient card information.

Quality System Regulation (QSR)

The Quality System Regulation (QSR), enforced by the FDA, focuses on securing medical devices, requiring measures like access prevention, risk management, and firmware updates. Proposed changes aim to align QSR with ISO 13485 standards.

Health Information Trust Alliance (HITRUST)

HITRUST, a global cybersecurity framework, aids healthcare organizations in aligning with HIPAA guidelines, offering guidance on various aspects including endpoint security, risk management, and physical security. Though not mandatory, HITRUST serves as a valuable resource for bolstering compliance efforts.

Preventing Data Breaches in Healthcare with Sentra

Sentra’s Data Security Posture Management (DSPM) automatically discovers and accurately classifies your sensitive patient data. By seamlessly building a well-organized data catalog, Sentra ensures all your patient data is secure, stored correctly and in compliance. The best part is, your data never leaves your environment.

Discover and Accurately Classify your High Risk Patient Data

Discover and accurately classify your high-risk patient data with ease using Sentra. Within minutes, Sentra empowers you to uncover and comprehend your Protected Health Information (PHI), spanning patient medical history, treatment plans, lab tests, radiology images, physician notes, and more. 

Seamlessly build a well-organized data catalog, ensuring that all your high-risk patient data is securely stored and compliant. As a cloud-native solution, Sentra enables you to scale security across your entire data estate. Your cloud data remains within your environment, putting you in complete control of your sensitive data at all times.

Sentra Reduces Data Risks by Controlling Posture and Access

Sentra is your solution for reducing data risks and preventing data breaches by efficiently controlling posture and access. With Sentra, you can enforce security policies for sensitive data, receiving alerts to violations promptly. It detects which users have access to sensitive Protected Health Information (PHI), ensuring transparency and accountability. Additionally, Sentra helps you manage third-party access risks by offering varying levels of access to different providers. Achieve least privilege access by leveraging Sentra's continuous monitoring and tracking capabilities, which keep tabs on access keys and user identities. This ensures that each user has precisely the right access permissions, minimizing the risk of unauthorized data exposure.

Stay on Top of Healthcare Data Regulations with Sentra

Sentra’s Data Security Posture Management (DSPM) solution streamlines and automates the management of your regulated patient data, preparing you for significant security audits. Gain a comprehensive view of all sensitive patient data, allowing our platform to automatically identify compliance gaps for proactive and swift resolution.

Sentra dashboard showing compliance frameworks
The Sentra dashboard shows issues grouped by compliance frameworks, such as HIPAA, along with the overall compliance posture

Easily translate your compliance requirements for HIPAA, GDPR, and HITECH into actionable rules and policies, receiving notifications when data is copied or moved between regions. With Sentra, running compliance reports becomes a breeze, providing you with all the necessary evidence, including sensitive data types, regulatory controls, and compliance status for relevant regulatory frameworks.

To learn more about how you can enhance your data security posture, schedule a demo with one of our data security experts.

Read More
David Stuart
May 6, 2024
3 Min Read
Data Security

Securing Your Microsoft 365 Environment with Sentra

Picture this scenario: a senior employee at your organization has access to a restricted folder in SharePoint that contains sensitive data. Another employee needs access to a specific document in the folder and asks the senior employee for help. To save time, the senior employee simply copies the entire document and drops it into a folder with less stringent access controls so the other employee can easily access it. Because of this action taken by the senior employee, which only took seconds to complete, there’s now a copy of sensitive data — outside a secure folder and unknown to the data security team. 

The Sentra team hears repeatedly that Microsoft 365 services, like SharePoint, are a pressing concern for data security teams because this type of data proliferation is so common. While Microsoft services like OneDrive, SharePoint, Office Online, and Teams drive productivity and collaboration, they also pose a unique challenge for data security teams: identifying and securing the constantly changing data landscape without inhibiting collaboration or slowing down innovation. 

Today’s hybrid environments — including Microsoft 365 services — present many new security challenges. Teams must deal with vast and dynamic data within SharePoint, coupled with explosive cloud growth and data movement between environments (cloud to on prem or vice versa). They must also discover and secure the unstructured sensitive data stored within Microsoft 365 services.

Legacy, connector- and agent-based solutions don’t fit the bill — they face performance and scaling constraints and are an administrative nightmare for teams trying to keep pace. Instead, teams need a data security solution that can automatically comprehend unstructured data in several formats and is more responsive and reliable than legacy tools.

A cloud-native approach is one viable, scalable solution to address the multitude of security challenges that complex, modern environments create. It provides versatile, agile protection for the multi-cloud, hybrid, SaaS (i.e., Microsoft), and on-prem environments that comprise a business’s operations. 

The Challenge of Protecting Your Microsoft 365 Environment

When employees use Microsoft 365, they can copy, move, or delete data instantly, making it challenging to keep track of where sensitive data resides and who has access to it. For instance, sensitive data can easily be stored improperly or left behind in a OneDrive after an employee leaves an organization. This is commonplace when using Teams and/or SharePoint for document collaboration. This misplaced sensitive data can become ammunition for an insider threat, such as a disgruntled employee who wants to damage the company.

Assets contain plain text credit card numbers

Defending your Microsoft 365 environment against these risks can be difficult because Microsoft 365 stores data, such as Teams messages or OneDrive documents, in a free-form layout. It’s far more challenging to classify this unstructured data than it is to classify structured data because it doesn’t follow a clear schema and formatting protocol. For instance, in a structured database, sensitive information like names and birthdates would be stored in neighboring columns labeled “names” and “birthdates.” However, in an unstructured data environment like Microsoft 365, someone might share their birthdate or other PII in a quick Teams message to an HR staff member, which is then stored in SharePoint behind the scenes. 

In addition, unstructured data lacks context. Some data is only considered sensitive under certain conditions. For example, 9-digit passport numbers alone wouldn’t pose a significant risk if exposed, while a combination of passport numbers and the identity of the passport holders would. Structured databases make it easy to see these relationships, as they likely contain column titles (e.g., “passport number,” “passport holder name”) or other clear schemas. Unstructured file repositories, on the other hand, might have all of this information buried in documents with a free-form block of text, making it especially difficult for teams to understand the context of each data asset fully.
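
To illustrate why context matters, here is a toy sketch, not a production classifier, that treats a 9-digit passport-like number as high risk only when a likely holder name appears nearby in the same free-form text:

```python
import re

# Toy heuristics; real classifiers are far more sophisticated.
PASSPORT_RE = re.compile(r"\b\d{9}\b")
NAME_RE = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")  # naive "First Last" pattern

def classify_snippet(text: str, window: int = 80) -> str:
    """Rate a free-form snippet: a passport-like number is high risk only
    when a likely holder name appears nearby."""
    for match in PASSPORT_RE.finditer(text):
        nearby = text[max(0, match.start() - window): match.end() + window]
        if NAME_RE.search(nearby):
            return "HIGH: passport number with likely holder identity"
    if PASSPORT_RE.search(text):
        return "LOW: number found, but no co-occurring identity"
    return "NONE"

print(classify_snippet("Ticket ref 482913570 was assigned."))           # LOW
print(classify_snippet("Jane Smith, passport 482913570, departs soon."))  # HIGH
```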

Protection Measures to Address Microsoft 365 Data Risks

Today’s businesses must get ahead of these challenges by instituting best practices such as least privilege access, or else face consequences such as violating compliance regulations or putting sensitive data at risk of exposure.

Since sensitive data is far more nuanced and complex to discern in Microsoft 365, businesses need a cloud-native solution that identifies the subtle signs associated with sensitive data in unstructured cloud environments and takes appropriate action to protect it. 

Sentra’s Integration with Microsoft 365

Sentra’s data security posture management (DSPM) platform enables secure collaboration and file sharing across services such as SharePoint, OneDrive, Teams, OneNote, and Office Online.

Its new integration with Microsoft 365 offers unmatched discovery and classification capabilities for security, data owners and risk management teams to secure data — not stopping activity but allowing it to happen securely. Here are a few of the features we offer teams using Microsoft 365: 

Advanced ML/AI analysis for accurate data discovery.

Sentra’s data security platform can autonomously discover data across your entire environment, including shadow data (i.e., misplaced, abandoned, or unknown data) or migrated data (data that may have sprawled to a lesser protected environment). It can then accurately rank data sensitivity levels by conducting in-depth analysis based on nuanced contextual information such as metadata, location, neighboring assets, and file path.

Sensitive data that is stored on-premise was found in a cloud environment

This contextual approach differs from traditional security methods, which rely on very prescriptive data formats and overlook unstructured data that doesn’t fit into these formats. Sentra’s high level of accuracy minimizes the number of false positives, requiring less hands-on validation from your team.
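
As a rough illustration of context-based ranking (a simplified sketch, not Sentra's actual model), sensitivity can be scored from signals like file path, file type, and whether neighboring assets contain PII; the weights below are invented for the example:

```python
def sensitivity_score(asset: dict) -> float:
    """Score an asset's likely sensitivity from contextual signals."""
    score = 0.0
    if "/hr/" in asset["path"].lower():
        score += 0.4  # sensitive business area in the file path
    if asset["extension"] in {".csv", ".xlsx"}:
        score += 0.2  # tabular files often hold record-level data
    if asset["neighbor_has_pii"]:
        score += 0.3  # PII already found in sibling assets
    return min(score, 1.0)

doc = {"path": "/corp/HR/salaries.xlsx", "extension": ".xlsx", "neighbor_has_pii": True}
print(round(sensitivity_score(doc), 2))  # 0.9
```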

Use case scenario: An employee has set up their company OneDrive account to be directly accessible through their personal computer’s central file system. While working on personal tasks on their computer, this employee accidentally saves their child’s medical paperwork inside the company OneDrive rather than a personal file. To prevent this situation, Sentra can discover and notify the appropriate users if PII is residing in a OneDrive business account and violating company policy.

Precise data classification to support remediation. 

After discovering sensitive data, Sentra classifies the data using data context classes. This granular classification level provides rich usage context and enables teams to perform better risk prioritization, sensitivity analysis, and control actioning. Its data context classes can identify very specific types of data: configuration, log, tabular, image, etc. By labeling their resources with this level of precision and context, businesses can better understand usage and determine which files are likely to contain sensitive information and which are not.

In addition, Sentra consolidates classified data security findings from across your entire data estate into a single platform. This includes insights from multiple cloud environments, SaaS platforms, and on-premises data stores. Sentra offers a centralized, always-up-to-date data catalog and visualizations of data movement between environments.

Use case scenario: An employee requests access to a SharePoint folder containing a nonsensitive document. A senior employee authorizes access without realizing that sensitive documents are also stored within this folder. To prevent this type of excessive privileged access, Sentra labels sensitive documents, emails, and other Microsoft file formats so your team can enforce access policies and take the correct actions to secure these assets. 

Guardrails to enforce data hygiene across your environment.

Sentra also enforces data hygiene best practices across your Microsoft 365 environment, proactively preventing staff from taking risky actions or going against company policies.

For instance, it can determine excessive access permissions and alert on these violations. Sentra can also monitor sharing permissions to enforce least privilege access on sensitive files.
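
A minimal sketch of what such a check can look like, assuming files carry a sensitivity label and a set of audiences they are shared with (both hypothetical fields for illustration):

```python
# Which audiences each sensitivity label may be shared with; illustrative policy.
POLICY = {
    "confidential": {"owners"},
    "internal": {"owners", "staff"},
    "public": {"owners", "staff", "external"},
}

def sharing_violations(files):
    """Yield (file name, excess audiences) where sharing exceeds the label's policy."""
    for f in files:
        allowed = POLICY.get(f["label"], set())
        excess = f["shared_with"] - allowed
        if excess:
            yield f["name"], excess

files = [
    {"name": "payroll.xlsx", "label": "confidential", "shared_with": {"owners", "staff"}},
    {"name": "handbook.docx", "label": "internal", "shared_with": {"owners", "staff"}},
]
for name, excess in sharing_violations(files):
    print(f"{name}: shared beyond policy with {sorted(excess)}")
```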

Use case scenario: During onboarding, a new junior employee is given access permissions across Microsoft 365 services. By default, they now have access to confidential intellectual property stored in SharePoint, even though they’ll never need this information in their daily work. To prevent this type of excessive access control, Sentra can enforce more stringent access controls for sensitive SharePoint folders.

Automation to accelerate incident response.

Sentra also supports automated incident response with early breach detections. It can identify data similarities to instigate an investigation of potentially risky data proliferation. In addition, it provides real-time alerting when any anomalous activity occurs within the environment and supports incident investigation and breach impact analysis with automated remediation and in-product guidance. Sentra also integrates with data catalogs and other incident response/ITSM tools to quickly alert the proper teams and kick off the right response processes. 

Use case example: An employee who was just laid off feels disgruntled with the company. They decide to go into SharePoint and start a large download of several files containing intellectual property. To protect your data from these types of internal threats, Sentra can immediately detect and alert you to suspicious activities, such as unusually large downloads, within your Microsoft 365 environment.

DSPM, the Key to Securing Microsoft 365

After talking with many customers and prospects facing challenges securing Microsoft 365, the Sentra team has seen the significance of a DSPM platform compatible with services like SharePoint, OneDrive, and Office Online. We prioritize bringing all data, including assets buried in your Microsoft 365 environment, into view so you can better safeguard it without slowing down innovation and collaboration. 

Dive deeper into the world of data security posture management (DSPM) and discover how it helps organizations secure their entire data estate, including cloud, on-prem, and SaaS data stores (like Microsoft 365).

Read More
David Stuart
April 30, 2024
4 Min Read
Data Security

How to Meet the Security Challenges of Hybrid Data Environments

It’s an age-old question at this point: should we operate in the cloud or on premises? But for many of today’s businesses, it’s not an either-or question, as the answer is both.

Although cloud has been the ‘latest and greatest’ for the past decade, very few organizations rely on it completely, and that’s probably not going to change anytime soon. According to a survey conducted by Foundry in 2023, 70% of organizations have brought some cloud apps or services back to on premises after migration due to security concerns, budget/cost control, and performance/reliability issues. 

But at the same time, the cloud is still growing in importance within organizations. Gartner projects that public cloud spending will increase by 20.4% in just the next year. With all of this in mind, it’s safe to say that most businesses are leveraging a hybrid approach and will continue to do so for a long time. 

But where does this leave today’s data security professionals, who must simultaneously secure cloud and on prem operations? The key to building a robust data security approach and future-proofing your hybrid organization is to adopt cloud-native data security that serves both areas equally well and, importantly, can match the expected cloud growth demands of the future.

On Prem Data Security Considerations

Because on premises data stores are here to stay for most organizations, teams must consider how they will respond to the unique challenges of on prem data security. Let’s dive into two areas that are unique to on premises data stores and require specific security considerations:

Network-Attached Storage (NAS) and File Servers

File shares, such as SMB (CIFS), NFS and FTP, play an integral role in making on prem data accessible. However, the specific structure and data formats used within file servers can pose challenges for data security professionals, including:

  • Identifying where sensitive data is stored and preventing its sprawl to unknown locations.
  • Nested or inherited permissions structures that could lead to overly permissive access (see the sketch after this list).
  • Ensuring security and compliance across massive amounts of data that change continuously.
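
As a small illustration of the second challenge above, the sketch below walks a mounted file share and flags files readable by everyone, one common symptom of inherited permissions that are broader than intended. It assumes POSIX permission bits (NFS-style shares); SMB shares use ACLs, which require a different check, and the path is a placeholder:

```python
import os
import stat

def world_readable(root: str):
    """Walk a mounted share and yield files whose POSIX mode grants 'other' read."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IROTH:  # readable by everyone on the system
                yield path

for path in world_readable("/mnt/shares/finance"):
    print("overly permissive:", path)
```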

On Prem Databases With Structured and Unstructured Data

The variety in on prem databases also brings security challenges. Databases such as MSSQL, Oracle, PostgreSQL, MongoDB, and MySQL use different data structures. Security professionals often struggle to compile structured, unstructured, and semi-structured data from these different sources to monitor their data security posture continuously. ETL operations do the heavy lifting, but this can lead to further obfuscation of the underlying (and often sensitive!) data. Plus, access control is managed separately within each of these databases, making it hard to institute least privilege.
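
A hedged sketch of one way to approach this: hide each engine behind a common connector interface so a single scan loop can sample them all. The connector classes below are illustrative stand-ins that return canned rows, not a real driver API:

```python
class Connector:
    """Common interface so one scan loop can cover heterogeneous stores."""
    name = "base"

    def sample(self) -> list[dict]:
        raise NotImplementedError

class PostgresConnector(Connector):
    name = "postgresql"

    def sample(self) -> list[dict]:
        # A real connector would query information_schema and sample rows.
        return [{"table": "users", "column": "email", "value": "a@example.com"}]

class MongoConnector(Connector):
    name = "mongodb"

    def sample(self) -> list[dict]:
        # A real connector would list collections and sample documents.
        return [{"collection": "profiles", "field": "ssn", "value": "111-22-3333"}]

def scan(connectors: list[Connector]) -> None:
    for connector in connectors:
        for record in connector.sample():
            print(connector.name, "->", record)

scan([PostgresConnector(), MongoConnector()])
```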

Businesses need data security solutions that can scan all of these distinct data store and data types, centralize security administration for these disparate storage areas, and respond to security issues commonly appearing in hybrid environments, such as misconfigurations, weak security, data proliferation, and compliance violations. Legacy on-premises or cloud-only solutions won’t cut it in these situations, as they aren’t adapted to work with these specific considerations.

Cloud Data Security Considerations

In addition to all these on prem data and storage variations, most organizations also leverage multiple cloud environments. This reality makes managing a holistic view of data security even more complex. A single organization might use several different cloud service providers (AWS, Azure, Google Cloud Platform, etc.), along with a variety of data lakes and data warehouses (e.g., Snowflake). Each of these platforms has a unique architecture and must be managed separately, making it challenging to centralize data security efforts.

Here are a few aspects of cloud environments that data security professionals must consider:

Massive Data Attack Surface

Because it’s so easy to move, change, or modify data in the cloud, data proliferates at an unprecedented speed. This leads to a huge attack surface of unregulated and unmonitored data. Security professionals face a new challenge in the cloud: securing data regardless of where it resides. But this can prove to be difficult when security teams might not even know that a copied or modified version of sensitive data exists in the first place. This organizational data that exists outside the centralized and secured data management framework, known as shadow data, poses a considerable threat to organizations, as they can’t protect what they don’t know.

Business Agility

In addition, security teams must figure out how to secure cloud data without slowing down other teams’ innovation and agility in the cloud. In many cases, teams must copy cloud data to complete their daily tasks. For example, a developer might need to stage a copy of production data for test purposes, or a business intelligence analyst might need to mine a copy of production data for new revenue opportunities. They must learn how to enforce critical policies without gatekeeping sensitive data that teams need to access for the business to succeed. 

Variety in Data Store Types

Cloud infrastructure often includes a variety of data store types as well. This includes cloud computing infrastructure such as IaaS, PaaS, DBaaS, application development components such as repositories and live applications, and, in many cases, several different public cloud providers. Each of these data stores exists in a silo, making it challenging for data security professionals to gain a centralized view of the entire organization’s data security posture. 

Unifying Cloud and On Prem Hybrid Environments With Cloud-Native Data Security

Because of its massive scale, dynamic nature, and service-oriented architecture, cloud infrastructure is more complex to secure than on prem. Generally speaking, anyone with a username and password for a cloud instance can access most of the data inside it by default. In other words, you can’t just secure its boundaries as you would with on premises data. And because new cloud instances are so easy to spin up, there are no assurances that a new cloud asset, which may contain data copies, will have the same protections as the original.

Because of this complexity, legacy tools originally created for on prem environments, such as traditional data loss prevention (DLP), just won’t cut it in cloud environments. Yet cloud-only security offerings, such as those from the cloud service providers themselves, exclude the unique aspects of on premises environments or may be myopic in what they support. Instead, organizations must consider solutions that address both on prem and multi-cloud environments simultaneously. The answer lies in cloud-native data security that supports both.

Because it’s built for the complexity of the cloud but includes support for on prem infrastructure, a cloud-native data security platform can follow your data across your entire hybrid environment and compile complex security posture information into a single location. Sentra approaches this concept in a unique way, enabling teams to see data similarity and movement between on prem and cloud stores. By understanding data movement, organizations can minimize the risks associated with data sprawl, while simultaneously securely enabling the business.

With a unified platform, teams can see a complete picture of their data security posture without needing to jump back and forth between the contexts and differing interfaces of on premises and cloud tools. A centralized platform also enables teams to consistently define and enforce policies for all types of data across all types of environments. In addition, it makes it easier to generate audit-ready reports and feed data into remediation tools from a single integration point.


Sentra’s Cloud-Native Approach to Hybrid Environments

Sentra offers a cloud-native data security posture management (DSPM) solution for monitoring various data types across all environments — from on premises to SaaS to public cloud.

This is a major development, as our solution uniquely enables security teams to…

  • Automatically discover all data without agents or connectors, including data within multiple cloud environments, NFS/SMB file servers, and SQL and NoSQL on-premises databases.
  • Compile information inside a single data catalog that lists sensitive data and its security and compliance posture.
  • Receive alerts for misconfigurations, weak encryptions, compliance violations, and much more.
  • Identify duplicated data between environments, including on prem, cloud, and SaaS, enabling organizations to clean up unused data, control sprawl and reduce risks.
  • Track access to sensitive data stores from a single interface and ensure least privilege access.

Plus, when you use Sentra, your data never leaves your environment - it remains in place, secure and without disruption. We leverage native cloud serverless processing functions (e.g., AWS Lambda) to scan your cloud data. For on premises, we scan all data within your secure networks and only send metadata to the Sentra cloud platform for further reporting and analysis.

Sentra also won’t interrupt your production flow of data, as it works asynchronously in both cloud and on premises environments (it scans on prem by creating temporary copies to scan in the customer cloud environment).

Dive deeper into how Sentra’s data security posture management (DSPM) helps hybrid organizations secure data everywhere. 

To learn more about DSPM, schedule a demo with one of our experts.

Read More
Meni Besso
April 11, 2024
4 Min Read
Compliance

How PCI DSS 4.0 Improves Your Security Posture

The Payment Card Industry Data Security Standard (PCI DSS) sets the bar for organizations handling cardholder information - any business that stores, processes, or transmits cardholder data. With the release of version 4.0, there are significant changes on the horizon. 

Staying compliant with industry standards is crucial, especially when it comes to protecting sensitive payment card data.

In this blog, we will explore how PCI DSS can enhance your security posture by establishing a continuous process to secure cardholder data.

Understanding PCI DSS v4.0

PCI DSS v4.0 brings several notable updates, emphasizing a more comprehensive and risk-based approach to data security. Companies in the payment card ecosystem must take note of these changes to ensure they remain compliant and resilient against evolving threats.

Increased Focus on Cloud and Service Providers

One of the key highlights of PCI DSS v4.0 is its focus on cloud environments and third-party service providers. As more businesses leverage cloud services for storing and processing payment data, it's imperative to extend security controls to these environments.

Expanded Scope of Requirements

With the proliferation of digital transactions, PCI DSS v4.0 expands the scope of requirements to address emerging technologies and evolving threats. The standard now covers a broader range of systems, applications, and processes involved in payment card transactions.

Emphasis on Risk-Based Approach

Recognizing that not all security threats are created equal, PCI DSS v4.0 places a greater emphasis on a risk-based approach to security. Organizations should assess risks systematically and prioritize security measures based on potential impact and likelihood of occurrence.

Enhanced Focus on Data Protection

From encryption and access control to data retention policies, organizations are expected to implement robust measures to prevent unauthorized access and data breaches. This will help mitigate the risk of data theft and ensure compliance with regulatory standards.

New PCI DSS 4.0 Release Implementation by March 2025

Of the 64 new requirements, 51 are future-dated due to their complexity and/or cost of implementation. This is relevant and important for any business that stores, processes, or transmits cardholder data.

Further, it is crucial to focus on establishing a continuous process:

  • Automated log analysis for threat detection (Req: 10.4.1.1)
  • On-going review of access to sensitive data (Req: 7.2.4)
  • Detection of stored PAN anywhere it is not expected (Req: 12.10.7); a sketch of this kind of check follows below.
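
The PAN detection requirement (12.10.7) is a good place to see what such a continuous check involves. The sketch below is an illustrative example, not Sentra's implementation: it scans text for card-number-length digit runs and keeps only those that pass the Luhn checksum, the standard validity test for payment card numbers:

```python
import re

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum used to validate payment card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

CANDIDATE = re.compile(r"\b\d{13,19}\b")  # PANs are 13-19 digits long

def find_pans(text: str) -> list[str]:
    """Return digit runs that look like valid PANs (Luhn-passing)."""
    return [m for m in CANDIDATE.findall(text) if luhn_valid(m)]

print(find_pans("order 4111111111111111 ref 1234567890123456"))
# ['4111111111111111'] - the second number fails the Luhn check
```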

How Sentra Helps Comply With PCI DSS 4.0

Below are a few examples of how Sentra can assist you in complying with PCI DSS 4.0 by continuously monitoring your environment for threats and vulnerabilities.

In today's threat landscape, security is an ongoing process. PCI DSS v4.0 emphasizes the importance of continuous monitoring and testing to detect and respond to security incidents in real-time. By implementing automated monitoring tools and conducting regular security assessments, organizations can proactively identify vulnerabilities and address them before they are exploited by attackers.

PCI DSS 4.0 New Requirement: 10.4.1.1 - Automated mechanisms are used to perform audit log reviews.

How Sentra Solves It: Sentra's Data Detection and Response (DDR) module continuously monitors logs from sensitive data stores, identifying threats and anomalies in real time that may indicate potential data breaches or unauthorized access to sensitive data.

PCI DSS 4.0 New Requirement: 7.2.4 - All user accounts and related access privileges, including third party/vendor accounts, are reviewed as follows:

  • At least once every six months.
  • Ensure user accounts and access remain appropriate based on job function.
  • Any inappropriate access is addressed.
  • Management acknowledges that access remains appropriate.

How Sentra Solves It: Sentra's Data Security Posture Management (DSPM) data access module frequently scans your sensitive data stores, mapping out the various identities with access to your data, including third-party entities, internal users, and applications. This aids in ensuring least privilege access and allows for the analysis of each identity's security posture through a risk-based approach.

PCI DSS 4.0 New Requirement: 12.10.7 - Incident response procedures are in place, to be initiated upon the detection of stored PAN anywhere it is not expected, and include:

  • Determining what to do if PAN is discovered outside the CDE, including its retrieval, secure deletion, and/or migration into the currently defined CDE, as applicable.
  • Identifying whether sensitive authentication data is stored with PAN.
  • Determining where the account data came from and how it ended up where it was not expected.
  • Remediating data leaks or process gaps that resulted in the account data being where it was not expected.

How Sentra Solves It: Sentra's scanning and classification engine detects all types of sensitive data, including PII, digital identities, and financial data, especially PAN, across all your cloud accounts. It highlights potential "shadow data" suspected of being misplaced. Additionally, Sentra's DataTreks module tracks the movement of sensitive data across accounts, regions, and environments, helping you understand the root cause and take preventive steps.

Use Sentra's Reporting Capabilities to Adhere to PCI DSS

Here you can see a detected S3 bucket that contains credit card numbers and personal information that are not properly encrypted.

This is an example of how Sentra raises a threat alert in real time, detecting suspicious activity in a sensitive AWS S3 bucket.

In the dashboard below, you can see open security issues grouped by different compliance frameworks.

Proactive Integration of New Compliance Controls

Sentra remains vigilant in staying up to date with changes in PCI-DSS, GDPR, CCPA and other compliance frameworks. To ensure continuous compliance and security, Sentra actively monitors updates and integrates new controls as they become available. This proactive approach allows users to automate the validation process on an ongoing basis, ensuring that they always adhere to the latest standards and maintain a robust security posture.

Implementation Timeline and Best Practices

It's essential for relevant companies to understand the implementation timeline for PCI DSS v4.0. With a two-phase approach, certain requirements are future-dated due to their complexity or cost of implementation. However, it's crucial not to overlook these future requirements, as they will eventually become mandatory for compliance.

These requirements will be considered best practices until March 31, 2025, after which they will become obligatory. This transition period allows organizations to gradually adapt to the new standards while ensuring they meet current compliance requirements.

Conclusion

As the payment card industry continues to evolve, so must the security measures used to protect sensitive data. PCI DSS v4.0 represents a significant step forward in enhancing data security and resilience against emerging threats. Understanding the key changes and implementation timeline is crucial for companies to proactively adapt to the new standard and maintain compliance in an ever-changing regulatory landscape.

Sentra plays a pivotal role in this ongoing compliance effort. Its comprehensive features align closely with the requirements of PCI DSS v4.0, providing automated log analysis for threat detection, ongoing review of access to sensitive data, and detection of stored PAN outside expected locations. Through Sentra's Data Detection and Response (DDR) module, organizations can continuously monitor logs from sensitive data stores, identifying threats and anomalies in real-time, thus aiding in compliance with PCI DSS 4.0 requirements such as automated log reviews.

Furthermore, Sentra's Data Security Posture Management (DSPM) module facilitates the review of user accounts and access privileges, ensuring that access remains appropriate based on job function and addressing any inappropriate access, in line with PCI DSS v4.0 requirements. In addition, Sentra's scanning and classification engine, coupled with its DataTreks module, assists in incident response procedures by detecting all types of sensitive data, including PAN, across cloud accounts and tracking the movement of sensitive data, aiding in the remediation of data leaks or process gaps.

By leveraging these capabilities, organizations can streamline their compliance efforts, mitigate risks, and maintain the security and integrity of cardholder data in accordance with PCI DSS v4.0 requirements.

Read More
Ran Shister
April 10, 2024
4 Min Read
Data Sprawl

Understanding Data Movement to Avert Proliferation Risks

Understanding the perils your cloud data faces as it proliferates throughout your organization and ecosystems is a monumental task in the highly dynamic business climate we operate in. Being able to see data as it is being copied and travels, monitor its activity and access, and assess its posture allows teams to understand and better manage the full effect of data sprawl. 

It ‘connects the dots’ for security analysts who must continually evaluate true risks and threats to data so they can prioritize their efforts. Data similarity and movement are important behavioral indicators in assessing and addressing those risks. This blog will explore this topic in depth.

What Is Data Movement?

Data movement is the process of transferring data from one location or system to another – from A to B. This transfer can be between storage locations, databases, servers, or network locations. Copying data from one location to another is simple, however, data movement can get complicated when managing volume, velocity, and variety.

  • Volume: Handling large amounts of data.
  • Velocity: Overseeing the pace of data generation and processing.
  • Variety: Managing a variety of data types.

How Data Moves in the Cloud

Data moves freely and can be shared anywhere. The way organizations leverage data is an integral part of their success. Although there are many business benefits to moving and sharing data (at a rapid pace), there are also many concerns that arise, mainly dealing with privacy, compliance, and security. Data needs to move quickly, securely, and have the proper security posture at all times.

These are the main ways that data moves in the cloud:

1. Data Distribution in Internal Services: Internal services and applications manage data, saving it across various locations and data stores.

2. ETLs: Extract, Transform, Load (ETL) processes involve combining data from multiple sources into a central repository known as a data warehouse. This centralized view supports applications in aggregating diverse data points for organizational use.

3. Developer and Data Scientist Data Usage: Developers and data scientists utilize data for testing and development purposes. They require both real and synthetic data to test applications and simulate real-life scenarios to drive business outcomes.

4. AI/ML/LLM and Customer Data Integration: The utilization of customer data in AI/ML learning processes is on the rise. Organizations leverage such data to train models and apply the results across various organizational units, catering to different use-cases.

What Is Misplaced Data?

"Misplaced data" refers to data that has been moved from an approved environment to an unapproved environment. For example, a folder that is stored in the wrong location within a computer system or network. This can result from human error, technical glitches, or issues with data management processes. 

When unauthorized data is stored in an environment that is not designed for the type of data, it can lead to data leaks, security breaches, compliance violations, and other negative outcomes.

With companies adopting more cloud services, and being challenged with properly managing the subsequent data sprawl, having misplaced data is becoming more common, which can lead to security, privacy, and compliance issues.

The Challenge of Data Movement and Misplaced Data

Organizations strive to secure their sensitive data by keeping it within carefully defined and secure environments. The pervasive data sprawl faced by nearly every organization in the cloud makes it challenging to effectively protect data, given its rapid multiplication and movement.

Leveraging data for various purposes is encouraged because it can enhance business productivity and growth. However, with the advantages come disadvantages: there are risks to having multiple owners and duplicate data.

To address this challenge, organizations can analyze similar data patterns to gain a comprehensive understanding of how data flows within the organization. This gives security teams visibility into those movement patterns and lets them identify whether a given movement is authorized, so they can protect the data accordingly and block unauthorized movement.

This proactive approach allows them to position themselves strategically. It can involve ensuring robust security measures for data at each location, re-confining data by relocating it, or eliminating unnecessary duplicates. Additionally, this analytical capability proves valuable in scenarios tied to regulatory and compliance requirements, such as ensuring GDPR-compliant data residency.

Identifying Redundant Data and Saving Cloud Storage Costs

The identification of similarities empowers Chief Information Security Officers (CISOs) to implement best practices, steering clear of actions that lead to the creation of redundant data.

Detecting redundant data helps reduce cloud storage costs and drive up operational efficiency from targeted and prioritized remediation efforts that focus on the critical data risks that matter. 

This not only enhances data security posture, but also contributes to a more streamlined and efficient data management strategy.

“Sentra has helped us to reduce our risk of data breaches and to save money on cloud storage costs.”

-Benny Bloch, CISO at Global-e

Security Concerns That Arise

  1. Data Security Posture Variations Across Locations: Addressing instances where similar data, initially secure, experiences a degradation in security posture during the copying process (e.g., transitioning from private to public, or from encrypted to unencrypted); a sketch of this check follows the list.
  2. Divergent Access Profiles for Similar Data: Exploring scenarios where data, previously accessible by a limited and regulated set of identities, now faces expanded access by a larger number of identities (users), resulting in a loss of control.
  3. Data Localization and Compliance Violations: Examining situations where data, mandated to be localized in specific regions, is found to be in violation of organizational policies or compliance rules (with GDPR as a prominent example). By identifying similar sensitive data, we can pinpoint these issues and help users mitigate them.
  4. Anonymization Challenges in ETL Processes: Identifying issues in ETL processes where data is not only moved but also anonymized. Pinpointing similar sensitive data allows users to detect and mitigate anonymization-related problems.
  5. Customer Data Migration Across Environments: Analyzing the movement of customer data from production to development environments. This can be used by engineers to test real-life use-cases.
  6. Data Democratization and Movement Between Cloud and Personal Stores: Investigating instances where users export data from organizational cloud stores to personal drives (e.g., OneDrive) for purposes of development, testing, or further business analysis. Once this data is moved to personal data stores, it is typically less secure, because personal drives are less monitored and protected and are controlled by the private entity (the employee) rather than the security or dev teams. These personal drives may be susceptible to security issues arising from misconfiguration, user mistakes, or insufficient knowledge.
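
A minimal sketch of the first concern above: group scanned assets by a content fingerprint and flag any duplicate whose posture is weaker than its siblings'. The asset records and fields below are illustrative assumptions:

```python
from collections import defaultdict

# Each record is a scanned asset; "hash" is a content fingerprint.
assets = [
    {"id": "s3://prod/customers.parquet",     "hash": "ab12", "encrypted": True,  "public": False},
    {"id": "s3://scratch/customers-copy.csv", "hash": "ab12", "encrypted": False, "public": True},
]

groups = defaultdict(list)
for a in assets:
    groups[a["hash"]].append(a)

for content_hash, copies in groups.items():
    if len(copies) < 2:
        continue  # no duplicates, nothing to compare
    weak = [c["id"] for c in copies if not c["encrypted"] or c["public"]]
    if weak:
        print(f"posture drift for content {content_hash}: review {weak}")
```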

How Sentra’s DSPM Helps Navigate Data Movement Challenges

  1. Discover and accurately classify the most sensitive data and provide extensive context about it, for example:
  • Where it lives
  • Where it has been copied or moved to
  • Who has access to it
  2. Highlight misconfigurations by correlating similar data that has different security posture. This helps you pinpoint the issue and adjust it according to the right posture.
  3. Quickly identify compliance violations, such as GDPR - when European customer data moves outside of the allowed region, or when financial data moves outside a PCI compliant environment.
  4. Identify access changes, which helps you to understand the correct access profile by correlating similar data pieces that have different access profiles.

For example, the same data may be well protected in a specific environment and accessible by only two specific users. When that data moves to a development environment, it can suddenly be accessed by the whole data engineering team, which increases risk.

Leveraging Data Security Posture Management (DSPM) and Data Detection and Response (DDR) tools proves instrumental in addressing the complexities of data movement challenges. These tools play a crucial role in monitoring the flow of sensitive data, allowing for the swift remediation of exposure incidents and vulnerabilities in real-time. The intricacies of data movement, especially in hybrid and multi-cloud deployments, can be challenging, as public cloud providers often lack sufficient tooling to comprehend data flows across various services and unmanaged databases. 

Our innovative cloud DLP tooling takes the lead in this scenario, offering a unified approach by integrating static and dynamic monitoring through DSPM and DDR. This integration provides a comprehensive view of sensitive data within your cloud account, offering an updated inventory and mapping of data flows. Our agentless solution automatically detects new sensitive records, classifies them, and identifies relevant policies. In case of a policy violation, it promptly alerts your security team in real time, safeguarding your crucial data assets.

In addition to our robust data identification methods, we prioritize the implementation of access control measures. This involves establishing Role-based Access Control (RBAC) and Attribute-based Access Control (ABAC) policies, so that the right users have permissions at the right times.
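
As a hedged illustration of the ABAC idea, the sketch below permits access only when user, resource, and context attributes all satisfy a policy; the attribute names and rules are invented for the example:

```python
def abac_allows(user: dict, resource: dict, context: dict) -> bool:
    """Grant access only when user, resource, and context attributes line up."""
    return (
        user["department"] == resource["owning_department"]
        and resource["classification"] in user["clearances"]
        and context["hour"] in range(8, 20)  # business hours only
    )

finance_analyst = {"department": "finance", "clearances": {"internal", "confidential"}}
ledger = {"owning_department": "finance", "classification": "confidential"}

print(abac_allows(finance_analyst, ledger, {"hour": 10}))  # True
print(abac_allows(finance_analyst, ledger, {"hour": 23}))  # False
```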

Identifying Data Movement With Sentra

Sentra has developed different methods to identify data movements and similarities based on the content of two assets. Our advanced capabilities allow us to pinpoint fully duplicated data, identify similar data, and even uncover instances of partially duplicated data that may have been copied or moved across different locations. 

Moreover, we recognize that changes in access often accompany the relocation of assets between different locations. 

As part of Sentra’s Data Security Posture Management (DSPM) solution, we proactively manage and adapt access controls to accommodate these transitions, maintaining the integrity and security of the data throughout its lifecycle.

These are the three methods we leverage:

  1. Hash similarity - Using each asset's unique content hash to locate it across the different data stores of the customer environment.
  2. Schema similarity - Locating exact or similar schemas that indicate potentially similar data, then leveraging other metadata and statistical methods to simplify the data and find the necessary correlations.
  3. Entity matching similarity - Detecting when parts of files or tables are copied to another data asset, for example, an ETL that extracts only some columns from a table into a new table in a data warehouse (a toy sketch follows below).

Another example: if PII is found in a lower environment, Sentra can detect whether it is real or mock customer PII, based on whether the same PII was also found in the production environment.
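
Here is a toy sketch of the entity-matching idea (and of the real-versus-mock PII check): if a column's values in a lower environment are a subset of a known sensitive production column, the data was likely copied from production. All values below are made-up samples:

```python
# Known sensitive columns from a production table (toy samples).
prod_columns = {
    "email": {"a@x.com", "b@y.com", "c@z.com"},
    "ssn": {"111-22-3333", "222-33-4444"},
}

# A table discovered in a lower (e.g., development) environment.
candidate_columns = {"contact": {"a@x.com", "b@y.com"}}

for col, values in candidate_columns.items():
    for src_col, src_values in prod_columns.items():
        if values and values <= src_values:  # column values are a subset of prod
            print(f"column '{col}' appears copied from sensitive column '{src_col}'")
```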

PII found in a lower environment

Conclusion

Understanding and managing data sprawl are critical tasks in the dynamic business landscape. Monitoring data movement, access, and posture enables teams to comprehend the full impact of data sprawl, connecting the dots for security analysts in assessing true risks and threats.

Sentra addresses the challenge of data movement by utilizing advanced methods like hash, schema, and entity similarity to identify duplicate or similar data across different locations. Sentra's holistic Data Security Posture Management (DSPM) solution not only enhances data security but also contributes to a streamlined data management strategy. 

The identified challenges and Sentra's robust methods emphasize the importance of proactive data management and security in the dynamic digital landscape.

To learn more about how you can enhance your data security posture, schedule a demo with one of our experts.

Read More
David Stuart
March 11, 2024
4 Min Read
Data Loss Prevention

It's Time to Embrace Cloud DLP and DSPM

What’s the best way to prevent data exfiltration or exposure? In years past, the clear answer was often data loss prevention (DLP) tools. But today, the answer isn’t so clear — especially in light of the data democratization trend and for those who have adopted multi-cloud or cloud-first strategies. 

Data loss prevention (DLP) emerged in the early 2000s as a way to secure web traffic, which wasn’t encrypted at the time. Without encryption, anyone could tap into data in transit, creating risk for any data that left the safety of on-premise storage. As Cyber Security Review describes, “The main approach for DLP here was to ensure that any sensitive data or intellectual property never saw the outside web. The main techniques included (1) blocking any actions that copy or move data to unauthorized devices and (2) monitoring network traffic with basic keyword matching.”

Although DLP has evolved for securing endpoints, email and more, its core functionality has remained the same: gatekeeping data within a set perimeter. But, this approach simply doesn’t perform well in cloud environments, as the cloud doesn’t have a clear perimeter. Instead, today’s multi-cloud environment includes constantly changing data stores, infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and more.

And thanks to data democratization, people across an organization can access all of these areas and move, change, or copy data within seconds. Cloud applications do so as well—even faster.

Traditional DLP tools weren’t built for cloud-native environments and can cause significant challenges for today’s organizations. Data security teams need a new approach, purpose-built for the realities of the cloud, digital transformation and today’s accelerated pace of innovation.

Why Traditional DLP Isn’t Ideal for the Cloud

Traditional DLPs are often unwieldy for the engineers who must work with the solution and ineffective for the leaders who want to see positive results and business continuity from the tool. There are a few reasons why this is the case:

1. Traditional DLP tools often trigger false alarms.

Traditional DLPs are prone to false positives. Because they are meant to detect any sensitive data that leaves a set perimeter, these solutions tend to flag normal cloud activities as security risks. For instance, traditional DLP is notorious for erroneously blocking apps and services in IaaS/PaaS environments. These “false positives” disrupt business continuity and innovation, which is frustrating for users who want to use valuable cloud data in their daily work.

Not only do traditional DLPs block the wrong signals, but they also overlook the right ones, such as suspicious activities happening over cloud-based applications like Slack, Google Drive or generative AI/LLM apps. Plus, traditional DLP doesn’t follow data as users move, change or copy it, meaning it can easily miss shadow data.

2. Traditional DLP tools cause alert fatigue.

In addition, these tools lack detailed data context, meaning that they can’t triage alerts based on severity. Combine this factor with the high number of false positives, and teams end up with an overwhelming list of alerts that they must sort manually. This reality leads to alert fatigue and can cause teams to overlook legitimate security issues.

3. Traditional DLP tools rely on lots of manual intervention.

Traditional DLP deployment and maintenance take up lots of time and resources for a cloud-based or hybrid organization. For instance, teams must often install several legacy agents and proxies across the environment to make the solution work accurately.

Plus, these legacy tools rely on clear-cut data patterns and keywords to uncover risk. In cloud environments, these patterns are often hidden or nonexistent because the data has been disguised or transformed along the way. This means that teams must manually tune their DLP solution to align with what their sensitive cloud data actually looks like. In many cases, this manual intervention is very difficult—if not impossible—since many cloud pipelines rely on ETL data, which isn’t easy to manually alter or inspect.

Plus, today’s organizations use vast amounts of unstructured data within cloud file shares such as SharePoint. They must parse through tens or even hundreds of petabytes of this unstructured data, making it challenging to find hidden sensitive data. Traditional DLP solutions lack the technology that would make this process far easier, such as AI/ML analysis.

Cloud DLP: A Cloud-Native Approach to Data Loss Prevention

Because the cloud is so different from traditional, on-premise environments, today’s cloud-based and hybrid organizations need a new solution. This is where a cloud DLP solution comes into the picture. We are seeing lots of cloud DLP tools hit the market, including solutions that fall into two main categories:

SaaS DLP products that leverage APIs to provide access control. While these products help to protect from loss within some SaaS applications, they are limited in scope, only covering a small percentage of the cloud services that a typical cloud-native organization uses. These limitations mean that a SaaS DLP product can’t provide a truly comprehensive view of all cloud data or trace data lineage if it’s not based in the cloud. 

IaaS + PaaS DLP products that focus on scanning and classifying data. Some of these tools are simply reporting tools that uncover data but don’t take action to remediate any issues. This still leaves extra manual work for security teams. Other IaaS + PaaS DLP offerings include automated remediation capabilities but can cause business interruptions if the automation occurs in the wrong situation.  

To directly address the limitations inherent in traditional DLPs and avoid these pitfalls, next-generation cloud DLPs should include the following:

  • Scalability in complex, multi-cloud environments
  • Automated prioritization for detected risks based on rich data context
  • Auto-detection and remediation capabilities that use deep context to correct configuration issues, creating efficiency without blocking everyday activities
  • Integration and workflows that are compatible with your existing environments
  • Straightforward, cloud-native agentless deployment without extensive tuning or maintenance

Attribute | Cloud DLP | DSPM | DDR
Security Use Case | Data Leakage Prevention | Data Posture Improvement, Compliance | Threat Detection and Response
Environments | SaaS, Cloud Storage, Apps | Public Cloud, SaaS, and On-Premises | Public Cloud, SaaS, Networks
Risk Prioritization | Limited: based only on predefined policies, not on discovered data or data context | Analyzes Data Context, Access Controls, and Vulnerabilities | Threat Activity Context such as anomalous traffic, volume, access
Remediation | Block or Redact Data Transfers, Encryption, Alert | Alerts, IR/Tool Integration & Workflow Initiation | Alerts, Revoke Users/Access, Isolate Data Breach

Further Enhancing Cloud DLP by Integrating DSPM & DDR

While Cloud Data Loss Prevention (DLP) helps to secure data in multi-cloud environments by preventing loss, DSPM and DDR capabilities can complete the picture. These technologies add contextual details, such as user behavior, risk scoring and real-time activity monitoring, to enhance the accuracy and actionability of data threat and loss mitigation. 

Data Security Posture Management (DSPM) enforces good data hygiene no matter where the data resides. It takes a proactive approach, significantly reducing data exposure by preventing employees from taking risky actions in the first place. Data Detection and Response (DDR) alerts teams to the early warning signs of a breach, including suspicious activities such as data access by an unknown IP address. By bringing together Cloud DLP, DSPM and DDR, your organization can establish holistic data protection with both proactive and reactive controls. There is already much overlap in these technologies. As the market evolves, it is likely they will continue to combine into holistic cloud-native data security platforms.  

Sentra’s data security platform brings a cloud-native approach to DLP by automatically detecting and remediating data risks at scale. Built for complex multi-cloud and on-premises environments, Sentra empowers you with a unified platform to prioritize all of your most critical data risks in near real-time.

Request a demo to learn more about our cloud DLP, DSPM and DDR offerings.

Read More
Ron Reiter
March 5, 2024
3 Min Read
AI and ML

New AI-Assistant, Sentra Jagger, Is a Game Changer for DSPM and DDR

Evolution of Large Language Models (LLMs)

In the early 2000s, search engines such as Google and Yahoo gained widespread popularity. Users found them to be convenient tools, effortlessly bringing a wealth of information to their fingertips. Fast forward to the 2020s, and Large Language Models (LLMs) are pushing productivity to the next level. LLMs skip the stage of learning, seamlessly bridging the gap between technology and the user.

LLMs create a natural interface between the user and the platform. By interpreting natural language queries, they effortlessly translate human requests into software actions and technical operations. This simplifies technology to the point of being nearly invisible. Users no longer need to understand the technology itself, or how to get certain data — they can just input any query, and the LLM will handle it.
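
The pattern is easy to sketch: the LLM's job is to turn a plain-language question into a structured query the platform can execute. In the minimal example below, call_llm is a placeholder for any chat-completion API, and the returned JSON shape is an assumption for illustration:

```python
import json

SYSTEM_PROMPT = (
    "Translate the user's data security question into JSON with keys "
    "'action' and 'filters'. Respond with JSON only."
)

def call_llm(system: str, user: str) -> str:
    # Placeholder: a real implementation would call a hosted model here.
    return '{"action": "list_assets", "filters": {"label": "PHI", "public": true}}'

def answer(question: str) -> dict:
    """Turn a natural-language question into a structured, executable query."""
    return json.loads(call_llm(SYSTEM_PROMPT, question))

print(answer("Which PHI files are exposed to the internet?"))
# {'action': 'list_assets', 'filters': {'label': 'PHI', 'public': True}}
```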

Revolutionizing Cloud Data Security With Sentra Jagger

Sentra Jagger is an industry-first AI assistant for cloud data security, built on Large Language Model (LLM) technology.

It enables users to quickly analyze and respond to security threats, cutting task times by up to 80%. Jagger answers data security questions, customizes and enforces policies, adjusts settings, creates new data classifiers, and generates compliance reports. By reducing the time needed to investigate and address security threats, Sentra Jagger enhances operational efficiency and reinforces security measures.

Empowering security teams, users can access insights and recommendations on specific security actions using an interactive, user-friendly interface. Customizable dashboards, tailored to user roles and preferences, enhance visibility into an organization's data. Users can directly inquire about findings, eliminating the need to navigate through complicated portals or ancillary information.

Benefits of Sentra Jagger

  1. Accessible Security Insights: Simplified interpretation of complex security queries, offering clear and concise explanations in plain language to empower users across different levels of expertise. This helps users make informed decisions swiftly and confidently take appropriate actions.
  2. Enhanced Incident Response: Clear, guided steps to identify and fix issues, making the process faster, minimizing downtime and damage, and restoring normal operations promptly.
  3. Unified Security Management: Integration with existing tools, creating a unified security management experience and providing a complete view of the organization's data security posture. Jagger also speeds solution customization and tuning.

Why Sentra Jagger Is Changing the Game for DSPM and DDR

Sentra Jagger is an essential tool for simplifying the complexities of both Data Security Posture Management (DSPM) and Data Detection and Response (DDR) functions. DSPM discovers and accurately classifies your sensitive data anywhere in the cloud environment, understands who can access this data, and continuously assesses its vulnerability to security threats and risk of regulatory non-compliance. DDR focuses on swiftly identifying and responding to security incidents and emerging threats, ensuring that the organization’s data remains secure. With their ability to interpret natural language, LLMs, such as Sentra Jagger, serve as transformative agents in bridging the comprehension gap between cybersecurity professionals and the intricate worlds of DSPM and DDR.

Data Security Posture Management (DSPM)

When it comes to data security posture management (DSPM), Sentra Jagger empowers users to articulate security-related queries in plain language, seeking insights into cybersecurity strategies, vulnerability assessments, and proactive threat management.

Meet Sentra Jagger, your new data security assistant

The language models not only comprehend the linguistic nuances but also translate these queries into actionable insights, making data security more accessible to a broader audience. This democratization of security knowledge is a pivotal step forward, enabling organizations to empower diverse teams (including privacy, governance, and compliance roles) to actively engage in bolstering their data security posture without requiring specialized cybersecurity training.

Data Detection and Response (DDR)

In the realm of data detection and response (DDR), Sentra Jagger contributes to breaking down technical barriers by allowing users to interact with the platform to seek information on DDR configurations, real-time threat detection, and response strategies. Our AI-powered assistant transforms DDR-related technical discussions into accessible conversations, empowering users to understand and implement effective threat protection measures without grappling with the intricacies of data detection and response technologies.

The integration of LLMs into the realms of DSPM and DDR marks a paradigm shift in how users will interact with and comprehend complex cybersecurity concepts. Their role as facilitators of knowledge dissemination removes traditional barriers, fostering widespread engagement with advanced security practices. 

Sentra Jagger is a game changer by making advanced technological knowledge more inclusive, allowing organizations and individuals to fortify their cybersecurity practices with unprecedented ease. It helps security teams better communicate with and integrate within the rest of the business. As AI-powered assistants continue to evolve, so will their impact to reshape the accessibility and comprehension of intricate technological domains.

How CISOs Can Leverage Sentra Jagger 

Consider a Chief Information Security Officer (CISO) in charge of cybersecurity at a healthcare company. To assess the security policies governing sensitive data in their environment, the CISO leverages Sentra’s Jagger AI assistant. If the CISO, let's call her Sara, needs to review the Sentra policy page, instead of manually navigating, she can simply query Jagger, asking, "What policies are defined in my environment?" In response, Jagger provides a comprehensive list of policies, including their names, descriptions, active issues, creation dates, and status (enabled or disabled).

Sara can then add a custom policy related to GDPR, by simply describing it. For example, "add a policy that tracks European customer information moving outside of Europe". Sentra Jagger will translate the request using Natural Language Processing (NLP) into a Sentra policy and inform Sara about potential non-compliant data movement based on the recently added policy.

Upon thorough review, Sara identifies a need for a new policy: "Create a policy that monitors instances where credit card information is discovered in a datastore without audit logs enabled." Sentra Jagger initiates the process of adding this policy by prompting Sara for additional details and confirmation. 

The LLM-assistant, Sentra Jagger, communicates, "Hi Sara, it seems like a valuable policy to add. Credit card information should never be stored in a datastore without audit logs enabled. To ensure the policy aligns with your requirements, I need more information. Can you specify the severity of alerts you want to raise and any compliance standards associated with this policy?" Sara responds, stating, "I want alerts to be raised as high severity, and I want the AWS CIS benchmark to be associated with it."

Having captured all the necessary information, Sentra Jagger compiles a summary of the proposed policy and sends it to Sara for her review and confirmation. After Sara confirms the details, Sentra Jagger seamlessly incorporates the new policy into the system. This streamlined interaction enhances the efficiency of policy management for CISOs, enabling them to easily navigate, customize, and implement security measures in their organizations.
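
Sentra has not published Jagger’s internals, but the general pattern behind this flow is easy to sketch. Below is a minimal, hypothetical Python illustration of turning a natural-language request into a validated policy object; the `call_llm` stub, the schema, and the field names are all invented for the example.

```python
import json

# Hypothetical policy schema; not Sentra's actual format.
POLICY_SCHEMA = {
    "name": str,
    "description": str,
    "severity": str,                 # e.g., "low" | "medium" | "high"
    "compliance_frameworks": list,   # e.g., ["AWS CIS"]
}

def call_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM provider you use."""
    raise NotImplementedError("wire up your LLM client here")

def create_policy_from_text(request: str) -> dict:
    """Ask the LLM for a JSON policy, then validate it before accepting."""
    prompt = (
        "Translate this data security request into a JSON object with the "
        f"fields {list(POLICY_SCHEMA)}: {request!r}"
    )
    policy = json.loads(call_llm(prompt))
    for field, expected_type in POLICY_SCHEMA.items():
        if not isinstance(policy.get(field), expected_type):
            raise ValueError(f"LLM output is missing or mistyped: {field!r}")
    return policy

# Mirroring Sara's request above:
# create_policy_from_text(
#     "Monitor credit card data in datastores without audit logs enabled; "
#     "raise high-severity alerts; associate the AWS CIS benchmark."
# )
```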

Creating a policy with Sentra Jagger

Conclusion 

The advent of Large Language Models (LLMs) has changed the way we interact with and understand technology. Building on the legacy of search engines, LLMs eliminate the learning curve, seamlessly translating natural language queries into software and technical actions. This innovation removes friction between users and technology, making intricate systems nearly invisible to the end user.

For Chief Information Security Officers (CISOs) and ITSecOps, LLMs offer a game-changing approach to cybersecurity. By interpreting natural language queries, Sentra Jagger bridges the comprehension gap between cybersecurity professionals and the intricate worlds of DSPM and DDR. This democratization of security knowledge allows organizations to empower a wider audience to actively engage in bolstering their data security posture and responding to security incidents, revolutionizing the cybersecurity landscape.

To learn more about Sentra, schedule a demo with one of our experts.

Read More
Yoav Regev
February 20, 2024
3
Min Read
Data Security

Emerging Data Security Challenges In the LLM Era

In April of 2023, it was discovered that several Samsung employees reportedly leaked sensitive data via OpenAI’s chatbot ChatGPT. The data leak included the source code of software responsible for measuring semiconductor equipment. This leak emphasizes the importance of taking preventive measures against future breaches associated with Large Language Models (LLMs).

LLMs generate responses based on the data they continuously receive, which can unintentionally expose confidential information. Even though OpenAI specifically tells users not to share “any sensitive information in your conversations”, ChatGPT and other LLMs are simply too useful to ban for security reasons. You wouldn’t ban an employee from using Google or an engineer from GitHub. Business productivity (almost) always comes first.

This means that the risks of spilling company secrets and sharing sensitive data with LLMs are not going anywhere. And you can be sure that more generative AI tools will be introduced to the workplace in the near future.

“Banning chatbots one by one will start feeling ‘like playing whack-a-mole’ really soon.”

  • Joe Payne, CEO of insider risk software provider Code42.


In many ways, the effect of LLMs on data security is similar to the changes we saw 10-15 years ago when companies started moving their data to the cloud.

Broadly speaking, we can say there have been three ‘eras’ of data and data security…

The Era of On-Prem Data

The first was the era of on-prem data. For most of the history of computing, enterprises stored their data in on-prem data centers, and secured access to sensitive data by fortifying the perimeter. The data also wasn’t going anywhere on its own. It lived on company servers, was managed by company IT teams, and they controlled who accessed anything that lived on those systems. 

The Era of the Cloud

Then came the next era: the cloud. Suddenly, corporate data wasn’t static anymore. Data was free and could be shared anywhere. Engineers, BI tools, and data scientists were accessing and moving this free-flowing data to drive the business forward, and how a company leveraged its data became integral to its success. While the business benefits were clear, this created a number of concerns, particularly around privacy, compliance, and security. Data needed to move quickly, securely, and with the proper security posture at all times.

The challenge was that now security teams were struggling with basic questions about the data, like:

  • Where is my data? 
  • Who has access to it? 
  • How can I comply with regulations? 

It was during this era that Data Security Posture Management (DSPM) emerged as a solution to this problem - by ensuring that data always had proper access controls wherever it traveled, this solution promised to address security and compliance issues for enterprises with fast-moving cloud data.

And while we were answering these questions, a new era emerged, with a host of new challenges. 

The Era of AI

The recent rise of Large Language Models (LLMs) as indispensable business tools has introduced a new dimension to data security challenges. It has significantly amplified the existing issues of the cloud era, presenting an unparalleled and rapidly growing problem. While this development has accelerated business operations to new heights, it has also taken cloud risk to another level.

While securing data in the cloud was a challenge, at least you controlled (somehow) your cloud. You could decide who could access it, and when. You could decide what data to keep and what to remove. That has all changed as LLMs and AI play a larger role in company operations. 

Globally, and specifically in the US, organizations are facing the challenge of managing these new AI technology initiatives efficiently while maintaining speed and ensuring regulatory compliance. CEOs and boards are increasingly urging companies to leverage LLMs and AI, sometimes even treating them as databases, yet there is limited understanding of the associated risks and real difficulty in controlling the data fed into these models. The goal is to mitigate these risks before they turn into incidents.


LLMs are a black box. You don't know what data your engineers are feeding into them, and you can’t be sure that users won’t be able to manipulate your LLMs into disclosing sensitive information. For example, an engineer training a model might accidentally use real customer data that now exists somewhere in the LLM and might be inadvertently disclosed. Or an LLM-powered chatbot might have a vulnerability that leads it to respond to an inquiry with sensitive company data. This is the challenge facing the data security team in this new era.
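
One common mitigation for the chatbot scenario is an output guardrail that screens model responses before they reach users. Here is a minimal, hypothetical Python sketch of the idea; the patterns are illustrative, and real deployments pair redaction with proper classifiers.

```python
import re

# Illustrative patterns only; real guardrails combine classifiers,
# context, and allowlists rather than bare regexes.
REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(response: str) -> str:
    """Mask sensitive patterns in an LLM response before users see it."""
    for label, pattern in REDACTIONS.items():
        response = pattern.sub(f"[{label} redacted]", response)
    return response

# redact("Reach jane@example.com about card 4111 1111 1111 1111")
# -> "Reach [email redacted] about card [credit_card redacted]"
```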

How can you know what the LLM has access to, how it’s using that data, and who it’s sharing that data with?

Solving The Challenges of the Cloud and AI Eras at the Same Time

Adding to the complexity for security and compliance professionals is that we’re still dealing with the challenges from the cloud era. Fortunately, Data Security Posture Management (DSPM) has adapted to solve these eras’ primary data security headaches.

For data in the cloud, DSPM can discover your sensitive data anywhere in the cloud environment, understand who can access this data, and assess its vulnerability to security threats and risk of regulatory non-compliance. Organizations can harness advanced technologies while ensuring privacy and compliance are seamlessly integrated into their processes. Further, DSPM tackles issues such as finding shadow data, identifying sensitive information with inadequate security postures, discovering duplicate data, and ensuring proper access control.

For the LLM data challenges, DSPM can automatically secure LLM training data, facilitating swift AI application development and letting the business run as smoothly as possible.

Any DSPM solution that integrates with platforms like AWS SageMaker and GCP Vertex AI, as well as other AI IDEs, can ensure secure data handling during ML training. Full integration with capabilities like Data Access Governance (DAG) and Data Detection and Response (DDR) provides a robust approach to data security and privacy.
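
To make the training-data side concrete, here is a minimal Python sketch of a pre-training check that scans a CSV for likely PII before it reaches an ML pipeline. The regexes and file name are illustrative assumptions, not production classifiers.

```python
import csv
import re

# Illustrative detectors; production classification needs far more context.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_pii_rows(csv_path: str) -> list:
    """Return (row_number, pii_type) pairs for rows that look sensitive."""
    findings = []
    with open(csv_path, newline="") as f:
        for row_num, row in enumerate(csv.reader(f), start=1):
            for cell in row:
                for pii_type, pattern in PII_PATTERNS.items():
                    if pattern.search(cell):
                        findings.append((row_num, pii_type))
    return findings

# Gate the training job on a clean scan:
# assert not flag_pii_rows("training_data.csv"), "PII found; do not train"
```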

AI has the remarkable capacity to reshape our world, yet this must be balanced with a firm dedication to maintaining data integrity and privacy. Ensuring data integrity and privacy in LLMs is crucial for the creation of ethical and responsible AI applications. By utilizing DSPM, organizations are equipped to apply best practices in data protection, thereby reducing the dangers of data breaches, unauthorized access, and bias. This approach is key to fostering a safe and ethical digital environment as we advance in the LLM era.

To learn more about DSPM, schedule a demo with one of our experts.

Read More
David Stuart
February 5, 2024
3
Min Read
Data Security

Solving M&A Integration Challenges with Sentra's DSPM

Mergers and acquisitions (M&A) integrations bring forth various risks that can significantly impact the success of the combined entity. The complexity involved in merging diverse systems, technologies, and operational processes may result in IT integration challenges, disrupting day-to-day operations and impeding synergy realization. Beyond these challenges, there are additional risks such as regulatory compliance issues, customer dissatisfaction due to service disruptions, and strategic misalignment that must be adeptly navigated during the M&A integration process. Effective risk mitigation requires proactive planning, clear communication, and meticulous execution to ensure a smooth transition for both organizations involved. Further complicating these challenges are the data security concerns inherent in M&A integrations.

Data Security Challenges in M&A Integrations

As organizations merge, they combine vast amounts of sensitive information, such as customer data, proprietary technology, and internal processes. The integration process itself can introduce vulnerabilities as systems are connected and data is migrated, potentially exposing sensitive information to cyber threats. Neglecting cybersecurity measures during M&A integrations may lead to incurring unnecessary risks, compliance violations and fines, or worse—data breaches, jeopardizing the confidentiality, integrity, and availability of critical information.

This can affect millions of individuals, and in certain situations even a billion… One notable instance of a data breach of this scale came to light during Verizon's acquisition of Yahoo, completed in 2017. During the due diligence phase, Yahoo disclosed two significant data breaches that it had initially tried to conceal: hackers had compromised the personal information of 500 million Yahoo users in one breach, and another breach affected one billion accounts. Despite the breaches, the acquisition proceeded at a reduced price of nearly $4.5 billion, with Verizon negotiating a $350 million reduction in the transaction value.

Navigating the M&A integration process involves addressing several critical challenges, such as:

  • Hidden vulnerabilities: Undetected breaches in acquired companies become sudden liabilities for the merged entity.
  • Integration chaos: Merging disparate data systems creates confusion, increasing access risks and potential leaks.
  • Compliance minefield: Navigating a web of new regulations across various industries and territories raises compliance burdens.
  • Insider threats: Disgruntled employees in both companies pose increased risks during integration and restructuring.

In order to achieve a seamless transition and safeguard sensitive data, it is crucial to conduct thorough due diligence on the security measures of both merging entities. It also requires the implementation of robust cybersecurity protocols and clear communication to all stakeholders about the steps being taken to protect sensitive information.

Failure to address data security challenges can result in not only financial losses but also reputational damage, eroding trust among customers and stakeholders alike. Therefore, a comprehensive approach to data security is essential to navigate M&A integrations successfully. Data Security Posture Management (DSPM) is an essential tool for easily and quickly assessing the risk of data exposure and related compliance adherence of candidate acquisition and integration targets.

Rapid Assessment of Data Risk

DSPM provides a rapid and straightforward assessment of data exposure risks, ensuring compliance with standards throughout the acquisition and integration efforts. Its unique capabilities include unparalleled detection of both known and unknown shadow data repositories, exceptional granular data classification, and posture and risk assessment for data, regardless of its location.

Security posture score

Cloud-native Data Security Posture Management (DSPM) requires no connectors, agents, or credentials for operation. This simplicity makes it a valuable asset for organizations seeking a comprehensive and efficient solution to enhance their data security measures throughout the intricate process of M&A integrations. Setup is quick and easy, and no data ever leaves the target environment, so there is no impact to operations or increased security risk.


DSPM is agnostic to infrastructure, so it works across the entire cloud estate regardless of the host public cloud provider; it supports all leading Cloud Service Providers (CSPs). It is equally agnostic to the underlying data structure, working for structured as well as unstructured data. Assessment time is short, generally within hours to a few days at most, and takes place autonomously.

Risk Sensitivity Score

Once the assessment is complete, a risk sensitivity score is generated for each discovered data store (for example, S3, RDS, Snowflake, OneDrive, etc.) and the underlying data assets contained within. These scores can be easily compared with those of other portfolio members (as long as they also have actively configured accounts) to determine the level of risk a new portfolio member brings to the organization. This is done granularly and can be filtered by account type (AWS, GCP, Azure, etc.), by environment (development, production, etc.), by region, or by custom-defined criteria.
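
Sentra does not publish its scoring formula, but the underlying idea can be sketched in a few lines: combine data sensitivity, exposure, and volume into one comparable number. The weights, categories, and scaling below are invented purely for illustration.

```python
# Illustrative weights; not Sentra's actual scoring model.
SENSITIVITY_WEIGHTS = {"PII": 3, "PCI": 4, "PHI": 4, "internal": 1}
EXPOSURE_WEIGHTS = {"private": 1, "org-wide": 2, "public": 5}

def risk_sensitivity_score(data_classes, exposure, record_count):
    """Toy score: sensitivity x exposure, scaled by data volume."""
    sensitivity = sum(SENSITIVITY_WEIGHTS.get(c, 0) for c in data_classes)
    volume_factor = 1 + min(record_count / 1_000_000, 10)  # cap volume term
    return sensitivity * EXPOSURE_WEIGHTS[exposure] * volume_factor

# Compare a candidate acquisition's stores against your own portfolio:
# risk_sensitivity_score({"PII", "PCI"}, "public", 5_000_000)  # high
# risk_sensitivity_score({"internal"}, "private", 10_000)      # low
```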

Adherence to Compliance Frameworks

Ensuring adherence to compliance frameworks in the context of M&A integration is a critical aspect of assessing risk associated with potential integration targets. 

It involves a thorough examination of an organization's compliance with industry data security standards and regulations, as well as the adoption of best practices. Sentra's Data Security Posture Management (DSPM) offers a comprehensive range of frameworks for independent assessment of compliance levels, while also providing alerts for potential policy violations. This proactive approach aids in a more accurate evaluation of the risk of audit failure and potential regulatory fines. Maintaining compliance with global regulations and internal policies for cloud data is essential. Examples include General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), Health Insurance Portability and Accountability Act (HIPAA), and Payment Card Industry Data Security Standard (PCI-DSS). 

In the era of multi-cloud operations, sensitive cloud data is in constant motion, leading to various challenges such as:

  • Unknown data risks due to lack of visibility and inaccurate data classification.
  • Undetected data movement across regions.
  • Unnoticed changes to access permissions and user activity.
  • Misconfigurations of data security posture resulting in avoidable violations. 

The continuous movement and changes in data activity make it challenging to achieve the necessary visibility and control to comply with global regulations. Your data security posture management needs the ability to keep pace by being fully automated and continuously on guard.

Conclusion

To conclude, successful mergers and acquisitions (M&A) integrations demand a meticulous strategy to address data security challenges. In the integration process, organizations merge vast amounts of sensitive information, introducing vulnerabilities as systems are connected and data is migrated, potentially exposing this sensitive information to cyber threats. 

Data Security Posture Management (DSPM), stands out for its simplicity and rapid risk assessment capabilities. Its agnostic nature, quick setup, and autonomous assessment make it a valuable asset during the intricate M&A process.

The Risk Sensitivity Score provided by Sentra's DSPM solution enables granular evaluation of risks associated with each data store, facilitating informed decision-making. Adherence to compliance frameworks is crucial, and Sentra's DSPM plays a vital role by offering a comprehensive range of frameworks for independent assessment, ensuring compliance with industry standards.

In the dynamic multi-cloud landscape, where sensitive data is in constant motion, DSPM becomes indispensable. It addresses challenges such as unknown data risks, undetected data movement, and misconfigurations, providing the needed visibility and control for compliance with global regulations. In essence, a proactive approach, coupled with tools like DSPM, is essential for secure M&A integrations. Failure to address data security challenges not only poses financial threats but also jeopardizes reputational integrity. Prioritizing data security throughout the integration journey is crucial for success.

To learn more about DSPM, schedule a demo with one of our experts.

Read More
David Trigano
January 21, 2024
4
Min Read
Data Security

Prevent Sensitive Data Breaches With Data Detection & Response (DDR)

Amidst the dynamic cybersecurity landscape, the need for advanced Threat Detection and Incident Response (TDIR) solutions has never been more crucial. Traditional tools often address the complexities of security without data awareness, a deficiency that can result in signal fatigue and increased time to investigate.

Data Detection and Response (DDR) distinguishes itself by focusing on data-first threats, such as: compromise or manipulation of sensitive databases, unauthorized disclosure of sensitive information, intellectual property theft, and many other malicious activities targeting sensitive information. Finally, the obligation to inform and potentially compensate affected parties in compliance with regulatory requirements strengthens the need to enrich TDIR with a data-focused technology.

In this blog, we will start by explaining the difference between data detection and response (DDR) and cloud detection and response (CDR), and how data detection and response (DDR) fits into a cloud data security platform. We will then decode the distinctions between DDR and other TDIR solutions like Endpoint Detection and Response (EDR) and Extended Detection and Response (XDR). Lastly, we will explore why Sentra, with its DDR approach, emerges as a comprehensive and efficient data security solution.

Challenges in Traditional Approaches

Classifying data accurately poses a significant challenge to most traditional cybersecurity approaches. Behavioral analysis, while effective, often overlooks the critical aspect of data type, leading to potential blind spots and excessive false positives. Real-time prevention measures also face limitations: they can only protect the platforms they have visibility into, often restricting them to known and managed infrastructure and leaving organizations vulnerable to sophisticated cyber threats that target the public cloud.

Differences Between Data Detection and Response (DDR) and Cloud Detection and Response (CDR)

Cloud detection and response (CDR) solutions focus on overseeing and safeguarding cloud infrastructure, while data detection and response (DDR) specialize in the surveillance and protection of data. DDR plays a crucial role in identifying potential threats to sensitive data, irrespective of its location or format, providing an essential layer of security that goes beyond the capabilities of solutions focusing solely on infrastructure. Additionally, DDR empowers organizations to concentrate on detecting and addressing potential risks to their most sensitive data, reducing noise, cutting costs, and preventing alert fatigue.

When incorporating DDR into a cloud data security platform, organizations should see it as a crucial part of a strategy that encompasses technologies like data security posture management (DSPM), data access governance, and compliance management. This integration enables comprehensive security measures throughout the data lifecycle, enhancing overall cloud data security.

Why do I need a DDR if I’m already using a CDR product?

Data Detection and Response (DDR) is focused on monitoring data access activities that are performed by users and applications, while CDR is focused on infrastructure resources, such as their creation and configuration changes. DDR and CDR serve as detection and response tools, yet they offer distinct sets of threat detection capabilities essential for organizations aiming to prevent cloud data breaches and ransomware attacks.

Some examples where DDR can identify data-centric threats that might go unnoticed by CDR:

  1. Users who download sensitive data types that they don’t usually access.
  2. A ransomware attack in which large amounts of business-critical data are encrypted or deleted.
  3. Users or applications who gain access to sensitive data via a privilege escalation. 
  4. Tampering or poisoning of a Large Language Model (LLM) training dataset by a 3rd party application.
  5. Supply chain attack detection when a compromised third party app is exfiltrating sensitive data from your cloud environment.
  6. Credentials extraction of high-impact keys that have access to sensitive data.

Lastly, DDR allows security operations center (SOC) teams to focus on what matters most, attacks on their sensitive data, thus reducing noise and saving time. While CDR detects threats such as impossible travel or brute-force login attempts on any cloud resource, DDR flags such threats only when the target cloud resources contain sensitive data.
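
The first example in the list above, a user touching sensitive data classes they have never accessed, reduces to a simple baseline comparison. The following toy Python sketch illustrates the idea; it is not Sentra’s detection logic, and the class labels are assumed classification output.

```python
from collections import defaultdict

SENSITIVE_CLASSES = {"PII", "PCI", "PHI"}  # assumed classification output

# Baseline: which sensitive data classes each identity normally touches.
baseline = defaultdict(set)

def record_access(user: str, data_class: str) -> None:
    """Feed historical access events in to build the per-user baseline."""
    baseline[user].add(data_class)

def is_anomalous(user: str, data_class: str) -> bool:
    """Flag access to a sensitive class this user has never touched."""
    return data_class in SENSITIVE_CLASSES and data_class not in baseline[user]

# record_access("alice", "PII")
# is_anomalous("alice", "PCI")  # True: first-ever PCI access by alice
```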

Threat Detection and Incident Response (TDIR) Solutions

Endpoint Detection and Response (EDR)

In the ever-evolving landscape of cybersecurity, Endpoint Detection and Response (EDR) plays a pivotal role in safeguarding the digital perimeters of organizations. Focused on monitoring and responding to suspicious activities at the endpoint level, EDR solutions are crucial for identifying and neutralizing threats before they escalate. Armed with advanced analytics and machine learning algorithms, EDR empowers technical teams to detect anomalous behavior, conduct thorough investigations, and orchestrate rapid responses to potential security incidents.

Extended Detection and Response (XDR)

Extended Detection and Response (XDR) is a solution designed to fortify organizations against sophisticated threats and extend protection beyond EDR. XDR seamlessly integrates threat intelligence, endpoint detection, and incident response across multiple security layers, offering a unified defense strategy. By aggregating and correlating data from various sources such as servers, applications, and other infrastructure, XDR provides unparalleled visibility into potential threats, enabling rapid detection and response. Its proactive approach enhances incident investigation and remediation, ultimately minimizing the impact of cyber threats across an organization's IT estate.

Enter DDR: Revolutionizing Data Security

Data Detection and Response (DDR) brings real-time threat detection to complement data posture controls, hence combining with Data Security Posture Management (DSPM) to address these longstanding challenges. Sentra, a leading player in this domain, ensures real-time data protection across various cloud environments, offering a comprehensive solution to safeguard data wherever it resides. DDR provides a layer of real-time threat detection that is agnostic to infrastructure and works well in multi-cloud environments - it works no matter where data travels.

DDR provides rich, near real-time context to complement DSPM. Sentra’s DDR is not dependent on scanning your data. Instead, it continually monitors log activity (e.g., AWS CloudTrail events) and can alert on any suspicious or unusual activity, such as exfiltration or unusual access, whether from a malicious insider or outsider or simply unintended actions by an authorized user or a supply chain partner. Combined with DSPM, DDR provides enhanced context regarding data usage and related exposure. Sentra can help an organization focus monitoring efforts on areas of greatest risk and reduce the ‘noise’ (false positives or inactionable alarms) generated by less contextually aware activity monitors.
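
CloudTrail delivers logs to S3 as gzipped JSON files containing a top-level `Records` array, so the kind of check a DDR engine runs can be sketched directly against those files. In this minimal example, the sensitive bucket inventory and trusted networks are assumed inputs that a DSPM would normally supply.

```python
import gzip
import json
from ipaddress import ip_address, ip_network

SENSITIVE_BUCKETS = {"prod-customer-data"}     # assumed DSPM inventory
TRUSTED_NETWORKS = [ip_network("10.0.0.0/8")]  # assumed allowlist

def suspicious_events(log_path: str) -> list:
    """Flag S3 object reads of sensitive buckets from untrusted IPs."""
    with gzip.open(log_path, "rt") as f:
        records = json.load(f)["Records"]
    alerts = []
    for event in records:
        bucket = (event.get("requestParameters") or {}).get("bucketName")
        if event.get("eventName") != "GetObject" or bucket not in SENSITIVE_BUCKETS:
            continue
        try:
            ip = ip_address(event.get("sourceIPAddress", ""))
        except ValueError:
            continue  # AWS service principals appear as hostnames
        if not any(ip in net for net in TRUSTED_NETWORKS):
            alerts.append(event)
    return alerts
```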

Proactive and Reactive Data Security with Sentra's DSPM and DDR

Sentra takes a dual-pronged approach, combining proactive and reactive controls to fortify data security at every stage of a potential cyberattack:

  • Weakening Defenses Detection: Continuously monitor for unauthorized changes to data security posture, identifying escalated access privileges or changes in encryption levels.
  • Suspicious Access Detection: Instant alerts are triggered when a third party or insider accesses sensitive information, enabling swift action to prevent potential malicious activities.
  • Reconnaissance: Detect an early stage of the attack when an attacker moves sensitive data across and within cloud networks in order to prepare for the data exfiltration stage.
  • Data Loss and Ransomware Prevention: Real-time monitoring and alerts for accidental or unauthorized data movement, coupled with the enforcement of least privilege data access, prevent potential breaches.
  • Data Exfiltration Detection: Sentra detects anomalous sensitive data movement in near real-time, providing quick notification and remediation before significant damages occur.
  • Breach Recovery Acceleration: In the unfortunate event of a breach, Sentra provides guidance and contextual information, streamlining post-incident analysis and remediation.

Seamless Integration for Enhanced Efficiency

Sentra provides seamless integration into your security workflow. With over 20 pre-built or custom integrations, Sentra ensures that alert context is directly fed to the appropriate teams, expediting issue resolution. This integrated approach enables organizations to respond to potential threats with unmatched speed and efficiency.

| Attribute | EDR | XDR | CDR | DDR |
|---|---|---|---|---|
| Monitored environment | Endpoints (laptops, desktops, servers, mobile devices) | Multiple security layers (endpoints, networks, cloud, email, etc.) | Cloud assets and infrastructure | Data repositories within the cloud environment |
| Threat detection method | Behavior-based, signature-based, machine learning | Correlation of data from multiple sources, machine learning, AI | Log analysis, anomaly detection, machine learning | Data-aware detection rules and behavioral analysis based on data access |
| Presence requirement | Agent installed on endpoints | Integration with multiple security tools | Typically agentless; can have agents on cloud resources | Typically agentless; data collection from various sources, not limited to endpoints |
| Example vendors | CrowdStrike, SentinelOne, Microsoft Defender for Endpoint | Trend Micro Vision One, Palo Alto Networks Cortex XDR, Cisco SecureX | Wiz, Rapid7 InsightIDR, FireEye Helix | Sentra DDR, Exabeam, Securonix, LogRhythm |


Data Detection and Response (DDR) is not a replacement for, or superior to, these solutions; it is complementary to them.

Companies need these technologies for different reasons:

  • EDR for endpoint
  • XDR for on premise
  • CDR for cloud infrastructure
  • DDR for cloud data stores

Sensitive data that was accessed from a suspicious IP address

With Sentra, organizations get the best of both worlds – proactive and reactive controls integrated for complete data protection. Sentra combines DDR with powerful Data Security Posture Management (DSPM), allowing users to detect and remediate data security risks efficiently. It's time to revolutionize data security with Sentra’s Data Detection and Response (DDR) – your comprehensive solution to safeguarding your most valuable asset: your data.

To learn more, schedule a demo with one of our data security experts.

Read More
Meni Besso
January 11, 2024
4
Min Read
Compliance

Navigating the SEC's New Cybersecurity and Incident Disclosure Rules

Recently, the U.S. Securities and Exchange Commission (SEC) adopted stringent cybersecurity and incident disclosure rules, placing a heightened emphasis on the imperative need for robust incident detection, analysis, and reporting processes.

Following these new rules, public companies are finding themselves under a microscope, obligated to promptly disclose any cybersecurity incident deemed material. This disclosure mandates a detailed account of the incident's nature, scope, and timing within a stringent 4-business-day window. In essence, companies are now required to offer swift detection, thorough analysis, and the delivery of a comprehensive report on the potential impact of a data breach for shareholders and investors.

SEC's Decisive Actions in 2023: A Wake-Up Call for CISOs

The SEC's resolute stance on cybersecurity became clear with two major actions in the latter half of 2023. In July, the SEC implemented rules, effective December 18, mandating the disclosure of "material" threat/breach incidents within a four-day window. Simultaneously, annual reporting on cybersecurity risk management, strategy, and governance became a new norm. These actions underscore the SEC's commitment to getting tough on cybersecurity, prompting Chief Information Security Officers (CISOs) and their teams to broaden their focus to the boardroom. The evolving threat landscape now demands a business-centric approach, aligning cybersecurity concerns with overarching organizational strategies.

Adding weight to the SEC's commitment, in October, SolarWinds Corporation and its CISO, Timothy G. Brown, were charged with fraud and internal control failures relating to allegedly known cybersecurity risks and vulnerabilities. This marked a historic moment, as it was the first time the SEC brought cybersecurity enforcement claims against an individual. SolarWinds' case, where the company disclosed only "generic and hypothetical risks" while facing specific security issues, serves as a stark reminder of the SEC's intolerance towards non-disclosure and intentional fraud in the cybersecurity domain. It's evident that the SEC's cybersecurity mandates are reshaping compliance norms.

This blog will delve into the intricacies of these rules, their implications, and how organizations, led by their CISOs, can proactively meet the SEC's expectations.

Implications for Compliance Professionals

Striking the Balance: Over-Reporting vs. Under-Reporting

Compliance professionals must navigate the fine line between over-reporting and under-reporting, a task akin to a high-stakes tightrope walk.

Over-Reporting: The consequences of hyper-vigilance can't be underestimated. Reporting every incident, regardless of its material impact, might instigate unwarranted panic in the market. This overreaction could lead to a domino effect, causing a downturn in stock prices and inflicting reputational damage.

Under-Reporting: On the flip side, failing to report within the prescribed time frame has its own set of perils. Regulatory penalties loom large, and the erosion of investor trust becomes an imminent risk. The SEC's strict adherence to disclosure timelines emphasizes the need for precision and timeliness in reporting.

Market Perception

Shareholder & Investor Trust: Balancing reporting accuracy is crucial for maintaining shareholder and investor trust. Over-reporting may breed skepticism and lead to potential divestment, while delayed reporting can erode trust and raise questions about the organization's cybersecurity commitment.

Regulatory Compliance: The SEC mandates timely and accurate reporting. Failure to comply incurs penalties, impacting both finances and the organization's regulatory standing. Regulatory actions, combined with market fallout, can significantly affect the long-term reputation of the organization.

Strategies for Success

The Day Before - Minimize the Impact of the Data Breach

To minimize the impact of a data breach, the first crucial step is knowing the locations of your sensitive data. Identifying and mapping this data within your infrastructure, along with proper classification, lays the foundation for effective protection and risk mitigation.

Data Security Posture Management (DSPM) solutions provide advanced tools and processes to actively monitor, analyze, and fortify the security posture of your sensitive data, ensuring robust protection in the face of evolving threats. A DSPM solution:

  • Discovers every piece of data you have and classifies the different data types in your organization.
  • Automatically detects risks to your sensitive data (including data movement) and guides remediation.
  • Aligns your data protection practices with security regulations and best practices, incorporating compliance measures for handling personally identifiable information (PII), protected health information (PHI), credentials, and other sensitive data (a simple version of this mapping is sketched below).
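
One piece of that compliance alignment is mechanical: mapping discovered data classes to the frameworks that govern them. A minimal Python sketch follows; the mapping is illustrative and deliberately non-exhaustive.

```python
# Illustrative, non-exhaustive mapping of data classes to frameworks.
FRAMEWORKS_BY_CLASS = {
    "PII": ["GDPR", "CCPA"],
    "PHI": ["HIPAA"],
    "payment_card": ["PCI-DSS"],
}

def applicable_frameworks(data_classes) -> set:
    """Which regulations a data store falls under, given its contents."""
    return {fw for c in data_classes for fw in FRAMEWORKS_BY_CLASS.get(c, [])}

# A store holding PHI and payment cards must satisfy HIPAA and PCI-DSS:
# applicable_frameworks({"PHI", "payment_card"})
```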

From encryption to access controls, adopting a comprehensive security approach safeguards your organization against potential breaches. It’s crucial to conduct a thorough risk assessment to measure vulnerabilities and potential threats to your data. Understanding the risks allows for targeted and proactive risk management strategies.

An example of a security posture score, which includes the data and issues overview, highlighting the top data classes at risk.

The Day After: Maximizing the Pace to Handle the Impact (reputation, money, recovery, etc.)

In the aftermath of a breach, having a “Data Catalog” with data sensitivity ranking helps you understand the materiality of the breach and enables quick resolution and reporting within the 4-day window.

Swift incident response is also paramount; this can be accomplished by establishing a rapid plan for mitigating the impact on reputation, finances, and overall recovery. This is where the data catalog comes into play again, helping you understand which data was extracted and facilitating quick and accurate resolution. The next step for the ‘day after’ is actively managing your organization's reputation post-incident through transparent communication and decisive action, which contributes to rebuilding trust and credibility.
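
Both the materiality call and the 4-business-day clock can be made concrete. The Python sketch below uses an invented weighting and threshold purely for illustration; real materiality determinations involve counsel and context, not a script.

```python
from datetime import date, timedelta

# Invented weights and threshold; for illustration only.
SENSITIVITY = {"PII": 3, "PHI": 5, "payment_card": 5, "internal": 1}

def breach_materiality(compromised_stores, threshold=100) -> bool:
    """Toy heuristic: weighted record counts measured against a threshold."""
    impact = sum(
        SENSITIVITY.get(s["data_class"], 1) * s["record_count"] / 1_000
        for s in compromised_stores
    )
    return impact >= threshold

def disclosure_deadline(determined_on: date) -> date:
    """Four business days after materiality is determined (holidays ignored)."""
    day, remaining = determined_on, 4
    while remaining:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday-Friday
            remaining -= 1
    return day

# disclosure_deadline(date(2024, 1, 11))  # -> date(2024, 1, 17)
```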

An example of a complete catalog, showing the data stores, the account, the sensitivity and category of the data, as well as the data context.

Finally, always conduct a comprehensive post-incident analysis for valuable insights, and enhance future security measures through a continuous improvement cycle. Building resilience into your cybersecurity framework by proactively adapting and fortifying defenses best positions your organization to withstand future challenges. Adhering to these strategies enables organizations to navigate the cybersecurity landscape effectively, minimizing risks, ensuring compliance, and enhancing their ability to respond swiftly to potential incidents.

Empowering Compliance in the Face of SEC Regulations with Sentra’s DSPM

Sentra’s DSPM solution both discovers and classifies sensitive data and aligns seamlessly with the SEC's cybersecurity and incident disclosure rules. Its real-time monitoring swiftly identifies potential breaches, offering a critical head start within the 4-day disclosure window.

Efficient impact analysis enables compliance professionals to gauge materiality and consequences for shareholders during reporting. Sentra's DSPM streamlines incident analysis processes, adapting to each organization's needs. Having a "Data Catalog" aids in understanding breach materiality for quick resolution and reporting, while detailed reports ensure SEC compliance.

By integrating Sentra, organizations meet regulatory demands, fortify data security, and navigate evolving compliance requirements. As the SEC reshapes the cybersecurity landscape, Sentra guides organizations toward a future where proactive incident management is a strategic imperative.

To learn more, schedule a demo with one of our experts.

Read More
Romi Minin
January 3, 2024
3
Min Read
Data Security

Top Data Security Resolutions

As we reflect on 2023, a year marked by a surge in cyber attacks, we are reminded of the critical importance of prioritizing data security. Widespread breaches in various industries, such as the significant AT&T data breach impacting 9 million users, have highlighted vulnerabilities and led to both financial losses and damage to reputations. In response, regulatory bodies have imposed strict penalties for non-compliance, emphasizing the importance of aligning security practices with industry-specific regulations.

According to data from enforcementtracker.com, approximately €1.6 billion in fines were imposed in the first six months of 2023 alone for violations of the General Data Protection Regulation (GDPR). In that short period, more fines were incurred than in 2019, 2020, and 2021 combined...

Entering 2024, the dynamic threat landscape demands a proactive approach. Technology's rapid advancement and cybercriminals' adaptability require organizations to stay ahead. The importance of bolstering data security cannot be overstated, given potential legal consequences, reputational risks, and disruptions to business operations that a data breach can cause.

The data security resolutions for 2024 outlined below serve as a guide to fortify defenses effectively. Compliance with regulations, reducing attack surfaces, governing data access, safeguarding AI models, and ensuring data catalog integrity are crucial steps. Adopting these resolutions enables organizations to navigate the complexities of data security, mitigating risks and proactively addressing the evolving threat landscape.

Adhere to data security and compliance regulations such as GDPR, PCI-DSS, CCPA, etc.

The first data security resolution you should keep in mind is aligning your data security practices with industry-specific data regulations and standards. Data protection regulatory requirements are becoming more stringent (for example, note the recent SEC requirement that public US companies provide notification within 4 days of a material breach). Penalties for non-compliance are also increasing.

With the explosive growth of cloud data, it is incumbent upon regulated organizations to maintain effective data security controls while keeping pace with the dynamic business climate. One way to achieve this is by adopting Data Security Posture Management (DSPM), which automates cloud-native discovery and classification, improving accuracy and reporting timeliness. Sentra supports more than a dozen leading frameworks for policy enforcement and streamlined reporting.

Reduce attack surface by protecting shadow data and enforcing data lifecycle policies (and save storage costs as a by-product)

As cloud adoption accelerates, data proliferates. This data sprawl, also known as shadow data, brings with it new risks and exposures. When a developer moves a copy of the production database into a lower environment for testing purposes, do all the same security controls and usage policies travel with it? Likely not. 

Organizations must institute security controls that stay with the data, no matter where it goes. Additionally, automating redundant, obsolete, and trivial (ROT) data policies can offload the arduous task of ‘policing’ data security, ensuring data remains protected at all times and allowing the business to innovate safely. This has the added bonus of avoiding unnecessary data storage expenditure, as sketched below.
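
Automating a ROT policy can start as simply as hashing content and checking age. Here is a minimal local-filesystem sketch of the idea; the retention threshold is an arbitrary example, and a cloud implementation would walk object stores instead.

```python
import hashlib
import os
import time

MAX_AGE_DAYS = 365  # example retention threshold, not a recommendation

def find_rot(root: str) -> dict:
    """Group files by content hash to find duplicates; flag stale files."""
    by_hash = {}
    stale = []
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            by_hash.setdefault(digest, []).append(path)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    duplicates = [paths for paths in by_hash.values() if len(paths) > 1]
    return {"duplicates": duplicates, "stale": stale}
```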

Implement least privilege access for sensitive data

Organizations can reduce their attack surface by limiting access to sensitive information. This applies equally to users, applications, and machines (identities). Data Access Governance (DAG) offers a way to implement policies that alert on and can enforce least privilege data access automatically. This has become increasingly important as companies build cloud-native applications, with complex supply chain / ecosystem partners, to improve customer experience. DAG often works in concert with IAM systems, providing added context regarding data sensitivity to better inform access decisions. DAG is also useful if a breach occurs - allowing responders to rapidly determine the full impact and reach (blast radius) of an exposure event to more quickly contain damages.
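
At its core, the least-privilege check behind DAG is a comparison of granted access against observed usage. The toy Python sketch below illustrates this; the data structures and example identities are invented for the example.

```python
def unused_sensitive_grants(grants, usage, sensitive):
    """Privileges on sensitive data that were granted but never exercised.

    grants:    identity -> set of data stores it may access
    usage:     identity -> set of data stores it actually accessed
    sensitive: set of stores classified as sensitive
    """
    return {
        identity: stores
        for identity, granted in grants.items()
        if (stores := (granted & sensitive) - usage.get(identity, set()))
    }

# Candidates for revocation under least privilege:
# unused_sensitive_grants(
#     grants={"etl-job": {"crm-db", "logs"}, "intern": {"crm-db"}},
#     usage={"etl-job": {"crm-db"}},
#     sensitive={"crm-db"},
# )  # -> {"intern": {"crm-db"}}
```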

Protect Large Language Models (LLMs) training by detecting security risks

AI holds immense potential to transform our world, but its development and deployment must be accompanied by a steadfast commitment to data integrity and privacy. Protecting the integrity and privacy of data in Large Language Models (LLMs) is essential for building responsible and ethical AI applications. By implementing data protection best practices, organizations can mitigate the risks associated with data leakage, unauthorized access, and bias/data corruption. Sentra's Data Security Posture Management (DSPM) solution provides a comprehensive approach to data security and privacy, enabling organizations to develop and deploy LLMs with speed and confidence.

Ensure the integrity of your data catalogs

Enrich data catalog accuracy for improved governance with Sentra's classification labels and automatic discovery. Companies with data catalogs (from leading providers such as Alation, Collibra, Atlan) and data catalog initiatives struggle to keep pace with the rapid movement of their data to the cloud and the dynamic nature of cloud data and data stores. DSPM automates the discovery and classification process - and can do so at immense scale - so that organizations can accurately know at any time what data they have, where it is located, and what its security posture is. DSPM also provides usage context (owner, top users, access frequency, etc.) that enables validation of information in data catalogs, ensuring they remain current, accurate, and trustworthy as the authoritative source for their organization. This empowers organizations to maintain security and ensure the proper utilization of their most valuable asset—data!

How Sentra’s DSPM can help achieve your 2024 data security resolutions

By embracing these resolutions, organizations can gain a holistic framework to fortify their data security posture. This approach emphasizes understanding, implementing, and adapting these resolutions as practical steps toward resilience in the face of an ever-evolving threat landscape.

Staying committed to these data security resolutions can be challenging; nearly 80% of individuals tend to abandon their New Year’s resolutions by February. However, having Sentra’s Data Security Posture Management (DSPM) by your side in 2024 helps ensure that adhering to these data security resolutions, and refining your organization's data security strategy, stays firmly on track.

To learn more, schedule a demo with one of our experts.

Read More
Ron Reiter
December 27, 2023
3
Min Read
Data Security

What Is Shadow Data? Examples, Risks and How to Detect It

What is Shadow Data?

Shadow data refers to any organizational data that exists outside the centralized and secured data management framework.

This includes data that has been copied, backed up, or stored in a manner not subject to the organization's preferred security structure. This elusive data may not adhere to access control limitations or be visible to monitoring tools, posing a significant challenge for organizations.

Shadow data is the ultimate ‘known unknown’. You know it exists, but you don’t know where it is exactly. And, more importantly, because you don’t know how sensitive the data is, you can’t protect it in the event of a breach.

You can’t protect what you don’t know.

Where Does Shadow Data Come From?

Whether it’s created inadvertently or on purpose, data that becomes shadow data is simply data in the wrong place, at the wrong time.

Let's delve deeper into some common examples of where shadow data comes from:

  • Persistence of Customer Data in Development Environments:

The classic example is customer data that gets copied and forgotten. Customer data is copied from production into a dev environment to be used as test data, and the problem starts when this duplicated data is forgotten: it never gets erased, or it gets backed up to a less secure location. The data was secure in its original location and was never intended to be copied, or at least not copied and forgotten.

Unfortunately, this type of human error is common.

If this data does not get appropriately erased or moved back under secure management, it transforms into shadow data, susceptible to unauthorized access.

  • Decommissioned Legacy Applications:

Another common example of shadow data involves decommissioned legacy applications. Consider what becomes of historical customer data or Personally Identifiable Information (PII) when migrating to a new application. Frequently, this data is left dormant in its original storage location, lingering there until a decision is made to delete it - or not. It may persist for a very long time, and in doing so, become increasingly invisible and a growing vulnerability to the organization.

  • Business Intelligence and Analysis:

Your data scientists and business analysts will make copies of production data to mine it for trends and new revenue opportunities. They may test historic data, often housed in backups or data warehouses, to validate new business concepts and develop target opportunities. This shadow data may not be removed or properly secured once analysis has completed, becoming vulnerable to misuse or leakage.

  • Migration of Data to SaaS Applications:

The migration of data to Software as a Service (SaaS) applications has become a prevalent phenomenon. In today's rapidly evolving technological landscape, employees frequently adopt SaaS solutions without formal approval from their IT departments, leading to a decentralized and unmonitored deployment of applications. This poses both opportunities and risks, as users seek streamlined workflows and enhanced productivity. On one hand, SaaS applications offer flexibility and accessibility, enabling users to access data from anywhere, anytime. On the other hand, the unregulated adoption of these applications can result in data security risks, compliance issues, and potential integration challenges.

  • Use of Local Storage by Shadow IT Applications:

Last but not least, a breeding ground for shadow data is shadow IT applications, which can be created, licensed, or used without official approval (think of a script or tool developed in-house to speed workflow or increase productivity). The data produced by these applications is often stored locally, evading the organization's sanctioned data management framework. This not only poses a security risk but also introduces an uncontrolled element into the data ecosystem.

Shadow Data vs Shadow IT

You're probably familiar with the term "shadow IT," referring to technology, hardware, software, or projects operating beyond the governance of your corporate IT. Initially, this posed a significant security threat to organizational data, but as awareness grew, strategies and solutions emerged to manage and control it effectively.

Technological advancements, particularly the widespread adoption of cloud services, ushered in an era of data democratization. This brought numerous benefits to organizations and consumers by increasing access to valuable data, fostering opportunities, and enhancing overall effectiveness.

However, employing the cloud also means data spreads to different places, making it harder to track. We no longer have fully self-contained systems on-site. With more access comes more risk. Now, the threat of unsecured shadow data has appeared.

Unlike the relatively contained risks of shadow IT, shadow data stands out as the most significant menace to your data security. 

The common questions that arise:

  • Do you know the whereabouts of your sensitive data?
  • What is this data’s security posture, and what controls are applicable?
  • Do you possess the necessary tools and resources to manage it effectively?

Shadow data, a prevalent yet frequently underestimated challenge, demands attention. Fortunately, there are tools and resources you can use in order to secure your data without increasing the burden on your limited staff.

Data Breach Risks Associated with Shadow Data

The risks linked to shadow data are diverse and severe, ranging from potential data exposure to compliance violations. Uncontrolled shadow data poses a threat to data security, leading to data breaches, unauthorized access, and compromise of intellectual property.

The Business Impact of Data Security Threats

Shadow data represents not only a security concern but also a significant compliance and business issue. Attackers often target shadow data as an easily accessible source of sensitive information. Compliance risks arise, especially concerning personal, financial, and healthcare data, which demands meticulous identification and remediation. Moreover, unnecessary cloud storage incurs costs, emphasizing the financial impact of shadow data on the bottom line.

Businesses can realize a return on investment and reduce their cloud costs by better controlling shadow data.

As more enterprises are moving to the cloud, the concern of shadow data is increasing. Since shadow data refers to data that administrators are not aware of, the risk to the business depends on the sensitivity of the data. Customer and employee data that is improperly secured can lead to compliance violations, particularly when health or financial data is at risk. There is also the risk that company secrets can be exposed. 

One example: Sentra identified a large enterprise's source code in an open S3 bucket. As part of working with this enterprise, Sentra was given 7 petabytes across its AWS environments to scan for sensitive data. Specifically, we were looking for intellectual property: source code, documentation, and other proprietary data.

As usual, we discovered many issues; seven of them, however, needed to be remediated immediately and were classified as 'critical'.

The most severe data vulnerability was source code in an open S3 bucket holding 7.5 TB of data. The files were hiding in a 600 MB .zip file nested inside another .zip file. We also found recordings of client meetings and an 8.9 KB Excel file containing all of their current and prospective customer data.
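
As an illustration of why data like this stays hidden, here is a minimal sketch of recursively scanning nested archives; the entry point, file name, and "sensitive" extensions are assumptions for the example, not Sentra's actual implementation:

```python
import io
import zipfile

# Assumed list of extensions worth flagging in this toy example.
SENSITIVE_EXTENSIONS = {".py", ".java", ".xlsx", ".docx", ".pem"}

def scan_zip(data: bytes, path: str = "") -> list[str]:
    """Recursively walk a zip archive (including zips nested inside zips)
    and return paths of files whose extensions suggest sensitive content."""
    findings = []
    with zipfile.ZipFile(io.BytesIO(data)) as archive:
        for name in archive.namelist():
            full_path = f"{path}/{name}"
            if name.lower().endswith(".zip"):
                # Descend into the nested archive entirely in memory.
                findings.extend(scan_zip(archive.read(name), full_path))
            elif any(name.lower().endswith(ext) for ext in SENSITIVE_EXTENSIONS):
                findings.append(full_path)
    return findings

# "backup.zip" is a hypothetical input file.
with open("backup.zip", "rb") as f:
    for hit in scan_zip(f.read()):
        print("potentially sensitive file:", hit)
```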

Unfortunately, a scenario like this could have taken months, or even years, to notice - if it was noticed at all. Luckily, we were able to discover it in time.

How You Can Detect and Minimize the Risk Associated with Shadow Data

Strategy 1: Conduct Regular Audits

Regular audits of IT infrastructure and data flows are essential for identifying and categorizing shadow data. Understanding where sensitive data resides is the foundational step toward effective mitigation. Automating the discovery process will offload this burden and allow the organization to remain agile as cloud data grows.
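
To make this concrete, here is a hedged sketch of one small slice of an automated audit, using boto3 to flag S3 buckets whose public access is not fully blocked; a real discovery process would go much further, sampling and classifying the contents of every store:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())  # all four block settings enabled
    except ClientError:
        # No public-access block configured for this bucket at all.
        fully_blocked = False
    if not fully_blocked:
        print(f"review bucket: {name} (public access not fully blocked)")
```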

Strategy 2: Educate Employees on Security Best Practices

Creating a culture of security awareness among employees is pivotal. Training programs and regular communication about data handling practices can significantly reduce the likelihood of shadow data incidents.

Strategy 3: Embrace Cloud Data Security Solutions

Investing in cloud data security solutions is essential, given the prevalence of multi-cloud environments, cloud-driven CI/CD, and the adoption of microservices. These solutions offer visibility into cloud applications, monitor data transactions, and enforce security policies to mitigate the risks associated with shadow data.

How You Can Protect Your Sensitive Data with Sentra’s DSPM Solution

The trick with shadow data, as with any security risk, is not just identifying it, but prioritizing the remediation of the largest risks. Sentra’s Data Security Posture Management follows sensitive data through the cloud, helping organizations identify and automatically remediate data vulnerabilities by:

  • Finding shadow data where it’s not supposed to be:

Sentra is able to find all of your cloud data - not just the data stores you know about.

  • Finding sensitive information with differing security postures:

Sentra flags sensitive data that does not appear to have an adequate security posture.

  • Finding duplicate data:

Sentra discovers when multiple copies of data exist, tracks and monitors them across environments, and understands which parts are both sensitive and unprotected.

  • Taking access into account:

Sometimes, legitimate data can be in the right place, but accessible to the wrong people. Sentra scrutinizes privileges across multiple copies of data, identifying and helping to enforce who can access the data.

Key Takeaways

Comprehending and addressing shadow data risks is integral to a robust data security strategy. By recognizing the risks, implementing proactive detection measures, and leveraging advanced security solutions like Sentra's DSPM, organizations can fortify their defenses against the evolving threat landscape. 

Stay informed, and take the necessary steps to protect your valuable data assets.

To learn more about how Sentra can help you eliminate the risks of shadow data, schedule a demo with us today.

Aviv Zisso
December 18, 2023
3
Min Read
Data Security

SoFi's Cloud Data Security Journey with Sentra

The recent webinar, featuring SoFi’s Director of Product Security, Pritam H Mungse, along with Senior Staff Application Security Engineer, Zachary Schulze, and Sentra’s Director of Customer Success, Aviv Zisso, provided valuable insights into managing data security in cloud-native environments. This discussion is crucial for organizations grappling with the challenges of data sprawl, security, and compliance in the ever-evolving digital landscape.

Understanding the Challenges

The webinar kicked off by exploring the complexities faced by security teams in cloud-native environments. Pritam highlighted issues such as data duplication, lack of visibility, and the risks of unauthorized access and compliance violations.

These challenges underscore the importance of developing robust strategies for data management and protection in the cloud. Securing cloud data is not a one-and-done exercise; it is an ongoing process of adapting safeguards to the ever-changing world of cloud computing.

Proactive Data Protection: The Starting Point

A significant portion of the discussion centered on proactive data protection. The speakers emphasized understanding where and how data is stored and accessed in the cloud. Pritam noted, “understanding where your data is...is the first step for you to be able to say, now I can protect that data.” This statement encapsulates the essential first step in any data security strategy: gaining visibility into data creation and storage.

Prioritizing Risks: Aligning with Organizational Goals

Addressing the challenge of risk prioritization, the conversation shifted to aligning security measures with the organization's goals and risk appetite. Pritam elaborated on the importance of this alignment and the need for a well-defined internal policy framework to guide the prioritization process effectively.

Action and Remediation: Building a Framework

The panelists then delved into the processes of taking action and remediating potential data security issues. They discussed the need for systematic, repeatable approaches to data security concerns, emphasizing the significance of a structured remediation framework. Building such a framework is an investment in the ongoing health of an organization's data security: it helps organizations navigate current challenges while positioning them to proactively address future threats.

Leveraging Sentra for Enhanced Data Security

SoFi's experience with Sentra formed a core part of the discussion, highlighting three main usage aspects:

  • Data Catalog Creation: Utilizing Sentra's discovery and classification capabilities, SoFi developed a centralized data catalog, enhancing the visibility and management of their data. Zach shared, “The next almost natural step to that is like the creation of a single place to understand and direct you to where all this data actually exists.”
  • Compliance Adherence: The webinar explored how SoFi used Sentra to map data to various compliance frameworks. Zach discussed the importance of custom data classes and policies, allowing for alignment with both industry standards and internal requirements. Sentra's capabilities extended beyond mere automation, becoming an integral part of SoFi's proactive approach to meeting and exceeding compliance expectations.
  • Data Access Governance: The conversation also covered how Sentra improved SoFi’s data access governance. Pritam highlighted, “being able to go from a different lens and answer those questions is super nice.” This reflects the depth of insight and control that Sentra provided in managing data access.

The Critical Role of Accurate Data Classification

Accurate data classification was a key topic, with the speakers discussing the challenges and importance of correctly identifying sensitive data. They stressed that accurate classification is foundational to successful data security programs, as it directly impacts the effectiveness of protection strategies. Further, they discussed how automating data classification with Sentra proved crucial in their diverse data ecosystem, spanning various stores and cloud environments. Manual classification, given the complexity, would have taken a very long time, making the automated approach significantly valuable in streamlining the process and ensuring timely and accurate identification of sensitive data.


Integrating Sentra into SoFi’s Security Framework

The webinar concluded with reflections on the integration of Sentra into SoFi's existing security workflows and policies. The speakers underscored how Sentra's capabilities have been instrumental in SoFi's efforts to tackle data security challenges comprehensively, from discovery and classification to compliance adherence and governance.

The insights from SoFi’s journey provide valuable lessons for organizations looking to enhance their data security in cloud-native environments. The discussion highlighted the importance of visibility, accurate classification, and a structured approach to data security, underlining the benefits of integrating advanced tools like Sentra into security strategies.

Watch the full SoFi webinar recording.

Aviv Zisso
December 12, 2023
3
Min Read
Data Security

Navigating Data Security Challenges: Tales from the Front Lines

As the Director of Customer Success at Sentra, I've embarked on an amazing journey witnessing the transformative impact our Data Security Posture Management (DSPM) platform has on organizations, particularly in the dynamic landscape of Fintech and e-commerce. Today, I'm excited to share some firsthand insights into the benefits our customers have experienced, demonstrating the core use cases that set Sentra apart.

Online Retail Leader Ensures Regulatory Compliance with Ease

In an era of ever-evolving data security and compliance regulations like GDPR, PCI-DSS, and local ones like CCPA and India’s DPDPA, Sentra has emerged as a steadfast ally for organizations in their quest for improved data security. The core of what Sentra does—discovery and accurate classification of cloud data—is the cornerstone of maintaining a data security policy in growing complex environments. I've seen our customers better align their data security practices with the latest regulatory standards, gaining not just compliance but also a competitive edge by demonstrating a commitment to safeguarding sensitive information.

Example:

A strong example was when I worked closely with a leading e-commerce provider facing a GDPR compliance challenge. Unbeknownst to them, sensitive customer Personally Identifiable Information (PII) data was being duplicated across regions. Within a few hours of deploying Sentra, our platform discovered this critical data residency issue, allowing the organization to swiftly rectify the situation and fortify their compliance stance.

Global Payment Processing Company Reduces Data Attack Surface and Costs

Sentra's ability to reduce the data attack surface by mitigating shadow data and enforcing data lifecycle policies has become a game-changer in a cost-aware environment. Accurate classification of cloud data not only enhances security but also leads to substantial savings. Our customers have reported streamlined operations, reduced storage costs, and more efficient use of resources, thanks to Sentra's proactive approach to data management.

Example:

A Fintech startup saw a significant reduction in storage utilization and costs by leveraging Sentra's data lifecycle policies. The platform's ability to group objects in blob storage (such as S3, GCS, and Azure Blob) provides a unique, high-level view of object groups that are not being used yet sit in an expensive storage tier. Sentra detected multiple cases of inefficient storage for such archives, which had been adding $50,000 to the company's monthly cloud bill, and the issue was quickly remediated.

Sentra sheds light on significant storage costs of unused shadow data

Fintech Startup Implements Least Privilege Access and Access Governance

In the realm of sensitive data, implementing Least Privilege Access and Access Governance is paramount. Sentra empowers organizations to fortify their defenses by ensuring that only authorized personnel have access to sensitive information, and by creating a crystal-clear data access graph for every identity. The accurate classification of cloud data enhances control over data, supports routine access reviews, and reduces the potential blast radius of a security incident.

Example:

In response to a suspected security incident, one of our forward-thinking financial customers leveraged Sentra to enhance their access governance. Sentra's detection capabilities pinpointed unnecessary permissions, prompting the organization to swiftly reduce them. This proactive measure not only mitigated the risk of potential breaches but also elevated the overall security posture.

Data Access Governance

Global Payroll Solution Provider Enriches Metadata Catalogs for Robust Data Governance

Sentra can also help enrich metadata catalogs for comprehensive data governance. The accurate classification of cloud data provides advanced classification labels and automatic discovery, enabling organizations to gain deeper insights into their data landscape. This not only enhances data governance but also provides a solid foundation for informed decision-making.

Example:

I'm thrilled to share the success of an ongoing cataloging project with another customer, a prominent player in the finance sector. Prior to Sentra, they were manually classifying data within Snowflake using tags. However, Sentra's automatic classification process and Snowflake integration has become a game-changer, saving tons of time for their data owners and engineers. This efficiency not only expedites their cataloging project but also positions them for future audits with unparalleled ease.


At Sentra, I believe we go beyond providing a solution; we're here to help you build a secure and compliant data environment. The success stories shared here underscore the dedication and innovation our customers bring to the table, and I’m honored to be a part of it.

If you are eager to explore how Sentra can elevate your data security posture, don't hesitate to reach out. Let's embark on this journey together, where security meets success.

David Stuart
December 6, 2023
4
Min Read
Data Security

Safeguarding Data Integrity and Privacy in the Age of AI-Powered Large Language Models (LLMs)

In the burgeoning realm of artificial intelligence (AI), Large Language Models (LLMs) have emerged as transformative tools, enabling the development of applications that revolutionize customer experiences and streamline business operations. These sophisticated AI models, trained on massive amounts of text data, can generate human-quality text, translate languages, write different kinds of creative content, and answer questions in an informative way.

Unfortunately, the extensive data consumption and rapid adoption of LLMs has also brought to light critical challenges surrounding the protection of data integrity and privacy during the training process. As organizations strive to harness the power of LLMs responsibly, it is imperative to address these vulnerabilities and ensure that sensitive information remains secure.

Challenges: Navigating the Risks of LLM Training

The training of LLMs involves vast amounts of data, which often contain sensitive information such as personally identifiable information (PII), intellectual property, and financial records. This wealth of data presents a tempting target for malicious actors seeking to exploit vulnerabilities and gain unauthorized access.

One of the primary challenges is preventing data leakage or public disclosure. LLMs can inadvertently disclose sensitive information if not properly configured or protected. This disclosure can occur through various means, such as unauthorized access to training data, vulnerabilities in the LLM itself, or improper handling of user inputs.

Another critical concern is avoiding overly permissive configurations. LLMs can be configured to allow users to provide inputs that may contain sensitive information. If these inputs are not adequately filtered or sanitized, they can be incorporated into the LLM's training data, potentially leading to the disclosure of sensitive information.

Finally, organizations must be mindful of the potential for bias or error in LLM training data. Biased or erroneous data can lead to biased or erroneous outputs from the LLM, which can have detrimental consequences for individuals and organizations.

OWASP Top 10 for LLM Applications

The OWASP Top 10 for LLM Applications identifies and prioritizes critical vulnerabilities that can arise in LLM applications. Among these, LLM03 Training Data Poisoning, LLM06 Sensitive Information Disclosure, LLM08 Excessive Agency, and LLM10 Model Theft pose significant risks that cybersecurity professionals must address. Let's dive into these:


LLM03: Training Data Poisoning

LLM03 addresses the vulnerability of LLMs to training data poisoning, a malicious attack where carefully crafted data is injected into the training dataset to manipulate the model's behavior. This can lead to biased or erroneous outputs, undermining the model's reliability and trustworthiness.

The consequences of LLM03 can be severe. Poisoned models can generate biased or discriminatory content, perpetuating societal prejudices and causing harm to individuals or groups. Moreover, erroneous outputs can lead to flawed decision-making, resulting in financial losses, operational disruptions, or even safety hazards.

LLM06: Sensitive Information Disclosure

LLM06 highlights the vulnerability of LLMs to inadvertently disclosing sensitive information present in their training data. This can occur when the model is prompted to generate text or code that includes personally identifiable information (PII), trade secrets, or other confidential data.

The potential consequences of LLM06 are far-reaching. Data breaches can lead to financial losses, reputational damage, and regulatory penalties. Moreover, the disclosure of sensitive information can have severe implications for individuals, potentially compromising their privacy and security.

LLM08: Excessive Agency

LLM08 focuses on the risk of LLMs exhibiting excessive agency, meaning they may perform actions beyond their intended scope or generate outputs that cause harm or offense. This can manifest in various ways, such as the model generating discriminatory or biased content, engaging in unauthorized financial transactions, or even spreading misinformation.

Excessive agency poses a significant threat to organizations and society as a whole. Supply chain compromises and excessive permissions to AI-powered apps can erode trust, damage reputations, and even lead to legal or regulatory repercussions. Moreover, the spread of harmful or offensive content can have detrimental social impacts.

LLM10: Model Theft

LLM10 highlights the risk of model theft, where an adversary gains unauthorized access to a trained LLM or its underlying intellectual property. This can enable the adversary to replicate the model's capabilities for malicious purposes, such as generating misleading content, impersonating legitimate users, or conducting cyberattacks.

Model theft poses significant threats to organizations. The loss of intellectual property can lead to financial losses and competitive disadvantages. Moreover, stolen models can be used to spread misinformation, manipulate markets, or launch targeted attacks on individuals or organizations.

Recommendations: Adopting Responsible Data Protection Practices

To mitigate the risks associated with LLM training data, organizations must adopt a comprehensive approach to data protection. This approach should encompass data hygiene, policy enforcement, access controls, and continuous monitoring.

Data hygiene is essential for ensuring the integrity and privacy of LLM training data. Organizations should implement stringent data cleaning and sanitization procedures to remove sensitive information and identify potential biases or errors.
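
As a rough sketch of one sanitization step, the example below redacts a few common PII patterns from text before it enters a training corpus; the patterns are deliberately simplistic, and production pipelines rely on far more robust classification:

```python
import re

# Toy patterns; real pipelines use much more robust PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace detected PII with typed placeholders before the text
    is added to a training corpus."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(sanitize("Contact Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
# -> Contact Jane at [EMAIL] or [PHONE], SSN [SSN].
```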

Policy enforcement is crucial for establishing clear guidelines for the handling of LLM training data. These policies should outline acceptable data sources, permissible data types, and restrictions on data access and usage.

Access controls should be implemented to restrict access to LLM training data to authorized personnel and identities only, including any third-party apps that may connect. This can be achieved through role-based access control (RBAC), zero-trust IAM, and multi-factor authentication (MFA) mechanisms.
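
One narrow, concrete instance of this kind of control check might look like the following sketch, which uses boto3 to flag IAM users with no MFA device enrolled; limiting the check to IAM users is an assumption made for brevity:

```python
import boto3

iam = boto3.client("iam")

# Flag IAM users that have no MFA device enrolled; in a least-privilege
# program these accounts should not be able to reach sensitive training data.
paginator = iam.get_paginator("list_users")
for page in paginator.paginate():
    for user in page["Users"]:
        devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not devices:
            print(f"user without MFA: {user['UserName']}")
```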

Continuous monitoring is essential for detecting and responding to potential threats and vulnerabilities. Organizations should implement real-time monitoring tools to identify suspicious activity and take timely action to prevent data breaches.

Solutions: Leveraging Technology to Safeguard Data

In the rush to innovate, developers must remain keenly aware of the inherent risks of training LLMs if they wish to deliver responsible, effective AI that does not jeopardize their customers' data. Specifically, it is a foremost duty to protect the integrity and privacy of LLM training data sets, which often contain sensitive information.

Preventing data leakage or public disclosure, avoiding overly permissive configurations, and negating bias or error that can contaminate such models should be top priorities.

Technological solutions play a pivotal role in safeguarding data integrity and privacy during LLM training. Data security posture management (DSPM) solutions can automate data security processes, enabling organizations to maintain a comprehensive data protection posture.

DSPM solutions provide a range of capabilities, including data discovery, data classification, data access governance (DAG), and data detection and response (DDR). These capabilities help organizations identify sensitive data, enforce access controls, detect data breaches, and respond to security incidents.

Cloud-native DSPM solutions offer enhanced agility and scalability, enabling organizations to adapt to evolving data security needs and protect data across diverse cloud environments.

Sentra: Automating LLM Data Security Processes

Having to worry about securing yet another threat vector should give overburdened security teams pause. But help is available.

Sentra has developed a data privacy and posture management solution that can automatically secure LLM training data in support of rapid AI application development.

The solution works in tandem with AWS SageMaker, GCP Vertex AI, or other AI IDEs to support secure data usage within ML training activities.  The solution combines key capabilities including DSPM, DAG, and DDR to deliver comprehensive data security and privacy.

Its cloud-native design discovers all of your data and ensures good data hygiene and security posture through policy enforcement, least-privilege access to sensitive data, and monitoring with near real-time alerting on suspicious identity (user/app/machine) activity, such as data exfiltration, to thwart attacks or malicious behavior early. This frees developers to innovate quickly and organizations to operate with agility, confident that their customer data and proprietary information will remain protected.

LLMs are now also built into Sentra’s classification engine and data security platform to provide unprecedented classification accuracy for unstructured data.

Learn more about Large Language Models (LLMs) here.

Conclusion: Securing the Future of AI with Data Privacy

AI holds immense potential to transform our world, but its development and deployment must be accompanied by a steadfast commitment to data integrity and privacy. Protecting the integrity and privacy of data in LLMs is essential for building responsible and ethical AI applications. By implementing data protection best practices, organizations can mitigate the risks associated with data leakage, unauthorized access, and bias. Sentra's DSPM solution provides a comprehensive approach to data security and privacy, enabling organizations to develop and deploy LLMs with speed and confidence.

Meni Besso
November 14, 2023
6
Min Read
Compliance

Manage Data Security and Compliance Risks with DSPM - A Deep Dive into Common Data Regulations

Cloud innovation necessitates migrating more workloads to the cloud, creating an exponential increase in data volume. As a result, data proliferation and sprawl make it almost impossible to gain the right visibility into the cloud infrastructure to identify sensitive data and its security posture. What’s more, data owners constantly load and move data, while security analysts and compliance officers have the responsibility to enforce regulations and monitor these actions. This dynamic presents challenges for data security professionals and Governance, Risk, and Compliance (GRC) teams in managing complex compliance requirements across different regulatory frameworks.

Understanding and accurately classifying cloud data is a critical foundational step towards maintaining a stable compliance posture against regulatory compliance framework benchmarks.

Here are a few examples of how DSPM, with its advanced and granular visibility into complex cloud environments, can help enterprises to efficiently detect sensitive data and accurately quantify the data risk:  

  • Not all sensitive data resides in data stores: Data is scattered across various services from different vendors, including managed cloud services, containerized environments, SaaS services, and hosted cloud drives. DSPM has the ability to detect and classify data at the most granular level (including tables and objects). This ensures that no sensitive data is left undetected, when monitoring for compliance gaps.
  • Defining data classes plays a pivotal role in quantifying data compliance risks: Accurate classification means having clearly categorized data classes that relate to the relevant compliance frameworks. A scenario in which multiple data classes reside in a single data store will expand the data attack surface, raising the risk score. For instance, a database might contain Social Security Numbers (SSNs) and personal addresses, or credit card numbers and CVVs. Such data stores are often replicated and moved between production and development environments, and their log files may contain sensitive information. That’s why DSPM is an invaluable tool to proactively scan for and detect these issues on an ongoing basis.
  • Always track the security posture of your data stores: For instance, keeping PCI data outside of your PCI-compliant environment or storing PII data outside of the designated region could create vulnerabilities. This often happens when a testing or debugging environment is created from production data (a small residency-check sketch follows this list).
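
As a sketch of the residency point, the example below flags S3 buckets tagged as PCI that live outside an assumed set of approved regions; the "data-class" tag and the region list are invented conventions for the example:

```python
import boto3
from botocore.exceptions import ClientError

ALLOWED_PCI_REGIONS = {"us-east-1"}  # assumed policy for this example

s3 = boto3.client("s3")

def bucket_tags(name: str) -> dict:
    """Return a bucket's tags as a dict, or an empty dict if untagged."""
    try:
        tag_set = s3.get_bucket_tagging(Bucket=name)["TagSet"]
    except ClientError:
        return {}
    return {t["Key"]: t["Value"] for t in tag_set}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    if bucket_tags(name).get("data-class") != "pci":
        continue
    # get_bucket_location returns None as the constraint for us-east-1.
    region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
    if region not in ALLOWED_PCI_REGIONS:
        print(f"PCI-tagged bucket outside approved regions: {name} ({region})")
```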

Let's take a look at the specific requirements of some common compliance frameworks and how DSPM automatically discovers, classifies, and quantifies data risk, and alerts on issues to maintain a strong and stable compliance posture.

PCI-DSS

The Payment Card Industry Data Security Standard (PCI DSS) comprises security protocols created to guarantee the secure handling of credit card information by companies engaged in acceptance, processing, storage, or transmission of such data. 

Here are some of the issues that a DSPM platform will proactively detect to support the PCI-DSS requirements of safeguarding cardholder data and implementing robust access control measures (a minimal detection sketch of the first item follows the list):

  • Identify inadvertent leaks of Primary Account Numbers (PAN) into log files
  • Detect instances where PAN lacks proper encryption at rest or is stored without being masked
  • Pinpoint the storage locations of encryption keys, ensuring that they are not stored in undesignated areas 
  • Prevent unauthorized access to PCI data
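
To make the first item concrete, here is a minimal, assumed sketch of PAN detection in log lines: a candidate regex plus a Luhn checksum to filter out random digit runs. A production scanner would handle more formats and apply further false-positive reduction:

```python
import re

# Candidate runs of 13-16 digits, optionally separated by spaces or dashes.
PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to weed out digit runs that are not card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(line: str) -> list[str]:
    hits = []
    for match in PAN_CANDIDATE.finditer(line):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

print(find_pans("payment ok for card 4111 1111 1111 1111"))  # well-known test PAN
```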

GDPR 

GDPR, a regulation created to safeguard the privacy of EU citizen data, sets stringent standards applicable to both EU and non-EU organizations. It mandates adherence to principles such as data minimization, requiring organizations to collect only the necessary data for their declared purposes. Additionally, GDPR demands the timely correction, deletion, or termination of inaccurate data and imposes restrictions on the duration of data retention. Organizations must ensure data protection, privacy, and the ability to substantiate GDPR compliance. 

Here is how DSPM proves instrumental in aligning with GDPR requirements: 

  • Detect Personally Identifiable Information (PII) stored across various cloud accounts, datastores and SaaS providers
  • Ensure adherence to the 'Data Minimization Principle' by enabling access to authorized users only 
  • Proactively alert organizations to instances where sensitive data lacks safeguards against potential loss or theft
  • Ensure all regulated data meets the specified data retention and auditing requirements

HIPAA

HIPAA, the Health Insurance Portability and Accountability Act, is a United States compliance framework designed to safeguard the health information of patients. Covering privacy, security, breach notifications, and enforcement rules, HIPAA imposes strict regulations on Protected Health Information (PHI), encompassing identifiable details such as names, addresses, birthdates, Social Security Numbers (SSNs), and medical records. Guidelines include implementing access control, audit control, integrity control, and transmission security for electronic PHI. Electronic Health Record (EHR) systems, considered the future of medical records, must adhere to all security rules and HIPAA guidelines. 

This is how DSPM is indispensable in achieving HIPAA compliance:

  • Identify all Protected Health Information (PHI) stored in cloud accounts, including patient identifying details such as names, addresses, birthdates, SSNs, phone numbers, test results, and health insurance information
  • Scan various data repositories to locate stored PHI, including managed databases, structured files, documents, and scanned images
  • Ensure all data storage for PHI has proper access control, logging, backups, and security measures to prevent unauthorized access, loss, or theft 

DSPM's advanced visibility into the entire multi-cloud data estate, combined with its classification accuracy, ensures no data is overlooked, even at the most granular level, automatically strengthening compliance posture and readiness.

Sentra Dashboard

Here you can see how Sentra measures an organization’s compliance posture in relation to industry benchmarks. 

To learn more, book a demo and talk to a DSPM expert.

Team Sentra
November 2, 2023
3
Min Read
Data Security

Why DSPM Should Take A Slice of Your 2024 Cyber Security Budget

We find ourselves in interesting times. Enterprise cloud transformations have given rise to innovative cloud security technologies that are running at a pace even seasoned security leaders find head-spinning. As security professionals grapple with these evolving dynamics, they face a predicament of conflicting priorities that directly impact budget decisions. 

So much innovation and possibility, yet the economic climate demands consolidation, simplification, and, yes, budget cuts. So, how do you navigate this tricky balancing act? On one hand, you need to close those critical cybersecurity gaps, and on the other, you must embrace new technology to innovate and stay competitive. To add a touch more complexity, there's the issue of CIOs suffering from "change fatigue." According to Gartner, this fatigue manifests as CIOs hesitate to invest in new projects and initiatives, pushing a portion of 2023's IT spending into 2024, a trend likely to continue into 2025. CIOs are prioritizing cost control, efficiency, and automation, while scaling back long IT projects that take ages to show returns.

Cloud Security - A Top Investment 

PwC suggests that cloud security is one of the top investment areas for 2024. The cloud's complex landscape, often poorly managed, presents a significant challenge. Astoundingly, 97% of organizations have gaps in their cloud risk management plans. The cloud security arena is nothing short of a maze that is difficult to navigate, driving enterprises towards vendor consolidation in an effort to reduce complexity, drive greater predictability and achieve positive ROI quickly. 

The cloud data security challenge is far from being solved, and this is precisely why the demand for Data Security Posture Management (DSPM) solutions is on the rise. DSPM shines a light on the entire multi-cloud estate by bringing in the data context. With easy integrations, DSPM enriches the entire cloud security stack, driving more operational efficiencies as a result of accurate data risk quantification and prioritization. By proactively reducing the data attack surface on an ongoing basis, DSPM plays a role in reducing the overall risk profile of the organization. 

DSPM's Role in Supporting C-Suite Challenges

Amid economic uncertainty and regulatory complexities, taking a comprehensive and granular approach to prioritizing data risks can greatly enhance your 2024 cybersecurity investments.

DSPM plays a vital role in addressing the intricate challenges faced by CISOs and their teams. By ensuring the correct security posture for sensitive data, DSPM brings a new level of clarity and control to data security, making it an indispensable tool for navigating the complex data risk landscape. DSPM enables CISOs to make informed decisions and stay one step ahead of evolving threats, even in the face of uncertainty.

Let's break it down and bottom line why DSPM should have a spot in your 2024 budget:

  • DSPM isn't just a technology; it's a proactive, strategic approach that empowers you to harness the full potential of your cloud data while maintaining a clear, prioritized view of your most critical data risks, improving remediation efficiency and the accuracy of your organization's overall risk profile.
  • Reduce Cloud Storage Costs via the detection and elimination of unused data, and drive up operational efficiency through targeted, prioritized remediation efforts that focus on the critical data risks that matter.
  • Cloud Data Visibility comes from DSPM providing security leaders with a crystal-clear view of their organization's most critical data risks. It offers unmatched visibility into sensitive data across multi-cloud environments, ensuring that no sensitive data remains undiscovered. The depth and breadth of data classification gives enterprises a solid foundation for multiple use cases spanning DLP, data access governance, data privacy and compliance, and cloud security enrichment.
  • Manage & Monitor Risk Proactively: Thanks to its ability to understand data context, DSPM offers accurate and prioritized data risk scores. It's about embracing the intricate details within larger multi-cloud environments that enable security professionals to make well-informed decisions. Adding the layer of data sensitivity, with its nuanced scoring, enriches this context even further. DSPM tools excel at recognizing vulnerabilities, misconfigurations, and policy violations, empowering organizations to address these issues before they escalate into incidents.
  • Regulatory Compliance becomes simpler with DSPM, helping organizations steer clear of hefty penalties. Security teams can align their data security practices with industry-specific data regulations and standards. Sentra assesses how your data security posture stacks up against the standard compliance and security frameworks your organization needs to meet.
  • Sentra's agentless DSPM platform offers quick setup, rapid ROI, and seamless integration with your existing cloud security tools. It deploys effortlessly in your multi-cloud environment within minutes, providing valuable insights from day one. DSPM enhances your security stack, collaborating with CSPMs, CNAPPs, and CWPPs to prioritize data risks based on data sensitivity and security posture. It ensures data catalog accuracy and completeness, supports data backup, and aids SIEMs and Security Lakes in threat detection. DSPM also empowers identity providers for precise access control and bolsters detection and access workflows by tagging data-based cloud workloads, optimizing data management, compliance, and efficiency.

The Path Forward

2024 is approaching fast, and DSPM is an investment in long-term resilience against the ever-evolving data risk landscape. In planning 2024's cybersecurity budget, it's essential to strike a balance between simplification, innovation, and cost reduction. DSPM stands ready to play its part in this intricate budgeting dance.

Yoav Regev
October 19, 2023
8
Min Read
Data Security

Meeting CISO Priorities Head-On with DSPM

Access to and sharing cloud data is fast becoming the new reality, enabling enterprises to innovate quickly and compete better. But it also comes with a more complex data risk landscape. 

Information security leaders are grappling with a fresh set of priorities to handle cloud data challenges. They must strike the right balance between enabling business growth and securing sensitive data. CISOs, in particular, are exploring ways to empower employees and data handlers to naturally make secure choices and create controls that support them.

This shift requires a change in mindset that centers around trust. In a perimeter-less environment, concerns about how data is protected, used, and shared are vital factors influencing stakeholders' trust in an organization's data security management abilities. Recent findings from KPMG's "Cybersecurity Considerations 2023" study reveal that over a third of organizations recognize that building trust can boost profitability.

The study also claims that our future relies on data and digital infrastructure, creating a complex web of interconnected ecosystems and vast information networks. As our dependence on these systems grows, so does their attractiveness to malicious actors seeking to exploit vulnerabilities. Regarding digital trust (the level of confidence people have in digital systems), it's crucial to understand that regulatory requirements will likely expand, raising the bar for transparency and accountability when protecting sensitive data.

DSPM is vital in navigating this changing landscape, aligning with CISO priorities to enhance data security in a world where trust and innovation are indispensable. The roles of CISOs, VPs of information technology, chief security officers, and other data security leaders are complex.

DSPM is a proactive approach to securing cloud data by ensuring that sensitive data always has the correct security posture. It brings the context of sensitive data into risk assessments and profiling, making it a vital tool for navigating the intricacies and complexities of the data security landscape.

Let's look at some of the practical challenges and priorities facing information security leaders today (as outlined by Gartner) and how DSPM is perfectly positioned to set up security teams and leaders to deliver against these challenging requirements.

As CISOs tackle their multifaceted role, they grapple with several core priorities. These include reducing cybersecurity threat exposure, enhancing organizational resilience, aligning cybersecurity investments with tangible business outcomes, and optimizing the efficiency of security systems and talent. Reporting on cyber risk and evaluating cybersecurity's overall effectiveness are equally critical. 

However, these priorities come with their share of challenges. Striking a balance between immediate threat response and proactive risk decisions remains an ongoing challenge while staying abreast of the evolving threat landscape and best practices is crucial. Effective communication of security's value in business outcomes, especially to leaders from various functions and boards, is a persistent concern. 

According to Gartner, many organizations map cybersecurity investments to specific business outcomes and establish clear security metrics linked to business performance. CISOs are urged to adopt a more rigorous approach to prioritize security resources and evaluate investments.

Here's how DSPM supports the critical data security questions that are top of mind for CISOs and data security leaders:

1. Where is our sensitive cloud data, and is it sufficiently protected? 

DSPM immediately addresses this question by automatically discovering and classifying all sensitive data stores at speed and scale across multi-cloud environments such as AWS, Azure, GCP, as well as SaaS services such as Snowflake, Microsoft 365 and Google Suite. The breadth and granularity of coverage leave no stone unturned, ensuring that all sensitive cloud data is tracked down and accurately categorized within your organization.

Sentra's novel scanning approach uses minimal processing power, ensuring scanning speed and efficiency. This means that the CISO can always gain a clear and prioritized view of sensitive data from a dynamic data catalog that is continuously updated. With Sentra, the CISO can also rest assured that the data will never leave their cloud environment, removing an additional layer of risk. 

Sensitive data assets with a weak security posture are accurately identified, including misconfigurations, encryption types, compliance violations, backups, logging, etc. 

This fast, automated discovery, classification, and data security posture assessment will provide the CISO with all the information needed.

2. Can we quantify our data risks? 

CISOs need to understand the most severe data risks upfront. DSPM provides a data risk assessment with a quantification and prioritization of the actual risks. This helps CISOs prioritize their efforts when taking swift corrective actions. 

Context is everything when it comes to accurate data risk prioritization and scoring. Sentra's automated risk scoring is built from a rich data security context, which originates from a thorough understanding of several layers (a minimal scoring sketch follows the list):

  1. Data Access: Who has access to the data, and how is it governed?
  2. User Activity: What are the users doing with the data? 
  3. Data Movement: How does data move within a complex multi-cloud environment?
  4. Data Sensitivity: How sensitive is the data? 
  5. Misconfigurations: Are there any errors that could expose data?
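
As a minimal sketch of how such layered context can roll up into a single number, the example below combines per-layer scores with weights; both the weights and the factor values are invented for illustration and do not represent Sentra's actual scoring model:

```python
# Invented weights for the five context layers above.
WEIGHTS = {
    "data_sensitivity": 0.35,
    "data_access": 0.25,
    "user_activity": 0.15,
    "data_movement": 0.15,
    "misconfigurations": 0.10,
}

def risk_score(factors: dict[str, float]) -> float:
    """Combine per-layer scores in [0, 1] into a single 0-100 risk score."""
    return round(100 * sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS), 1)

open_bucket_with_pii = {
    "data_sensitivity": 0.9,   # PII present
    "data_access": 1.0,        # publicly accessible
    "user_activity": 0.2,
    "data_movement": 0.5,
    "misconfigurations": 1.0,  # e.g., no encryption at rest
}
print(risk_score(open_bucket_with_pii))  # -> 77.0
```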

3. How do we ensure compliance?

DSPM enables CISOs to align their data security practices with industry-specific data regulations and standards. This ensures the organization remains compliant and avoids potential legal and financial penalties.

Sentra assesses how your data security posture stacks up against standard compliance and security frameworks your organization needs to comply with. 

4. How do we proactively reduce the data attack surface?

A concern for CISOs is how to continuously reduce the data attack surface. They aim to mitigate their organization's overall risk profile by doing so. DSPM empowers CISOs with the tools and insights to proactively shrink the data attack surface while providing measurable benchmarks to track progress.

Sentra excels at identifying PII, PHI, and financial data across all cloud resources, including databases, storage buckets, virtual machines, and more. This ensures the prompt detection of compliance violations, making remediation efficient.

By continuously scanning and accurately classifying data, it becomes easy to spot anomalies. For example, you’ll notice when a new application version begins logging PII or when sensitive data is transferred from a production environment to an unsecured development system. Here are some practical examples of how to uphold a strong data security posture with Sentra (a brief sketch of the last item follows the list):

  • Detect forgotten shadow data with the option to remove it or strengthen its security posture 
  • Identify inactive identities with access to sensitive data and disable them
  • Detect unencrypted credentials or authentication tokens within configuration files and secure them
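
As a small illustration of the last item, the sketch below sweeps configuration files for a few credential signatures; the patterns and file extensions are assumptions, and real scanners add many more signatures along with entropy checks:

```python
import re
from pathlib import Path

# Toy credential signatures; real scanners combine many more patterns
# with entropy analysis.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[=:]\s*\S+"),
    "bearer_token": re.compile(r"(?i)\bbearer\s+[A-Za-z0-9._-]{20,}"),
}

# Assumed set of config-file extensions to scan.
CONFIG_SUFFIXES = {".cfg", ".conf", ".ini", ".yaml", ".yml", ".json"}

def scan_configs(root: str) -> None:
    """Walk a directory tree and report config lines that look like secrets."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in CONFIG_SUFFIXES:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}")

scan_configs(".")
```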

These insights empower CISOs and their teams to take fast corrective measures, strengthening their data security posture.

5. How do we manage data access and third-party risks?

Safeguarding sensitive data hinges on maintaining precise control over identities, access, and entitlements. DSPM supports the indispensable role of precise data access controls, which is why Sentra supports a transition to fine-grained access controls tailored to your organization's needs. 

Achieving 'least privilege access' requires continuous monitoring and vigilant tracking of access keys and user identities to ensure that each user operates strictly within their designated roles and responsibilities.

Sentra offers businesses the capability to address risks related to third-party provider access proactively. Vulnerabilities are minimized from the outset by granting varying levels of access to different providers. Sentra quickly conducts impact assessments in case of a third-party provider data breach and facilitates immediate remediation to limit further exposure. Additionally, identity mapping to the sensitive data that can be accessed is provided. For instance, the CISO can monitor which internal users or third parties can access PII or financial data. With Sentra, questions like "Who within my organization can access SSNs and credit card numbers?" or "Which external users can access PHI?" can be answered efficiently, providing a comprehensive view of data access.

6. How are critical data risks being remediated?

DSPM is pivotal in providing prioritized remediation guidance, keeping CISOs well informed and in control. For less complex issues, DSPM can often initiate remediation steps automatically, saving time and reducing the risk of human error.

Sentra assigns risk scores to identified data vulnerabilities, prioritizing them based on their potential impact. This prioritization ensures that CISOs can focus their efforts and resources on the most critical issues first.

7. How can we address resourcing challenges? 

Automation in DSPM offers many advantages that enable CISOs to address the ongoing skills shortage while bridging the talent gap in data security. By automating routine, error-prone, and time-consuming tasks such as data discovery, classification, and risk assessment, DSPM allows CISOs to maximize the value of their existing cybersecurity teams. It not only boosts operational efficiency but also minimizes the reliance on a large workforce. This is especially crucial in an environment where organizations struggle to find and hire qualified security professionals.

DSPM ensures that the available expertise is utilized to its fullest extent by pivoting expertise toward addressing the most critical data vulnerabilities. Not only does this drive operational efficiency, but it also mitigates the friction induced by cybersecurity measures, reducing unnecessary effort and preserving employee productivity. Automation and an API-first approach can help streamline processes, reduce the risk of human error, and improve the efficiency of data security teams.

8. How do we communicate the business value of data security to the board?

A crucial responsibility for CISOs is to provide the board with a high-level update on prioritizing their most critical data risks. DSPM enables CISOs to furnish the board with comprehensive reports, allowing for a macroscopic view of security priorities and the capability to delve into granular details to address specific concerns.

DSPM's reporting capabilities make it easier for CISOs to communicate data security status to executives and the board. This facilitates speaking the language of business value and gaining the necessary support and resources.

DSPM is a proactive partner for CISOs, helping them maintain control over their organization's data security. It offers real-time insights, automation, and a structured approach to remediation, ensuring that CISOs can make informed decisions and stay ahead of evolving threats.

Ron Reiter
September 12, 2023
5
Min Read
AI and ML

Transforming Data Security with Large Language Models (LLMs): Sentra’s Innovative Approach

In today's data-driven world, the success of any data security program hinges on the accuracy, speed, and scalability of its data classification efforts. Why? Because not all data is created equal, and precise data classification lays the essential groundwork for security professionals to understand the context of data-related risks and vulnerabilities. Armed with this knowledge, security operations (SecOps) teams can remediate in a targeted, effective, and prioritized manner, with the ultimate aim of proactively reducing an organization's data attack surface and risk profile over time.

Sentra is excited to introduce Large Language Models (LLMs) into its classification engine. This development empowers enterprises to proactively reduce the data attack surface while accurately identifying and understanding sensitive unstructured data such as employee contracts, source code, and user-generated content at scale.

Many enterprises today grapple with a multitude of data regulations and privacy frameworks while navigating the intricate world of cloud data. Sentra's announcement of adding LLMs to its classification engine is redefining how enterprise security teams understand, manage, and secure their sensitive and proprietary data on a massive scale. Moreover, as enterprises eagerly embrace AI's potential, they must also guard against unauthorized access to or manipulation of their LLMs and remain vigilant in detecting and responding to security risks associated with AI model training. Sentra is well-equipped to guide enterprises through this multifaceted journey.

A New Era of Data Classification 

Identifying and managing unstructured data has always been a headache for organizations, whether it's legal documents buried in email attachments, confidential source code scattered across various folders, or user-generated content strewn across collaboration platforms. Imagine a scenario where an enterprise needs to identify all instances of employee contracts within its vast data repositories. Previously, this would have involved painstaking manual searches, leading to inefficiency, potential oversight, and increased security risks.

Sentra’s LLM-powered classification engine can now comprehend the context, sentiment, and nuances of unstructured data, enabling it to classify such data with a level of accuracy and granularity that was previously unimaginable. The model can analyze the content of documents, emails, and other unstructured data sources, not only identifying employee contracts but also providing valuable insights into their context. It can understand contract clauses, expiration dates, and even flag potential compliance issues.

Similarly, for source code scattered across diverse folders, Sentra can recognize programming languages, identify proprietary code, and ensure that sensitive code is adequately protected.

When it comes to user-generated content on collaboration platforms, Sentra can analyze and categorize this data, making it easier for organizations to monitor and manage user interactions, ensuring compliance with their policies and regulations.

This new classification approach not only aids in understanding the business context of unstructured customer data but also aligns seamlessly with compliance standards such as GDPR, CCPA, and HIPAA. Ensuring the highest level of security, Sentra exclusively scans data with LLM-based classifiers within the enterprise's cloud premises. The assurance that the data never leaves the organization’s environment reduces an additional layer of risk.

Quantifying Risk: Prioritized Data Risk Scores 

Automated data classification capabilities provide a solid foundation for data security management practices. What’s more, data classification speed and accuracy are paramount when striving for an in-depth comprehension of sensitive data and quantifying risk. 

Sentra offers data risk scoring that considers multiple layers of data, including sensitivity scores, access permissions, user activity, data movement, and misconfigurations. This unique technology automatically scores the most critical data risks, providing security teams and executives with a clear, prioritized view of all their sensitive data at-risk, with the option to drill down deeply into the root cause of the vulnerability (often at a code level). 

Having a clear, prioritized view of high-risk data at your fingertips empowers security teams to truly understand, quantify, and prioritize data risks while directing targeted remediation efforts.

The Power of Accuracy and Efficiency

One of the most significant advantages of Sentra's LLM-powered data classification is the unprecedented accuracy it brings to the table. Inaccurate or incomplete data classification can lead to costly consequences, including data breaches, regulatory fines, and reputational damage. With LLMs, Sentra ensures that your data is classified with precision, reducing the risk of errors and omissions.

Moreover, this enhanced accuracy translates into increased efficiency. Sentra's LLM engine can process vast volumes of data in a fraction of the time it would take a human workforce. This not only saves valuable resources but also enables organizations to proactively address security and compliance challenges.

Key developments of Sentra's classification engine encompass:

  • Automatic classification of proprietary customer data with additional context to comply with regulations and privacy frameworks.
  • LLM-powered scanning of data asset content and analysis of metadata, including file names, schemas, and tags.
  • The capability for enterprises to train their LLMs and seamlessly integrate them into Sentra's classification engine for improved proprietary data classification.

We are excited about the possibilities that this advancement will unlock for our customers as we continue to innovate and redefine cloud data security.

Yair Cohen
September 7, 2023
5
Min Read
Data Security

Why Legacy Data Classification Tools Don't Work Well in the Cloud (But DSPM Does)

Data security teams are always trying to understand where their sensitive data is. Yet this goal has remained out of reach for a number of reasons.

The main difficulty is creating a continuously updated data catalog of all production and cloud data. Creating this catalog would involve:

  1. Identifying everyone in the organization with knowledge of any data stores and visibility into their contents
  2. Connecting a data classification tool to these data stores
  3. Ensuring network connectivity by configuring network and security policies
  4. Confirming that business-critical production systems using each data source won’t be negatively affected, damaging performance or availability

A process this complex requires a major investment of resources and long workflows, and it will still probably not provide the full coverage organizations are looking for. Many so-called successful implementations of such solutions prove unreliable and too difficult to maintain after a short period of time.

Another pain point with legacy data classification solutions is accuracy. Data security professionals are all too aware of the problems of false positives (wrong classifications and data findings) and false negatives (sensitive data that is missed and remains unknown). This is mainly due to two reasons (a small demonstration follows the list):

  • Legacy classification solutions rely solely on patterns, such as regular expressions, to identify sensitive data, which falls short for both unstructured and structured data. 
  • These solutions don’t understand the business context around the data, such as how it is being used, by whom, and for what purposes.

Without this business context, security teams can’t derive actionable steps to remove or protect sensitive data against data risks and security breaches.
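
To make the limitation concrete, here is a toy Python sketch of a purely pattern-based check; it fires on anything shaped like a credit card number, with no notion of where the data lives or how it is used:

import re

# A regex-only classifier has no business context - just a pattern.
CARD_PATTERN = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")

def regex_only_classify(text):
    return bool(CARD_PATTERN.search(text))

print(regex_only_classify("Internal order id: 4111-1111-1111-1111"))   # True - a false positive
print(regex_only_classify("Customer card ending in 1111, exp 12/26"))  # False - a false negative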

Lastly, there are the high operational costs. Legacy data classification solutions were not built for the cloud, where each data read/write and network operation has a price tag. The cloud also offers far more cost-efficient storage and advanced data services, which lead organizations to store much more data than they did before moving to the cloud. At the same time, the public cloud providers offer a variety of cloud-native APIs and mechanisms that can greatly benefit a data classification and security solution, such as automated backups, cross-account federation, direct access to block storage, storage classes, compute instance types, and much more. Legacy data classification tools, which were not built for the cloud, ignore those benefits and differences entirely, making them an extremely expensive choice for cloud-native organizations.

DSPM: Built to Solve Data Classification in the Cloud 

These challenges have led to the growth of a new approach to securing cloud data: Data Security Posture Management, or DSPM. Sentra’s DSPM is able to provide full coverage and an up-to-date data catalog with classification of sensitive data, without any complex deployment or operational work. This is achieved thanks to a cloud-native, agentless architecture that uses cloud-native APIs and mechanisms.

A good example of this approach is how Sentra’s DSPM architecture leverages the public cloud mechanism of automated backups for compute instances, block storage, and more. This allows Sentra to securely run its full discovery and classification technology from within the customer’s environment, in any VPC or subscription/account of the customer’s choice. This offers a number of benefits:

  1. The organization does not need to change any existing infrastructure configuration, network policies, or security groups.
  2. There’s no need to provide individual credentials for each data source in order for Sentra to discover and scan it.
  3. There is never a performance impact on the actual compute-bound workloads, such as virtual machines, that run in production environments. In fact, Sentra’s scanning never connects via the network or application layers to those data stores.

Another benefit of a DSPM built for the cloud is classification accuracy. Sentra’s DSPM provides an unprecedented level of accuracy thanks to more modern, cloud-native capabilities. This starts with advanced statistical relevance for structured data, enabling our classification engine to understand with high confidence that sensitive data is found within a specific column or field, without scanning every row in a large table.
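
Conceptually (an illustrative sketch, not Sentra's actual algorithm), classifying with statistical relevance means sampling a random subset of a column's values and accepting a verdict once the hit rate clears a confidence threshold:

import random
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def column_contains_emails(values, sample_size=500, threshold=0.95):
    """Classify a random sample instead of scanning every row in the table."""
    values = list(values)
    sample = random.sample(values, min(sample_size, len(values)))
    hits = sum(1 for value in sample if EMAIL.fullmatch(str(value)))
    return hits / len(sample) >= threshold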

Sentra leverages even more advanced algorithms for key-value stores and document databases. For unstructured data, the use of AI- and LLM-based algorithms unlocks tremendous accuracy in understanding and detecting sensitive data types by understanding the context within the data itself. Lastly, the combination of data-centric and identity-centric security approaches provides greater context, allowing Sentra’s users to know what actions they should take to remediate data risks.

Here are two examples of how we apply this context:

1. Different Types of Databases

Personal Identifiable Information (PII) found in a database that only users from the Analytics team have access to is often a privacy violation and a data risk. On the other hand, PII found in a database that only three production microservices have access to is expected, but requires the data to be isolated within a secure VPC. 

2. Different Access Histories

Suppose 100 employees have access to a sensitive shadow data lake, but only 10 people have actually accessed it in the last year. In this case, the solution would be to reduce permissions and implement stricter access controls. We’d also want to ensure that the data has the right retention policy, to reduce both risks and storage costs. Sentra’s risk score prioritization engine takes multiple data layers into account, including data access permissions, activity, sensitivity, movement, and misconfigurations, giving enterprises greater visibility and control over their data risk management processes.

Finally, with regards to costs, Sentra’s Data Security Posture Management (DSPM) solution utilizes innovative features that make its scanning and classification roughly two to three orders of magnitude more cost-efficient than legacy solutions. The first is smart sampling: Sentra clusters multiple data units that share the same characteristics and, using intelligent sampling with statistical relevance, understands what sensitive data exists within those automatically grouped data assets. This is extremely powerful when dealing with data lakes that often span dozens of petabytes, without compromising coverage or accuracy.
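
As a simplified sketch of the smart-sampling idea (the grouping key below is hypothetical), objects that share the same characteristics are clustered so that only a few representatives per cluster need to be scanned:

from collections import defaultdict

def cluster_objects(objects):
    """Group data objects by shared characteristics (e.g., prefix and file type)."""
    clusters = defaultdict(list)
    for obj in objects:
        clusters[(obj["prefix"], obj["extension"])].append(obj)
    return clusters

def pick_samples(clusters, per_cluster=3):
    """Scan a handful of representatives per cluster instead of every object."""
    return {key: members[:per_cluster] for key, members in clusters.items()}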

Second, Sentra’s modern architecture leverages the benefits of ephemeral cloud resources, such as snapshotting and ephemeral compute workloads, with a cloud-native orchestration technology that takes advantage of the elasticity and scale of the cloud. Sentra balances its resource utilization with the needs of the customer's business, providing advanced scan settings built and designed for the cloud. This allows teams to optimize cost according to their business needs, such as determining the frequency and sampling of scans, among other advanced features.

To summarize:

  1. Given the current macroeconomic climate, CISOs should view DSPMs like Sentra as an opportunity to increase their security while minimizing their costs.
  2. DSPM solutions like Sentra bring important context-awareness to security teams and tools, allowing them to do better risk management and prioritization by focusing on what’s important.
  3. Data is likely to remain the most important asset of every business as more organizations embrace the power of the cloud. A DSPM will therefore be a pivotal tool in realizing the true value of that data while ensuring it is always secure.
  4. Accuracy is key, and AI is an enabler for a good data classification tool.

Read More
Team Sentra
August 23, 2023
3
Min Read
Data Security

Why Data is the New Center of Gravity in a Connected Cloud Security Ecosystem


As many forward-thinking organizations embrace the transformational potential of innovative cloud architectures, new dimensions of risk are emerging, centered around data privacy, compliance, and the protection of sensitive data. This shift has catapulted cloud data security to the top of the Chief Information Security Officer's (CISO) agenda.

At the Gartner Security and Risk Management summit, Gartner cited some of the pressing priorities for CISOs as safeguarding data across its various forms, adopting a simplified approach, optimizing resource utilization, and achieving low-risk, high-value outcomes. While these may seem like a tall order, they provide a clear roadmap for the future of cloud security.

In light of these priorities, Gartner also highlighted the pivotal trend of integrated security systems. Imagine a holistic ecosystem where proactive and predictive controls harmonize with preventative measures and detection mechanisms. Such an environment empowers security professionals to continuously monitor, assess, detect, and respond to multifaceted risks. This integrated approach catalyzes the move from reaction to anticipation and resolution to prevention.

In this transformative ecosystem, we at Sentra believe that data is the gravitational center of connected cloud security systems and an essential element of the risk equation. Let's unpack this some more.

It's All About the Data.

Given the undeniable impact of major data breaches that have shaken organizations like Discord, Northern Ireland Police, and Docker Hub, we all know that the most potent risks often lead back to sensitive data. 

Security teams have many cloud security tools at their disposal, from Cloud Security Posture Management (CSPM) and Cloud Native Application Protection Platform (CNAPP) to Cloud Access Security Broker (CASB). These are all valuable tools for identifying and prioritizing risks and threats in the cloud infrastructure, network, and applications, but what really matters is the data.  

Let's look at an example of a configuration issue detected in an S3 bucket. The next logical questions are what kind of data resides inside that datastore, how sensitive the data is, and how much of a risk it poses to the organization when aligned with the specific security policies that have been set up. These are the critical factors that determine the real risk. Can you imagine assessing risk without understanding the data? Such an assessment would inevitably fall short, lacking the contextual depth necessary to gauge the true extent of risk.

Why is this important? Because sensitive data will raise the severity of the alert. By factoring data sensitivity into risk assessments, prioritizing data-related risks becomes more accurate. This is where Sentra's innovative technology comes into play. By automatically assigning risk scores to the most vital data risks within an organization, Sentra empowers security teams and executives with a comprehensive view of sensitive data at risk. This overview extends the option to delve deep into the root causes of vulnerabilities, even down to the code level.
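
As a hedged illustration of this kind of data-aware triage (not Sentra's implementation - the sensitivity label here is a stand-in for a classification engine's output), a boto3 sketch might look like:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_is_public(bucket):
    """Treat a missing or permissive public-access block as potentially public."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        return not all(cfg.values())
    except ClientError:
        return True  # no public-access block configured at all

def alert_severity(bucket, sensitivity):
    """Escalate severity when a misconfiguration coincides with sensitive data."""
    if bucket_is_public(bucket) and sensitivity == "high":
        return "critical"
    return "medium" if bucket_is_public(bucket) else "low"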

Prioritized Data Risk Scoring: The Sentra Advantage

Sentra's automated risk scoring is built from a rich data security context. This context originates from a thorough understanding of various layers:

  1. Data Access: Who has access to the data, and how is it governed?
  2. User Activity: What are the users doing with the data? 
  3. Data Movement: How does data move within a complex multi-cloud environment?
  4. Data Sensitivity: How sensitive is the data? 
  5. Misconfigurations: Are there any errors that could expose data?

This creates a holistic picture of data risk, laying a firm and comprehensive foundation for Sentra's unique approach to data risk assessment and prioritized risk scoring. 

Contextualizing Data Risk

Context is everything when it comes to accurate risk prioritization and scoring. Adding the layer of data sensitivity – with its nuanced scoring – further enriches this context, providing a more detailed perspective of the risk landscape. This is the essence of an integrated security system designed to empower security leaders with a clear view of their exposure while offering actionable steps for risk reduction.

The value of this approach becomes evident when security professionals are empowered to manage and monitor risk proactively. The CISO is armed with insights into the organization's vulnerabilities and the means to address them. Data security platforms, such as Sentra's, should seamlessly integrate with the workflows of risk owners. This facilitates timely action, eliminating the need for bottlenecks and unnecessary back-and-forth with security teams.

Moving Forward 

The connection between cloud security and data is profound, shaping the future of cybersecurity practices. A data-centric approach to cloud security will empower organizations to harness the full potential of the cloud while safeguarding the most valuable asset: their data. 

Read More
Catherine Gurwitz
July 27, 2023
2
Min Read

Sentra Featured in Gartner’s 2023 Hype Cycle for Data Security for Second Consecutive Year


We are thrilled to be featured for the second time in Gartner’s latest Hype Cycle for data security. DSPM has not only been acknowledged as having transformational benefits to organizations, but it is reshaping and innovating the cloud data security landscape. 

Gartner highlights the important role of DSPM in identifying privacy and security risks with a single product, as organizations face the challenge of data proliferation across multiple cloud environments. Furthermore, DSPM will transform how security teams “identify business risks that result from data residency, privacy, and security risks”.

Gartner also describes the business impact of DSPM as:

“Uniquely discovering shadow data by creating and analyzing a data map and data flow to identify data locations and user access to data. This will create critical insights to previously unassessed business risks. DSPM then enables data security posture to be applied consistently across previously independent data security controls. This allows organizations to mitigate these business risks despite the speed, complexity, dynamics and scale of data deployments. This is a unique combination of properties provided via a single console”.

Other highlights from the report include:

  • Dynamic changes to data pipelines and services across CSPs are resulting in diverse shadow data repositories due to unfamiliar, undiscovered, or unclassified data, posing potential risks through geographic location, misconfigurations, or inappropriate access privileges.
  • “Creating a data map of user access against specific datasets” was a complex process in the past due to data security and IAM operating in silos. 
  • “To achieve consistent analysis, organizations need to map and track the evolution and data lineage across structured, semistructured and unstructured formats, and across all potential data locations and shadow data”.
  • The growth of data regulations has created the need for “tools that can access DSG policies”.
  • Organizations are looking to identify security gaps and undue exposure using a combination of data observability features, such as real-time visibility into data flows, risk and compliance with data security controls.

To learn more about DSPM capabilities, check out this 2023 Gartner Report - Innovation Insight: Data Security Posture Management

Read More
Ron Reiter
June 26, 2023
2
Min Read
Data Security

Why ChatGPT is a Data Loss Disaster: ChatGPT Data Privacy Concerns


ChatGPT is an incredible productivity tool. Everyone is already hooked on it because it is a force multiplier for just about any corporate job out there. Whether you want to proofread your emails, restructure data, investigate, write code, or perform almost any other task, ChatGPT can help.

However, for ChatGPT to provide effective assistance, it often requires a significant amount of context. This context is sometimes copied and pasted from internal corporate data, which can be sensitive in many cases. For example, a user might copy and paste a whole PDF file containing names, addresses, email addresses, and other sensitive information about a specific legal contract, simply to have ChatGPT summarize or answer a question about the contract's details.

Unlike searching for information on Google, ChatGPT allows users to provide far more extensive information to solve the problem at hand. Furthermore, generative AI models typically offer their services for free in exchange for the ability to improve their models based on the questions they are asked.

What happens if sensitive data is pasted into ChatGPT? OpenAI's models continuously improve by incorporating the information provided by users as input data. This helps the models learn how to enhance their answering abilities. Once the data is pasted and sent to OpenAI's servers, it becomes impossible to remove or request the redaction of specific information. While OpenAI's engineers are working to improve their technology in many other ways, implementing governance features that could mitigate these effects will likely take months or even years.

This situation creates a Data Loss Disaster, where employees are highly motivated and encouraged to copy and paste potentially sensitive information into systems that may store the submitted information indefinitely, without the ability to remove it or know exactly what information is stored within the complex models.

This has led companies such as Apple, Samsung, Verizon, JPMorgan, Bank of America, and others to completely ban the use of ChatGPT across their organizations. The goal is to prevent employees from accidentally leaking sensitive data while performing their everyday tasks. This approach helps minimize the risk of sensitive data being leaked through ChatGPT or similar tools.

Read More
Ron Reiter
June 8, 2023
3
Min Read
Data Security

Why We Built ChatDLP: Because Banning Productivity Tools Isn't the Answer


There are two main types of ChatGPT posts appearing in my LinkedIn feed. 

The first is people showing off the different ways they’re using ChatGPT to be more effective at work. Everyone from developers to marketers has shared their prompts to do repetitive or difficult work faster.

The second is security leaders announcing their organizations will no longer permit using ChatGPT at work for security reasons. These usually come with a story about how sensitive data has been fed into the AI models.

For example, a month ago, researchers found that Samsung’s employees had submitted sensitive information (meeting notes and source code) to ChatGPT to assist with their everyday tasks. Recently, Apple blocked the use of ChatGPT in the company so that data won’t leak into OpenAI’s models.

The Dangers of Sharing Sensitive Data with ChatGPT

What’s the problem with providing unfiltered access to ChatGPT? Why are organizations reacting this aggressively to a tool that clearly has many benefits?

One reason is that the models cannot avoid learning from sensitive data. This is because they were not instructed on how to differentiate between sensitive and non-sensitive data, and once learned, it is extremely difficult to remove the sensitive data from the models. Once the models have the information, it’s very easy for attackers to continuously search for sensitive data that companies accidentally submitted. For example, hackers can simply ask ChatGPT to “provide all of the personal information that it is aware of.” And while there are mechanisms in place to prevent models from sharing this type of information, these can easily be circumvented by phrasing the request differently.

Introducing ChatDLP - the Sensitive Data Anonymizer for ChatGPT

In the past few months, we were approached by dozens of CISOs and security professionals urging us to provide a DLP tool that would enable their employees to continue using ChatGPT safely.

So we’ve developed ChatDLP, a Chrome extension and Edge add-on that anonymizes sensitive data typed into ChatGPT before it’s submitted to the model.

Example of ChatDLP: at the bottom of the image is the original query containing sensitive data; above it, the redacted version.

With ChatDLP installed, Sentra’s engine ensures with high accuracy that no sensitive data leaks from your organization, allowing you to stay compliant with privacy regulations and avoid sensitive data leaks caused by employees using ChatGPT.
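
To illustrate the general idea (a minimal sketch of pattern-based anonymization, not ChatDLP's actual engine, which also detects free-text sensitive data with LLMs), sensitive values are replaced with placeholder tokens before the prompt leaves the browser:

import re

# Simplified, hypothetical patterns; the real engine covers many more types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def anonymize(prompt):
    """Replace sensitive values with placeholder tokens before submission."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub("<" + label + ">", prompt)
    return prompt

print(anonymize("Email john.doe@acme.com, SSN 123-45-6789"))
# -> "Email <EMAIL>, SSN <SSN>"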


Sensitive data anonymized by ChatDLP includes:

  • Names
  • Emails
  • Credit Card Numbers
  • Social Security Numbers
  • Phone Numbers
  • Mailing Addresses
  • IP Addresses
  • Bank Account Details
  • And more!


We built ChatDLP using Sentra's AI-based classification engine, which detects both pattern-based and free-text sensitive data using advanced LLM (Large Language Model) techniques - the same technology used by ChatGPT itself.

You know that there’s no business case to be made for blocking ChatGPT in your organization. And now with ChatDLP - there’s no security reason either. Unleash the power of ChatGPT securely.

Read More
Team Sentra
May 31, 2023
3
Min Read
Data Security

Sentra Integrates with Amazon Security Lake, Providing a Data First Security Approach


We are excited to announce Sentra’s integration with Amazon Security Lake, a fully managed security data lake service enabling organizations to automatically centralize security data from various sources, including cloud, on-premises, and third-party vendors.

Our joint capabilities enable organizations to fast-track the prioritization of their most business-critical data risks, based on data sensitivity scores. What’s more, enterprises can automatically classify and secure their sensitive cloud data while also analyzing the data to gain a comprehensive understanding of their security posture.

Building a Data Sensitivity Layer is Key for Prioritizing Business Critical Risks

Many security programs and products today generate a large number of alerts and notifications without understanding how sensitive the data at risk truly is. This leaves security teams overwhelmed and susceptible to alert fatigue, making it difficult to efficiently identify and prioritize the most critical risks to the business.

Bringing Sentra's unique data sensitivity scoring approach to Amazon Security Lake, organizations can now effectively protect their most valuable assets by prioritizing and remediating the security issues that pose the greatest risks to their critical data.


Moreover, many organizations leverage third-party vendors for threat detection based on security logs that are stored in Amazon Security Lake. Sentra enriches these security events with the corresponding sensitivity score, greatly improving the speed and accuracy of threat detection and reducing the response time to real-world attacks.

Sentra's technology allows security teams to easily discover, classify, and assess the sensitivity of every data store and data asset in their cloud environment. By correlating security events with the data sensitivity layer, a meaningful data context can be built, enabling organizations to detect threats more efficiently and prioritize the most significant risks to the business.
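
Conceptually, the enrichment step is simple; in the Python sketch below, the field names and scores are illustrative rather than the exact OCSF or Sentra schema:

# Hypothetical sensitivity scores produced by data classification.
SENSITIVITY_BY_RESOURCE = {
    "s3://prod-customer-data": 9.5,
    "s3://public-web-assets": 1.0,
}

def enrich_event(event):
    """Attach the data sensitivity score to a security event before triage."""
    event["data_sensitivity"] = SENSITIVITY_BY_RESOURCE.get(event.get("resource"), 0.0)
    return event

alert = enrich_event({"type": "unauthorized_access", "resource": "s3://prod-customer-data"})
print(alert["data_sensitivity"])  # 9.5 - prioritize this alert first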

OCSF Opens Up Multiple Use Cases

The Open Cybersecurity Schema Framework (OCSF) is a set of standards and best practices for defining, sharing, and using cybersecurity-related data. By adopting OCSF, Sentra seamlessly exchanges cybersecurity-related data with various security tools, enhancing the efficiency and effectiveness of these solutions. Security Lake is one of the vendors that supports OCSF, enabling mutual customers to enjoy the benefit of the integration.

This powerful integration ultimately offers organizations a smart and more efficient way to prioritize and address security risks based on the sensitivity of their data. With Sentra's data-first security approach and Security Lake's analytics enabling capabilities, organizations can now effectively protect their most valuable assets and improve their overall security posture. By leveraging the power of both platforms, security teams can focus on what truly matters: securing their most sensitive data and reducing risk across their organization.

Read More
Ron Reiter
May 8, 2023
4
Min Read

Cloud Data Hygiene is an Underrated Security Enabler


As one who remembers life and technology before the cloud, I appreciate even more the incredible changes the shift to the cloud has wrought. Productivity, speed of development, turbocharged collaboration – these are just the headlines. The true scope goes much deeper.

Yet with any new opportunity come new risks. And moving data at unprecedented speeds – even if it is to facilitate unprecedented productivity – enhances the risk that data won’t always end up where it’s supposed to be. In fact, it’s highly likely it won’t. It will find its way into corners of the cloud that are out of the reach of governance, risk and compliance tools - becoming shadow data that can pose a range of dangers to compliance, IP, revenue and even business continuity itself.

There are many approaches to mitigating the risk to cloud data. Yet there are also some foundations of cloud data management that should precede investment in technology and services. This is kind of like making sure your car has enough oil and air in the tires before you even consider that advanced defensive driving course. And one of the most important – yet often overlooked – data security measures you can take is ensuring that your organization follows proper cloud data hygiene.

Why Cloud Data Hygiene?

On the most basic level, cloud data hygiene practices ensure that your data is clean, accurate, consistent, and stored appropriately. Data hygiene affects all aspects of the data-driven business – from efficiency to decision making, from cloud storage expenses to customer satisfaction, and everything in between. 

What does this have to do with security? Although it may not be the first thing that pops into a CISO’s mind when he or she hears “data hygiene,” the fact is that good cloud data hygiene improves the security posture of your organization. 

By ensuring that cloud data is consistently stored only in sanctioned environments, good cloud data hygiene helps dramatically reduce the cloud data attack surface. This is a crucial concept, because cloud security risks no longer arise primarily from technical vulnerabilities in the cloud environment. They more frequently originate because there’s so much data to defend that organizations don’t know where it all is, who’s responsible for what, and what its exact security posture is. This is the cloud data attack surface: the sum of the total vulnerable, sensitive, and shadow data assets in the cloud. And cloud data hygiene is a key mitigating force. 

Moreover, even when sensitive data is not under direct threat, good cloud data hygiene lowers indirect risk by mitigating the potential for serious damage from lateral movement following a breach. And, of course, cloud data security policies are more easily implemented and enforced when data is in good order.

The Three Commandments of Cloud Data Hygiene 

  • Commandment 1: Know Thy Data

Understanding is the first step on the road to enlightenment…and cloud data security. You need to understand which datasets you have, which can be deleted to lower storage expenses, where each is stored, whether any copies were made, and, if so, who has access to each copy. Once you know the ‘where,’ you must know the ‘which’ – which datasets are sensitive and which are subject to regulatory oversight? After that, the ‘how:’ how are these datasets being protected? How are they accessed and by whom?

Only once you have the answers to all these (and more) questions can you start protecting the right data in the right way. And don’t forget that the shift to the cloud means there are many sensitive data types that never existed on-prem yet still need to be protected – for example, code stored in the cloud, applications that use other cloud services, or cloud-based APIs.

  • Commandment 2 – Know Thy Responsibilities

In any context, it’s crucial to understand who does what. There is a misperception that cloud providers are responsible for cloud data security. This is simply incorrect. Cloud providers are responsible for the security of the infrastructure over which services are provided. Securing applications and – especially – data is the sole responsibility of the customer.

Another aspect of cloud data management that falls solely on the customer’s shoulders is access control. If every user in your organization has admin privileges, any breach can be devastating. At the user level, applying the principle of least privilege is a good start.

  • Commandment 3 – Ensure Continuous Hygiene 

To keep your cloud ecosystem healthy, safe, and cost-effective over the long term, establish and enforce clear and detailed cloud data hygiene processes and procedures. Make sure you can effectively monitor the entire data lifecycle: continuously monitor and scan all data, searching for new and changed data.

To ensure that data is secure both at rest and in motion, make sure both storage and transit have a minimal level of encryption – preventing unauthorized users from viewing or changing data. Most cloud vendors enable customers to manage their own encryption keys – meaning that, once encrypted, even cloud vendors can’t access sensitive data.
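
As a hedged example of the customer-managed-key approach, the boto3 sketch below encrypts a small payload with your own KMS key before it is written to storage; the key ID and bucket name are placeholders:

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Encrypt with a customer-managed key before the data reaches storage.
# (Direct KMS encryption suits small payloads; envelope encryption is
# the usual pattern for large objects.)
ciphertext = kms.encrypt(
    KeyId="your_kms_key_id",
    Plaintext=b"sensitive payload",
)["CiphertextBlob"]

s3.put_object(Bucket="your-bucket", Key="data.enc", Body=ciphertext)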

Finally, keep cloud API and data storage expenses in check by continuously tracking data wherever it moves or is copied. Multiple copies of petabyte scale data sets unknowingly copied and used (for example) to train AI algorithms will necessarily result in far higher (yet preventable) storage costs.

The Bottom Line

Cloud data is a means to a very valuable end. Adopting technology and processes that facilitate effective cloud data hygiene enables cloud data security. And seamless cloud data security enables enterprises to unlock the vast yet often hidden value of their data.

Read More
Team Sentra
May 3, 2023
8
Min Read

Use Redshift Data Scrambling for Additional Data Protection


According to IBM, a data breach in the United States cost companies an average of $9.44 million in 2022. It is now more important than ever for organizations to place high importance on protecting confidential information. Data scrambling, which can add an extra layer of security to data, is one approach to accomplishing this. 

In this post, we'll analyze the value of data protection, look at the potential financial consequences of data breaches, and talk about how Redshift Data Scrambling may help protect private information.

The Importance of Data Protection

Data protection is essential to safeguard sensitive data from unauthorized access. Identity theft, financial fraud, and other serious consequences are all possible results of a data breach. Data protection is also crucial for compliance reasons. Sensitive data must be protected by law in several sectors, including government, banking, and healthcare. Heavy fines, legal problems, and business loss may result from failure to abide by these regulations.

Attackers employ many techniques, including phishing, malware, insider threats, and direct system exploitation, to gain access to confidential information. For example, a phishing attack may lead to the theft of login information, and malware may infect a system, opening the door for further attacks and data theft. 

So how can you protect yourself against these attacks and minimize your data attack surface?

What is Redshift Data Masking?

Redshift data masking is a technique used to protect sensitive data in Amazon Redshift, a cloud-based data warehousing and analytics service. It involves replacing sensitive data with fictitious, realistic values to protect it from unauthorized access or exposure. Data security can be further enhanced by using Redshift data masking in conjunction with other security measures, such as access control and encryption, to create a comprehensive data protection plan.


What is Redshift Data Scrambling?

Redshift data scrambling protects confidential information in a Redshift database by altering original data values using algorithms or formulas, creating unrecognizable data sets. This method is beneficial when sharing sensitive data with third parties or using it for testing, development, or analysis, ensuring privacy and security while enhancing usability. 

The technique is highly customizable, allowing organizations to select the desired level of protection while maintaining data usability. Redshift data scrambling is cost-effective, requiring no additional hardware or software investments, providing an attractive, low-cost solution for organizations aiming to improve cloud data security.

Data Masking vs. Data Scrambling

Data masking involves replacing sensitive data with fictitious but realistic values. Data scrambling, on the other hand, involves changing the original data values using an algorithm or formula to generate a new set of values.

In some cases, data scrambling can be used as part of data masking techniques. For instance, sensitive data such as credit card numbers can be scrambled before being masked to enhance data protection further.
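
A toy Python example makes the distinction concrete:

import random

def mask_card(card):
    """Masking: swap the value for a fictitious but realistic-looking one."""
    return "****-****-****-" + card[-4:]

def scramble_card(card):
    """Scrambling: transform the original value with an algorithm."""
    digits = [ch for ch in card if ch.isdigit()]
    random.shuffle(digits)
    return "".join(digits)

card = "1234-5678-9012-3456"
print(mask_card(card))      # ****-****-****-3456
print(scramble_card(card))  # e.g., 5391268470213456 - original digits, new order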

Setting up Redshift Data Scrambling

Having gained an understanding of Redshift and data scrambling, we can now proceed to set it up. Enabling data scrambling in Redshift involves several steps.

To achieve data scrambling in Redshift, SQL queries are utilized to invoke built-in or user-defined functions. These functions utilize a blend of cryptographic techniques and randomization to scramble the data.

The following steps use example code to illustrate the setup:

Step 1: Create a new Redshift cluster

Create a new Redshift cluster or use an existing cluster if available. 


Step 2: Define a scrambling key

Define a scrambling key that will be used to scramble the sensitive data.

 
SET session my_scrambling_key = 'MyScramblingKey';

In this code snippet, we define a scrambling key by setting a session-level parameter named my_scrambling_key to the value MyScramblingKey. The example UDF below embeds the same key directly, since Redshift's scalar UDFs cannot query session state.

Step 3: Create a user-defined function (UDF)

Create a user-defined function in Redshift that will be used to scramble the sensitive data. 


CREATE FUNCTION scramble(input_string VARCHAR)
RETURNS VARCHAR
STABLE
AS $$
    # Example scrambling logic: shift each letter by a key-derived offset.
    # Replace this placeholder with your own scrambling algorithm.
    if input_string is None:
        return None
    key = 'MyScramblingKey'
    shift = sum(ord(c) for c in key) % 26
    result = ''
    for ch in input_string:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            result += chr((ord(ch) - base + shift) % 26 + base)
        else:
            result += ch
    return result
$$ LANGUAGE plpythonu;

Here, we create a UDF named scramble that takes a string input and returns the scrambled output. The function is defined as STABLE, which means it will always return the same result for the same input - important for data scrambling. The body above uses a simple key-derived character shift as a placeholder, so you will need to plug in your own scrambling logic; note that Redshift scalar UDFs are written in SQL or Python (plpythonu) rather than PL/pgSQL.

Step 4: Apply the UDF to sensitive columns

Apply the UDF to the sensitive columns in the database that need to be scrambled.


UPDATE employee SET ssn = scramble(ssn);

For example, here we apply the scramble UDF to a column named ssn in a table named employee. The UPDATE statement calls the scramble UDF and replaces the values in the ssn column with the scrambled values.

Step 5: Test and validate the scrambled data

Test and validate the scrambled data to ensure that it is unreadable and unusable by unauthorized parties.


SELECT ssn, scramble(ssn) AS scrambled_ssn
FROM employee;

In this snippet, we run a SELECT statement to retrieve the ssn column alongside the value produced by the scramble UDF. Comparing the original and scrambled values (ideally on a copy of the data, before running the UPDATE from Step 4) confirms that the scrambling works as expected. 

Step 6: Monitor and maintain the scrambled data

To monitor and maintain the scrambled data, regularly check the sensitive columns to ensure that they are still scrambled and that there are no vulnerabilities or breaches. The scrambling key and UDF should also be maintained to ensure they remain up-to-date and effective.

Different Options for Scrambling Data in Redshift

Selecting a data scrambling technique involves balancing security levels, data sensitivity, and application requirements. Various general algorithms exist, each with unique pros and cons. To scramble data in Amazon Redshift, you can use the following Python code samples in conjunction with a library like psycopg2 to interact with your Redshift cluster. Before executing the code samples, you will need to install the psycopg2 library:


pip install psycopg2

Random

Utilizing a random number generator, the Random option quickly secures data, although its susceptibility to reverse engineering limits its robustness for long-term protection.


import random
import string
import psycopg2

def random_scramble(data):
    scrambled = ""
    for char in data:
        scrambled += random.choice(string.ascii_letters + string.digits)
    return scrambled

# Connect to your Redshift cluster
conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname', user='your_user', password='your_password')
cursor = conn.cursor()
# Fetch data from your table
cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

# Scramble the data
scrambled_values = [random_scramble(row[0]) for row in rows]

# Update the data in the table, pairing each scrambled value with its original
cursor.executemany("UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;", [(scrambled, row[0]) for scrambled, row in zip(scrambled_values, rows)])
conn.commit()

# Close the connection
cursor.close()
conn.close()

Shuffle

The Shuffle option enhances security by rearranging data characters. However, it remains prone to brute-force attacks, despite being harder to reverse-engineer.


import random
import psycopg2

def shuffle_scramble(data):
    data_list = list(data)
    random.shuffle(data_list)
    return ''.join(data_list)

conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname', user='your_user', password='your_password')
cursor = conn.cursor()

cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

scrambled_values = [shuffle_scramble(row[0]) for row in rows]

cursor.executemany("UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;", [(scrambled, row[0]) for scrambled, row in zip(scrambled_values, rows)])
conn.commit()

cursor.close()
conn.close()

Reversible

By scrambling characters in a manner that can be reversed with a decryption key, the Reversible method poses a greater challenge to attackers but is still vulnerable to brute-force attacks. We’ll use the Caesar cipher as an example.


def caesar_cipher(data, key):
    encrypted = ""
    for char in data:
        if char.isalpha():
            shift = key % 26
            if char.islower():
                encrypted += chr((ord(char) - 97 + shift) % 26 + 97)
            else:
                encrypted += chr((ord(char) - 65 + shift) % 26 + 65)
        else:
            encrypted += char
    return encrypted

conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname', user='your_user', password='your_password')
cursor = conn.cursor()

cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

key = 5
encrypted_values = [caesar_cipher(row[0], key) for row in rows]
cursor.executemany("UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;", [(encrypted, row[0]) for encrypted, row in zip(encrypted_values, rows)])
conn.commit()

cursor.close()
conn.close()

Custom

The Custom option enables users to create tailor-made algorithms to resist specific attack types, potentially offering superior security. However, the development and implementation of custom algorithms demand greater time and expertise.

Best Practices for Using Redshift Data Scrambling

There are several best practices that should be followed when using Redshift Data Scrambling to ensure maximum protection:

Use Unique Keys for Each Table

To limit the impact if one key is compromised, each table should be scrambled with its own unique key. Following the session-parameter convention from the setup example above, that might look like:


SET session employee_scrambling_key = 'EmployeeKey1';
SET session customer_scrambling_key = 'CustomerKey2';

Encrypt Sensitive Data Fields 

Sensitive data fields such as credit card numbers and social security numbers should be encrypted to provide an additional layer of security. Redshift does not ship a built-in ENCRYPT SQL function, so field-level encryption is typically applied before loading the data or through a user-defined function. Here's a sketch that assumes a hypothetical encrypt UDF you have defined yourself (for example, in Python, like the scramble UDF above):


SELECT encrypt('1234-5678-9012-3456', 'your_encryption_key_here');

Use Strong Encryption Algorithms

Strong encryption algorithms such as AES-256 should be used to provide the strongest protection. Redshift supports AES-256 encryption for data at rest, with TLS protecting data in transit; at-rest encryption is enabled at the cluster level (rather than per column) when the cluster is created. For example, with the AWS CLI:


aws redshift create-cluster \
    --cluster-identifier my-encrypted-cluster \
    --cluster-type single-node \
    --node-type ra3.xlplus \
    --master-username admin \
    --master-user-password 'YourPassword1!' \
    --encrypted \
    --kms-key-id your_kms_key_id

Control Access to Encryption Keys 

Access to encryption keys should be restricted to authorized personnel to prevent unauthorized access to sensitive data. You can achieve this by setting up an AWS KMS (Key Management Service) to manage your encryption keys. Here's an example of how to restrict access to an encryption key using KMS in Python:


import boto3

kms = boto3.client('kms')

key_id = 'your_key_id_here'
grantee_principal = 'arn:aws:iam::123456789012:user/jane'

response = kms.create_grant(
    KeyId=key_id,
    GranteePrincipal=grantee_principal,
    Operations=['Decrypt']
)

print(response)

Regularly Rotate Encryption Keys 

Regular rotation of encryption keys ensures that a compromised key does not provide ongoing access to sensitive data. For AWS KMS customer managed keys, you can enable automatic annual key rotation with the AWS CLI:


aws kms enable-key-rotation --key-id your_key_id_here

You can confirm that rotation is enabled with:


aws kms get-key-rotation-status --key-id your_key_id_here

Turn on logging 

To track user access to sensitive data and identify any unwanted access, logging must be enabled. When you activate query logging in Amazon Redshift, all SQL commands executed on your cluster are logged - queries that access sensitive data as well as data-scrambling operations. You can then examine these logs to look for unusual access patterns or suspicious activity.

Query logging is controlled by the enable_user_activity_logging database parameter, which is set in the cluster's parameter group (for example, via the AWS CLI) rather than with an in-database statement:


aws redshift modify-cluster-parameter-group \
    --parameter-group-name your_parameter_group \
    --parameters ParameterName=enable_user_activity_logging,ParameterValue=true

The stl_query system table may be used to retrieve the logs once query logging has been enabled. For instance, a query like the one below (a sketch, assuming the table of interest is the employee table from earlier) will display recent queries that touched a certain table:
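
SELECT query, userid, starttime, querytxt
FROM stl_query
WHERE querytxt ILIKE '%employee%'
ORDER BY starttime DESC;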

Monitor Performance 

Data scrambling is often a resource-intensive practice, so it’s good to monitor CPU usage, memory usage, and disk I/O to ensure your cluster isn’t being overloaded. In Redshift, you can use the svl_query_summary and svl_query_report system views to monitor query performance. You can also use Amazon CloudWatch to monitor metrics such as CPU usage and disk space.
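
For example, a small boto3 sketch can pull a cluster's average CPU utilization from CloudWatch (the cluster identifier is a placeholder):

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Average CPU utilization for the cluster over the last hour.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Redshift",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "my-cluster"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])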


Establishing Backup and Disaster Recovery

In order to prevent data loss in the case of a disaster, backup and disaster recovery mechanisms should be put in place. Amazon Redshift offers several backup and recovery methods, including automated backups and manual snapshots. Automated snapshots are taken roughly every eight hours by default. 

Moreover, you can always take a manual snapshot of your cluster. In the case of a breakdown or disaster, your cluster may be restored from these backups and snapshots. Snapshots are managed through the AWS API or CLI rather than SQL; for example, to take a manual snapshot:


aws redshift create-cluster-snapshot \
    --cluster-identifier my-cluster \
    --snapshot-identifier my-manual-snapshot

To restore a snapshot into a new cluster, use the restore-from-cluster-snapshot command. For example:


aws redshift restore-from-cluster-snapshot \
    --cluster-identifier new-cluster-name \
    --snapshot-identifier my-manual-snapshot

Frequent Review and Updates

To ensure that data scrambling procedures remain effective and up-to-date with the latest security requirements, it is crucial to consistently review and update them. This process should include examining backup and recovery procedures, encryption techniques, and access controls.

In Amazon Redshift, you can assess access controls by inspecting users, groups, and their associated permissions in system catalog tables such as pg_user and pg_group. It is essential to confirm that only authorized individuals have access to sensitive information.

To analyze column-level settings, use the pg_catalog.pg_attribute system catalog table, which lets you inspect the data type and encoding of each column in your tables. Ensure that sensitive data fields are protected with robust encryption methods, such as AES-256.

The AWS CLI commands aws backup list-backup-plans and aws backup list-backup-vaults let you review your backup plans and vaults as part of evaluating backup and recovery procedures. Make sure both are properly configured and up-to-date.

Decrypting Data in Redshift

There are different options for decrypting data, depending on the encryption method used and the tools available. The decryption process mirrors encryption: usually a custom UDF is used to decrypt the data. Let’s look at one example of reversing data scrambled with a substitution cipher.

Step 1: Create a UDF with decryption logic for substitution


CREATE FUNCTION decrypt_substitution(ciphertext varchar) RETURNS varchar
IMMUTABLE AS $$
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    substitution = 'ijklmnopqrstuvwxyzabcdefgh'
    plaintext = ''
    for ch in ciphertext:
        index = substitution.find(ch)
        if index == -1:
            # Characters outside the substitution alphabet pass through unchanged
            plaintext += ch
        else:
            # Map each ciphertext character back to its original alphabet position
            plaintext += alphabet[index]
    return plaintext
$$ LANGUAGE plpythonu;

Step 2: Move the data back after truncating and applying the decryption function


TRUNCATE original_table;
INSERT INTO original_table (column1, decrypted_column2, column3)
SELECT column1, decrypt_substitution(encrypted_column2), column3
FROM temp_table;

In this example, encrypted_column2 is the encrypted version of column2 in the temp_table. The decrypt_substitution function is applied to encrypted_column2, and the result is inserted into the decrypted_column2 in the original_table. Make sure to replace column1, column2, and column3 with the appropriate column names, and adjust the INSERT INTO statement accordingly if you have more or fewer columns in your table.

Conclusion

Redshift data scrambling is an effective tool for additional data protection and should be considered as part of an organization's overall data security strategy. In this blog post, we looked into the importance of data protection and how it can be integrated effectively into the data warehouse. We then covered the difference between data scrambling and data masking before diving into how to set up Redshift data scrambling.

Once you become accustomed to Redshift data scrambling, you can strengthen your security with different scrambling techniques and best practices, including encryption, logging, and performance monitoring. Organizations can improve their data security posture management (DSPM) and reduce the risk of breaches by adhering to these recommendations and adopting an efficient strategy.

Read More
Ron Reiter
April 4, 2023
4
Min Read

Cloud Data Governance is a Security Enabler


Data governance is how we manage the availability, usability, integrity, privacy and security of the data in our enterprise systems. It’s based both on the internal policies that dictate how data can be used, and on the global and local regulations that so tightly control how we need to handle our data.

Effective data governance ensures that data is trustworthy, consistent and doesn't get misused. As businesses began to increasingly rely on data analytics to optimize operations and drive decision-making, data governance became a central part of enterprise operations. And as protection of data and data assets came under ever-closer regulatory scrutiny, data governance became a key part of policymaking, as well. But then came the move to the cloud. This represented a tectonic shift in how data is stored, transported and accessed. And data governance – notably the security facet of data governance – has not quite been able to keep up.

Cloud Data Governance: A Different Game

The shift to the cloud radically changed data governance. From many perspectives, it’s a totally different game. The key differentiator? Cloud data governance, unlike on-prem data governance, currently does not actually control all sensitive data. The origin of this challenge is that, in the cloud, there is simply too much movement of data. This is not a bad thing. The democratization of data has dramatically improved productivity and development speed. It’s facilitated the rise of a whole culture of data-driven decision making. 

Yet the goal of cloud data governance is to streamline data collection, storage, and use within the cloud - enabling collaboration while maintaining compliance and security. And the fact is that data in the cloud is used at such scale and with such intensity that it’s become nearly impossible to govern, let alone secure. The cloud has given rise to every data security stakeholder’s nightmare: massive shadow data.

The Rise of Shadow Data

Shadow data is any data that is not subject to your organization’s data governance or security framework. It’s not governed by your data policies. It’s not stored according to your preferred security structure. It’s not subject to your access control limitations. And it’s probably not even visible to the security tools you use to monitor data access.

In most cases, shadow data is not born of malicious roots. It’s just data in the wrong place, at the wrong time.

Where does shadow data come from?

  • …from prevalent hybrid and multi-cloud environments. Though excellent for productivity, these ecosystems present serious visibility challenges. 
  • …from cloud-driven CI/CD, which speeds interactions between development pipelines and source code repositories. Yet while making life easier for developers, cloud-driven CI/CD also frequently (and usually inadvertently) sacrifices data security to expediency. 
  • …from distributed cloud-native apps based on containers, serverless functions and microservices – which leaves data spread across hundreds of databases, data warehouses, data pipelines, and external SaaS warehouses.

Cloud Data Governance Today and Tomorrow 

In an attempt to duplicate the success of on-prem data governance paradigms in the cloud, many organizations attempt to create cloud data catalogs. 

Data catalog tools and services collect metadata and offer big data management and search capabilities. The goal is to provide analysts and data users with a way to find data they need – while also creating an inventory of available data. Yet while catalogs have become the core component of on-prem big data governance, in the cloud this paradigm falls short. 

Data catalogs are labor intensive, mostly manual endeavors. There are data cataloging tools, but most lack automatic discovery and classification. This means that teams have to manually connect to each data source, then manually classify and catalog data. This is why data cataloging at the enterprise level is a full-time job, and frequently a departmental task. And once the catalog is created, multiple security and governance teams still need to work to enforce access to sensitive data. 

Yet despite these efforts, shadow cloud data persists – and is growing. What’s more, increasingly popular unstructured data sources like Amazon S3 can’t be partitioned into the business flows they contain, nor effectively classified manually.

Taken together, all this means there’s an urgent emerging need for automatic data discovery, as well as a way to ensure that discovered data is also governed data.

This is where Data Lifecycle Security comes in.

Data Lifecycle Security solutions enable effective cloud data governance by following sensitive data through the cloud - helping organizations identify data movement and ensuring that security posture follows it. They accomplish this by first discovering sensitive data, including shadow or abandoned data. They then automatically classify data types using AI models, determine whether the data has the proper security posture, and notify the remediation teams if not.
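
In rough terms, the loop looks something like the Python sketch below (a conceptual outline, not Sentra's implementation):

def data_lifecycle_security_scan(data_stores, classify, posture_ok, notify):
    """Discover, classify, assess posture, and notify - for every data asset."""
    for store in data_stores:
        for asset in store.discover_assets():    # includes shadow and abandoned data
            label = classify(asset)              # AI-based classification
            if label.is_sensitive and not posture_ok(asset, label):
                notify(asset, label)             # route to the remediation team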

It’s crucial for cloud-facing organizations to remember that the distributed nature of cloud computing means they may not currently know exactly where all their data is stored. Data governance and security cannot be ‘lifted and shifted’ from on-prem to the cloud. But Data Lifecycle Security solutions can bridge the gap between the need for cloud data governance and security and the sub-optimal performance of existing paradigms.

To learn more about how Sentra and Data Lifecycle Security can help you apply effective cloud data governance, watch a demo here

Read More
Yair Cohen
March 31, 2023
3
Min Read
Data Security

Sentra Named a Representative Vendor in Gartner’s Innovation Insight: Data Security Posture Management Report


DSPM is recognized as a significant force in cyber security - a clear indication that smart cloud data security is maturing and fast becoming a priority for security leaders.

As a pioneer and driving force behind redefining and innovating new ways to secure dynamic cloud data, at Sentra we are very encouraged to see how this technology is rapidly gaining more traction and market recognition. 

It was not so long ago that Data Security Posture Management (DSPM) was considered an early stage emerging technology, and today we see how quickly it is being adopted, by organizations of all sizes and across most verticals.

Working hand in hand with top security leaders and teams across the globe, almost 24/7, we see how the high degree of fragmentation in cloud platforms, data stores and data handlers makes maintaining data visibility and risk assessment a real challenge. What’s more, data handlers are moving sensitive data around in the public cloud, and properly securing this data is very difficult, perhaps one of the most significant security challenges of our time. But more specifically, we see security teams struggle with the following issues:

  • Detecting when data is copied across cloud data stores and identifying data movement when it is processed by data pipelines and ETLs. For example, we frequently see sensitive customer or financial data being duplicated from a prod environment to a dev environment. This weakens the security posture if, for example, the copy is not encrypted or lacks the necessary backup policies
  • Defining the right policies to alert security teams when sensitive data is copied or moved between regions, environments and networks
  • Gaining a rich, yet clear data security context to indicate any security drifts such as excessive permissions or sensitive data that may be publicly accessible 
  • Or even just gaining a clear view of all the regulated data, to be ready for those big security audits

Here is Gartner’s take on some of the key challenges from their recently published Innovation Insight: Data Security Posture Management Report:

“Traditional data security products have an insufficient view to discover previously unknown, undiscovered or unidentified data repositories, and they fail to consistently discover sensitive data (structured or unstructured) within repositories. Such data is ‘shadow data’ that can expose an organization to a variety of risks.”

“To make matters worse, organizations must navigate a complex, messy market of siloed data security products. These products do not integrate or share policies, a shortcoming that results in gaps and inconsistencies in how data is protected and that makes it extremely difficult to achieve any consistent level of data security posture. Therefore it is important to be able to assess how data security posture is implemented by establishing a meaningful data risk assessment.”

“This situation is fueling an urgent need for new technologies, such as DSPM, that can help discover shadow data and mitigate the growing data security and privacy risks.”

Let's take a look at some of the key findings, taken directly from Gartner's Innovation Insight: Data Security Posture Management Report, that explain how DSPM solutions are starting to address some of the challenges in data security today:

  1. Data security posture management (DSPM) solutions are evolving the ability to discover unknown data repositories, and to identify whether the data they contain is exposed to data residency, privacy or data security risks.
  2. DSPM solutions can use data lineage to discover, identify and map data across structured and unstructured data repositories; this relies on integrations with, for example, specific infrastructure, databases and CSPs.
  3. DSPM technologies use custom integrations with identity and access management (IAM) products. They can create data security alerts, but typically do not integrate with third-party data security products, which leads to a variety of security approaches.

This is just the beginning of a fast growing and flourishing category that will continue to evolve and mature in addressing the challenges and complexity of accurately securing dynamic cloud data.

Read More
Yair Cohen
Yair Cohen
February 22, 2023
4
Min Read
Data Security

How DSPM Reduces the Risk of Data Breaches

How DSPM Reduces the Risk of Data Breaches

The movement of more and more sensitive data to the cloud is driving a cloud data security gap – the chasm between the security of cloud infrastructure and the security of the data housed within it. This is one of the key drivers of the Data Security Posture Management (DSPM) model and why more organizations are adopting a data-centric approach. 

Unlike Cloud Security Posture Management (CSPM) solutions, which were purpose-built to protect cloud infrastructure by finding vulnerabilities in cloud resources, DSPM is about the data itself. CSPM systems are largely data agnostic - looking for infrastructure vulnerabilities, then trying to identify what data is vulnerable because of them. DSPM provides visibility into where sensitive data is, who can access that data, how it is used, and how robust the data store or application security posture is.

On a fundamental level, the move to DSPM reflects a recognition that in hybrid or cloud environments, data is never truly at rest. Data moves to different cloud storage as security posture shifts, then moves back. Data assets are copied for testing purposes, then erased (or not) and are frequently forgotten. This leaves enterprises large and small scrambling to track and assess sensitive data and its security throughout the data lifecycle and across all cloud environments.

The data-centric approach of DSPMs is solely focused on the unique challenges of securing cloud data. It does this by making sure that sensitive data always has the correct security posture - regardless of where it’s been duplicated or moved to. DSPM ensures that sensitive data is always secured by providing automatic visibility, risk assessment, and access analysis for cloud data - no matter where it travels.

Because of this, DSPM is well-positioned to reduce the risk of catastrophic data breaches and data exposure, in three key ways:

  1. Finding and eliminating shadow data to reduce the data attack surface:

    Shadow data is any data that has been stored, copied, or backed up in a way that does not subject it to your organization’s data management framework or data security policies. Shadow data may also not be housed according to your preferred security structure, may not be subject to your access control limitations, and it may not even be visible to the tools you use to monitor and log data access.

    Shadow data is basically data in the wrong place, at the wrong time. And it is gold for attackers – publicly accessible sensitive data that nobody really knows is there. Aside from the risk of breach, shadow data is an extreme compliance risk. Even if an organization is unaware of the existence of data that contains customer or employee data, intellectual property, financial or other confidential information – it is still responsible for it.

    Where is all this shadow data coming from? Aside from data that was copied and abandoned, consider sources like decommissioned legacy applications - where historical customer data or PII is often just left sitting where it was originally stored. There is also data produced by shadow IT applications, or databases used by niche apps. And what about cloud architecture changes? When data is lifted and shifted, unmanaged or orphaned backups that contain sensitive information often remain.

    DSPM solutions locate shadow data by looking for it where it’s not supposed to be. Then, DSPM solutions provide actionable guidance for deletion and/or remediation. Advanced DSPM solutions search for sensitive information across different security postures, and can also discover when multiple copies of data exist. What’s more, DSPM solutions scrutinize privileges across multiple copies of data, identifying who can access data and who should not be able to.
  2. Identifying over-privileged users and third parties:

    Controlling access to data has always been one of the basics of cybersecurity hygiene. Traditionally, enterprises have relied on three basic types of access controls for internal users and third parties:

    • Access Control Lists - Straight lists of which users have read/write access
    • Role Based Access Control (RBAC) - Access according to the roles the user has in the organization
    • Attribute Based Access Control (ABAC) - Access determined by the attributes a user must have - job title, location, etc.

    Yet traditional data access controls are tied to one or more data stores or databases - like a specific S3 bucket. RBAC or ABAC policies ensure only the right users have permissions to these assets at the right times. But if someone copies and pastes data from that bucket to somewhere else in the cloud environment, what happens to the RBAC or ABAC policy? The answer is simple: it no longer applies to the copied data (see the sketch after this list). DSPM solves this by ensuring that access control policy travels with data across cloud environments. Essentially, DSPM extends access control across any environment by enabling admins to understand where data came from, who originally had access to it, and who has access now.
  3. Identifying data movement, making sure security posture follows:

    Data moves through the public cloud – it’s the reason the cloud is so efficient and productive. It lets people use data in interesting ways. Yet the distributed nature of cloud computing means that organizations may not understand exactly where all applications and data are stored. Third-party hosting places serious limits on the visibility of data access and sharing, and multi-cloud environments frequently suffer from inconsistent security regimes.

    Basically, similar to the access control challenges - when data moves across the cloud, its security posture doesn’t necessarily follow. DSPM solves this by noticing when data moves and how its security posture changes. By focusing on finding and securing sensitive data, as opposed to securing cloud infrastructure or applications, DSPM solutions first discover sensitive data (including shadow or abandoned data), classify data types using AI models, then determine whether the data has the proper security posture. If it doesn’t, DSPM solutions notify the relevant teams and coordinate remediation.
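
The sketch below illustrates the copied-data gap described in point 2 above: a toy policy store keyed by storage path, with hypothetical paths and roles. Once data is copied to a new path, no policy applies to it until a DSPM re-discovers it and re-applies the right posture:

```python
# Toy illustration of why access policies don't travel with copied data.
# Paths and roles are hypothetical.
policies = {
    "s3://prod-bucket/customers.csv": {"allowed_roles": {"billing-admin"}},
}

def can_read(path: str, role: str) -> bool:
    policy = policies.get(path)
    if policy is None:
        # Copied or moved data has no attached policy - the gap a DSPM closes
        # by tracking lineage and re-applying the original posture.
        return True
    return role in policy["allowed_roles"]

assert not can_read("s3://prod-bucket/customers.csv", "intern")
# A naive copy to a dev bucket leaves the policy behind:
assert can_read("s3://dev-bucket/customers-copy.csv", "intern")
```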

DSPM to Secure Cloud Data

Data security in the cloud is a growing challenge. And contrary to some perceptions - the security of data created in the cloud, sent to the cloud, or downloaded from the cloud is not the responsibility of the cloud provider (AWS, Azure, GCP, etc.). This responsibility falls squarely on the shoulders of the cloud customer.

More and more organizations are choosing the DSPM paradigm to secure cloud data. In this dynamic and highly-complex ecosystem, DSPM ensures that sensitive data always has the correct security posture – no matter where it’s been duplicated or moved to. This dramatically lowers the risk of catastrophic data leaks, and dramatically raises user and admin confidence in data security.

Read More
Team Sentra
Team Sentra
February 15, 2023
3
Min Read

5 Key Findings for Cloud Data Security Professionals from ESG's Survey

5 Key Findings for Cloud Data Security Professionals from ESG's Survey

Securing sensitive cloud data is a key challenge and priority for 2023, and there’s increasing evidence that traditional data security approaches are not sufficient. Recently, Enterprise Strategy Group surveyed hundreds of IT, cloud security, and DevOps professionals who are responsible for securing sensitive cloud data. The survey had 4 main objectives:

  • Determine how public cloud adoption was changing data security priorities
  • Explore data loss - particularly sensitive data - from public cloud environments. 
  • Learn the different approaches organizations are adopting to secure their sensitive cloud data. 
  • Examine data security spending trends

The 26-page report is full of insights regarding each of these topics. In this blog, we’ll dive into 5 of the most compelling findings and explore what each of them means for cloud data security leaders.

More Data is Migrating to the Cloud - Even Though Security Teams Aren’t Confident They Can Keep it Secure

ESG’s findings show that 26% of organizations currently have more than 40% of their company’s data in the cloud. But within 24 months, more organizations (58%) will have that much of their data in the cloud.

On the one hand, this isn’t surprising. The report notes that digital transformation initiatives combined with the growth of remote/hybrid work environments are pushing this migration. The challenge is that the report also shows that sensitive data is being stored in more than one cloud platform, and when it comes to IaaS and PaaS data, more than half admit that a large amount of that data is insufficiently secured. In other words - security isn’t keeping pace with this push to store more and more data in the public cloud.

Cloud Data Loss Affects Nearly 60% of Respondents - Yet They’re Confident They Know Where Their Data Is

59% of surveyed respondents know they’ve lost sensitive data or suspect they have (with the vast majority saying they lost it more than once). There are naturally many reasons for this, including misconfigurations, misclassifications, and malicious insiders. But at the same time, over 90% said they’re confident in their data discovery and classification abilities. Something doesn’t add up. This gives us a clear indication that existing defensive security controls are insufficient to deal with cloud data security challenges.

The problem here is likely shadow data. Of course security leaders secure the sensitive data that they know about. But you can’t secure what you’re unaware of. And with data being constantly moved and duplicated, sensitive assets can be abandoned and forgotten. Solving the data loss problem requires richer data discovery that provides meaningful security context. Otherwise, this false sense of security will continue to contribute to sensitive data loss.

Almost All Data Warehouses Have Sensitive Data

Where is this sensitive data being stored? 86% of survey respondents say that they have sensitive data in data lakes or data warehouses. A third of this data is business critical, with almost all the remaining data considered ‘important’ for the business. 

Data lakes and warehouses allow data scientists and engineers to apply analytics and machine learning to business and customer data, generating insights that have a clear impact on the enterprise. Keeping this growing amount of business-critical sensitive data secure is leading to increasing adoption of cloud data security tools.

The Ability to Secure Structured and Unstructured Data is the Most Important Attribute for Data Security Platforms

With 45% of organizations facing a cybersecurity skills shortage, there’s a clear movement towards automation and security platforms to pick up some of the work of securing cloud data. With data stored across different cloud platforms and environments, two-thirds of respondents said they prefer a single tool for cloud data security.

When choosing a data security platform, the 3 most important attributes were:

  • Data type coverage (structured and unstructured data)
  • Data location coverage
  • Integration with security tools

It’s clear that as organizations plan for a future with increasing amounts of data in the public cloud, we will see a widespread adoption of cloud data security tools that can find and secure data across different environments.

Cloud Data Security has an Address in the Organization - The Cloud Security Architect

Cloud data security has long been a role assigned to any number of different team members - DevOps, legal, security, and compliance teams all have a part to play. But increasingly, we’re seeing data security become chiefly the responsibility of the cloud security architect.

86% of organizations surveyed now have a cloud security architect role, and 11% more are hiring for this role in the next 12-24 months - and for good reason. Of course, the other teams, including infrastructure and development, continue to play a major role. But there is finally some agreement that sensitive data requires its own focus and is best secured by the cloud security architect.

Read More
Asaf Kochan
Asaf Kochan
February 1, 2023
3
Min Read

Thoughts on Sentra and the Data Security Landscape After Our Series A

Thoughts on Sentra and the Data Security Landscape After Our Series A

By Asaf Kochan, Co-Founder and President, Sentra

Series A announcements are an exciting time for any startup, and we look forward to working with our new investors from Standard Investments, Munich Re Ventures (MRV), Moore Strategic Ventures, and INT3 to grow the cloud data security and data security posture management (DSPM) categories.

I wanted to take a moment to share some of my thoughts around what this round means for Sentra, cloud data security, and the growth of the DSPM category as a whole. 

Seeing is Believing: From Potential Customer to Investor

The most amazing part of this round is that we didn’t originally intend to raise money. We approached Standard Industries as a potential customer, not an investor. It was incredible to see how bought in the team was to Sentra’s approach to data security. They understood instantly the potential for securing not only Standard’s data, but the data of every cloud-first enterprise. The total addressable market for data security solutions is already large, and it’s growing every year as more and more new companies are cloud-native. The global need for solutions like Sentra was obvious to their team after seeing the product, and I’m excited to have a forward-thinking investor like Standard as part of our journey.

It’s a Vote of Confidence in the Sentra Team and Product

Any Series A is first and foremost a vote of confidence. It’s an endorsement of the vision of the company, the approach the product is taking, and the potential of the core team to continue to grow the business. Anyone who has spoken with our talented team understands the level of expertise and perseverance they bring to every task, meeting, and challenge. I’m proud of the team we’ve built, and I’m excited to welcome many new Sentrans to the team in the coming months. 

As I mentioned, the round is also a mark of confidence in the development and direction of the product itself. Working with our existing customers, we’ve regularly added new features, integrations, and capabilities to our solution. As we continue to discover, classify, and secure larger amounts of data in all cloud environments, the benefits of a data-centric approach become clear. We’ve successfully reduced the risks of catastrophic data breaches by reducing the data attack surface, improved relationships between engineering and security teams by breaking down silos, and even helped our customers reduce cloud costs by finding and eliminating duplicate data.

Data Security is a Must, Not a Nice to Have

Raising money in the current economic climate is not to be taken for granted. The significant investment in Sentra’s vision speaks not only to the value provided by Sentra’s product and team, but also to how critical data security has become. Compliance, privacy, and security concerns are present regardless of how the NASDAQ is performing.

Certainly we’re seeing no slowdown in data security regulations. Global enterprises are now responsible for ensuring compliance with a growing number of data regulations from different government and commercial sources across the globe. When it comes to security and IP protection, the threat of a catastrophic data breach is top of mind for all cloud security teams. As the reality sets in that breaches are not a matter of “if” but “when”, the logic of the data-centric approach becomes clear: if you can’t prevent the initial breach, you can make sure your most sensitive data always has the proper security posture. In the future we’re building, not every breach will be newsworthy, because not every breach will involve sensitive data. This funding round demonstrates the growing acceptance that this is the direction cloud security is and should be heading.

DSPM will Come to Dominate Cloud Security

There’s always some skepticism in the cyber world when a new category is created. 

  • Is the problem it claims to solve really that serious?
  • Can we just use existing paradigms and tools to address it?
  • Is implementing a new tool going to make a real difference for the business? 

These questions are valid, and any cyber company operating in a new space must address them forthrightly and clearly. We have been clear from the beginning - a data centric approach to security with DSPM is not a small step, but a giant leap forward. Data is the core asset of most companies, and that asset is now stored in the cloud. Old approaches will not be sufficient. This new round is led by investors who recognize this new reality and share our vision that we need to put data at the core of cloud security strategies.

I want to end by again emphasizing how thankful I am for having amazing investors, partners, and team members join us over the last 18 months. So much has been accomplished already, but the industry shift to data centric security has only just begun. I’m looking forward to continuing to protect the most important business asset in the world - our data.

Read More
Yoav Regev
Yoav Regev
January 31, 2023
3
Min Read
Data Security

Sentra Raises $30M Series A to Lead the Data-Centric Approach to Cloud Security

Sentra Raises $30M Series A to Lead the Data-Centric Approach to Cloud Security

By Yoav Regev, CEO and Co-Founder, Sentra

Today we’re announcing that Sentra has raised a $30 million Series A round to revolutionize the way cloud-first enterprises secure their data. This brings Sentra’s total funding to $53M. I’m excited to be working together with Standard Industries, Munich Re Ventures, Moore Capital, Xerox, INT3, Bessemer Venture Partners, and Zeev Ventures to help enterprises securely leverage their data to enable growth. The last 18 months have already been an amazing journey, and I wanted to take this opportunity to share some thoughts on how we got to where we are, and what I’m looking forward to in the coming months and years.

The Sentra Team

When we founded Sentra, we had a very clear objective regarding the team we wanted to build - we only wanted the best people. People who are passionate about their work and the problems we’re addressing. Not just people who are technically brilliant, but those who are drawn to challenges and aren’t easily discouraged. This is the mindset required to solve one of the largest problems facing cybersecurity leaders. 

18 months later, it’s clear we accomplished that objective. This team built a revolutionary new data security platform that’s impressing security leaders on a daily basis. Within minutes of seeing the platform, security leaders grasp the value Sentra is providing - securing the most important corporate asset (data) while simultaneously breaking down silos between security, engineering, and data teams.

The Sentra Way

Culture comes from people. When you have the best people it’s just going to be easier to build a productive and healthy culture. Teams should be excited to work together. I’d describe the culture today as one that values independence, responsibility, and persistence. Team members need to be given the freedom to try new things, occasionally fail at them, and move on quickly to find other, better ways forward. When you’re building a product to solve a global security problem, it’s going to be difficult, and there will be setbacks and disappointments. The team at Sentra embodies these values and it’s what’s allowed us to build such a revolutionary product so quickly. 

Building the Data Centric Future

Getting the right team and culture in place is critical for tackling one of the greatest security challenges of our time - data security in the cloud. There are a few reasons why cloud data security is an unsolved problem. 

It begins with the simple fact that data travels in the cloud. It gets processed, extracted, duplicated, and moved by different teams. But when data moves, its security posture doesn’t move with it - for example, if it was encrypted in one environment and duplicated to a lower environment, it might be unencrypted now. Another issue caused by data movement is that sensitive data gets abandoned and forgotten, creating vulnerable shadow data. Finally, even when vulnerable sensitive data is identified, it’s hard to know where the data came from and how it’s meant to be secured. 

Data and engineering teams are the ones moving this data around. And that’s actually a good thing. We want them to leverage the flexibility of the cloud to do amazing things for the business. Security should enable this work, not slow it down. At the same time, we need to make sure the data is secured. This is what we’re building. A data-centric future where we keep the data secure and enable the business to reach new heights.  

Here’s what this future is going to look like:

First, companies will know where all of their sensitive data is. Shadow data, especially sensitive shadow data, will not exist. Data is the most important asset, and knowing where your most important asset is at all times is crucial.

Next, sensitive data will always have the right security posture. When sensitive data moves and its security posture is affected, the right people know instantly. And they also know where the data came from, who owns it, and how to remediate data vulnerabilities before they become incidents. The data attack surface will shrink, with the result that even when there’s a breach, the most sensitive data assets are secured. 

The result? Business growth. Enterprises will be able to confidently move large amounts of data between cloud environments, generating the insights and innovations they need to grow. In other words, the full promise of the cloud will be realized. 

In the future, organizations will be able to move quickly and securely at the same time!

We’re building this future right now.

Read More
Ron Reiter
Ron Reiter
January 24, 2023
3
Min Read
Data Security

Cloud Data Breaches: Cloud vs On Premise Security

Cloud Data Breaches: Cloud vs On Premise Security

"The cloud is more secure than on prem.” This has been taken for granted for years, and is one of the many reasons companies are adopting a ‘cloud first mentality’. But when it comes to data breaches this isn’t always the case.

That’s why you still can’t find a good answer to the question “Is the cloud more secure than on-premise?”

Because like everything else in security, the answer is always ‘it depends’. While having certain security aspects managed by the cloud provider is nice, it’s hardly comprehensive. The cloud presents its own set of data security concerns that need to be addressed.

In this blog, we’ll be looking at data breaches in the cloud vs on premises. What are the unique data security risks associated with both use cases, and can we definitively say one is better at mitigating the risks of data breaches? 

On Premises Data Security

An on-premise architecture is the traditional way organizations manage their networks and data. The company’s servers, hardware, software, and network are all managed directly by the IT department, which assumes full control over uptime, security, and data. 

While more labor intensive than cloud infrastructures, on-premise architectures have the advantage of a perimeter to defend. Unlike the cloud, IT and security teams also know exactly where all of their data is - and where it’s supposed to be. Even if data is duplicated without authorization, it’s duplicated on the on-prem server, with existing perimeter protections in place. The advantage of these solutions can’t be overstated: IT has decades of experience managing on-premise servers, and there are hundreds of tested products on the market that do an excellent job of securing an on-prem perimeter.

Despite these advantages, around half of data breaches still occur in on-premise architectures rather than the cloud. This is caused by a number of factors. Most importantly, cloud providers like Amazon Web Services, Azure, and GCP are responsible for some aspects of security. Additionally, while securing a perimeter might be more straightforward than the defense-in-depth approach the cloud requires, it’s also easier for attackers to find and exploit on-premise vulnerabilities: they can search public exploit databases and then find organizations that haven’t patched the relevant vulnerability.

Data Security in the Cloud 

Infrastructure as a Service (IaaS) cloud computing runs on a ‘shared responsibility model’. The cloud provider is responsible for the hardware, so they provide the physical security, but protecting the software, applications, and data is still the enterprise’s responsibility. And while some data leaks are the result of poor physical security, many of the major leaks today are the result of misconfigurations and vulnerabilities, not someone physically accessing a hard drive.

So when people claim the cloud is better for data security than on premises, what exactly do they mean? Essentially they’re saying that data in the cloud is more secure when the cloud is correctly set up. And no, this is not as obvious as it sounds. Because the cloud by definition must be accessed through the internet, it is also shockingly easy to accidentally expose data to everyone on the internet. For example, improperly configured S3 buckets have been responsible for some of the most well-known cloud data breaches, including those at Booz Allen Hamilton, Accenture, and Prestige Software. This just isn’t a concern for on-prem organizations. There’s also the matter of the quantity of data being created in the cloud. Because the cloud is provisioned on demand, developers and engineers can easily duplicate databases and applications, and accidentally expose the duplicates to the internet.

[Image: Amazon’s warning against leaving buckets exposed to the internet]
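
As a first line of defense against exactly this kind of exposure, public access settings can be audited programmatically. Here is a minimal sketch using boto3, assuming AWS credentials are configured; it only checks the public access block configuration, one of several controls a full audit would cover:

```python
# List buckets whose public access block is missing or incomplete.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"{name}: public access only partially blocked: {cfg}")
    except ClientError as e:
        if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured")
        else:
            raise
```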

Securing your cloud against data breaches is also complicated by the lack of a definable perimeter. When everything is accessible via the internet with the right credentials, guarding a ‘perimeter’ isn’t possible. Instead cloud security teams manage a range of security solutions designed to protect different elements of their cloud - the applications, the networking, the data, etc. And they have to do all of this without slowing down business processes. The whole advantage of moving to the cloud is speed and scalability. If security prevents scalability, the benefits of the cloud vanish. 

So we see that with the cloud there’s a basic level of security features you need to enable. The good news is that once those features are enabled, the cloud is much harder for an attacker to navigate. There’s monitoring built in, which makes breaches more difficult. It’s also a lot more difficult to understand a cloud architecture than an on-premise one, which means that attackers either have to be more sophisticated or they just go for the low-hanging fruit (exposed S3 buckets being a good example of this).

However, even once you have monitoring built in, there’s still one challenge facing cloud-first organizations: the data. No matter how many cloud security experts you have, data is constantly being created in the cloud that security may not even know exists. There’s no visibility issue on premises - we know where the data is; it’s on the server we’re managing. In the cloud, there’s nothing stopping developers from duplicating data, moving it between environments, and forgetting about it completely (also known as shadow data). Even if you were able to discover the data, it’s no longer clear where it came from, or what security posture it’s supposed to have. Data sprawl leads to a loss of visibility and context, which damages your security posture - and this is the primary cloud security challenge.

So what’s the verdict on data breaches in the cloud vs data breaches on premises? Which is riskier or more likely? 

Is the Cloud More Secure Than On Premise?

As we warned in the beginning, the answer is an unsatisfying “it depends”. If your organization properly manages the cloud, configures the basic security features, limits data sprawl, and has cloud experts managing your environment, the cloud can be a fortress. Ultimately, though, this may not be a conversation most enterprises are having in the coming years. With the advantages of scalability and speed, many new enterprises are cloud-first, and the question won’t be “is the cloud secure?” but “is our cloud’s data secure?”

Read More
Sagit Dotan
Sagit Dotan
January 18, 2023
3
Min Read

No DevOps? No Problem (For a While, At Least)

No DevOps? No Problem (For a While, At Least)

By definition, startups need to be able to adapt quickly. The ability to move and adjust quickly is crucial if we are to have any hope of out-competing larger, established players. 

On the face of things, this would make us a perfect fit for an in-house DevOps team. DevOps and CI/CD are meant to increase rapid development, testing, and deployment. DevOps was designed to remove traditional barriers between development and operations teams – making sure they’re collaborating across the software’s lifecycle to identify opportunities for improvement, then integrate changes quickly and seamlessly.

However, startups are also - to be honest - cost-sensitive creatures, especially in challenging financial climates. That’s why, during the early development stages of our product, prior to GA, we chose not to have a dedicated DevOps team - or even a full-time DevOps engineer. And yet, we were able to implement DevOps practices for over a year before we hired for a full-time DevOps role.

Becoming 'DevOps People' Without a 'DevOps Person'

The success we had adopting DevOps practices came down to 4 practices we implemented in the team: 

  1. Engineers learned to speak DevOps - In the early days, everyone was DevOps and DevOps was everyone. All engineers were aware of and able to contribute to DevOps-related issues. The lack of DevOps actually made us all far more aware of hidden cloud costs that we might not have paid attention to if there were a dedicated DevOps team.
  2. The “Bus Factor” – This is going to sound a little morbid, but the basic idea was that, lacking a single DevOps point of contact, we always had to ensure that too much information was never concentrated in the hands of one person. That way, if that person got “hit by a bus” (metaphorically speaking, of course), the loss to the project would be minimal. Teams without DevOps can’t rely on one person.
  3. Lowering the on-call burden – DevOps teams are the ones who get called at 3am on Sunday because production is down. It’s just part of the game. But without DevOps, we learned to rotate the burden of on-call during off-hours and weekends, which lowered the burden on us all while positively impacting our team spirit and morale.
  4. Finding new ways of doing things – Professional DevOps teams, like professional teams in any field, work with established tools and methodologies. This is a good thing. That said, engineers are constantly trying to find new and better ways. During the time when we filled the DevOps role ourselves, we discovered a lot of new ways of doing things that we might not have found had we had a dedicated DevOps team.

But Then Things Changed...

As I mentioned, we had to move on from our split DevOps responsibility and bring in a full time DevOps professional. 

We knew it was time to bring on dedicated DevOps pros when we got to the point where we had too many product and feature tasks and couldn’t keep up with the core infrastructure tasks. As we began to implement more feature requests, we found ourselves spending less and less time on infrastructure. This is a problem because versions and features still continue being released, whether or not the infrastructure is ready to support them.

For example, just before GA, our team needed to make absolutely sure that our infrastructure was stable, that the product was monitored and we had full visibility of issues. To make this happen, we had to put a hold on new features and focus solely on infrastructure. Lesson learned: it was time for DevOps.

The Takeaways

Not having DevOps is perfectly OK… until it’s not. One big takeaway here is that you need to watch the scale carefully between handling infrastructure and handling features. Once it tips too far towards features, it’s time to consider DevOps. The other takeaway is that you need to weigh everything in terms of dev velocity versus costs. Engineer time is expensive, on one hand, but so is slower dev velocity. The question is, where is the breaking point for your organization?

Read More
Daniel Suissa
Daniel Suissa
January 11, 2023
3
Min Read
Data Security

Protecting Source Code in the Cloud

Protecting Source Code in the Cloud

Source code lies at the heart of every technology company’s business. Aside from being the very blueprint the organization relies upon to sell its products, source code can reveal how the business operates, its strategies, and how its infrastructure is designed. Many of the recent data breaches we’ve witnessed, including those against industry leaders like LastPass, Okta, Intel, and Samsung, were instances where attackers were able to gain access to all or part of the organization’s source code.

The good news with source code is that we usually know where it originated and even where it’s destined to be shipped. The bad news is that code delivery is getting increasingly complex in order to meet business demands for fast iteration, causing code to pass through multiple stations on its way to its final destination. We like to think that the tools we use to ship code protect it well and clean it up where it’s no longer needed, but that’s wishful thinking that puts the business at risk. To make matters worse, bad development practices can lead to developer secrets and even customer information being stolen along with a source code breach, which can in turn trigger cascading problems.

At Sentra, we see protecting source code as the heart of protecting an organization’s data. Simply put, code is a qualitative type of data, which means that unlike quantitative data, the impact of a breach does not depend on its scale. Even a small breach can provide the attacker with crucial intellectual property or intel that can be used for follow-up attacks. That said, not every piece of leaked code can damage the business in the same way.

So how do we protect source code in the cloud? 

Visualization

All data protection starts with knowing where the data is and how it’s protected. We always start with the home repository, usually in GitLab, GitHub, or Bitbucket. Then we move to data stores that are part of the code delivery cycle. These can be container-management services like Amazon’s Elastic Container Service or Azure Container Instances, as well as the VMs running that code. But because code is also used by developers on personal VMs and moved through data lakes, Sentra takes a wider approach and looks for source code across all of the organization’s non-tabular data stores across all IaaS and SaaS services, such as files in Azure Disk Storage volumes attached to Azure VMs.

Classification

We said it before and we’ll say it again - not all data is created equal. Some copies of source code may include intellectual property and some may not. For example, a CPP file with complex logic is not the same as an HTML file distributed by a CDN. On the other hand, that HTML file might accidentally contain a developer secret, so we must look for those as well before we label it as ‘non-sensitive’. Classifying exactly what kind of data each particular source code file contains helps us filter out the noise and focus on the most sensitive data.
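
Here is a minimal sketch of that kind of secret check, run before a file is labeled ‘non-sensitive’. The patterns and the scanned directory are illustrative, not Sentra’s actual classifiers:

```python
# Flag files matching common secret patterns before labeling them non-sensitive.
import re
from pathlib import Path

SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(path: Path) -> list[str]:
    # Return the names of any secret patterns found in the file.
    text = path.read_text(errors="ignore")
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

repo = Path("src")  # hypothetical source directory
if repo.is_dir():
    for f in repo.rglob("*.html"):
        if hits := find_secrets(f):
            print(f"{f}: possible secrets: {hits}")
```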

Detecting Data Movement

At this point we may know where source code is located and what kind of information it contains, but not where it came from or how to stop bad data flows that lead to unwanted exposure. Remember, source code is handled both manually and by automatic processes. Sometimes it’s copied in its entirety, and sometimes partially. Detecting how much is copied, and through which processes, helps us enforce good code-handling practices in the organization. Sentra combines multiple methods to identify source code movement at the function level: understanding the organization’s user access scheme and activity, and looking at the code itself.

Determining Risk

Security efficiency begins with prioritization. Some of the code we will find in the environment may be properly separated from the world behind a private network, or even encrypted, and some of it may be partially exposed or even publicly accessible. By determining the Data Security Posture of each piece of code we can determine what processes are conducive to the business’ goals and which put it at risk. This is where we combine all of the above steps and determine the risk based on the kind of data in the code, how it is moved, who has access to it, and how well it’s protected. 

Remediation

Now that we understand what source code needs protecting against which risks - and, more importantly, which processes require the code in each store - we can choose from several remediation tools in our arsenal:

  • Encrypt. Source code often doesn’t need to be loaded from rest quickly, so it’s always a good idea to encrypt or obfuscate it.
  • Limit access to all stores other than the source code repository.
  • Use a retention policy anywhere the code is needed only temporarily.
  • Review old code delivery processes that are no longer needed.
  • Remove any shadow data. Code inside unused VMs or old stores that haven’t been accessed in a while can usually be removed altogether.
  • Detect and remove any secrets in source code and move them to vaults.
  • Detect intellectual property that is used in non-compliant or insecure environments.

Source code is data that absolutely cannot be allowed to leak. By taking the steps above, Sentra’s DSPM ensures that it stays where it’s supposed to be and is always protected properly.

Book a demo and learn how Sentra’s solution can redefine your cloud data security landscape.

Read More
Hanan Zaichyk
Hanan Zaichyk
January 2, 2023
3
Min Read
Data Security

Building a Better DSPM by Combining Data Classification Techniques

Building a Better DSPM by Combining Data Classification Techniques

The increasing prevalence of data breaches is driving many organizations to add another tool to their ever growing security arsenal - data security posture management, or DSPM.

This new approach recognizes that not all data is equal - breaches to some data can have dire implications for an organization, while breaches to other data can be very alarming but will not cause major financial or reputational damage.

At Sentra, we’re building the product to fulfill this approach's potential by mapping all data in organizations’ cloud environments, determining where sensitive data is stored, and who has access to it. Because some data is more sensitive than others, accurate classification of data is the core of a successful DSPM solution.

Unfortunately, there’s no single approach that can cover all data classes optimally, so we need to employ a number of classification techniques, scanning methods, verification processes, and advanced statistical analyses. By combining different techniques - offsetting the weaknesses of each with the strengths of others - we can reach a high level of accuracy for all types of sensitive cloud data.

Let’s dive into some of these techniques to see how different methods can be used in different situations to achieve superior results.

The Power and Limits of Regular Expressions

Regular expressions are a very robust tool that can precisely capture a wide range of data entities at scale. Regular expressions capture a pattern - specific character types, their order, and lengths. Using regular expressions for classification involves looking at a string of characters - without any context - and deducing what entity it represents based only on the pattern of the string. A couple of examples where this can be used effectively are AWS keys and IP addresses. We know how many characters and what type of characters these entities contain.
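
For illustration, here is roughly what such patterns look like in Python. The AWS pattern matches the well-known access key ID format, and the IPv4 pattern constrains each octet’s range; both are simplified examples rather than production classifiers:

```python
import re

# AWS access key IDs: the 'AKIA' prefix followed by 16 uppercase letters/digits.
AWS_ACCESS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

# IPv4 addresses: four octets, each constrained to the 0-255 range.
IPV4 = re.compile(
    r"\b(?:(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)\.){3}"
    r"(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)\b"
)

sample = "key=AKIAIOSFODNN7EXAMPLE host=192.168.0.12"
print(AWS_ACCESS_KEY_ID.findall(sample))  # ['AKIAIOSFODNN7EXAMPLE']
print(IPV4.findall(sample))               # ['192.168.0.12']
```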

However, the limitation of this approach is that if the pattern of the characters isn’t sufficient to classify the entity, a regular expression will need ‘help’. For example, a 9-digit number can represent a number of things - but if it appears on a driver’s license it’s probably a license number, and if it’s on a tax return, it’s probably a Social Security Number.

Humans do this subconsciously all the time. If you hear someone’s name is ‘George’ you know that’s a common first name, and you will assume - usually correctly - that the individual’s first name is ‘George’ and not his last name.

So what we need is a classification engine that can make these connections the way humans do - one that can look at the context of the string, and not just its content. For example, we can give the engine a list of names and tell it “these are first names” so that it’s able to accurately make these connections. Discovery and classification of sensitive data like this is one of many DSPM use cases we employ to secure your data.

Another method to provide context is NER - Named Entity Recognition. This is a tool from Natural Language Processing (NLP) which can analyze sentences to determine the category of different words. Supplementing the limitations of regular expressions with these techniques is one way Sentra ensures that we’re always using the best possible classification technique.
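
As a quick illustration of NER, here is a sketch using spaCy, assuming the library and its small English model are installed (python -m spacy download en_core_web_sm):

```python
# Context-aware entity detection with spaCy's pretrained NER pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("George Smith paid $120 to Acme Corp in Berlin on June 3rd.")

for ent in doc.ents:
    # Expected labels include PERSON, ORG, GPE, and DATE for this sentence.
    print(ent.text, ent.label_)
```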

Of course, we still need to ensure that these patterns or data entities are actually the ones we’re looking for. For example, let’s say we identify a 16-digit number. This could be a credit card number. But it could also be a user ID, a bank account number, a tracking number, or just a very large number.

So how do we determine if this is, in fact, a credit card number?

There are a number of ways we can confirm this.

(Note that these approaches use the example of a credit card number, but they extend to various data classes):

  • Verify the integrity of the results: Credit cards have a check digit - the last digit in any card number - designed to catch typos. We can verify it is correct, and we can also verify that the first few digits are in the ranges allowed for credit cards (a minimal sketch of this check appears after this list).
  • Model internal structure of the data: If data is in tabular form, such as a .csv file, we can create models of relationships between column values, so that only if, for example, 50% of values are valid credit card numbers will the whole column be labeled as such.
  • Look at the data’s ‘detection context’: If data is in tabular form, such as a .csv file, we can increase our certainty of a credit card detection if the column is named “credit card number”. The relationships between different columns can be used to add missing context, so a column suspected to hold credit card numbers will seem much more probable if there’s an expiration date column and a CVV column in the same table. When the data is in free form text format (as in a .docx file) this is much more complicated, and tools such as natural language understanding and keywords must be applied to accurately classify the data.
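
Here is the minimal sketch promised above: a Luhn check-digit validator, plus the column-vote heuristic from the second bullet. The fixed 16-digit length and the 50% threshold are illustrative simplifications:

```python
def luhn_valid(number: str) -> bool:
    # Luhn: double every second digit from the right, subtract 9 if the
    # result exceeds 9, and require the total to be divisible by 10.
    if not number.isdigit():
        return False
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        checksum += digit
    return checksum % 10 == 0

def column_is_credit_cards(values: list[str], threshold: float = 0.5) -> bool:
    if not values:
        return False
    valid = sum(luhn_valid(v) and len(v) == 16 for v in values)
    return valid / len(values) >= threshold

# Two of the three values validate, so the column clears the 50% threshold.
print(column_is_credit_cards(["4111111111111111", "4012888888881881", "not-a-card"]))  # True
```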

These are a few examples of methods that, when combined appropriately, can yield results that are not only much more accurate, but also much more useful for explaining and understanding the reasoning behind classification decisions.

Data classification has long been a challenge because of the limitations of individual models. Only by using different methods in conjunction are we able to classify with the level of accuracy required to assist the data and security teams responsible for securing large quantities of cloud data.

To learn more about our approach to cloud data security, watch the demo of the Sentra platform here.

Read More
Team Sentra
Team Sentra
November 17, 2022
3
Min Read
Data Security

Top 5 GCP Security Tools for Cloud Security Teams in 2024

Top 5 GCP Security Tools for Cloud Security Teams in 2024

Like its primary competitors Amazon Web Services (AWS) and Microsoft Azure, Google Cloud Platform (GCP) is one of the largest public cloud vendors in the world – counting companies like Nintendo, eBay, UPS, The Home Depot, Etsy, PayPal, 20th Century Fox, and Twitter among its enterprise customers. 

In addition to its core cloud infrastructure – which spans some 24 data center locations worldwide - GCP offers a suite of cloud computing services covering everything from data management to cost management, from video over the web to AI and machine learning tools. And, of course, GCP offers a full complement of security tools – since, like other cloud vendors, the company operates under a shared security responsibility model, wherein GCP secures the infrastructure, while users need to secure their own cloud resources, workloads and data.

To assist customers in doing so, GCP offers numerous security tools that natively integrate with GCP services. If you are a GCP customer, these are a great starting point for your cloud security journey.

In this post, we’ll explore five important GCP security tools security teams should be familiar with. 

Security Command Center

GCP’s Security Command Center is a fully-featured risk and security management platform – offering GCP customers centralized visibility and control, along with the ability to detect threats targeting GCP assets, maintain compliance, and discover misconfigurations or vulnerabilities. It delivers a single pane view of the overall security status of workloads hosted in GCP and offers auto discovery to enable easy onboarding of cloud resources - keeping operational overhead to a minimum. To ensure cyber hygiene, Security Command Center also identifies common attacks like cross-site scripting, vulnerabilities like legacy attack-prone binaries, and more.

Chronicle Detect

GCP Chronicle Detect is a threat detection solution that helps enterprises identify threats at scale. Chronicle Detect’s next-generation rules engine operates ‘at the speed of search’ using the YARA-L detection language, which was specially designed to describe threat behaviors. Chronicle Detect can identify threat patterns - ingesting logs from multiple GCP resources, then applying a common data model to a petabyte-scale set of unified data drawn from users, machines, and other sources. The utility also uses threat intelligence from VirusTotal to automate risk investigation. The end result is a complete platform that helps GCP users better identify risk, prioritize threats faster, and fill the gaps in their cloud security.

Event Threat Detection

GCP Event Threat Detection is a premium service that monitors organizational cloud-based assets continuously, identifying threats in near-real time. Event Threat Detection works by monitoring the cloud logging stream - API call logs and actions like creating, updating, reading cloud assets, updating metadata, and more. Drawing log data from a wide array of sources that include syslog, SSH logs, cloud administrative activity, VPC flow, data access, firewall rules, cloud NAT, and cloud DNS – the Event Threat Detection utility protects cloud assets from data exfiltration, malware, cryptomining, brute-force SSH, outgoing DDoS and other existing and emerging threats.

Cloud Armor

The Cloud Armor utility protects GCP-hosted websites and apps against denial of service and other cloud-based attacks at Layers 3, 4, and 7. This means it guards cloud assets against the type of organized volumetric DDoS attacks that can bring down workloads. Cloud Armor also offers a web application firewall (WAF) to protect applications deployed behind cloud load balancers – and protects these against pervasive attacks like SQL injection, remote code execution, remote file inclusion, and others. Cloud Armor is an adaptive solution, using machine learning to detect and block Layer 7 DDoS attacks, and allows extension of Layer 7 protection to include hybrid and multi-cloud architectures.

Web Security Scanner

GCP’s Web Security Scanner was designed to identify vulnerabilities in App Engine, Google Kubernetes Engine (GKE), and Compute Engine web applications. It does this by crawling applications at their public URLs and IPs that aren’t behind a firewall, following all links and exercising as many event handlers and user inputs as it can. Web Security Scanner protects against known vulnerabilities like plain-text password transmission, Flash injection, and mixed content, and also identifies weak links in the management of the application lifecycle, such as exposed Git/SVN repositories. To monitor web applications for compliance control violations, Web Security Scanner also identifies a subset of the critical web application vulnerabilities listed in the OWASP Top Ten Project.

Securing the cloud ecosystem is an ongoing challenge, partly because traditional security solutions are ineffective in the cloud – if they can even be deployed at all. That’s why the built-in security controls in GCP and other cloud platforms are so important.

The solutions above, and many others baked-in to GCP, help GCP customers properly configure and secure their cloud environments - addressing the ever-expanding cloud threat landscape.

Read More
Ron Reiter
Ron Reiter
November 8, 2022
4
Min Read

Minimizing your Data Attack Surface in the Cloud

Minimizing your Data Attack Surface in the Cloud

The cloud is one of the most important developments in the history of information technology. It drives innovation and speed for companies, giving engineers instant access to virtually any type of workload with unlimited scale.

But with opportunity comes a price - moving at these speeds increases the risk that data ends up in places that are not monitored for governance, risk and compliance issues. Of course, this increases the risk of a data breach, but it’s not the only reason we’re seeing so many breaches in the cloud era. Other reasons include: 

  • Systems are being built quickly for business units without adequate regard for security
  • More data is moving through the company as teams use and mine data more efficiently using tools such as cloud data warehouses, BI, and big data analytics
  • New roles are being created constantly for people who need to gain access to organizational data
  • New technologies are being adopted for business growth which require access to vast amounts of data - such as deep learning, novel language models, and new processors in the cloud
  • Anonymous cryptocurrencies have made data leaks lucrative
  • Nation-state actors are increasing cyberattacks due to new conflicts

Ultimately, there are only two methods which can mitigate the risk of cloud data leaks - better protecting your cloud infrastructure, and minimizing your data attack surface.

Protecting Cloud Infrastructure

Companies such as Wiz, Orca Security and Palo Alto provide great cloud security solutions, the most important of which is a Cloud Security Posture Management tool. CSPM tools help security teams to understand and remediate infrastructure related cloud security risks which are mostly related to misconfigurations, lateral movements of attackers, and vulnerable software that needs to be patched.

However, these tools cannot mitigate all attacks. Insider threats, careless handling of data, and malicious attackers will always find ways to get a hold of organizational data, whether it is in the cloud, in different SaaS services, or on employee workstations. Even the most protected infrastructure cannot withstand social engineering attacks or accidental mishandling of sensitive data. The best way to mitigate the risk for sensitive data leaks is by minimizing the “data attack surface” of the cloud.

What is the "Data Attack Surface"?

Data attack surface is a term that describes the potential exposure of an organization’s sensitive data in the event of a data breach. If a traditional attack surface is the sum of all an organization’s vulnerabilities, a data attack surface is the sum of all sensitive data that isn’t secured properly. 

The larger the data attack surface - the more sensitive data you have - the higher the chances are that a data breach will occur.

There are several ways to reduce the chances of a data breach:

  • Reduce access to sensitive data
  • Reduce the number of systems that process sensitive data
  • Reduce the number of outputs that data processing systems write
  • Address misconfigurations of the infrastructure which holds sensitive data
  • Isolate infrastructure which holds sensitive data
  • Tokenize data
  • Encrypt data at rest
  • Encrypt data in transit
  • Use proxies which limit and govern engineers’ access to sensitive data

Reducing Your Data Attack Surface by Using a Least Privilege Approach

The fewer people and systems that have access to sensitive data, the lower the chance that a misconfiguration or an insider will cause a data breach.

The most effective method of reducing access to data is the least privilege approach: only granting access to entities that need the data. The type of access is also important - if read-only access is enough, then it’s important to make sure that write access or administrative access is not accidentally granted.

To know which entities need what access, engineering teams need to be responsible for mapping all systems in the organization and ensuring that no data stores are accessible to entities which do not need access.

Engineers can get started by analyzing the actual use of the data using cloud tools such as CloudTrail, as in the sketch below. Once there’s an understanding of which users and services access infrastructure with sensitive data, the actual permissions to the data stores should be reviewed and matched against usage data. If partial permissions are adequate to keep operations running, then it’s possible to reduce the existing permissions within existing roles.
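
Here is what that first step might look like with boto3 and CloudTrail. The bucket name is a placeholder, and a real review would compare the result against the IAM policies that actually grant access:

```python
# Collect the principals that touched a data store in the last 30 days.
from datetime import datetime, timedelta
import boto3

ct = boto3.client("cloudtrail")
seen = set()

pages = ct.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "ResourceName",
                       "AttributeValue": "prod-data-bucket"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=30),
)
for page in pages:
    for event in page["Events"]:
        seen.add(event.get("Username", "unknown"))

# Anything granted access in IAM but absent here is a candidate for removal.
print(sorted(seen))
```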

Reducing Your Data Attack Surface by Tokenizing Your Sensitive Data

Tokenization is a great tool which can protect your data - however it’s hard to deploy and requires a lot of effort from engineers. 

Tokenization is the act of replacing sensitive data such as email addresses and credit card information with tokens, which correspond to the actual data. These tokens can reside in databases and logs throughout your cloud environment without any concern, since exposing them does not reveal the actual data but only a reference to the data.

When the data actually needs to be used (e.g. when emailing the customer or making a transaction with their credit card) the token can be used to access a vault which holds the sensitive information. This vault is highly secured using throttling limits, strong encryption, very strict access limits, and even hardware-based methods to provide adequate protection.

This method also provides a simple way to purge sensitive customer data, since the tokens that represent the sensitive data are meaningless if the data was purged from the sensitive data vault.
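To make the flow concrete, here is a toy, in-memory illustration of the pattern (all names are hypothetical; a real vault adds encryption at rest, authentication, throttling, and audit logging):

```python
import secrets


class TokenVault:
    """Toy vault mapping opaque tokens to sensitive values."""

    def __init__(self):
        # In a real vault this store is encrypted at rest and audited.
        self._store = {}

    def tokenize(self, value: str) -> str:
        # Tokens are random, so they reveal nothing about the value.
        token = "tok_" + secrets.token_urlsafe(16)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # A real vault enforces strict access limits and throttling here.
        return self._store[token]

    def purge(self, token: str) -> None:
        # Once purged, every copy of the token elsewhere is meaningless.
        self._store.pop(token, None)


vault = TokenVault()
token = vault.tokenize("jane.doe@example.com")
print(token)                    # safe to store in databases and logs
print(vault.detokenize(token))  # the real value, only via the vault
```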

Reducing Your Data Attack Surface by Encrypting Your Sensitive Data

Encryption is an important technique that should almost always be used to protect sensitive data. There are two approaches: using the infrastructure or platform you are on to encrypt and decrypt the data, or encrypting it on your own. In most cases, it’s more convenient to encrypt your data using the platform, because it is simply a configuration change and ensures that only the people with access to the encryption keys can access the data. In Amazon Web Services, for example, only principals with access to the KMS key will be able to decrypt information in an S3 bucket with KMS encryption enabled.
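As an example, here is a small sketch - the bucket name and key alias are placeholders - of turning on default SSE-KMS encryption for an S3 bucket with boto3:

```python
import boto3

s3 = boto3.client("s3")

# After this call, S3 encrypts new objects in the bucket with the given
# KMS key by default; only principals allowed to use that key can read them.
s3.put_bucket_encryption(
    Bucket="example-sensitive-bucket",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/sensitive-data-key",  # placeholder
                }
            }
        ]
    },
)
```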

It is also possible to encrypt the data using keys you manage yourself, which has its advantages and disadvantages. The advantages are that it’s harder for a misconfiguration to accidentally grant access to the encryption keys, and that you don’t have to rely on the platform you are using to store them. However, self-managed keys must be sent more frequently to the systems that encrypt and decrypt the data, which increases the chance of a key being exposed.

Reducing Your Data Attack Surface by Using Privileged Access Management Solutions

There are many tools that centrally manage access to databases. In general, they fall into two categories: Zero-Trust Privileged Access Management solutions and Database Governance proxies. Both protect against data leaks in different ways.

Zero-Trust Privileged Access Management solutions replace traditional database connectivity with stronger authentication methods combined with network access controls. Tools such as StrongDM and Teleport (open source) allow developers to connect to production databases by authenticating with the corporate identity provider.

Database Governance proxies such as Satori and Immuta control how developers interact with sensitive data in production databases. These proxies govern not only who can access sensitive data, but how they access it. By proxying the requests, they can track sensitive data and ensure it never reaches developers unprotected: when sensitive data is queried, the proxies can mask the sensitive information, or simply omit or disallow the requests, ensuring sensitive data doesn’t leave the database.

Reducing the data attack surface reflects the reality of the attacker’s mindset. Attackers aren’t breaking into your infrastructure for the network itself; they’re doing it to find the sensitive data. By ensuring that sensitive data is always secured, tokenized, encrypted, and limited to least privilege access, there will be nothing valuable for an attacker to find - even in the event of a breach.


Team Sentra
November 7, 2022
6
Min Read

Top 6 Azure Security Tools, Features, and Best Practices

Nowadays, it is evident that the rapid growth of cloud computing has changed how organizations operate. Many organizations increasingly rely on the cloud to drive their daily business operations. The cloud is a single place for storing, processing, and accessing data; it’s no wonder that organizations have come to depend on its convenience.

However, as dependence on cloud service providers grows, so does the need for security. Organizations must assess and safeguard their sensitive data against possible threats. Remember that security is a shared responsibility - even though your cloud provider secures the platform, that security is not absolute. Thus, understanding the security features of a particular cloud service provider becomes significant.

Introduction to Microsoft Azure Security Services

Image of Microsoft Azure, explaining how to strengthen security posture with Azure

Microsoft Azure offers services and tools for businesses to manage their applications and infrastructure. Utilizing Azure ensures robust security measures are in place to protect sensitive data, maintain privacy, and mitigate potential threats.

This article will tackle Azure’s security features and tools to help organizations and individuals safeguard and protect their data while they continue their innovation and growth. 

Microsoft offers a collective set of security features, services, tools, and best practices to protect cloud resources. In this section, let’s explore its layers of security to gain some insight.

The Layers of Security in Microsoft Azure:

  • Physical Security: Microsoft Azure has a strong foundation of physical security measures. It operates state-of-the-art data centers worldwide with strict physical access controls, ensuring that Azure’s infrastructure is protected against unauthorized physical access.
  • Network Security: Virtual networks, network security groups (NSGs), and distributed denial of service (DDoS) protection create isolated and secure network environments. Microsoft Azure’s network security mechanisms secure data in transit and protect against unauthorized network access. We should also recognize Azure Virtual Network Gateway, which secures connections between on-premises networks and Azure resources.
  • Identity and Access Management (IAM): Microsoft Azure offers identity and access management capabilities to control and secure access to cloud resources. Azure Active Directory (AD) is a centralized identity management platform that allows organizations to manage user identities, enforce robust authentication methods, and implement fine-grained access controls through role-based access control (RBAC).
  • Data Security: Microsoft Azure offers Azure Storage Service Encryption (SSE), which encrypts data at rest, while Azure Disk Encryption secures virtual machine disks. Azure Key Vault provides a secure and centralized location for managing cryptographic keys and secrets.
  • Threat Detection and Monitoring: Microsoft Azure offers Azure Security Center, which provides a centralized view of security recommendations, threat intelligence, and real-time security alerts. Azure Sentinel offers cloud-native security information and event management (SIEM), helping teams quickly detect, investigate, and resolve security incidents.
  • Compliance and Governance: Microsoft Azure offers Azure Policy to define and enforce compliance controls across Azure resources within the organization. Moreover, Azure provides compliance certifications and adheres to industry-standard security frameworks.

Let’s explore some features and tools, and discuss their key features and best practices.

Azure Active Directory Identity Protection

Image of Azure’s Identity Protection page, explaining what is identity protection

Identity Protection is a cloud-based service in the Azure AD suite. It focuses on helping organizations protect their user identities and detect potential security risks, using advanced machine learning algorithms and security signals from various sources to provide proactive and adaptive security measures. By leveraging machine learning and data analytics, it can identify risky sign-ins, compromised credentials, and malicious or suspicious user behavior.

Key Features

1. Risk-Based User Sign-In Policies

It allows organizations to define risk-based policies for user sign-ins, which evaluate user behavior, sign-in patterns, and device information to assess the risk level of each sign-in attempt. Based on the risk assessment, organizations can enforce additional security measures, such as requiring multi-factor authentication (MFA), blocking sign-ins, or prompting password resets.

2. Risky User Detection and Remediation

The service detects and alerts organizations about potentially compromised or risky user accounts. It analyzes various signals, such as leaked credentials or suspicious sign-in activities, to identify anomalies and indicators of compromise. Administrators can receive real-time alerts and take immediate action, such as resetting passwords or blocking access, to mitigate the risk and protect user accounts.

Best Practices

  • Educate Users About Identity Protection - Educating users is crucial for maintaining a secure environment. Most large organizations now provide security training to increase the awareness of users. Training and awareness help users protect their identities, recognize phishing attempts, and follow security best practices.
  • Regularly Review and Refine Policies - Regularly assessing policies helps ensure their effectiveness, which is why it is good to continuously improve the organization’s Azure AD Identity Protection policies based on the changing threat landscape and your organization's evolving security requirements.

Azure Firewall

Image of Azure Firewall page, explaining what is Azure Firewall

Microsoft offers Azure Firewall, a cloud-based network security service. It acts as a barrier between your Azure virtual networks and the internet, providing centralized network security and protection against unauthorized access and threats. It operates at both the network and application layers, allowing you to define and enforce granular access control policies.

It enables organizations to control inbound and outbound traffic for virtual networks and for on-premises networks connected through Azure VPN or ExpressRoute. It can also filter traffic by source and destination IP address, port, protocol, and even fully qualified domain name (FQDN).

Key Features

1. Network and Application-Level Filtering

This feature allows organizations to define rules based on IP addresses (source and destination), including ports, protocols, and FQDNs. Moreover, it helps organizations filter network and application-level traffic, controlling inbound and outbound connections.

2. Fully Stateful Firewall

Azure Firewall is a stateful firewall, which means it can intelligently allow return traffic for established connections without requiring additional rules. This simplifies rule management and ensures that legitimate traffic flows smoothly.

3. High Availability and Scalability

Azure Firewall is highly available and scalable. It can automatically scale as your network traffic demand increases and provides built-in availability through multiple availability zones.

Best Practices

  • Design an Appropriate Network Architecture - Plan your virtual network architecture carefully to ensure proper placement of Azure Firewall. Consider network segmentation, subnet placement, and routing requirements to enforce security policies and control traffic flow effectively.
  • Implement Network Traffic Filtering Rules - Define granular network traffic filtering rules based on your specific security requirements. Start with a default-deny approach and allow only necessary traffic. Regularly review and update firewall rules to maintain an up-to-date and effective security posture.
  • Use Application Rules for Fine-Grain Control - Leverage Azure Firewall's application rules to allow or deny traffic based on specific application protocols or ports. By doing this, organizations can enforce granular access control to applications within their network.

Azure Resource Locks

Image of Azure Resource Locks page, explaining how to lock your resources to protect your infrastructure

Azure Resource Locks is a Microsoft Azure feature that lets you lock Azure resources to prevent accidental deletion or modification. It provides an additional layer of control and governance over your Azure resources, helping mitigate the risk of critical changes or deletions.

Key Features

Two types of locks can be applied:

1. ReadOnly

This lock type marks a resource as read-only: authorized users can read the resource, but modifications and deletions are prohibited. It is the more restrictive of the two lock types.

2. CanNotDelete

This lock type prevents a resource from being deleted while still allowing authorized users to read and modify it.
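As a sketch of applying a lock programmatically - the subscription ID and resource group are placeholders, and this assumes the azure-identity and azure-mgmt-resource packages:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.locks import ManagementLockClient

# Placeholders: use your own subscription ID and resource group name.
client = ManagementLockClient(DefaultAzureCredential(), "<subscription-id>")

# Apply a CanNotDelete lock to everything in a resource group.
client.management_locks.create_or_update_at_resource_group_level(
    resource_group_name="prod-data-rg",
    lock_name="protect-prod-data",
    parameters={"level": "CanNotDelete", "notes": "Managed by governance policy"},
)
```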

Best Practices

  • Establish a Clear Governance Policy - Develop a governance policy that outlines the use of Resource Locks within your organization. The policy should define who has the authority to apply or remove locks and when to use locks, and any exceptions or special considerations.
  • Leverage Azure Policy for Lock Enforcement - Use Azure Policy alongside Resource Locks to enforce compliance with your governance policies. Azure Policy can automatically apply locks to resources based on predefined rules, reducing the risk of misconfiguration.

Azure Secure SQL Database Always Encrypted

Image of Azure Always Encrypted page, explaining how it works

Azure Secure SQL Database Always Encrypted is a feature of Microsoft Azure SQL Database that provides an additional layer of security for sensitive data. It protects data at rest and in transit, ensuring that even database administrators or other privileged users cannot access the plaintext values of the encrypted data.

Key Features

1. Client-Side Encryption

Always Encrypted enables client applications to encrypt sensitive data before sending it to the database. As a result, the data remains encrypted throughout its lifecycle and can be decrypted only by an authorized client application.

2. Column-Level Encryption

Always Encrypted allows you to selectively encrypt individual columns in a database table rather than encrypting the entire database. It gives organizations fine-grained control over which data needs encryption, allowing you to balance security and performance requirements.

3. Encryption Opaque to the Server

The database server stores the encrypted data in a dedicated encryption format, ensuring the data remains protected even if the database is compromised. The server is unaware of the data values and cannot decrypt them.
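On the client side, drivers opt in to Always Encrypted through a connection setting. A minimal sketch with pyodbc and the Microsoft ODBC driver - the server, database, credentials, and table are all placeholders:

```python
import pyodbc

# ColumnEncryption=Enabled tells the ODBC driver to transparently encrypt
# parameters bound to encrypted columns and decrypt the values it reads back.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=exampledb;"
    "UID=app_user;PWD=<password>;"
    "ColumnEncryption=Enabled;"
)

cursor = conn.cursor()
# Parameterized queries are required so the driver can encrypt the value.
cursor.execute("SELECT FirstName FROM Patients WHERE SSN = ?", "<ssn-value>")
```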

Best Practices

The organization needs to plan and manage encryption keys carefully. This is because encryption keys are at the heart of Always Encrypted. Consider the following best practices.

  • Use a Secure and Centralized Key Management System - Store encryption keys in a safe and centralized location, separate from the database. Azure Key Vault is a recommended option for managing keys securely.
  • Implement Key Rotation and Backup - Regularly rotate encryption keys to mitigate the risk of key compromise. Moreover, establish a key backup strategy so that encrypted data can be recovered if a key is lost or becomes inaccessible.
  • Control Access to Encryption Keys - Ensure that only authorized individuals or applications have access to the encryption keys. Applying the principle of least privilege and robust access control will prevent unauthorized access to keys.

Azure Key Vault

Image of Azure Key Vault page

Azure Key Vault is a cloud service provided by Microsoft Azure that helps safeguard cryptographic keys, secrets, and sensitive information. It is a centralized storage and management system for keys, certificates, passwords, connection strings, and other confidential information required by applications and services. It allows developers and administrators to securely store and tightly control access to their application secrets without exposing them directly in their code or configuration files.
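For instance, here is a minimal sketch - the vault URL and secret name are placeholders - of an application fetching a secret at runtime with the azure-identity and azure-keyvault-secrets packages instead of hardcoding it:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential resolves managed identity, CLI login, etc.
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # placeholder vault
    credential=DefaultAzureCredential(),
)

# The secret value never appears in source code or configuration files.
db_password = client.get_secret("db-password").value
```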

Key Features

1. Key Management

Key Vault provides a secure key management system that allows you to create, import, and manage cryptographic keys for encryption, decryption, signing, and verification.

2. Secret Management

It enables you to securely store and manage secrets such as passwords, API keys, connection strings, and other sensitive information.

3. Certificate Management

Key Vault supports the storage and management of X.509 certificates, allowing you to securely store, manage, and retrieve certificates for application use.

4. Access Control

Key Vault provides fine-grained access control to manage who can perform operations on stored keys and secrets. It integrates with Azure Active Directory (Azure AD) for authentication and authorization.

Best Practices

  • Centralized Secrets Management - Consolidate all your application secrets and sensitive information in Key Vault rather than scattering them across different systems or configurations. This simplifies management and reduces the risk of accidental exposure.
  • Use RBAC and Access Policies - Implement role-based access control (RBAC) and define granular access policies to govern who can perform operations on Key Vault resources. Follow the principle of least privilege, granting only the necessary permissions to users or applications.
  • Secure Key Vault Access - Restrict access to Key Vault resources to trusted networks or virtual networks using virtual network service endpoints or private endpoints; this helps prevent unauthorized access from the internet.

Azure AD Multi-Factor Authentication

Image of Azure AD Multi-Factor Authentication page, explaining how it works

Azure AD Multi-Factor Authentication (MFA) is a security feature provided by Microsoft Azure that adds an extra layer of protection to user sign-ins and helps safeguard against unauthorized access to resources. Users must provide additional authentication factors beyond just a username and password.

Key Features

1. Multiple Authentication Methods

Azure AD MFA supports a range of authentication methods, including phone calls, text messages (SMS), mobile app notifications, mobile app verification codes, email, and third-party authentication apps. This flexibility allows organizations to choose the methods that best suit their users' needs and security requirements.

2. Conditional Access Policies

Azure AD MFA can be combined with Conditional Access policies, allowing organizations to define the specific conditions under which MFA is required - based on user location, device trust, application sensitivity, and sign-in risk level. This granular control helps organizations strike a balance between security and user convenience.

Best Practices

  • Enable MFA for All Users - Implement a company-wide policy to enforce MFA for all users, regardless of their roles or privileges, because it will ensure consistent and comprehensive security across the organization.
  • Use Risk-Based Policies - Leverage Azure AD Identity Protection and its risk-based policies to dynamically adjust the level of authentication required based on the perceived risk of each sign-in attempt because it will help balance security and user experience by applying MFA only when necessary.
  • Implement Multi-Factor Authentication for Privileged Accounts - Ensure that all privileged accounts, such as administrators and IT staff, are protected with MFA. These accounts have elevated access rights and are prime targets for attackers. Enforcing MFA adds an extra layer of protection to prevent unauthorized access.

Conclusion

In this post, we introduced the importance of cybersecurity in the cloud, driven by our growing dependence on cloud providers. We then walked through the layers of security in Azure to understand its landscape, and reviewed key tools and features - Azure Active Directory Identity Protection, Azure Firewall, Azure Resource Locks, Azure Secure SQL Database Always Encrypted, Azure Key Vault, and Azure AD Multi-Factor Authentication - giving an overview of each, its key features, and the best practices you can apply in your organization.

Team Sentra
November 3, 2022
Min Read

Top 8 AWS Cloud Security Tools and Features for 2024

AWS – like other major cloud providers – has a ‘shared responsibility’ security model for its customers. This means that AWS takes full responsibility for the security of its platform – but customers are ultimately responsible for the security of the applications and datasets they host on the platform.

This doesn’t mean, however, that AWS washes its hands of customer security concerns. Far from it. To support customers in meeting their mission critical cloud security requirements, AWS has developed a portfolio of cloud security tools and features that help keep AWS applications and accounts secure. Some are offered free, some on a subscription basis. Below, we’ve compiled some key points about the top eight of these tools and features:

1. Amazon GuardDuty

Amazon’s GuardDuty threat detection service analyzes your network activity, API calls, workloads, and data access patterns across all your AWS accounts. It uses AI to check and analyze multiple sources – from Amazon CloudTrail event logs, DNS logs, Amazon VPC Flow Logs, and more. GuardDuty looks for anomalies that could indicate infiltration, credentials theft, API calls from malicious IPs, unauthorized data access, cryptocurrency mining, and other serious cyberthreats. The subscription-based tool also draws updated threat intel from feeds like Proofpoint and CrowdStrike, to ensure workloads are fully protected from emerging threats.
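Findings can also be pulled programmatically for triage. A small boto3 sketch, assuming GuardDuty is already enabled in the account:

```python
import boto3

guardduty = boto3.client("guardduty")

# Each region has a GuardDuty detector; findings hang off it.
detector_id = guardduty.list_detectors()["DetectorIds"][0]
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]

# Hydrate the first few finding IDs into full findings and print a summary.
for finding in guardduty.get_findings(
    DetectorId=detector_id, FindingIds=finding_ids[:10]
)["Findings"]:
    print(finding["Severity"], finding["Type"], "-", finding["Title"])
```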

2. AWS CloudTrail

Identity is an increasingly serious attack surface in the cloud. And this makes visibility over AWS user account activity crucial to maintaining uptime and even business continuity. AWS CloudTrail enables you to monitor and record account activity - fully controlling storage, analysis and remediation - across all your AWS accounts. In addition to improving overall security posture through recording user activity and events, CloudTrail offers important audit functionality for proof of compliance with emerging and existing regulatory regimes like HIPAA, SOC and PCI. CloudTrail is an invaluable addition to any AWS security war chest, empowering admins to capture and monitor API usage and user activity across all AWS regions and accounts.
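As a taste of the API, here is a short boto3 sketch that pulls the past week's console sign-in events for review:

```python
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up a specific management event type across the account.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
)

for event in response["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"))
```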

3. AWS Web Application Firewall

Web applications are attractive targets for threat actors, who can easily exploit known web layer vulnerabilities to gain entry to your network. AWS Web Application Firewall (WAF) guards web applications and APIs from bots and web exploits that can compromise security and availability, or unnecessarily consume valuable processing resources. AWS WAF addresses these threats by enabling control over which traffic reaches applications, and how it reaches them. The tool lets you create fully-customizable security rules to block known attack patterns like cross-site scripting and SQL injection. It also helps you control traffic from automated bots, which can cause downtime or throw off metrics owing to excessive resource consumption.

4. AWS Shield

Distributed Denial of Service (DDoS) attacks continue to plague companies, organizations, governments, and even individuals. AWS Shield is the platform’s built-in DDoS protection service. Shield ensures the safety of AWS-based web applications – minimizing both downtime and latency. Happily, the standard tier of this particular AWS service is free of charge and protects against most common transport and network layer DDoS attacks. The advanced version of AWS Shield, which does carry an additional cost, adds resource-specific detection and mitigation techniques to the mix - protecting against large-scale DDoS attacks that target Amazon ELB instances, AWS Global Accelerator, Amazon CloudFront, Amazon Route 53, and EC2 instances.

5. AWS Inspector

With the rise in adoption of cloud hosting for storage and computing, it’s crucial for organizations to protect themselves from attacks exploiting cloud vulnerabilities. A recent study found that the average cost of recovery from a breach caused by cloud security vulnerabilities was nearly $5 million. Amazon Inspector enables automated vulnerability management for AWS workloads. It automatically scans for software vulnerabilities, as well as network vulnerabilities like remote root login access, exposed EC2 instances, and unsecured ports – all of which could be exploited by threat actors. What’s more, Inspector’s integral rules package is kept up to date with both compliance standards and AWS best practices.

6. Amazon Macie

Supporting Amazon Simple Storage Service (S3), Amazon’s Macie data privacy and security service leverages pattern matching and machine learning to discover and protect sensitive data. Recognizing PII or PHI (Protected Health Information) in S3 buckets, Macie is also able to monitor the access and security of the buckets themselves. Macie makes compliance with regulations like HIPAA and GDPR simpler, since it clarifies what data there is in S3 buckets and exactly how that data is shared and stored publicly and privately.
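A short boto3 sketch, assuming Macie is already enabled, of listing recent sensitive-data findings:

```python
import boto3

macie = boto3.client("macie2")

# Fetch a page of finding IDs, then hydrate them into full findings.
finding_ids = macie.list_findings(maxResults=10)["findingIds"]
if finding_ids:
    for finding in macie.get_findings(findingIds=finding_ids)["findings"]:
        print(finding["severity"]["description"], "-", finding["title"])
```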

7. AWS Identity and Access Management

AWS Identity and Access Management (IAM) enables secure management of identities and access to AWS services and resources. IAM works on the principle of least privilege – meaning that each user should only be able to access information and resources necessary for their role. But achieving least privilege is a constantly-evolving process – which is why IAM works continuously to ensure that fine-grained permissions change as your needs change. IAM also allows AWS customers to manage identities per-account or offer multi-account access and application assignments across AWS accounts. Essentially, IAM streamlines AWS permissions management – helping you set, verify, and refine policies toward achieving least privilege.

8. AWS Secrets Manager

AWS aptly calls their secrets management service Secrets Manager. It’s designed to help protect access to IT resources, services and applications – enabling simpler rotation, management and retrieval of API keys, database credentials and other secrets at any point in the secret lifecycle. And Secrets Manager allows access control based on AWS Identity and Access Management (IAM) and resource-based policies. This means you can leverage the least privilege policies you defined in IAM to help control access to secrets, too. Finally, Secrets Manager handles replication of secrets – facilitating both disaster recovery and work across multiple regions.
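Retrieval is then a one-liner in application code. A minimal boto3 sketch - the secret name is a placeholder:

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

# IAM policies decide who may read this secret; rotation happens behind it.
secret = secretsmanager.get_secret_value(SecretId="prod/db-credentials")
db_credentials = secret["SecretString"]
```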

There are many more utilities we couldn’t cover in this blog - including AWS Audit Manager - that are equally important in their own right. Yet the key takeaway is this: even though AWS customers are responsible for their own data security, AWS makes a real effort to help them meet and exceed security standards and expectations.


Galia Nedvedovich
November 2, 2022
Min Read

“Ahhh, So That’s Why Everyone’s Talking About DSPM”

The most satisfying part of working at a startup in the hottest space in cybersecurity - cloud data security - is when I get to witness cloud security pros realize how Data Security Posture Management solves one of the most complex issues in modern infrastructures: knowing where all of your data is and how it’s secured.

If you’re unfamiliar with DSPM, that’s understandable - it’s a new category recently recognized by Gartner’s 2022 Data Security Hype Cycle report that refers to the approach to securing cloud data by ensuring that sensitive data always has the correct security posture - regardless of where it’s been duplicated or moved.

After all, if the most significant security risk for any organization is a data breach, why not focus on securing the sensitive data at all times? The honest answer to this is that it’s very complex. The way data travels in the cloud is very different from how security tools were built to protect the data. Mostly they’re built to keep unauthorized users and products out of their infrastructure, but they’re not looking at the actual data.

As a result, there is a mismatch between how organizations build and use their data and how they secure those assets. Security teams and tools do what they do best, hoping that by combining different approaches, the most sensitive data never leaves their environment and is always protected.

So it’s no surprise that when DSPM tools arrived with a new data-centric approach - granting full visibility into cloud environments and offering cloud data classification - security teams were curious.

“It will make my job as a product security leader a lot easier.  I’ll be able to show [our engineers] where their compliance requirements are and what needs to be done there.”

- Product Security Director at a Large SaaS Company on DSPM

But let’s be honest: in the cyber world, this is not the first time a new category has come along promising to solve a complex problem, and security teams are justified in their skepticism. Compounding the challenge are the lack of human resources, the never-ending growth of the security stack, and a fear of being burned by a tool that says it will solve your problems and then underdelivers.

Nevertheless, I’m noticing a few ‘aha’ moments in our DSPM conversations with security leaders that keep recurring. And I think their reaction is a strong indication that they feel like someone has finally designed an approach that puts the data first. So far I’ve seen this reaction around these 3 areas of DSPM generally and Sentra’s solution specifically: 

  1. Shadow data: So many cloud first organizations have data that’s been duplicated or moved and then forgotten. This shadow data is the ‘unknown unknown’ for security leaders - they don’t know what data is out there, and they don’t know whether it has the proper security posture. For security leaders, knowing where all their cloud data is immediately brings control back into their hands. A recent Sentra customer told us that “It’s like wearing glasses for the first time.”
  2. Too many users and 3rd parties with access to sensitive data: We all know that this happens, but once companies know who and which 3rd party integrations have overprivileged access to sensitive data, they’re well on their way to remediating the data vulnerability. (We once found source code that HR teams had access to. That came as a surprise to everyone!)
  3. Quick Deployment: The standard for a new security tool is to have a dedicated team assigned to test out a new vendor. One of the greatest advantages of Sentra’s DSPM is the fact that it needs zero implementation effort - “I don’t have the resources from my team to test out your solution” is simply not an objection when trying out Sentra.

DSPMs are different from other cloud security tools, which are designed to build walls around your cloud. The problems, of course, are that (a) the cloud doesn’t have a perimeter and (b) employees need to move and manipulate data to do their jobs - that’s one of the business reasons the cloud is adopted in the first place. Instead of defending non-existent perimeters and hurting productivity, DSPM makes sure that wherever your sensitive data goes, you know where it is and that it has the right security posture.

Of course, there’s only one way to actually tell if DSPM and Sentra can find and secure your sensitive data - and that’s to try it yourself. If you’ve been hearing the buzz about DSPM and want to take a look for yourself, schedule a call with us here.

Team Sentra
September 20, 2022
2
Min Read
Data Security

Sentra Arrives in the US, Announces New Technology Partnership with Wiz

We're excited to announce that Sentra has opened its new North American headquarters in New York City!

Sentra now has teams operating out of both the East and West Coasts in the US, along with our co-HQ in Tel Aviv.
We’re also announcing a new technology partnership with Wiz, a Cloud Security Posture Management solution trusted by over 20% of Fortune 500 companies.

“Sentra’s technology provides enterprises with a powerful understanding of their data,” says Assaf Rappaport, CEO of Wiz. “Together, our solutions effectively eliminate data risks and secure everything an enterprise builds and runs in the cloud. This is a true technology partnership between two organizations with industry-leading technology.”

These announcements come on the heels of our being recognized by Gartner as a Sample Vendor for Data Security Posture Management in the Hype Cycle™ report for Data Security 2022.

The reason for the growth of the Data Security Posture Management category and its popularity with Sentra's customers is clear.

Sentra CEO Yoav Regev explains that  "while the data attack surface might be a relatively new concept, it’s based on principles security teams are well aware of — increasing visibility and limiting exposure. When it comes to the cloud, the most effective way to accomplish this is by controlling where sensitive data is stored and making sure that data is always traveling to secure locations. Simply put, as data travels, so should security.” 

Sentra's agentless solution easily integrates into a customer’s infrastructure. Its ease of use and ability to achieve rapid results have helped the solution gain traction with enterprises plagued by data overload, fueling Sentra's international growth.

“Sentra’s value was immediate. It enables us to map critical sensitive data faster and secure it,” said Gal Vitenberg, application security architect at Global-e, a NASDAQ-listed e-commerce company. “As an organization that prioritizes the security of data in the cloud, using Sentra enables our teams to operate quickly while maintaining our high levels of security requirements.”

Sentra is a proud member of the Cloud Security Alliance. You can learn more about Sentra's Data Security Posture Management solution.

GARTNER and HYPE CYCLE are registered trademarks and service marks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Team Sentra
September 19, 2022
3
Min Read
Data Security

Access Controls that Move - The Power of Data Security Posture Management

Controlling access to data has always been one of the basics of cybersecurity hygiene. Managing this access has evolved from basic access control lists, to an entire Identity and Access Management industry. IAM controls are great at managing access to applications, infrastructure and on-prem data. But cloud data is a trickier issue. Data in the cloud changes environments and is frequently copied, moved, and edited. 

This is where data access tools share the same weakness - what happens when the data moves? (Spoiler: the policy doesn’t follow.)

The Different Access Management Models

There are 3 basic types of access controls enterprises use to control who can read and edit their data.

Access Control Lists: Basic lists of which users have read/write access.

Role Based Access Control (RBAC): The administrator defines access by what roles the user has - for example, anyone with the role ‘administrator’ is granted access.

Attribute Based Access Control (ABAC): The administrator defines which attributes a user must have to access an object - for example, only users with the job title ‘engineer’, and only those accessing the data from a certain location, will be granted access. These policies are usually defined in XACML, which stands for "eXtensible Access Control Markup Language".
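A toy illustration of how the three models differ - hypothetical users and attributes, not a real policy engine:

```python
user = {"name": "dana", "role": "engineer", "location": "office"}

# ACL: an explicit list of users per object.
acl = {"sensitive-bucket": {"dana", "lee"}}
acl_allows = user["name"] in acl["sensitive-bucket"]

# RBAC: access is decided purely by the user's role.
rbac_allows = user["role"] in {"administrator", "engineer"}

# ABAC: access requires a combination of attributes to match the policy.
abac_allows = user["role"] == "engineer" and user["location"] == "office"

print(acl_allows, rbac_allows, abac_allows)  # True True True
```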

How Access Controls are Managed in the Cloud

The major public cloud providers include a number of access control features. AWS, for example, has long included clear instructions on managing access to consoles and S3 buckets. In RDS, users can tag and categorize resources and then build access policies based on those tags.
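A tag-based policy might look like the following sketch (shown as a Python dict for readability; the actions and the team tag are illustrative, not a definitive policy):

```python
# Allow starting/stopping only RDS instances tagged team=analytics.
rds_tag_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["rds:StartDBInstance", "rds:StopDBInstance"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/team": "analytics"}
            },
        }
    ],
}
```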

Similar controls exist in Azure: Azure RBAC allows owners and administrators to create RBAC roles, and Azure ABAC, currently in preview, will allow for fine-grained access control in Azure environments.

Another aspect of access management in the cloud is ‘assumed roles’, in which a user is given temporary credentials to access a resource they aren’t usually permitted to use. This permission is meant to be temporary and to permit cross-account access as needed. Learn more about Azure security in our comprehensive guide.

The Problem: Access Controls Don't Follow the Data

So what’s missing? When data access controls are put in place in the cloud, they’re tied to the data store or database that the controls were created for. Imagine the following scenario. An administrator knows that a specific S3 bucket has sensitive data in it. Being a responsible cloud admin, they set up RBAC or ABAC policies and ensure only the right users have permissions at the right times. So far so good.

But now someone comes along and needs some of the data in that bucket. Maybe just a few details from a CSV file. They copy/paste the data somewhere else in your AWS environment.

Now what happens to that RBAC or ABAC policy? It doesn’t apply to the copied data - not only does the data not have the proper access controls set, but even if you’re able to find the exposed sensitive data, it’s not clear where it came from, or how it’s meant to be protected.

How Sentra’s DSPM Ensures that Data Always Has the Proper Access Controls

What we need is a way for the access control policy to travel with the data throughout the public cloud. This is one of the most difficult problems that Data Security Posture Management (DSPM) was created to tackle. 

DSPM is an approach to cloud security that focuses on finding and securing sensitive data, as opposed to the cloud infrastructure or applications. It accomplishes this by first discovering sensitive data (including shadow or abandoned data). DSPM classifies the data types using AI models and then determines whether the data has the proper security posture and how best to remediate if it doesn’t. 

While data discovery and classification are important, they’re not actionable without understanding:

  • Where the data came from
  • Who originally had access to the data
  • Who has access to the data now

The divide between what a user currently has access to and what they should have access to is referred to as the ‘authorization gap’.

Sentra’s DSPM solution is able to understand who has access to the data and close this gap through the following processes:

  • Detecting unused privileges and adjusting for least privileged access based on user behavior: For example, if a user has access to 10 data stores but only accesses 2 of them, Sentra will notice and suggest removing access from the other 8.
  • Detecting user groups with excessive access to data: For example, if a user on the finance team has access to the developer environment, Sentra will raise a flag to remove the overprivileged user.
  • Detecting overprivileged similar data: For example, if sensitive data in production is only accessible by 2 users, but 85% of the data exists somewhere where more people have access, Sentra will alert the data owners to remediate. 

Access control and authorization remains one of the most important ways of securing sensitive cloud data. A data centric security solution can help ensure that the right access controls always follow your cloud data.

Team Sentra
August 29, 2022
3
Min Read
Data Security

How Sensitive Cloud Data Gets Exposed

When organizations began migrating to the cloud, they did so with the promise that they’ll be able to build and adapt their infrastructures at speeds that would give them a competitive advantage. It also meant that they’d be able to use large amounts of data to gather insights about their users and customers to better understand their needs.

While this is all true, it also means that security teams are responsible for protecting more data than ever before. As data gets replicated, shared, and moved throughout the public cloud, sensitive data exposure becomes more common. These are the most common ways that sensitive cloud data is exposed and leaked - and what’s needed to mitigate the risks.

Causes of Cloud Data Exposure

Negligence: Accidentally leaving a data asset exposed to the internet shouldn’t happen. Cloud providers know it happens anyway - the first sentence of AWS’ best practices for S3 storage article says “Ensure that your Amazon S3 buckets use the correct policies and are not publicly accessible.” Five years ago, AWS added warnings to dashboards when a bucket was publicly exposed. Of course, S3 is just one of many data stores that contain sensitive data and are prone to accidental exposure. Despite the warnings, exposed data assets continue to be a cause of data breaches. Fortunately, these vulnerabilities are easily corrected - assuming you have perfect visibility into your cloud environment.
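Visibility can start small. Here is a boto3 sketch that flags buckets whose bucket-level Public Access Block isn't fully enabled - one of several exposure signals worth checking:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        # All four flags must be on for the bucket to be fully locked down.
        needs_review = not all(config.values())
    except ClientError:
        # No Public Access Block configured at all - definitely review it.
        needs_review = True
    if needs_review:
        print("review:", name)
```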

Data Movement: Even when sensitive data is properly secured, there’s always a risk that it could be moved or copied into an unsecured environment. A common example of this is taking sensitive data from a secured production environment and moving it to a developer environment with a lower security posture. In this case, the data’s owner did everything right - it was the second user who moved the data who accidentally put it at risk. Another example would be an organization which has a PCI environment where they keep all the payment information of their customers, and they need to prevent this extremely sensitive data from going to other data stores in less secured parts of their cloud environment.

Improper Access Management: Access to sensitive data should not be granted to users who don’t need it (see the example above). Improper IAM configurations and access control management increase the risk of accidental or malicious data leakage. More access means more potential shadow data being created and abandoned. For example, a user might copy sensitive data and then leave the company, creating data that no one is aware of. Limiting access to sensitive data to users who actually need it can help prevent a needless expansion of your organization’s ‘data attack surface’.

3rd Parties:  It’s extremely easy to accidentally share sensitive data with a third party over email. Accidentally forwarding sensitive data or credentials is one of the simplest ways to leak sensitive data from your organization. In the public cloud, the equivalent of the accidental email is granting a 3rd party access to a data asset in your public cloud infrastructure, such as a CI/CD tool or a SaaS application for data analytics. It’s similar to improper access management, only now the over privileged access is granted outside of your organization entirely where you’re less able to mitigate the risks. 

Another common way data is leaked to 3rd parties is when someone inside an organization shares something that isn't supposed to have sensitive data, but does. A good example of this is sharing log files with a 3rd party. Log files shouldn’t have sensitive data, but often it can include data like user emails, IP addresses, API credentials, etc.

ETL Errors: When extracting data that contains PII from a production database to a data lake or an analytics data warehouse, such as Redshift or Snowflake, sometimes the wrong warehouse might be specified. This is an easy mistake to miss, as data-agnostic tools might not understand the sensitive nature of the data.

Why Can’t Cloud Data Security Solutions Stop Sensitive Data Exposure?

Simply put - they’re not looking at the data. They’re looking at the network, infrastructure, and perimeter. That’s how data leaks used to be prevented in the on-prem days - you’d just make sure the perimeter was secure, and because all your sensitive data was on-prem, you could secure it by securing everything.

For cloud-first companies, data isn’t staying behind the corporate perimeter. And while cloud platforms can identify infrastructure vulnerabilities, they’re missing the context around which data is sensitive. Remediating data vulnerabilities - finding sensitive data with an improper security posture - remains a challenge.

Discovering and Classifying Cloud Data - The Data Security Posture Management (DSPM) Approach

Instead of trying to adapt on-prem strategies to cloud environments, DSPM (a new ‘on the rise’ category in Gartner’s™ latest hype cycle) takes a data-first approach. By understanding the data’s proper context, DSPM secures sensitive cloud data by:

  • Discovering all cloud data, including shadow data and abandoned data stores
  • Classifying the different data types using standard and custom parameters
  • Automatically detecting when sensitive data’s security posture changes - whether via data movement or duplication
  • Detecting who can access and who has accessed sensitive data
  • Understanding how data travels throughout the cloud environment
  • Orchestrating remediation workflows between engineering and security teams

Data Security Posture Management solves many of the most common reasons sensitive cloud data gets leaked. By focusing on securing and following the data across the cloud, DSPM helps cloud security teams finally secure what we’re all supposed to be protecting - sensitive data.

To learn more about Data Security Posture Management, check out our full introduction to DSPM, or see it for yourself.


Team Sentra
August 22, 2022
3
Min Read
Data Security

Types of Sensitive Data: What Cloud Security Teams Should Know

Not all data is created equal. If there’s a breach of your public cloud, but all the hackers access is company photos from your last happy hour… well, no one really cares. It’s not making headlines. On the other hand, if they leak a file containing the payment and personal details of your customers, that’s (rightfully) a bigger deal.

This distinction means that it’s critical for data security teams to understand the types of data that they should be securing first. This blog will explain the most common types of sensitive data organizations maintain, and why they need to be secured and monitored as they move throughout your cloud environment.

Types of Sensitive Cloud Data

Personally Identifiable Information (PII): The National Institute of Standards and Technology (NIST) defines PII as:

(1) any information that can be used to distinguish or trace an individual‘s identity, such as name, social security number, date and place of birth, mother‘s maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.

User and customer data has become an increasingly valuable asset for businesses, and the amount of PII - especially in the cloud- has increased dramatically in only the past few years. 

The value and amount of PII mean that it is frequently the type of data exposed in the most famous data leaks, including the 2013 Yahoo! breach, which affected 3 billion records, and the 2017 Equifax breach.

Payment Card Industry (PCI): PCI data includes credit card information and payment details. The Payment Card Industry Security Standards Council created PCI-DSS (Data Security Standard) as a way to standardize how credit cards can be securely processed. To become PCI-DSS compliant, an organization must follow certain security practices with the aim of achieving 6 goals:

  • Build and maintain a secure network
  • Protect cardholder data
  • Maintain a vulnerability management program
  • Implement strong access control measures
  • Regularly monitor networks
  • Maintain an information security policy

Protected Health Information (PHI): In the United States, PHI regulations are defined by the Health Insurance Portability and Accountability Act (HIPAA). This data includes any past and future data about an identifiable individual’s health, treatment, and insurance information. The guidelines for protecting PHI are periodically updated by the US Department of Health and Human Services (HHS) but on a technological level, there is no one ‘magic bullet’ that can guarantee compliance. Compliant companies and healthcare providers will layer different defenses to ensure patient data remains secure. By law, HHS maintains a portal where breaches affecting 500 or more patient records are listed and updated.

Intellectual Property: While every company should consider user and employee data sensitive, what qualifies as sensitive IP varies from organization to organization. For SaaS companies this could be the source code of customer-facing services or customer-base trends. Identifying the most valuable data to your enterprise, securing it, and maintaining that security posture should be a priority for all security teams, regardless of the size of the company or where the data is stored.

Developer Secrets: For software companies, developer secrets such as passwords and API keys can be accidentally left in source code or in the wild. Often these developer secrets are unintentionally copied and stored in lower environments, data lakes, or unused block storage volumes.
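A toy version of the scanning this calls for - two illustrative patterns only; real scanners combine hundreds of patterns with entropy checks:

```python
import re

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def scan(text):
    # Yield every (label, match) pair found in the given text.
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            yield label, match.group(0)

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key = "abcd1234abcd1234abcd1234"'
for label, value in scan(sample):
    print(f"{label}: {value}")
```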

The Challenge of Protecting Sensitive Cloud Data

When all sensitive data was stored on-prem, data security basically meant preventing unauthorized access to the company’s data center. Access could be requested, but the data wasn’t actually going anywhere. Of course, the adoption of cloud apps and infrastructures means this is no longer the case. Engineers and data teams need access to data to do their jobs, which often leads to moving, duplicating, or changing sensitive data assets. This growth of the ‘data attack surface’ leads to more sensitive data being exposed/leaked, which leads to more breaches. Breaking this cycle will require a new method of protecting these sensitive data classes.

Cloud Data Security with Data Security Posture Management

Data Security Posture Management (DSPM) was created for this new challenge. Recently recognized by Gartner® as an ‘On the Rise’ category, DSPMs find all cloud data, classify it by sensitivity, and then offer actionable remediation plans for data security teams. By taking a data centric approach to security, DSPM platforms are able to secure what matters to the business first - their data.


To learn more about Sentra’s DSPM solution, you can request a demo here.

Team Sentra
August 11, 2022
3
Min Read
Data Security

Data Context is the Missing Ingredient for Security Teams

Why are we still struggling with remediation and alert fatigue? In every cybersecurity domain, as we get better at identifying vulnerabilities, and add new automation tools, security teams still face the same challenge - what do we remediate first? What poses the greatest risk to the business? 

Of course, the capabilities of cyber solutions have grown. We have more information about breaches and potential risk than ever. If in the past, an EDR could tell you which endpoint has been compromised, today an XDR can tell you which servers and applications have been compromised. It’s a deeper level of analysis. But prioritizing what to focus on first is still a challenge. You might have more information, but it’s not always clear what the biggest risk to the business is. 

The same can be said for SIEMs and SOAR solutions. If in the past we received alerts and made decisions based on log and event data from the SIEM, now we can factor in threat intelligence and third party sources to better understand compromises and vulnerabilities. But again, when it comes to what to remediate to best protect your specific business these tools aren’t able to prioritize. 

The deeper level of analysis we’ve been conducting for the last 5-10 years is still missing what’s needed to make effective remediation recommendations - context about the data at risk. We get all these alerts, and while we might know which endpoints and applications are affected, we’re blind when it comes to the data. That ‘severe’ endpoint vulnerability your team is frantically patching? It might not contain any sensitive data that could affect the business. Meanwhile, the reverse might be true - that less severe vulnerability at the bottom of your to-do list might affect data stores with customer info or source code. 

Image of AWS CISO Stephen Schmidt showing data as the core layer of defense at this year’s AWS re:Inforce

This is the answer to the question “why is prioritization still a problem?” - the data. We can’t really prioritize anything properly until we know what data we’re defending. After all, the whole point of exploiting a vulnerability is usually to get to the data.

Now let’s imagine a different scenario. Instead of getting your usual alerts and then trying to prioritize, you get messages that read like this:

‘Severe Data Vulnerability:  Company source code has been found in the following unsecured data store:____. This vulnerability can be remediated by taking the following steps: ___’. 

You get the context of what’s at-risk, why it’s important, and how to remediate it. That’s data centric security. 

Why Data Centric Security is Crucial for Cloud First Companies

Data centric security wasn’t always critical. When everything was stored on the corporate data center, it was enough to just defend the perimeter, and you knew the data was protected. You also knew where all your data was - literally in the room next door. Sure, there were risks around information kept on local devices, but there wasn’t a concern that someone would accidentally save 100 GB of information to their device. 

The cloud and data democratization changed all that. Now, besides not having a traditional perimeter, there’s the added issue of data sprawl. Data is moved, duplicated, and changed at previously unimaginable scales. And even when data is secured properly, its security posture doesn’t travel with it when the data is moved. Legacy security tools built for the on-prem era can’t provide the level of security context needed by organizations with petabytes of cloud data.

Data Security Posture Management

This data context is the promise of data security posture management solutions. Recently recognized in Gartner’s Hype Cycle for Data Security report as an ‘On the Rise’ category, DSPM gets to the core of the context issue. DSPM solutions attack the problem by first identifying all the data an organization has in the cloud. This step often leads to the discovery of data stores that security teams didn’t even know existed. The next stage is classification, where the types of data are labeled - this could be PII, PCI, company secrets, source code, etc. Any sensitive data found to have an insufficient security posture is passed to the relevant teams for remediation. Finally, the cloud environment must be continuously assessed for future data vulnerabilities, which are again forwarded to the relevant teams with remediation suggestions in real time.

In a clear example of the benefits offered by DSPM, Sentra has identified source code in open S3 buckets of a major ecommerce company. By leveraging machine learning and smart metadata scanning, Sentra quickly identified the valuable nature of the exposed asset and ensured it was quickly remediated. 

If you’re interested in learning more about DSPM or Sentra specifically, request a demo here.


Team Sentra
July 27, 2022
3
Min Read

Cloud Data Security Means Shrinking the “Data Attack Surface”

Traditionally, the attack surface was just the sum of the different attack vectors your IT was exposed to. The idea being that as you removed vectors through patching and internal audits, you shrank the surface and reduced your risk.

With the adoption of cloud technologies, the way we managed the attack surface changed. As the attack vectors changed, new tools were developed to find security vulnerabilities and misconfigurations. However, the principle remained similar - prevent attackers from accessing your cloud infrastructure and platform by eliminating and remediating attack vectors. But attackers will find their way - it’s only a matter of "when".

The data attack surface is a new concept. When data was all in one location (the enterprise’s on-prem data center), this wasn’t something we needed to consider. Everything was in the data center, so defending the perimeter was the same thing as protecting the data. But what makes cloud data vulnerable isn’t primarily technical vulnerabilities in the cloud environment. It’s the fact that there’s so much data to defend, and it’s not clear where all that data is, who is responsible for it, and what its security posture is supposed to be. The sum of the total vulnerable, sensitive, and shadow data assets is the data attack surface.

Reducing the Data Attack Surface

Traditional attack surface reduction is accomplished by visualizing your enterprise’s architecture and finding unmanaged devices. The first step in data attack surface reduction is similar - except it’s about mapping your cloud data. Only after a successful cloud data discovery program can you understand the scope of the project.

The second step of traditional attack surface reduction is finding vulnerabilities and indicators of exposure. This is similarly adaptable to the data attack surface. By classifying the data both by sensitivity (company secrets, compliance) and by security posture (how the data should be secured), cloud security teams can identify their level of exposure.

The final step, shrinking the attack surface, involves remediating the data vulnerabilities. This can mean deleting an exposed, unused data store or ensuring that sensitive data has the right level of encryption. The idea should always be not to have more sensitive data than you need, and that data should always have the proper security posture.
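As a rough illustration of how those three steps compose, here’s a toy Python sketch that treats the data attack surface as the set of sensitive assets that are shadow or exposed, then shrinks it by deleting an unused shadow copy. The field names and the surface predicate are assumptions for illustration only:

```python
assets = [
    {"name": "s3://public-site", "sensitive": False, "shadow": False, "exposed": True},
    {"name": "s3://hr-payroll",  "sensitive": True,  "shadow": False, "exposed": True},
    {"name": "s3://tmp-copy-01", "sensitive": True,  "shadow": True,  "exposed": False},
]

def data_attack_surface(assets: list[dict]) -> list[str]:
    # Step 1 gave us the inventory; step 2 gave us the labels. One way to
    # operationalize the surface: sensitive assets that are shadow or exposed.
    return [a["name"] for a in assets
            if a["sensitive"] and (a["shadow"] or a["exposed"])]

print(data_attack_surface(assets))  # ['s3://hr-payroll', 's3://tmp-copy-01']

# Step 3: remediate - here, delete the unused shadow copy entirely.
assets = [a for a in assets if not a["shadow"]]
print(data_attack_surface(assets))  # ['s3://hr-payroll'] - a smaller surface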

3 Ways Reducing the Data Attack Surface Matters to the Business

  • Reduce the likelihood of data breaches. Just like shrinking your traditional attack surfaces reduces the risk of a vulnerability being exploited, shrinking the data attack surface reduces the risk of a data breach. This is achieved by eliminating sensitive shadow data, which has the dual benefit of reducing both the overall amount of company data, and the amount of exposed sensitive data. With less data to defend, it’s easier to prioritize securing the most critical data to your business, reducing the risk of a breach. 
  • Stay compliant with data localization regulations. We’re seeing an increase in the number of data localization rules - data generated and stored in one country or region isn’t allowed to be transferred outside of that area. This can cause some obvious problems for cloud-first enterprises, as they may be using data centers all over the world. Orphaned or shadow data stores can be non-compliant with local laws, but because no one knows they exist, they pose a real compliance risk if discovered.
  • Reduce overall security costs. The smaller your data attack surface is, the less time and money you need to spend defending it. There are cloud costs for scanning and monitoring your environment, and there’s of course the time and human resources you need to dedicate to scanning, remediating, and managing the security posture of your cloud data. Shrinking the data attack surface means less to manage.

How Data Security Posture Management (DSPM) Tools Help Reduce the Data Attack Surface

Data Security Posture Management (DSPM) shrinks the attack surface by discovering all of your cloud data, classifying it by sensitivity to the business, and then offering plans to remediate all data vulnerabilities found. Often, shadow cloud data can be eliminated entirely, allowing security teams to focus on a smaller amount of data stores and databases. Additionally, by classifying data according to business impact, a DSPM tool ensures cloud security teams are focused on the most critical data - whether that’s company secrets, customer data, or sensitive employee data. Finally, remediation plans ensure that security teams aren’t just monitoring another dashboard, but are actually given actionable plans for remediating the data vulnerabilities, and shrinking the data attack surface. 

The data attack surface might be a relatively new concept, but it’s based on principles security teams are well aware of: limiting exposure by eliminating attack vectors. When it comes to the cloud, the most important way to accomplish this is by focusing on the data first. The smaller the data attack surface, the less likely it is that valuable company data will be compromised.

Read More
Jason Chan
Jason Chan
July 19, 2022
2
Min Read
Data Security

Cloud Data Security Should Be About Guardrails, not Gates

Cloud Data Security Should Be About Guardrails, not Gates

I recently came back from my first trip to Israel, one of the centers of the cybersecurity industry. In addition to meeting so many peers and talented cyber teams, I also had the chance to speak at CyberWeekTLV with Asaf Kochan, President of Sentra and former commander of Unit 8200 (Israel’s NSA). We discussed the different security challenges facing cloud first enterprises, but also some of the business opportunities the cloud makes possible, and how I tried to use cloud security as a business enabler during my time at Netflix.

Organizations move to the cloud or choose to be cloud native because they value speed. They want to be able to spin up thousands of VMs whenever they want and move massive amounts of data through their cloud infrastructure. We can think of the old way of cybersecurity as basically putting a gate on a road. We make the user stop, we inspect them and their data, and then open the gate and let them go wherever the business needs them. I always encouraged my team at Netflix to think in terms of ‘guardrails, not gates’. Let the business move as fast as it needs - with appropriate guardrails to prevent users from ‘flying off the road’, so to speak. 

The truth is that the best engineers and security teams want to help the business get to where they’re going as fast as possible. They understand that the business doesn’t exist to serve security. At Netflix, the business model was to put out high quality entertainment at a rapid pace. Our job was to help them do that while staying secure.

Besides the benefit of helping the business, there’s an important talent boost that comes with being cloud first. The best engineers want to work on the newest technologies. It’s going to be harder and harder to find dedicated talent who are passionate about maintaining legacy and on-prem architectures. One of the major advantages I had recruiting talent at Netflix (besides the prestige of the brand) was that we were building security programs for a new type of infrastructure, and that was exciting.

Back to my guardrail metaphor. When you drive along a road, you’ll notice that some areas have stronger guardrails. These are the areas where accidents are most likely to happen. Similarly in security, prepositioning is key. The reason new security leaders stay awake at night is that they’re imagining worst case scenarios all the time. But there’s a way to use that type of thinking for good. As Asaf said in my discussion with him, prepositioning by playing the ‘what if’ game is how you minimize the likelihood and impact of breaches. Think about the data that would do the most damage in the event of a breach, think about where that data might be, and then make sure it has the proper security posture. Then do that for the next most critical assets, until the risk of the worst case scenario coming true has reached an acceptable level.

Cloud data security is about helping your company leverage the cloud. The whole point of the cloud is speed and scalability. Security leaders for cloud first enterprises that don’t get in the way are the ones that are going to prosper in their careers and allow their companies to reach their full potential. 

Read More
Team Sentra
Team Sentra
July 11, 2022
4
Min Read
Sentra Case Study

Finding Sensitive Cloud Data in all the Wrong Places

Finding Sensitive Cloud Data in all the Wrong Places

Not all data can be kept under lock and key. Website resources, for example, always need to be public, and S3 buckets are frequently used for this. On the other hand, there are things that should never be public - customer information, payroll records, and company IP. But it happens - and it can take months or years to notice, if you ever do.

This is the story of how Sentra identified a large enterprise’s source code in an open S3 bucket. 

As part of our work with this company, Sentra was given 7 petabytes of data in AWS environments to scan for sensitive data. Specifically, we were looking for IP - source code, documentation, and other proprietary data.

As we often do, we discovered many issues, but there were 7 that needed to be remediated immediately - 7 that we defined as ‘critical’.

The most severe data vulnerability was source code in an open S3 bucket holding 7.5 TB worth of data. The code was hiding in a 600 MB .zip file nested inside another .zip file. We also found recordings of client meetings and a tiny 8.9 KB Excel file with all of their existing and potential customer data.


Examples of sensitive data alerts displayed on Sentra's dashboard

So how did such a serious data vulnerability go unnoticed? In this specific case, one of the principal architects at the company had backed up his primary device to their cloud. This isn’t as uncommon as you might think - particularly in the early days of cloud based companies, data is frequently ‘dumped’ into the cloud as the founders and developers are naturally more concerned about speed than security. There’s no CISO on board to build policies. Everyone is simply trusted with the data that they have. The early Facebook motto of ‘move fast and break things’ is very much alive in early stage companies. Of course, if they’re successful at building a major company, the problem is that there’s now all this data traveling around their cloud environment that no one is tracking, no one is responsible for, and, in the case above, no one even knew existed.

Another explanation for unsecured sensitive data in the public cloud is that some people simply assume the cloud is secure. As we’ve explained previously, the cloud can be more secure than on-prem architecture - but only if it’s configured properly. A major misconception is that everything in the cloud is secured by the cloud provider. Of course, the mere fact that you can host public resources on the cloud demonstrates how incorrect that assumption is - if you’ve left your S3 buckets open, that data is at risk, regardless of how much security the cloud provider offers. It’s important to remember that the ‘shared responsibility model’ means the cloud provider handles things like networking and physical security. But data security is on you.
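As a small practical example of that last point, a sketch like the following (using boto3, and assuming AWS credentials are already configured) can flag buckets whose public access block is missing or incomplete. It’s a starting point for review, not a full audit:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def may_be_public(bucket: str) -> bool:
    """True if the bucket lacks a complete public-access block."""
    try:
        config = s3.get_public_access_block(Bucket=bucket)
        settings = config["PublicAccessBlockConfiguration"]
        # If any of the four block settings is off, public access is possible.
        return not all(settings.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return True  # no block configured at all - review this bucket
        raise

for b in s3.list_buckets()["Buckets"]:
    if may_be_public(b["Name"]):
        print(f"Review bucket: {b['Name']}")
```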

This is where accurate data classification needs to play a role. Enterprises need a way of identifying which data is sensitive and critical to keep secure, and what its proper security posture should be. Data classification tools have been around for a long time, but they mainly focus on easily identifiable data - credit card and social security numbers, for example. Identifying company secrets that were never supposed to be publicly accessible simply wasn’t possible.

The rise of Data Security Posture Management platforms is changing that. By understanding what the security posture of data is supposed to be, and by having that posture ‘follow’ the sensitive data as it travels through the cloud, security teams can ensure their data is always properly secured - no matter where it ends up.
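Here’s a tiny illustration of that ‘posture follows the data’ idea - a hypothetical model, not Sentra’s API, in which copying a sensitive object carries its required posture to the new location so it can be re-checked there:

```python
from dataclasses import dataclass, field

@dataclass
class DataObject:
    location: str
    # Required posture, e.g. {"encrypted": True, "public": False}
    posture: dict = field(default_factory=dict)

def copy_object(src: DataObject, new_location: str) -> DataObject:
    # The posture requirement travels with the data, wherever it ends up.
    return DataObject(new_location, dict(src.posture))

prod = DataObject("s3://prod/customers.csv", {"encrypted": True, "public": False})
staging = copy_object(prod, "s3://staging/customers.csv")
print(staging.location, staging.posture)  # same requirements in the new location
```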

Want to find out what sensitive data is publicly accessible in your cloud?

Get in touch with Sentra here to see our DSPM in action. 

Read More
Team Sentra
Team Sentra
July 4, 2022
3
Min Read
Data Security

Why It’s Time to Adopt a Data Centric Approach to Security

Why It’s Time to Adopt a Data Centric Approach to Security

Here’s the typical response to a major data leak: there’s a breach at a large company, and the security community’s answer is usually to invest more resources in preventing all possible data breaches. This might entail new DLP tools or infrastructure vulnerability management solutions.

But there’s something missing in this response.

The reason the breach was so catastrophic is that the data that leaked was valuable. It’s not the “network” that gets leaked.

So the network isn’t where security efforts should start - the data is.

Here’s what the future of data breaches could look like: There’s a breach. But the breach doesn’t affect critical company or customer data because that data all has the proper security posture. There’s no press. And everyone goes home calmly at the end of the day.

This is going to be the future of most data breaches.

It’s simply more attainable to secure specific data stores and files than it is to throw up defenses “around” the infrastructure. The truth is that most data stores do not contain sensitive information. So if we can keep sensitive data in a small number of secured data stores, enterprises will be much more secure. Focusing on the data is a better way to prepare for a compromised environment.

Practical Steps for Achieving Data Centric Security

What does it take to make this a reality? Organizations need a way to find, classify, and remediate all data vulnerabilities. Here are the 5 steps to adopting a data centric security approach:

  • Discover shadow data and build a data asset inventory.

You can’t protect what you don’t know you have. This is true of all organizations, but especially cloud first organizations. Cloud architectures make it easy to replicate or move data from one environment to another. It could be something as simple as a developer moving a data table to a staging environment, or a data analyst copying a file to use elsewhere. Regardless of how the shadow data is created, finding it needs to be priority number one.

  • Classify the most sensitive and critical data.

Many organizations already use data tagging to classify their data. While this often works well for structured data like credit card numbers, it’s important to remember that ‘sensitive data’ includes unstructured data as well. This includes company secrets like source code and intellectual property, which can cause as much damage as customer data in the event of a breach.

  • Prioritize data security according to business impact

We’re investing time in finding and classifying all of this data for the simple reason that some types of data matter more than others. We can’t afford to be data agnostic - we should be remediating vulnerabilities based on the severity of the data at risk, not the technical severity of the alert. Differentiating between the signal and the noise is critical for data security. Ignore the severity rating of an infrastructure vulnerability if there’s no sensitive data at risk (see the scoring sketch after this list).

  • Continuously monitor data access and user activity, and make all employees accountable for their data – this is not only the security team’s problem.

Data is extremely valuable company property. When you give employees physical company property - like a laptop or even a car - they know they’re responsible for it. But when it comes to data, too many employees see themselves as mere users of the data. This attitude needs to change. Data isn’t the security team’s sole responsibility.

  • Shrink the data attack surface - take action to reduce the organization’s data sprawl.

Beyond remediating according to business impact, organizations should reduce the number of sensitive data stores by removing sensitive data from stores that don’t need to have it. This can be done via redaction, anonymization, encryption, etc. By limiting the number of sensitive data stores, security teams effectively shrink the attack surface by reducing the number of assets worth attacking in the first place.
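As promised above, here’s a hedged sketch of what prioritizing by the data rather than the alert might look like. The weights, field names, and findings are illustrative assumptions, not a standard scoring model:

```python
# Illustrative sensitivity weights - tune these to your own business impact.
SENSITIVITY = {"source_code": 10, "pii": 8, "internal_docs": 3, "public_assets": 0}

def data_risk_score(finding: dict) -> int:
    """Rank a finding by the data at stake, not its technical severity."""
    exposure = 2 if finding["publicly_accessible"] else 1
    return SENSITIVITY.get(finding["data_class"], 1) * exposure

findings = [
    {"id": "vuln-1", "alert_severity": "critical",
     "data_class": "public_assets", "publicly_accessible": True},
    {"id": "vuln-2", "alert_severity": "low",
     "data_class": "source_code", "publicly_accessible": True},
]

# The 'low' alert touching source code outranks the 'critical' one that
# doesn't touch sensitive data.
for f in sorted(findings, key=data_risk_score, reverse=True):
    print(f["id"], f["alert_severity"], data_risk_score(f))
```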

The most important aspect is understanding that data travels and its security posture must travel with it. If a sensitive data asset has a strict security posture in one location in the public cloud, it must always maintain that posture. A Social Security number is always valuable to a threat actor. It doesn’t matter whether it leaks from a secured production environment or a forgotten data store that no one has accessed for two years. Only by appreciating this context will organizations be able to ensure that their sensitive data is always secured properly.   

How Data Security Posture Management Helps

The biggest technological obstacles that had to be overcome to make data centric security possible were the proper classification of unstructured data and prioritization based on business impact. Advances in machine learning have made highly accurate classification and prioritization possible, creating a new type of security solution: Data Security Posture Management (DSPM).

DSPM allows organizations to accurately find and classify their cloud data while offering remediation plans for severe vulnerabilities. By finally giving enterprises a full view of their cloud data, data centric security can offer a deeper, more effective layer of cloud security than ever before.

Want to see what data centric security looks like with Sentra’s DSPM? Request a demo here.

Read More
Jason Chan
Jason Chan
Min Read

Rising to the Challenge of Data Security Leadership

Rising to the Challenge of Data Security Leadership

Any attempt to prescribe exactly what you need to build an effective data security role or team is a fool’s errand. There are simply too many variables to take into account - the size of the organization, the amount of data it has, the type of data that needs to be secured, the organization’s culture and risk appetite - all of these need to be weighed and balanced.

However, with that disclaimer and caveat in place, I do think there are some broad best practices that apply to almost every data security role, and those are the ones I want to focus on in this blog. 

Know Your Inputs and Restrictions - and Document them

Every data security team operates within a certain set of ‘inputs’ and restrictions. These can be regulatory frameworks like GDPR and CCPA, but they also include agreements with customers and partners and the level of risk the company is willing to accept.

These inputs exist for every data security role. And the first thing you need to do when stepping into a data security position is to document these inputs and ensure that everyone’s on the same page. This isn’t the type of project that can be done by a single person or even a single team. Legal needs to be involved. Privacy needs to be involved. Security needs to be involved. The scope of this varies by company, but the main point is that there needs to be a governance arm telling you what the requirements and policies are before you can get to work enforcing anything.

It’s also important to remember that there are two different groups here. You have the leaders from the teams I mentioned. And then you have the engineers and executors that implement those policies. All the documentation in the world won’t help if there’s a communication breakdown between the deciders and the implementers. 

Managing Risk, Managing People

Whether you’re an individual or a team responsible for data security, it’s important to keep the big picture in mind - your answer can’t always be ‘no’ when someone asks, ‘Can I do this with our data?’ Understand that there’s a business reason behind the question - and find a way to help them achieve their goals without violating the risk and legal parameters you’ve already established.

The data security role also shouldn’t be responsible for actually going into the platforms to remediate issues. As far as possible, the actual remediation should be done by the teams that manage those platforms every day. If there are 10 different data sources, the security team should be identifying issues using data security tools. But they should also be - with minimal friction - dispatching the alerts, tasks, and remediation steps to the relevant teams. And the security team should be assisting these teams with developing, rolling out, and managing secure configurations so that, ideally, alerts and remediation tasks become less frequent over time.

Besides managing systems, there’s an enormous human component to data security success. (In general, I believe that most of our problems in security have a human dimension.) There are egos and authority on the line in discussions around data and how it should be used. The business side of the company may want to gather and retain as much data as possible; the privacy and legal teams may want as little as possible. Security leaders in general, and data security leaders in particular, need to get along well with the heads of these various departments. They need to play the role of harmonizer between competing demands and be able to get things done. This involves working with the peers of the CISO - the head of legal, the head of privacy - and making judgment calls in a space (data security) that historically hasn’t had much authority. Of course, that’s all changing now as every country and region adopts new data security regulations.

Managing up, down, and across the company is the core data security skill. It’s what separates effective security leaders from the rest. Working well with engineers gets the data secured. Working well with legal, privacy, and compliance is the scaffolding that supports all of your effort. And like every security role, working well with the CISO is critical.

Data Security's a Great Career - Just Take Care Not to Burn Out

To wrap up, I’d say there’s never been a better time to get into data security. The growth of regulations - and the associated consequences for non-compliance - means companies are investing in data security talent. For anyone looking to move from a general security or IT role into a data security role, a great first step is to improve your cloud and data skills. Understanding your company’s cloud environment, its different use cases, tools, and business objectives will give you the context you need to be successful in the role. It will help you understand the inputs and pressures on the different teams, and grow your perspective beyond just the technical part of the job.

The key to avoiding burnout is understanding the nature of the job. There’s always going to be a new tool, stakeholder, or regulation to face. There’s no ‘finishing’ the work in any final sense. What you spent all month working on might be irrelevant overnight. That’s the game. And if it’s for you, I hope this blog helps, in some small way, to shape your thinking about what makes a successful data security professional.

Read More
Guy Spilberg
Guy Spilberg
Min Read

If You Could Only Ask One Question About Your Data, It Should be This

If You Could Only Ask One Question About Your Data, It Should be This

When security and compliance teams talk about data classification, they speak in the language of regulations and standards. Personally Identifiable Information needs to be protected one way. Health data another way. Employee information in yet another way. And that’s before we get to IP and developer secrets.

And while approaching data security this way is obviously necessary, I want to introduce a different way of thinking about it. When we look at a data asset, we should classify it by asking: “Is this asset valuable because of the amount of data it contains, or because of the quality of the data it contains?”

Here’s what I mean. If an attacker is able to breach a bank’s records and steal a single customer record and credit card number, it’s an annoyance for the bank, and a major inconvenience for the customer. But the bank will simply resolve the issue directly with the customer. There’s no media attention. And it certainly doesn’t have an impact on the bank's business. On the other hand, if 10,000 customer records and credit card details were leaked… that’s another story. 

This is what we can call ‘quantitative data’. 

The risk to the company if the data is leaked is due to the type of data, yes, but perhaps more important is the amount of data. 

But there’s another type of data that works in exactly the opposite way. This is what we can call ‘qualitative data’. This is data that is so sensitive that even a small amount being leaked would cause major damage. A good example of this type of data is intellectual property or developer secrets. An attacker doesn’t need a huge file to affect the business with a breach. Even a piece of source code or a single certificate could be catastrophic.

Most data an organization has can be classified as either ‘qualitative’ or ‘quantitative’. And there’s very little they have in common, both in how they’re used and how they’re secured. 

Quantitative data moves. Its schema changes. It evolves, moving in and out of the organization. It’s used in BI tools, added to data pipelines, and extracted via ETLs. It’s also subject to data retention policies and regulations.

Qualitative data is exactly the opposite. It doesn’t move at all. Maybe certain developer secrets have a refresh token, but other than that there are no real data retention policies. The data does not move through pipelines, it’s not extracted, and it certainly doesn’t leave the organization.

So how does this affect how we approach data security?

When it comes to quantitative data, it’s much easier to find and classify - and not just because of the size of the asset. A classification tool uses probabilities to identify data: if it sees that 95% of the data in a column contains email addresses, it will label the column as ‘email’. And it might have varying degrees of certainty for each asset.

Qualitative data classification and security can’t work like this. There’s no such thing as ‘50% of a certificate’. It’s either a cert or not. For this reason, identifying qualitative data is much more of a challenge, but securing it is simpler because it doesn’t need to move. 
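A minimal sketch of the two modes, assuming a simple regex for emails and a PEM header for certificates (real classifiers are far more sophisticated):

```python
import re

EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
PEM_HEADER = "-----BEGIN"  # a certificate or key is either present or not

def classify_column(values: list[str], threshold: float = 0.95) -> str | None:
    """Quantitative: label the column 'email' if >=95% of values match."""
    if not values:
        return None
    matches = sum(1 for v in values if EMAIL.match(v))
    return "email" if matches / len(values) >= threshold else None

def contains_secret(blob: str) -> bool:
    """Qualitative: a single PEM block is enough to flag the whole asset."""
    return PEM_HEADER in blob

print(classify_column(["a@x.com", "b@y.org", "not-an-email"]))  # None - 2/3 < 0.95
print(classify_column(["a@x.com", "b@y.org"]))                  # 'email'
print(contains_secret("-----BEGIN CERTIFICATE-----\nMIIB..."))  # True
```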

This way of thinking about data security can be helpful when trying to understand how different security tools find, classify, and protect sensitive data. Tools that rely *only* on probabilities will have trouble recognizing when qualitative data moves or is at risk. Similarly, any approach that focuses on keeping qualitative data secure might neglect data in movement. Understanding the differences between qualitative and quantitative data is an easy framework for measuring the effectiveness of your organization’s data protection tools and policies.

Read More