
Understanding Data Movement to Avert Proliferation Risks

April 10, 2024
4 Min Read
Data Sprawl

Understanding the perils your cloud data faces as it proliferates throughout your organization and its ecosystems is a monumental task in today's highly dynamic business climate. Being able to see data as it is copied and moved, monitor its activity and access, and assess its posture allows teams to understand and better manage the full effect of data sprawl.

 

This visibility ‘connects the dots’ for security analysts who must continually evaluate true risks and threats to data so they can prioritize their efforts. Data similarity and movement are important behavioral indicators for assessing and addressing those risks. This blog will explore the topic in depth.

What Is Data Movement?

Data movement is the process of transferring data from one location or system to another – from A to B. This transfer can happen between storage locations, databases, servers, or networks. Copying data from one location to another is simple; however, data movement can get complicated when managing volume, velocity, and variety.

  • Volume: Handling large amounts of data.
  • Velocity: Overseeing the pace of data generation and processing.
  • Variety: Managing a variety of data types.

How Data Moves in the Cloud

Data flows freely and can be shared almost anywhere, and the way organizations leverage it is an integral part of their success. Although moving and sharing data at a rapid pace brings many business benefits, it also raises concerns, mainly around privacy, compliance, and security. Data needs to move quickly and securely, and it must maintain the proper security posture at all times.

These are the main ways that data moves in the cloud:

1. Data Distribution in Internal Services: Internal services and applications manage data, saving it across various locations and data stores.

2. ETLs: Extract, Transform, Load processes combine data from multiple sources into a central repository known as a data warehouse. This centralized view supports applications that aggregate diverse data points for organizational use.

3. Developer and Data Scientist Data Usage: Developers and data scientists utilize data for testing and development purposes. They require both real and synthetic data to test applications and simulate real-life scenarios to drive business outcomes.

4. AI/ML/LLM and Customer Data Integration: The utilization of customer data in AI/ML learning processes is on the rise. Organizations leverage such data to train models and apply the results across various organizational units, catering to different use-cases.

What Is Misplaced Data?

"Misplaced data" refers to data that has been moved from an approved environment to an unapproved environment. For example, a folder that is stored in the wrong location within a computer system or network. This can result from human error, technical glitches, or issues with data management processes.

 

When data is stored in an environment that is not designed for that type of data, it can lead to data leaks, security breaches, compliance violations, and other negative outcomes.

As companies adopt more cloud services and struggle to properly manage the resulting data sprawl, misplaced data is becoming more common, leading to security, privacy, and compliance issues.

The Challenge of Data Movement and Misplaced Data

Organizations strive to secure their sensitive data by keeping it within carefully defined and secure environments. The pervasive data sprawl faced by nearly every organization in the cloud makes it challenging to effectively protect data, given its rapid multiplication and movement.

Leveraging data for various purposes is encouraged because it can enhance and grow the business. With the advantages, however, come disadvantages: having multiple owners and duplicate copies of data introduces risk.

To address this challenge, organizations can analyze similar data patterns to gain a comprehensive understanding of how data flows within the organization. This helps security teams first gain visibility into those movement patterns, then determine whether the movement is authorized, so they can protect the data accordingly and block the unauthorized movement that should not happen.

This proactive approach allows them to position themselves strategically. It can involve ensuring robust security measures for data at each location, re-confining data by relocating it, or eliminating unnecessary duplicates. Additionally, this analytical capability proves valuable in scenarios tied to regulatory and compliance requirements, such as ensuring GDPR-compliant data residency.

Identifying Redundant Data and Saving Cloud Storage Costs

The identification of similarities empowers Chief Information Security Officers (CISOs) to implement best practices, steering clear of actions that lead to the creation of redundant data.

Detecting redundant data helps reduce cloud storage costs and improves operational efficiency, since remediation efforts can be targeted and prioritized to focus on the critical data risks that matter.

This not only enhances data security posture, but also contributes to a more streamlined and efficient data management strategy.

“Sentra has helped us to reduce our risk of data breaches and to save money on cloud storage costs.”

-Benny Bloch, CISO at Global-e

Security Concerns That Arise

  1. Data Security Posture Variations Across Locations: Addressing instances where similar data, initially secure, experiences a degradation in security posture during the copying process (e.g., transitioning from private to public, or from encrypted to unencrypted).
  2. Divergent Access Profiles for Similar Data: Exploring scenarios where data previously accessible by a limited and regulated set of identities now faces expanded access by a larger number of identities (users), resulting in a loss of control.
  3. Data Localization and Compliance Violations: Examining situations where data mandated to be localized in specific regions is found in violation of organizational policies or compliance rules (GDPR being a prominent example). By identifying similar sensitive data, we can pinpoint these issues and help users mitigate them.
  4. Anonymization Challenges in ETL Processes: Identifying issues in ETL processes where data is not only moved but also anonymized. Pinpointing similar sensitive data allows users to detect and mitigate anonymization-related problems.
  5. Customer Data Migration Across Environments: Analyzing the movement of customer data from production to development environments, which engineers use to test real-life use cases.
  6. Data Democratization and Movement Between Cloud and Personal Stores: Investigating instances where users export data from organizational cloud stores to personal drives (e.g., OneDrive) for development, testing, or further business analysis. Once this data is moved to personal data stores, it is typically less secure: personal drives are less monitored and protected, and they are controlled by the individual employee rather than the security or dev teams, leaving them susceptible to misconfiguration, user mistakes, and insufficient knowledge.
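To make concerns 1 and 3 above concrete, here is a minimal sketch of the kind of check a security team might run over its data inventory. It is not Sentra's implementation; the asset fields, the region list, and the helper name are illustrative assumptions.

```python
from dataclasses import dataclass

ALLOWED_EU_REGIONS = {"eu-west-1", "eu-central-1"}  # example residency policy, not a product default

@dataclass
class DataAsset:
    name: str
    content_hash: str            # fingerprint used to group copies of the same data
    encrypted: bool
    public: bool
    region: str
    holds_eu_personal_data: bool

def posture_findings(original: DataAsset, copies: list[DataAsset]) -> list[str]:
    """Flag copies of the same data whose posture or residency is weaker than the original's."""
    findings = []
    for copy in copies:
        if copy.content_hash != original.content_hash:
            continue  # different content, so not a copy of this asset
        if original.encrypted and not copy.encrypted:
            findings.append(f"{copy.name}: encrypted data copied to an unencrypted store")
        if not original.public and copy.public:
            findings.append(f"{copy.name}: private data copied to a public location")
        if copy.holds_eu_personal_data and copy.region not in ALLOWED_EU_REGIONS:
            findings.append(f"{copy.name}: EU personal data stored outside allowed regions ({copy.region})")
    return findings
```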

How Sentra’s DSPM Helps Navigate Data Movement Challenges

  1. Discover and accurately classify the most sensitive data and provide extensive context about it, for example:
  • Where it lives
  • Where it has been copied or moved to
  • Who has access to it
  2. Highlight misconfigurations by correlating similar data that has different security postures. This helps you pinpoint the issue and adjust it to the right posture.
  3. Quickly identify compliance violations, such as GDPR violations when European customer data moves outside the allowed region, or when financial data moves outside a PCI-compliant environment.
  4. Identify access changes, which helps you understand the correct access profile by correlating similar data pieces that have different access profiles.

For example, the same data may be well protected in a specific environment and accessible by only two specific users. When that data moves to a developer environment, it can suddenly be accessed by the whole data engineering team, which exposes it to more risk.
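A rough sketch of that access comparison, assuming the access profiles of both copies have already been collected (the identity names below are invented for illustration):

```python
def access_drift(original_identities: set[str], copy_identities: set[str]) -> set[str]:
    """Identities that can reach the copy but were never granted access to the original."""
    return copy_identities - original_identities

# The scenario from the text: a dataset readable by two specific users is copied
# into a developer environment where the whole data engineering team can read it.
prod_access = {"alice@corp.example", "bob@corp.example"}
dev_access = prod_access | {"data-eng-team@corp.example"}   # broader group on the dev copy

print(access_drift(prod_access, dev_access))
# {'data-eng-team@corp.example'}  <- expanded access to review, scope down, or revoke
```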

Leveraging Data Security Posture Management (DSPM) and Data Detection and Response (DDR) tools proves instrumental in addressing the complexities of data movement challenges. These tools play a crucial role in monitoring the flow of sensitive data, allowing for the swift remediation of exposure incidents and vulnerabilities in real-time. The intricacies of data movement, especially in hybrid and multi-cloud deployments, can be challenging, as public cloud providers often lack sufficient tooling to comprehend data flows across various services and unmanaged databases.

 

Our innovative cloud DLP tooling takes the lead in this scenario, offering a unified approach by integrating static and dynamic monitoring through DSPM and DDR. This integration provides a comprehensive view of sensitive data within your cloud account, offering an updated inventory and mapping of data flows. Our agentless solution automatically detects new sensitive records, classifies them, and identifies relevant policies. In case of a policy violation, it promptly alerts your security team in real time, safeguarding your crucial data assets.

In addition to our robust data identification methods, we prioritize the implementation of access control measures. This involves establishing Role-based Access Control (RBAC) and Attribute-based Access Control (ABAC) policies, so that the right users have permissions at the right times.
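As a rough illustration of what combining RBAC and ABAC can look like, here is a toy policy check, not Sentra's access model; the roles, attributes, and time window are invented:

```python
from datetime import datetime, timezone

def access_allowed(user: dict, asset: dict, now: datetime) -> bool:
    """Toy RBAC + ABAC check: role on the allow list, matching project attribute,
    and access only during a defined time window (UTC business hours here)."""
    return (
        user["role"] in asset["allowed_roles"]              # RBAC: role-based allow list
        and user["project"] == asset["project"]             # ABAC: attribute match between user and asset
        and 9 <= now.astimezone(timezone.utc).hour < 18     # ABAC: time-of-day condition
    )

user = {"role": "data-analyst", "project": "payments"}
asset = {"allowed_roles": {"data-analyst", "data-engineer"}, "project": "payments"}
print(access_allowed(user, asset, datetime.now(timezone.utc)))
```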


Identifying Data Movement With Sentra

Sentra has developed different methods to identify data movements and similarities based on the content of two assets. Our advanced capabilities allow us to pinpoint fully duplicated data, identify similar data, and even uncover instances of partially duplicated data that may have been copied or moved across different locations. 

Moreover, we recognize that changes in access often accompany the relocation of assets between different locations. 

As part of Sentra’s Data Security Posture Management (DSPM) solution, we proactively manage and adapt access controls to accommodate these transitions, maintaining the integrity and security of the data throughout its lifecycle.

These are the three methods we leverage:

  1. Hash similarity - Using each asset's unique identifier (its content hash) to locate it across the different data stores in the customer environment.
  2. Schema similarity - Locating exact or similar schemas that indicate there might be similar data in them, then leveraging other metadata and statistical methods to simplify the data and find the necessary correlations.
  3. Entity matching similarity - Detecting when parts of files or tables are copied to another data asset, for example an ETL that extracts only some columns from a table into a new table in a data warehouse.

Another example: if PII is found in a lower environment, Sentra can detect whether it is real or mock customer PII, based on whether the same PII also appears in the production environment.

PII found in a lower environment
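The snippet below is a simplified sketch of these three ideas, not Sentra's implementation: a content hash for exact duplicates, column-name overlap as a schema signal, and per-column value overlap as a rough stand-in for entity matching. The function names and structures are illustrative only.

```python
import hashlib

def content_fingerprint(path: str) -> str:
    """Hash similarity: identical fingerprints mean a byte-for-byte duplicate asset."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def schema_similarity(columns_a: set[str], columns_b: set[str]) -> float:
    """Schema similarity: Jaccard overlap of column names as a cheap first signal."""
    if not columns_a or not columns_b:
        return 0.0
    return len(columns_a & columns_b) / len(columns_a | columns_b)

def shared_column_overlap(table_a: dict[str, set], table_b: dict[str, set]) -> dict[str, float]:
    """Entity-matching flavor: for each column the tables share, the fraction of
    table_b's values that also appear in table_a. High overlap can reveal a partial
    copy, such as an ETL that extracted only some columns."""
    overlap = {}
    for col in table_a.keys() & table_b.keys():
        if table_b[col]:
            overlap[col] = len(table_a[col] & table_b[col]) / len(table_b[col])
    return overlap
```

In practice, hashing catches exact duplicates cheaply, while schema and value overlap are what surface partial or transformed copies.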

Conclusion

Understanding and managing data sprawl are critical tasks in the dynamic business landscape. Monitoring data movement, access, and posture enables teams to comprehend the full impact of data sprawl, connecting the dots for security analysts in assessing true risks and threats.

Sentra addresses the challenge of data movement by utilizing advanced methods like hash, schema, and entity similarity to identify duplicate or similar data across different locations. Sentra's holistic Data Security Posture Management (DSPM) solution not only enhances data security but also contributes to a streamlined data management strategy. 

The identified challenges and Sentra's robust methods emphasize the importance of proactive data management and security in the dynamic digital landscape.

To learn more about how you can enhance your data security posture, schedule a demo with one of our experts.

<blogcta-big>

Ran is a passionate product and customer success leader with over 12 years of experience in the cybersecurity sector. He combines extensive technical knowledge with a strong passion for product innovation, research and development (R&D), and customer success to deliver robust, user-centric security solutions. His leadership journey is marked by proven managerial skills, having spearheaded multidisciplinary teams towards achieving groundbreaking innovations and fostering a culture of excellence. He started at Sentra as a Senior Product Manager and is currently the Head of Technical Account Management, located in NYC.


Latest Blog Posts

Yair Cohen
February 5, 2026
3 Min Read

OpenClaw (MoltBot): The AI Agent Security Crisis Enterprises Must Address Now

OpenClaw, previously known as MoltBot, isn't just another cybersecurity story - it's a wake-up call for every organization. With over 150,000 GitHub stars and more than 300,000 users in just two months, OpenClaw’s popularity signals a huge change: autonomous AI agents are spreading quickly and dramatically broadening the attack surface in businesses. This is far beyond the risks of a typical ChatGPT plugin or a staff member pasting data into a chatbot. These agents live on user machines and servers with shell-level access, file system privileges, live memory control, and broad integration abilities, usually outside IT or security’s purview.

Older perimeter and endpoint security tools weren’t built to find or control agents that can learn, store information, and act independently in all kinds of environments. As organizations face this shadow AI risk, the need for real-time, data-level visibility becomes critical. Enter Data Security Posture Management (DSPM): a way for enterprises to understand, monitor, and respond to the unique threats that OpenClaw and its next-generation kin pose.

What makes OpenClaw different - and uniquely dangerous - for security teams?

OpenClaw runs by setting up a local HTTP server and agent gateway on endpoints. It provides shell access, automates browsers, and links with over 50 messaging platforms. But what really sets it apart is how it combines these features with persistent memory. That means agents can remember actions and data far better than any script or bot before. Palo Alto Networks calls this the 'lethal trifecta': direct access to private data, exposure to untrusted content, and communication outside the organization - compounded here by that persistent memory.

This risk isn't hypothetical. OpenClaw’s skill ecosystem functions like an unguarded software supply chain. Any third-party 'skill' a user adds to an agent can run with full privileges, opening doors to vulnerabilities that original developers can’t foresee. While earlier concerns focused on employees leaking information to public chatbots, tools like OpenClaw operate quietly at system level, often without IT noticing.

From theory to reality: OpenClaw exploitation is active and widespread

This threat is already real. OpenClaw’s design has exposed thousands of organizations to actual attacks. For instance, CVE-2026-25253 is a severe remote code execution flaw caused by a WebSocket validation error, with a CVSS score of 8.8. It lets attackers compromise an agent with a single click (critical OpenClaw vulnerability).

Attackers wasted no time. The ClawHavoc malware campaign, for example, spread over 341 malicious 'skills’, using OpenClaw’s official marketplace to push info-stealers and RATs directly into vulnerable environments. Over 21,000 exposed OpenClaw instances have turned up on the public internet, often protected by nothing stronger than a weak password, or no authentication at all. Researchers even found plaintext password storage in the code. The risk is both immediate and persistent.

The shadow AI dimension: why you’re likely exposed

One of the trickiest parts of OpenClaw and MoltBot is how easily they run outside official oversight. Research shows that more than 22% of enterprise customers have found MoltBot operating without IT approval. Agents connect with personal messaging apps, making it easy for employees to use them on devices IT doesn’t manage, creating blind spots in endpoint management.

This reflects a bigger shift: 68% of employees now access free AI tools using personal accounts, and 57% still paste sensitive data into these services. The risks tied to shadow AI keep rising, and so does the cost of breaches: incidents involving unsanctioned AI tools now average $670,000 higher than those without. No wonder experts at Palo Alto, Straiker, Google Cloud, and Intruder strongly advise enterprises to block or at least closely watch OpenClaw deployments.

Why classic security tools are defenseless - and why DSPM is essential

Despite many advances in endpoint, identity, and network defense, these tools fall short against AI agents such as OpenClaw. Agents often run code with system privileges and communicate independently, sometimes over encrypted or unfamiliar channels. This blinds existing security tools to what internal agent 'skills' are doing or what data they touch and process. The attack surface now includes prompt injection through emails and documents, poisoning of agent memory, delayed attacks, and natural language input that bypasses static scans.

The missing link is visibility: understanding what data any AI agent - sanctioned or shadow - can access, process, or send out. Data Security Posture Management (DSPM) responds to this by mapping what data AI agents can reach, tracing sensitive data to and from agents everywhere they run. Newer DSPM features such as real-time risk scoring, shadow AI discovery, and detailed flow tracking help organizations see and control risks from AI agents at the data layer (Sentra DSPM for AI agent security).

Immediate enterprise action plan: detection, mapping, and control

Security teams need to move quickly. Start by scanning for OpenClaw, MoltBot, and other shadow AI agents across endpoints, networks, and SaaS apps. Once you know where agents are, check which sensitive data they can access by using DSPM tools with AI agent awareness, such as those from Sentra (Sentra’s AI asset discovery). Treat unauthorized installations as active security incidents: reset credentials, investigate activity, and prevent agents from running on your systems, following expert recommendations.

For long-term defense, add continuous shadow AI tracking to your operations. Let DSPM keep your data inventory current, trace possible leaks, and set the right controls for every workflow involving AI. Sentra gives you a single place to find all agent activity, see your actual AI data exposure, and take fast, business-aware action.

Conclusion

OpenClaw is simply the first sign of what will soon be a string of AI agent-driven security problems for enterprises. As companies use AI more to boost productivity and automate work, the chance of unsanctioned agents acting with growing privileges and integrations will continue to rise. Gartner expects that by 2028, one in four cyber incidents will stem from AI agent misuse - and attacks have already started to appear in the news.

Success with AI is no longer about whether you use agents like OpenClaw; it’s about controlling how far they reach and what they can do. Old-school defenses can’t keep up with how quickly shadow AI spreads. Only data-focused security, with total AI agent discovery, risk mapping, and ongoing monitoring, can provide the clarity and controls needed for this new world. Sentra's DSPM platform offers precisely that. Take the first steps now: identify your shadow AI risks, map out where your data can go, and make AI agent security a top priority.

<blogcta-big>

David Stuart
Nikki Ralston
February 4, 2026
3 Min Read

DSPM Dirty Little Secrets: What Vendors Don’t Want You to Test

Discover What DSPM Vendors Try to Hide

Your goal in running a data security/DSPM POV is to evaluate all important performance and cost parameters so you can make the best decision and avoid unpleasant surprises. Vendors, on the other hand, are looking for a ‘quick win’ and will often suggest shortcuts like using a limited test data set and copying your data to their environment.

 On the surface this might sound like a reasonable approach, but if you don’t test real data types and volumes in your own environment, the POV process may hide costly failures or compliance violations that will quickly become apparent in production. A recent evaluation of Sentra versus another top emerging DSPM exposed how the other solution’s performance dropped and costs skyrocketed when deployed at petabyte scale. Worse, the emerging DSPM removed data from the customer environment - a clear controls violation.

If you want to run a successful POV and avoid DSPM buyers' remorse, you need to look out for these "dirty little secrets".

Dirty Little Secret #1:
‘Start small’ can mean ‘fails at scale’

The biggest 'dirty secret' is that scalability limits are hidden behind the 'start small' suggestion. Many DSPM platforms cannot scale to modern petabyte-sized data environments. Vendors try to conceal this architectural weakness by encouraging small, tightly scoped POVs that never stress the system and create false confidence. Upon broad deployment, this weakness is quickly exposed as scans slow and refresh cycles stretch, forcing teams to drastically reduce scope or frequency. The failure is fundamentally architectural: the platform lacks parallel orchestration and elastic execution, proving that the 'start small' advice was a deliberate tactic to avoid exposing its inevitable bottleneck.

In a recent POV, Sentra successfully scanned 10x more data in approximately the same time as the alternative:

Dirty Little Secret #2:
High cloud cost breaks continuous security

Another reason some vendors try to limit the scale of POVs is to hide the real cloud cost of running them in production. They often use brute-force scanning that reads excessive data, consumes massive compute resources, and is architecturally inefficient. This is easy to mask during short, limited POVs, but quickly drives up cloud bills in production. The resulting cost pressure forces organizations to reduce scan frequency and scope, quietly shifting the platform from continuous security control to periodic inventory. Ultimately, tools that cannot scale scanners efficiently on-demand or scan infrequently trade essential security for cost, proving they are only affordable when they are not fully utilized. In a recent POV run on 100 petabytes of data, Sentra proved to be 10x more operationally cost effective to run:

Dirty Little Secret #3:
‘Good enough’ accuracy degrades security

Accuracy is fundamental to Data Security Posture Management (DSPM) and should not be compromised. While a few points' difference may not seem like a deal breaker, every percentage point of classification accuracy can dramatically affect all downstream security controls. Costs increase as manual intervention is required to address false positives. When organizations automate controls based on these inaccuracies, the DSPM platform becomes a source of risk. Confidence is lost. The secret is kept safe because the POV never validates the platform's accuracy against known sensitive data.

In a recent POV, Sentra was able to demonstrate a false positive and false negative rate of less than one percent:

DSPM POV Red Flags 

  • Copy data to the vendor environment for a “quick win”
  • Limit features or capabilities to simplify testing
  • Artificially reduce the size of scanned data
  • Restrict integrations to avoid “complications”
  • Limit or avoid API usage

These shortcuts don’t make a POV easier - they make it misleading.

Four DSPM POV Requirements That Expose the Truth

If you want a DSPM POV that reflects production reality, insist on these requirements:

1. Scalability

Run discovery and classification on at least 1 petabyte of real data, including unstructured object storage. Completion time must be measured in hours or days - not weeks.

2. Cost Efficiency

Operate scans continuously at scale and measure actual cloud resource consumption. If cost forces reduced frequency or scope, the model is unsustainable.

3. Accuracy

Validate results against known sensitive data. Measure false positives and false negatives explicitly. Accuracy must be quantified and repeatable.
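One way to make this validation concrete is to seed the environment with assets whose sensitivity is known in advance and compare the tool's verdicts against that ground truth. A minimal sketch of the arithmetic (the asset names and counts below are placeholders, not results from any vendor):

```python
def fp_fn_rates(flagged: set[str], truly_sensitive: set[str], all_assets: set[str]) -> tuple[float, float]:
    """False-positive rate over non-sensitive assets, false-negative rate over sensitive ones."""
    non_sensitive = all_assets - truly_sensitive
    false_positives = flagged - truly_sensitive
    false_negatives = truly_sensitive - flagged
    fp_rate = len(false_positives) / len(non_sensitive) if non_sensitive else 0.0
    fn_rate = len(false_negatives) / len(truly_sensitive) if truly_sensitive else 0.0
    return fp_rate, fn_rate

all_assets = {f"asset-{i}" for i in range(1000)}
truly_sensitive = {f"asset-{i}" for i in range(100)}          # seeded, labeled sensitive assets
flagged = {f"asset-{i}" for i in range(98)} | {"asset-500"}   # tool output: 2 misses, 1 false alarm
print(fp_fn_rates(flagged, truly_sensitive, all_assets))
```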

4. Unstructured Data Depth

Test long-form, heterogeneous, real-world unstructured data including audio, video, etc. Classification must demonstrate contextual understanding, not just keyword matches.

A DSPM solution that only performs well in a limited POV will lead to painful, costly buyer’s regret. Once in production, the failures in scalability, cost efficiency, accuracy, and unstructured data depth quickly become apparent.

Getting ready to run a DSPM POV? Schedule a demo.

<blogcta-big>

David Stuart
January 28, 2026
3 Min Read

Data Privacy Day: Why Discovery Isn’t Enough

Data Privacy Day is a good reminder for all of us in the tech world: finding sensitive data is only the first step. But in today’s environment, data is constantly moving - across cloud platforms, SaaS applications, and AI workflows. The challenge isn’t just knowing where your sensitive data lives; it’s also understanding who or what can touch it, whether that access is still appropriate, and how it changes as systems evolve.

I’ve seen firsthand that privacy breaks down not because organizations don’t care, but because access decisions are often disconnected from how data is actually being used. You can have the best policies on paper, but if they aren’t continuously enforced, they quickly become irrelevant.

Discovery is Just the Beginning

Most organizations start with data discovery. They run scans, identify sensitive files, and map out where data lives. That’s an important first step, and it’s necessary, but it’s far from sufficient. Data is not static. It moves, it gets copied, it’s accessed by humans and machines alike. Without continuously governing that access, all the discovery work in the world won’t stop privacy incidents from happening.

The next step, and the one that matters most today, is real-time governance. That means understanding and controlling access as it happens. 

Who can touch this data? Why do they have access? Is it still needed? And crucially, how do these permissions evolve as your environment changes?

Take, for example, a contractor who needs temporary access to sensitive customer data. Or an AI workflow that processes internal HR information. If those access rights aren’t continuously reviewed and enforced, a small oversight can quickly become a significant privacy risk.

Privacy in an AI and Automation Era

AI and automation are changing the way we work with data, but they also change the privacy equation. Automated processes can move and use data in ways that are difficult to monitor manually. AI models can generate insights using sensitive information without us even realizing it. This isn’t a hypothetical scenario, it’s happening right now in organizations of all sizes.

That’s why privacy cannot be treated as a once-a-year exercise or a checkbox in an audit report. It has to be embedded into daily operations, into the way data is accessed, used, and monitored. Organizations that get this right build systems that automatically enforce policies and flag unusual access - before it becomes a problem.

Beyond Compliance: Continuous Responsibility

The companies that succeed in protecting sensitive data are those that treat privacy as a continuous responsibility, not a regulatory obligation. They don’t wait for audits or compliance reviews to take action. Instead, they embed privacy into how data is accessed, shared, and used across the organization.

This approach delivers real results. It reduces risk by catching misconfigurations before they escalate. It allows teams to work confidently with data, knowing that sensitive information is protected. And it builds trust, both internally and with customers, because people know their data is being handled responsibly.

A New Mindset for Data Privacy Day

So this Data Privacy Day, I challenge organizations to think differently. The question is no longer “Do we know where our sensitive data is?” Instead, ask:

“Are we actively governing who can touch our data, every moment, everywhere it goes?”

In a world where cloud platforms, AI systems, and automated workflows touch nearly every piece of data, privacy isn’t a one-time project. It’s a continuous practice, a mindset, and a responsibility that needs to be enforced in real time.

Organizations that adopt this mindset don’t just meet compliance requirements, they gain a competitive advantage. They earn trust, strengthen security, and maintain a dynamic posture that adapts as systems and access needs evolve.

Because at the end of the day, true privacy isn’t something you achieve once a year. It’s something you maintain every day, in every process, with every decision. This Data Privacy Day, let’s commit to moving beyond discovery and audits, and make continuous data privacy the standard.

<blogcta-big>
