
Why Legacy Data Classification Tools Don't Work Well in the Cloud (But DSPM Does)

September 7, 2023 · 5 Min Read · Data Security

Data security teams are always trying to understand where their sensitive data is. Yet this goal has remained out of reach for a number of reasons.

The main difficulty is creating a continuously updated data catalog of all production and cloud data. Creating this catalog would involve:

  1. Identifying everyone in the organization with knowledge of any data store and visibility into its contents
  2. Connecting a data classification tool to each of these data stores
  3. Ensuring network connectivity by configuring network and security policies
  4. Confirming that business-critical production systems using each data source won't suffer degraded performance or availability

A process this complex requires a major investment of resources and long workflows, and it will still probably not provide the full coverage organizations are looking for. Many seemingly successful implementations of such solutions prove unreliable and too difficult to maintain after a short period of time.

Another pain point with legacy data classification solutions is accuracy. Data security professionals are all too aware of false positives (i.e. wrong classifications and data findings) and false negatives (i.e. sensitive data that goes unclassified and remains unknown). This is mainly due to two reasons:

  • Legacy classification solutions rely solely on patterns, such as regular expressions, to identify sensitive data, which falls short for both structured and unstructured data.
  • These solutions don't understand the business context around the data, such as how it is being used, by whom, and for what purposes.

Without the business context, security teams can’t get any actionable items to remove or protect sensitive data against data risks and security breaches.
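The pattern-only limitation is easy to demonstrate. The sketch below uses a hypothetical, deliberately naive SSN pattern of the kind legacy tools rely on; it both flags a non-SSN and misses a real one:

```python
import re

# A pattern-only "classifier" in the style of legacy tools: any 3-2-4 digit
# run is treated as a US Social Security number (illustrative pattern only).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

samples = {
    "real ssn":       "Employee SSN: 123-45-6789",
    "false positive": "Part number 555-12-3456 shipped",  # matches, but is not an SSN
    "false negative": "SSN 123 45 6789 on file",          # a real SSN the pattern misses
}

flagged = {name: bool(SSN_PATTERN.search(text)) for name, text in samples.items()}
```

Without context, the part number and the employee record look identical to the pattern, which is exactly why regex-only classification produces both kinds of errors.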

Lastly, there’s the reason behind high operational costs. Legacy data classification solutions were not built for the cloud, where each data read/write and network operation has a price tag.

The cloud also offers much more cost-efficient storage and advanced data services, which leads organizations to store far more data than they did before moving to the cloud. At the same time, public cloud providers offer a variety of cloud-native APIs and mechanisms that can greatly benefit a data classification and security solution: automated backups, cross-account federation, direct access to block storage, storage classes, compute instance types, and much more. Legacy data classification tools, not built for the cloud, ignore these benefits and differences entirely, making them an extremely expensive choice for cloud-native organizations.

DSPM: Built to Solve Data Classification in the Cloud 

These challenges have led to the growth of a new approach to securing cloud data: Data Security Posture Management, or DSPM. Sentra's DSPM provides full coverage and an up-to-date data catalog with classification of sensitive data, without any complex deployment or operational work. This is achieved through a cloud-native, agentless architecture that uses cloud-native APIs and mechanisms.

A good example of this approach is how Sentra's DSPM architecture leverages the public cloud mechanism of automated backups for compute instances, block storage, and more. This allows Sentra to securely run its robust discovery and classification technology within the customer's own environment, in any VPC or subscription/account of the customer's choice.

This offers a number of benefits:

  1. The organization does not need to change any existing infrastructure configuration, network policies, or security groups.
  2. There's no need to provide individual credentials for each data source in order for Sentra to discover and scan it.
  3. There is never a performance impact on compute-bound production workloads, such as virtual machines; Sentra's scanning never connects to those data stores via the network or application layers.

Another benefit of a DSPM built for the cloud is classification accuracy. Sentra's DSPM provides an unprecedented level of accuracy thanks to modern, cloud-native capabilities. This starts with advanced statistical relevance for structured data, enabling our classification engine to determine with high confidence that sensitive data exists within a specific column or field, without scanning every row in a large table.
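The column-sampling idea can be sketched in a few lines. This is a simplified illustration with an assumed email pattern, sample size, and confidence threshold, not Sentra's actual engine:

```python
import random
import re

EMAIL = re.compile(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}")

def column_is_sensitive(values, sample_size=100, threshold=0.8, seed=0):
    """Sample rows from a column and classify the whole column when the
    sampled match rate clears a confidence threshold (simplified sketch)."""
    rng = random.Random(seed)
    sample = rng.sample(values, min(sample_size, len(values)))
    match_rate = sum(bool(EMAIL.fullmatch(v)) for v in sample) / len(sample)
    return match_rate >= threshold, match_rate

# A million-row column: 100 sampled values are enough to classify it.
column = [f"user{i}@example.com" for i in range(1_000_000)]
sensitive, rate = column_is_sensitive(column)
```

A real engine would also account for sampling error and mixed-content columns, but the core saving is the same: classification cost scales with the sample, not the table.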

Sentra leverages even more advanced algorithms for key-value stores and document databases. For unstructured data, AI- and LLM-based algorithms unlock tremendous accuracy in detecting sensitive data types by understanding the context within the data itself. Lastly, the combination of data-centric and identity-centric security approaches provides greater context, letting Sentra's users know what actions to take to remediate the data risks surfaced by classification.

Here are two examples of how we apply this context:

1. Different Types of Databases

Personally Identifiable Information (PII) found in a database that only users from the Analytics team can access is often a privacy violation and a data risk. On the other hand, PII found in a database that only three production microservices can access is expected, though the data should still be isolated within a secure VPC.

2. Different Access Histories

Suppose 100 employees have access to a sensitive shadow data lake, but only 10 have actually accessed it in the last year. In this case, the solution would be to reduce permissions and implement stricter access controls. We'd also want to ensure that the data has the right retention policy, to reduce both risk and storage costs. Sentra's risk score prioritization engine takes multiple data layers into account, including data access permissions, activity, sensitivity, movement, and misconfigurations, giving enterprises greater visibility and control over their data risk management processes.

Regarding costs, Sentra's Data Security Posture Management (DSPM) solution uses innovative features that make its scanning and classification roughly two to three orders of magnitude more cost efficient than legacy solutions. The first is smart sampling: Sentra clusters multiple data units that share the same characteristics and, using intelligent sampling with statistical relevance, determines what sensitive data exists within these automatically grouped data assets. This is especially powerful for data lakes that often span dozens of petabytes, and it does not compromise the solution's coverage or accuracy.
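A back-of-envelope calculation shows where the orders of magnitude come from. All numbers below are invented for illustration, not Sentra's pricing or cluster counts:

```python
# Fully scanning a 10 PB data lake vs. scanning clustered samples.
full_scan_gb = 10 * 1024 * 1024            # 10 PB expressed in GB
cost_per_gb = 0.01                         # assumed per-GB read + compute cost
full_cost = full_scan_gb * cost_per_gb     # roughly $105,000 per full scan

# Smart sampling: ~1,000 clusters of similar objects, ~50 sampled 1 GB files each.
sampled_gb = 1_000 * 50 * 1
sampled_cost = sampled_gb * cost_per_gb    # $500 per scan

ratio = full_cost / sampled_cost           # ~210x, i.e. two orders of magnitude
```

The exact ratio depends on how many clusters the data forms and how large the samples must be, but as long as clusters are far fewer than objects, the saving compounds on every rescan.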

Second, Sentra’s modern architecture leverages the benefits of cloud ephemeral resources, such as snapshotting and ephemeral compute workloads with a cloud-native orchestration technology that leverages the elasticity and the scale of the cloud. Sentra balances its resource utilization with the needs of the customer's business, providing advanced scan settings that are built and designed for the cloud. This allows teams to optimize cost according to their business needs, such as determining the frequency and sampling of scans, among more advanced features.

To summarize:

  1. Given the current macroeconomic climate, CISOs should see DSPMs like Sentra as an opportunity to increase their security while minimizing costs
  2. DSPM solutions like Sentra bring important context-awareness to security teams and tools, enabling better risk management and prioritization by focusing on what's important
  3. Data is likely to remain the most important asset of every business as more organizations embrace the power of the cloud. A DSPM will therefore be a pivotal tool in realizing the true value of data while ensuring it is always secured
  4. Accuracy is key, and AI is an enabler for a good data classification tool


Yair brings a wealth of experience in cybersecurity and data product management. In his previous role, Yair led product management at Microsoft and Datadog. With a background as a member of the IDF's Unit 8200 for five years, he possesses over 18 years of expertise in enterprise software, security, data, and cloud computing. Yair has held senior product management positions at Datadog, Digital Asset, and Microsoft Azure Protection.


Latest Blog Posts

David Stuart
February 18, 2026 · 3 Min Read

Entity-Level vs. File-Level Data Classification: Effective DSPM Needs Both

Most security teams think of data classification as a single capability. A tool scans data, finds sensitive information, and labels it. Problem solved. In reality, modern data environments have made classification far more complex.

As organizations scale across cloud platforms, SaaS apps, data lakes, collaboration tools, and AI systems, security teams must answer two fundamentally different questions:

  1. What sensitive data exists inside this asset?
  2. What is this asset actually about?

These questions represent two distinct approaches:

  • Entity-level data classification
  • File-level (asset-level) data classification

A well-functioning Data Security Posture Management (DSPM) platform requires both.

What Is Entity-Level Data Classification?

Entity-level classification identifies specific sensitive data elements within structured and unstructured content. Instead of labeling an entire file as sensitive, it determines exactly which regulated entities are present and where they appear. These entities can include personal identifiers, financial account numbers, healthcare codes, credentials, digital identifiers, and other protected data types.

This approach provides precision at the field or token level. By detecting and validating individual data elements, security teams gain measurable visibility into exposure - including how many sensitive values exist, where they are located, and how they are used. That visibility enables targeted controls such as masking, redaction, tokenization, and DLP enforcement. In cloud and AI-driven environments, where risk is often tied to specific identifiers rather than document categories, this level of granularity is essential.

Examples of Entity-Level Detection

Entity-level classifiers detect atomic data elements such as:

  • Personal identifiers (names, emails, Social Security numbers)
  • Financial data (credit card numbers, IBANs, bank accounts)
  • Healthcare markers (diagnoses, ICD codes, treatment terms)
  • Credentials (API keys, tokens, private keys, passwords)
  • Digital identifiers (IP addresses, device IDs, user IDs)

This level of granularity enables precise policy enforcement and measurable risk assessment.

How Entity-Level Classification Works

High-quality entity detection is not just regex scanning. Effective systems combine multiple validation layers to reduce false positives and increase accuracy:

  • Deterministic patterns (regular expressions, format checks)
  • Checksum validation (e.g., Luhn algorithm for credit cards)
  • Keyword and proximity analysis
  • Dictionaries and structured reference tables
  • Natural Language Processing (NLP) with Named Entity Recognition
  • Machine learning models to suppress noise

This multi-signal approach ensures detection works reliably across messy, real-world data.
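A sketch of how the first three signals combine for credit card detection: a pattern finds candidates, the Luhn checksum rejects numbers that cannot be valid cards, and keyword proximity confirms context. The keyword list and window size here are illustrative assumptions:

```python
import re

def luhn_ok(number: str) -> bool:
    """Checksum validation: double every second digit from the right and
    check the total modulo 10 (the Luhn algorithm)."""
    digits = [int(d) for d in number[::-1]]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

CARD = re.compile(r"\b\d{16}\b")
KEYWORDS = ("card", "visa", "payment", "pan")  # hypothetical proximity keywords

def find_cards(text: str):
    """Pattern + checksum + keyword-proximity, a simplified multi-signal check."""
    hits = []
    for m in CARD.finditer(text):
        window = text[max(0, m.start() - 40):m.end() + 40].lower()
        if luhn_ok(m.group()) and any(k in window for k in KEYWORDS):
            hits.append(m.group())
    return hits
```

For example, `find_cards("Payment card 4111111111111111 on file")` accepts the well-known Visa test number, while a 16-digit tracking ID with an invalid checksum and no nearby keyword is rejected; each added signal strips away a class of false positives that the pattern alone would admit.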

When Entity-Level Classification Is Essential

Entity-level classification is essential when security controls depend on the presence of specific data elements rather than broad document categories. Many policies are triggered only when certain identifiers appear together, such as a Social Security number paired with a name, or when regulated financial or healthcare data exceeds defined thresholds. In these cases, security teams must accurately locate, validate, and quantify sensitive fields to enforce controls effectively.

This precision is also required for operational actions such as masking, redaction, tokenization, and DLP enforcement, where controls must be applied to exact values instead of entire files. In structured data environments like databases and warehouses, entity-level classification enables column- and table-level visibility, forming the basis for exposure measurement, risk scoring, and access governance decisions.

However, entity-level detection does not explain the broader business context of the data. A credit card number may appear in an invoice, a support ticket, a legal filing, or a breach report. While the identifier is the same, the surrounding context changes the associated risk and the appropriate response.

This is where file-level classification becomes necessary.

What Is File-Level (Asset-Level) Data Classification?

File-level classification determines the semantic meaning and business context of an entire data asset.

Instead of asking what sensitive values exist, it asks:

What kind of document or dataset is this? What is its business purpose?

Examples of File-Level Classification

File-level classifiers identify attributes such as:

  • Business domain (HR, Legal, Finance, Healthcare, IT)
  • Document type (NDA, invoice, payroll record, resume, contract)
  • Business purpose (compliance evidence, client matter, incident report)

This context is essential for appropriate governance, access control, and AI safety.

How File-Level Classification Works

File-level classification relies on semantic understanding, typically powered by:

  • Small and Large Language Models (SLMs/LLMs)
  • Vector embeddings for topic similarity
  • Confidence scoring and ensemble validation
  • Trainable models for organization-specific document types

This allows systems to classify documents even when sensitive entities are sparse, masked, or absent.

For example, an employment contract may contain limited PII but still require strict access controls because of its business context.
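The embedding-similarity idea can be sketched with a toy stand-in for a real model: here a bag-of-words vector replaces learned embeddings, and the domain "centroids" are invented keyword sets rather than averaged embeddings of labeled examples:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use SLM/LLM vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    return dot / ((sqrt(sum(v * v for v in a.values())) *
                   sqrt(sum(v * v for v in b.values()))) or 1.0)

# Hypothetical domain centroids (in practice: averaged labeled embeddings).
DOMAINS = {
    "HR":      embed("employee salary payroll benefits leave hire"),
    "Legal":   embed("agreement party liability clause jurisdiction nda"),
    "Finance": embed("invoice payment amount due account tax"),
}

def classify(document: str) -> str:
    """Assign the domain whose centroid is most similar to the document."""
    vec = embed(document)
    return max(DOMAINS, key=lambda d: cosine(vec, DOMAINS[d]))
```

Calling `classify("invoice for payment of amount due on account")` lands on "Finance" by topical similarity alone, with no regulated identifier present, which is exactly the gap file-level classification fills.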

When File-Level Classification Is Essential

File-level classification becomes essential when security decisions depend on business context rather than just the presence of sensitive strings. For example, enforcing domain-based access controls requires knowing whether a document belongs to HR, Legal, or Finance - not just whether it contains an email address or account number. The same applies to implementing least-privilege access models, where entire categories of documents may need tighter controls based on their purpose.

File-level classification also plays a critical role in retention policies and audit workflows, where governance rules are applied to document types such as contracts, payroll records, or compliance evidence. And as organizations adopt generative AI tools, semantic understanding becomes even more important for implementing AI governance guardrails, ensuring copilots don’t ingest sensitive HR files or privileged legal documents.

That said, file-level classification alone is not sufficient. While it can determine what a document is about, it does not precisely locate or quantify sensitive data within it. A document labeled "Finance" may or may not contain exposed credentials or an excessive concentration of regulated identifiers; these are risks that only entity-level detection can accurately measure.

Entity-Level vs. File-Level Classification: Key Differences

Entity-Level Classification              | File-Level Classification
Detects specific sensitive values        | Identifies document meaning and context
Enables masking, redaction, and DLP      | Enables context-aware governance
Works well for structured data           | Strong for unstructured documents
Provides precise risk signals            | Provides business intent and domain context
Lacks semantic understanding of purpose  | Lacks granular entity visibility

Each approach solves a different security problem. Relying on only one creates blind spots or false positives. Together, they form a powerful combination.

Why Using Only One Approach Creates Security Gaps

Entity-Only Approaches

Tools focused exclusively on entity detection can:

  • Flag isolated sensitive values without context
  • Generate high alert volumes
  • Miss business intent
  • Treat all instances of the same entity as equal risk

A payroll file and a legal complaint may both contain Social Security numbers — but they represent different governance needs.

File-Only Approaches

Tools focused only on semantic labeling can:

  • Identify that a document belongs to “Finance” or “HR”
  • Apply domain-based policies
  • Enable context-aware access

But they may miss:

  • Embedded credentials
  • Excessive concentrations of regulated identifiers
  • Toxic combinations of data types (e.g., PII + healthcare terms)

Without entity-level precision, risk scoring becomes guesswork.

How Effective DSPM Combines Both Layers

The real power of modern Data Security Posture Management (DSPM) emerges when entity-level and file-level classification operate together rather than in isolation. Each layer strengthens the other. Context can reinforce entity validation: for example, a dense concentration of financial identifiers helps confirm that a document truly belongs in the Finance domain or represents an invoice. At the same time, entity signals can refine context. If a file is semantically classified as an invoice, the system can apply tighter validation logic to account numbers, totals, and other financial fields, improving accuracy and reducing noise.

This combination also enables more intelligent policy enforcement. Instead of relying on brittle, one-dimensional rules, security teams can detect high-risk combinations of data. Personal identifiers appearing within a healthcare context may elevate regulatory exposure. Credentials embedded inside operational documents may signal immediate security risk. An unusually high concentration of identifiers in an externally shared HR file may indicate overexposure. These are nuanced risk patterns that neither entity-level nor file-level classification can reliably identify alone.

When both layers inform policy decisions, organizations can move toward true risk-based governance. Sensitivity is no longer determined solely by what specific data elements exist, nor solely by what category a document falls into, but by the intersection of the two. Risk is derived from both what is inside the data and what the data represents.
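A minimal sketch of dual-layer risk scoring, with invented entity weights, context multipliers, and exposure rules, shows how the two signals intersect:

```python
# Hypothetical weights: what is inside the data...
ENTITY_WEIGHTS = {"ssn": 5, "credit_card": 5, "credential": 8, "email": 1}
# ...and what the data represents (file-level context).
CONTEXT_MULTIPLIER = {"HR": 1.5, "Finance": 1.5, "Healthcare": 2.0, "Marketing": 1.0}

def risk_score(entity_counts: dict, context: str, externally_shared: bool) -> float:
    """Combine entity-level counts with file-level context and exposure."""
    base = sum(ENTITY_WEIGHTS.get(e, 1) * n for e, n in entity_counts.items())
    score = base * CONTEXT_MULTIPLIER.get(context, 1.0)
    if externally_shared:
        score *= 2  # exposure amplifies every other signal
    return score

# Same entities, different exposure -> very different risk.
internal_hr = risk_score({"ssn": 3}, "HR", externally_shared=False)  # 22.5
shared_hr   = risk_score({"ssn": 3}, "HR", externally_shared=True)   # 45.0
```

Real engines use far richer models, but the structure is the point: neither the entity counts nor the context label alone determines the score.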

This dual-layer approach reduces false positives, increases analyst trust, and enables more precise controls across cloud and SaaS environments. It also becomes essential for AI governance, where understanding both sensitive content and business context determines whether data is safe to expose to copilots or generative AI systems.

What to Look for in a DSPM Classification Engine

Not all DSPM platforms treat classification equally.

When evaluating solutions, security leaders should ask:

  • Does the platform classify and validate sensitive entities beyond basic regex?
  • Can it semantically identify document type and business domain?
  • Are entity-level and file-level signals tightly integrated?
  • Can policies reason across both layers simultaneously?
  • Does risk scoring incorporate both precision and context?

The goal is not simply to “classify data,” but to generate actionable, risk-aligned data intelligence.

The Bottom Line

Modern data estates are too complex for single-layer classification models. Entity-level classification provides precision, identifying exactly what sensitive data exists and where.

File-level classification provides context - understanding what the data is and why it exists.

Together, they enable accurate risk detection, effective policy enforcement, least-privilege access, and AI-safe governance. In today’s cloud-first and AI-driven environments, data security posture management must go beyond isolated detections or broad labels. It must understand both the contents of data and its meaning - at the same time.

That’s the new standard for data classification.

Ariel Rimon
Daniel Suissa
February 16, 2026 · 4 Min Read

How Modern Data Security Discovers Sensitive Data at Cloud Scale

Modern cloud environments contain vast amounts of data stored in object storage services such as Amazon S3, Google Cloud Storage, and Azure Blob Storage. In large organizations, a single data store can contain billions (or even tens of billions) of objects. In this reality, traditional approaches that rely on scanning every file to detect sensitive data quickly become impractical.

Full object-level inspection is expensive, slow, and difficult to sustain over time. It increases cloud costs, extends onboarding timelines, and often fails to keep pace with continuously changing data. As a result, modern data security platforms must adopt more intelligent techniques to build accurate data inventories and sensitivity models without scanning every object.

Why Object-Level Scanning Fails at Scale

Object storage systems expose data as individual objects, but treating each object as an independent unit of analysis does not reflect how data is actually created, stored, or used.

In large environments, scanning every object introduces several challenges:

  • Cost amplification from repeated content inspection at massive scale
  • Long time to actionable insights during the first scan
  • Operational bottlenecks that prevent continuous scanning
  • Diminishing returns, as many objects contain redundant or structurally identical data

The goal of data discovery is not exhaustive inspection, but rather accurate understanding of where sensitive data exists and how it is organized.

The Dataset as the Correct Unit of Analysis

Although cloud storage presents data as individual objects, most data is logically organized into datasets. These datasets often follow consistent structural patterns such as:

  • Time-based partitions
  • Application or service-specific logs
  • Data lake tables and exports
  • Periodic reports or snapshots

For example, the following objects are separate files but collectively represent a single dataset:

  logs/2026/01/01/app_events_001.json
  logs/2026/01/02/app_events_002.json
  logs/2026/01/03/app_events_003.json

While these objects differ by date, their structure, schema, and sensitivity characteristics are typically consistent. Treating them as a single dataset enables more accurate and scalable analysis.

Analyzing Storage Structure Without Reading Every File

Modern data discovery platforms begin by analyzing storage metadata and object structure, rather than file contents.

This includes examining:

  • Object paths and prefixes
  • Naming conventions and partition keys
  • Repeating directory patterns
  • Object counts and distribution

By identifying recurring patterns and natural boundaries in storage layouts, platforms can infer how objects relate to one another and where dataset boundaries exist. This analysis does not require reading object contents and can be performed efficiently at cloud scale.
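One simple grouping heuristic along these lines: collapse the variable parts of each path (dates, sequence numbers) so that objects from the same logical dataset share a key. The digit-collapsing rule below is an illustrative assumption, not a complete grouping algorithm:

```python
import re
from collections import defaultdict

def dataset_key(path: str) -> str:
    """Collapse digit runs so objects that differ only by date or
    sequence number map to the same dataset key (illustrative heuristic)."""
    return re.sub(r"\d+", "*", path)

objects = [
    "logs/2026/01/01/app_events_001.json",
    "logs/2026/01/02/app_events_002.json",
    "logs/2026/01/03/app_events_003.json",
    "reports/2026-01/summary.pdf",
]

datasets = defaultdict(list)
for obj in objects:
    datasets[dataset_key(obj)].append(obj)
```

Here the three daily log files collapse to one dataset key while the report forms its own, using only the object listing, no file contents were read.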

Configurable by Design

Sampling can be disabled for specific data sources, and the dataset grouping algorithm can be adjusted by the user. This allows teams to tailor the discovery process to their environment and needs.


Automatic Grouping into Dataset-Level Assets

Using structural analysis, objects are automatically grouped into dataset-level assets. Clustering algorithms identify related objects based on path similarity, partitioning schemes, and organizational patterns. This process requires no manual configuration and adapts as new objects are added. Once grouped, these datasets become the primary unit for further analysis, replacing object-by-object inspection with a more meaningful abstraction.

Representative Sampling for Sensitivity Inference

After grouping, sensitivity analysis is performed using representative sampling. Instead of inspecting every object, the platform selects a small, statistically meaningful subset of files from each dataset.

Sampling strategies account for factors such as:

  • Partition structure
  • File size and format
  • Schema variation within the dataset

By analyzing these samples, the platform can accurately infer the presence of sensitive data across the entire dataset. This approach preserves accuracy while dramatically reducing the amount of data that must be scanned.
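A stratified version of this sampling can be sketched as follows; the partition key, format extraction, and per-stratum sample size are simplifying assumptions:

```python
import random
from collections import defaultdict

def representative_sample(objects, per_group=2, seed=0):
    """Pick a few files per (partition, format) stratum instead of
    scanning everything (simplified representative sampling)."""
    strata = defaultdict(list)
    for path in objects:
        partition = path.rsplit("/", 1)[0]   # directory acts as partition key
        fmt = path.rsplit(".", 1)[-1]        # file extension acts as format
        strata[(partition, fmt)].append(path)
    rng = random.Random(seed)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_group, len(members))))
    return sample

# 1,000 objects across two daily partitions -> only 4 files get scanned.
objects = [f"logs/2026/01/{d:02d}/events_{i}.json" for d in (1, 2) for i in range(500)]
picked = representative_sample(objects)
```

Stratifying by partition and format ensures the sample reflects every structural variant of the dataset, not just the largest one.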

Handling Non-Standard Storage Layouts

In some environments, storage layouts may follow unconventional or highly customized naming schemes that automated grouping cannot fully interpret. In these cases, manual grouping provides additional precision. Security analysts can define logical dataset boundaries, often supported by LLM-assisted analysis to better understand complex or ambiguous structures. Once defined, the same sampling and inference mechanisms are applied, ensuring consistent sensitivity assessment even in edge cases.

Scalability, Cost, and Operational Impact

By combining structural analysis, grouping, and representative sampling, this approach enables:

  • Scalable data discovery across millions or billions of objects
  • Predictable and significantly reduced cloud scanning costs
  • Faster onboarding and continuous visibility as data changes
  • High confidence sensitivity models without exhaustive inspection

This model aligns with the realities of modern cloud environments, where data volume and velocity continue to increase.

From Discovery to Classification and Continuous Risk Management

Dataset-level asset discovery forms the foundation for scalable classification, access governance, and risk detection. Once assets are defined at the dataset level, classification becomes more accurate and easier to maintain over time. This enables downstream use cases such as identifying over-permissioned access, detecting risky data exposure, and managing AI-driven data access patterns.

Applying These Principles in Practice

Platforms like Sentra apply these principles to help organizations discover, classify, and govern sensitive data at cloud scale - without relying on full object-level scans. By focusing on dataset-level discovery and intelligent sampling, Sentra enables continuous visibility into sensitive data while keeping costs and operational overhead under control.


Elie Perelman
February 13, 2026 · 3 Min Read

Best Data Access Governance Tools

Managing access to sensitive information is becoming one of the most critical challenges for organizations in 2026. As data sprawls across cloud platforms, SaaS applications, and on-premises systems, enterprises face compliance violations, security breaches, and operational inefficiencies. Data Access Governance Tools provide automated discovery, classification, and access control capabilities that ensure only authorized users interact with sensitive data. This article examines the leading platforms, essential features, and implementation strategies for effective data access governance.


The market offers several categories of solutions, each addressing different aspects of data access governance. Enterprise platforms like Collibra, Informatica Cloud Data Governance, and Atlan deliver comprehensive metadata management, automated workflows, and detailed data lineage tracking across complex data estates.

Specialized Data Access Governance (DAG) platforms focus on permissions and entitlements. Varonis, Immuta, and Securiti provide continuous permission mapping, risk analytics, and automated access reviews. Varonis identifies toxic combinations by discovering and classifying sensitive data, then correlating classifications with access controls to flag scenarios where high-sensitivity files have overly broad permissions.

User Reviews and Feedback

Varonis

  • Detailed file access analysis and real-time protection capabilities
  • Excellent at identifying toxic permission combinations
  • Learning curve during initial implementation

BigID

  • AI-powered classification with over 95% accuracy
  • Handles both structured and unstructured data effectively
  • Strong privacy automation features
  • Technical support response times could be improved

OneTrust

  • User-friendly interface and comprehensive privacy management
  • Deep integration into compliance frameworks
  • Robust feature set requires organizational support to fully leverage

Sentra

  • Effective data discovery and automation capabilities (January 2026 reviews)
  • Significantly enhances security posture and streamlines audit processes
  • Reduces cloud storage costs by approximately 20%

Critical Capabilities for Modern Data Access Governance

Effective platforms must deliver several core capabilities to address today's challenges:

Unified Visibility

Tools need comprehensive visibility across IaaS, PaaS, SaaS, and on-premises environments without moving data from its original location. This "in-environment" architecture ensures data never leaves organizational control while enabling complete governance.

Dynamic Data Movement Tracking

Advanced platforms monitor when sensitive assets flow between regions, migrate from production to development, or enter AI pipelines. This goes beyond static location mapping to provide real-time visibility into data transformations and transfers.

Automated Classification

Modern tools leverage AI and machine learning to identify sensitive data with high accuracy, then apply appropriate tags that drive downstream policy enforcement. Deep integration with native cloud security tools, particularly Microsoft Purview, enables seamless policy enforcement.

Toxic Combination Detection

Platforms must correlate data sensitivity with access permissions to identify scenarios where highly sensitive information has broad or misconfigured controls. Once detected, systems should provide remediation guidance or trigger automated actions.
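A minimal sketch of such a correlation check, with invented asset records and an assumed access-breadth threshold:

```python
# Hypothetical inventory joining classification output with permission data.
assets = [
    {"name": "payroll.xlsx",     "sensitivity": "high",
     "principals_with_access": 842, "accessed_last_90d": 6},
    {"name": "launch_plan.docx", "sensitivity": "low",
     "principals_with_access": 900, "accessed_last_90d": 300},
]

def toxic(asset, max_principals=50):
    """Flag high-sensitivity assets whose access list exceeds a threshold."""
    return (asset["sensitivity"] == "high"
            and asset["principals_with_access"] > max_principals)

flagged = [a["name"] for a in assets if toxic(a)]
```

The payroll file is flagged even though the launch plan has broader access, because risk comes from the combination of sensitivity and exposure, not either alone; the access-history field also hints at the remediation (hundreds of unused grants to revoke).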

Infrastructure and Integration Considerations

Deployment architecture significantly impacts governance effectiveness. Agentless solutions connecting via cloud provider APIs offer zero impact on production latency and simplified deployment. Some platforms use hybrid approaches combining agentless scanning with lightweight collectors when additional visibility is required.

Integration Area    | Key Considerations                                                   | Example Capabilities
Microsoft Ecosystem | Native integration with Microsoft Purview, Microsoft 365, and Azure  | Varonis monitors Copilot AI prompts and enforces consistent policies
Data Platforms      | Direct remediation within platforms such as Snowflake                | BigID automatically enforces dynamic data masking and tagging
Cloud Providers     | API-based scanning without performance overhead                      | Sentra’s agentless architecture scans environments without deploying agents

Open Source Data Governance Tools

Organizations seeking cost-effective or customizable solutions can leverage open source tools. Apache Atlas, originally designed for Hadoop environments, provides mature governance capabilities that, when integrated with Apache Ranger, support tag-based policy management for flexible access control.

DataHub, developed at LinkedIn, features AI-powered metadata ingestion and role-based access control. OpenMetadata offers a unified metadata platform consolidating information across data sources with data lineage tracking and customized workflows.

While open source tools provide foundational capabilities (metadata cataloging, data lineage tracking, and basic access controls), achieving enterprise-grade governance typically requires additional customization, integration work, and infrastructure investment. The software is free, but self-hosting means accounting for the operational costs and expertise needed to maintain these platforms.

Understanding the Gartner Magic Quadrant for Data Governance Tools

Gartner's Magic Quadrant assesses vendors on ability to execute and completeness of vision. For data access governance, Gartner examines how effectively platforms define, automate, and enforce policies controlling user access to data.

