
Use Redshift Data Scrambling for Additional Data Protection

May 3, 2023
8 Min Read

According to IBM, the average cost of a data breach in the United States reached $9.44 million in 2022. It is now more important than ever for organizations to protect confidential information. Data scrambling, which can add an extra layer of security to data, is one approach to accomplishing this.

In this post, we'll examine the value of data protection, look at the potential financial consequences of data breaches, and discuss how Redshift data scrambling can help protect private information.

The Importance of Data Protection

Data protection is essential to safeguard sensitive data from unauthorized access. Identity theft, financial fraud, and other serious consequences are all possible results of a data breach. Data protection is also crucial for compliance: in several sectors, including government, banking, and healthcare, sensitive data must be protected by law, and failure to abide by these regulations can result in heavy fines, legal problems, and loss of business.

Attackers employ many techniques, including phishing, malware, and insider threats, to gain access to confidential information. For example, a phishing attack may lead to the theft of login credentials, and malware may infect a system, opening the door for additional attacks and data theft.

So how can you protect yourself against these attacks and minimize your data attack surface?

What is Redshift Data Masking?

Redshift data masking is a technique used to protect sensitive data in Amazon Redshift, a cloud-based data warehousing and analytics service. It involves replacing sensitive data with fictitious but realistic values to protect it from unauthorized access or exposure. Used alongside other security measures, such as access control and encryption, Redshift data masking can form part of a comprehensive data protection plan.
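
As a simple illustration (a hypothetical query; the employee table and ssn column reappear in the examples later in this post), a masking expression might replace all but the last four digits of a social security number with fixed characters:

SELECT 'XXX-XX-' || RIGHT(ssn, 4) AS masked_ssn
FROM employee;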


What is Redshift Data Scrambling?

Redshift data scrambling protects confidential information in a Redshift database by altering original data values using algorithms or formulas, creating unrecognizable data sets. This method is beneficial when sharing sensitive data with third parties or using it for testing, development, or analysis, ensuring privacy and security while enhancing usability. 

The technique is highly customizable, allowing organizations to select the desired level of protection while maintaining data usability. Redshift data scrambling is cost-effective, requiring no additional hardware or software investments, providing an attractive, low-cost solution for organizations aiming to improve cloud data security.

Data Masking vs. Data Scrambling

Data masking involves replacing sensitive data with fictitious but realistic values. Data scrambling, on the other hand, involves transforming the original data values with an algorithm or formula to generate a new set of values.

In some cases, data scrambling can be used as part of data masking techniques. For instance, sensitive data such as credit card numbers can be scrambled before being masked to enhance data protection further.
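
For example, a credit card number could be scrambled first and then masked, so that even the exposed tail digits are not the real ones. This is a sketch assuming a scramble UDF that also scrambles digits (one is defined later in this post, though the shift-based example there handles only letters); the payments table and credit_card column are hypothetical:

SELECT 'XXXX-XXXX-XXXX-' || RIGHT(scramble(credit_card), 4) AS protected_card
FROM payments;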

Setting up Redshift Data Scrambling

With an understanding of Redshift and data scrambling in place, we can now look at how to set it up. Enabling data scrambling in Redshift involves several steps.

To scramble data in Redshift, SQL queries invoke built-in or user-defined functions, which use a blend of cryptographic techniques and randomization to scramble the data.

The following steps walk through an example setup:

Step 1: Create a new Redshift cluster

Create a new Redshift cluster or use an existing cluster if available. 


Step 2: Define a scrambling key

Define a scrambling key that will be used to scramble the sensitive data.

 
SET session_vars.scrambling_key = 'MyScramblingKey';

In this code snippet, we define a scrambling key by setting a session context variable named session_vars.scrambling_key (Redshift session variables require a two-part name) to the value MyScramblingKey. This key will be used to scramble the sensitive data; note that a Python UDF cannot read session settings directly, which is why the same key value is repeated inside the function body in the next step.
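
To confirm the value later in the same session, Redshift's current_setting function can read it back (a quick sanity check, not part of the scrambling itself):

SELECT current_setting('session_vars.scrambling_key');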

Step 3: Create a user-defined function (UDF)

Create a user-defined function in Redshift that will be used to scramble the sensitive data. 


CREATE FUNCTION scramble(input_string VARCHAR)
RETURNS VARCHAR
STABLE
AS $$
    # Example scrambling logic only -- substitute your own algorithm.
    # This shifts each letter by an offset derived from the key.
    if input_string is None:
        return None
    scramble_key = 'MyScramblingKey'
    offset = sum(ord(c) for c in scramble_key) % 26
    result = []
    for ch in input_string:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            result.append(chr((ord(ch) - base + offset) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)
$$ LANGUAGE plpythonu;

Here, we create a UDF named scramble that takes a string input and returns the scrambled output. Redshift scalar UDFs are written in Python (plpythonu) or SQL rather than plpgsql. The function is declared STABLE, meaning it returns the same result for the same input, which is important for data scrambling. The shift-based logic above is only a placeholder; you will need to substitute your own scrambling logic.

Step 4: Apply the UDF to sensitive columns

Apply the UDF to the sensitive columns in the database that need to be scrambled.


UPDATE employee SET ssn = scramble(ssn);

Here, we apply the scramble UDF to a column named ssn in a table named employee. The UPDATE statement calls the scramble UDF and overwrites the values in the ssn column with their scrambled versions.

Step 5: Test and validate the scrambled data

Test and validate the scrambled data to ensure that it is unreadable and unusable by unauthorized parties.


SELECT ssn, scramble(ssn) AS scrambled_ssn
FROM employee;

In this snippet, we run a SELECT statement to retrieve the ssn column alongside the corresponding value produced by the scramble UDF. Comparing the two confirms that the scrambling works as expected; run this check before applying the UPDATE in Step 4, since afterwards the original values are no longer available.

Step 6: Monitor and maintain the scrambled data

To monitor and maintain the scrambled data, regularly check the sensitive columns to ensure that they are still scrambled and that there are no vulnerabilities or breaches. Also maintain the scrambling key and UDF to keep them up to date and effective.

Different Options for Scrambling Data in Redshift

Selecting a data scrambling technique involves balancing security levels, data sensitivity, and application requirements. Various general algorithms exist, each with unique pros and cons. To scramble data in Amazon Redshift, you can use the following Python code samples in conjunction with a library like psycopg2 to interact with your Redshift cluster. Before executing the code samples, you will need to install the psycopg2 library:


pip install psycopg2

Random

Utilizing a random number generator, the Random option quickly secures data, although its susceptibility to reverse engineering limits its robustness for long-term protection.


import random
import string
import psycopg2

def random_scramble(data):
    # Replace every character with a random letter or digit
    return ''.join(random.choice(string.ascii_letters + string.digits) for _ in data)

# Connect to your Redshift cluster
conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname',
                        user='your_user', password='your_password')
cursor = conn.cursor()

# Fetch data from your table
cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

# Pair each scrambled value with the original for the WHERE clause
params = [(random_scramble(row[0]), row[0]) for row in rows]

# Update the data in the table
cursor.executemany(
    "UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;",
    params)
conn.commit()

# Close the connection
cursor.close()
conn.close()

Shuffle

The Shuffle option enhances security by rearranging data characters. However, it remains prone to brute-force attacks, despite being harder to reverse-engineer.


import random
import psycopg2

def shuffle_scramble(data):
    # Randomly reorder the characters of the input string
    data_list = list(data)
    random.shuffle(data_list)
    return ''.join(data_list)

conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname',
                        user='your_user', password='your_password')
cursor = conn.cursor()

cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

# Pair each shuffled value with the original for the WHERE clause
params = [(shuffle_scramble(row[0]), row[0]) for row in rows]

cursor.executemany(
    "UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;",
    params)
conn.commit()

cursor.close()
conn.close()

Reversible

By scrambling characters in a way that can be reversed with a key, the Reversible method poses a greater challenge to attackers but is still vulnerable to brute-force attacks. We'll use the Caesar cipher as an example.


import psycopg2

def caesar_cipher(data, key):
    # Shift each letter by `key` positions, leaving other characters intact
    encrypted = ""
    for char in data:
        if char.isalpha():
            shift = key % 26
            if char.islower():
                encrypted += chr((ord(char) - 97 + shift) % 26 + 97)
            else:
                encrypted += chr((ord(char) - 65 + shift) % 26 + 65)
        else:
            encrypted += char
    return encrypted

conn = psycopg2.connect(host='your_host', port='your_port', dbname='your_dbname',
                        user='your_user', password='your_password')
cursor = conn.cursor()

cursor.execute("SELECT sensitive_column FROM your_table;")
rows = cursor.fetchall()

key = 5
# Pair each encrypted value with the original for the WHERE clause
params = [(caesar_cipher(row[0], key), row[0]) for row in rows]
cursor.executemany(
    "UPDATE your_table SET sensitive_column = %s WHERE sensitive_column = %s;",
    params)
conn.commit()

cursor.close()
conn.close()

Custom

The Custom option enables users to create tailor-made algorithms to resist specific attack types, potentially offering superior security. However, the development and implementation of custom algorithms demand greater time and expertise.
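
As a hedged sketch of what a custom option might look like (the key and logic are illustrative, and plpythonu is the same Python UDF language used in Step 3), a keyed substitution cipher can be packaged as a Redshift UDF:

CREATE FUNCTION custom_scramble(input_string VARCHAR)
RETURNS VARCHAR
STABLE
AS $$
    # Derive a deterministic permutation of the alphabet from a secret key,
    # then substitute each lowercase letter accordingly; other characters
    # pass through unchanged.
    import random
    if input_string is None:
        return None
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    rng = random.Random('MyScramblingKey')  # the key seeds the permutation
    permuted = list(alphabet)
    rng.shuffle(permuted)
    table = dict(zip(alphabet, permuted))
    return ''.join(table.get(ch, ch) for ch in input_string)
$$ LANGUAGE plpythonu;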

Best Practices for Using Redshift Data Scrambling

There are several best practices that should be followed when using Redshift Data Scrambling to ensure maximum protection:

Use Unique Keys for Each Table

To limit the damage if one key is compromised, each table should have its own scrambling key. Following the session-variable convention from Step 2 (the variable names here are illustrative), you can define one key per table, and store the actual key values in a secrets manager rather than in SQL scripts:

SET session_vars.employee_key = 'KeyForEmployeeTable';
SET session_vars.orders_key = 'KeyForOrdersTable';

Encrypt Sensitive Data Fields 

Sensitive data fields such as credit card numbers and social security numbers should be encrypted to provide an additional layer of security. Amazon Redshift does not offer a built-in column-level ENCRYPT function, so field-level protection is typically applied client-side before the data is loaded, or with a UDF such as the scramble function above, while cluster-level encryption (covered next) protects the data at rest.

Use Strong Encryption Algorithms

Strong encryption algorithms such as AES-256 should be used to provide the strongest protection. Redshift encrypts data at rest with AES-256 when encryption is enabled at the cluster level, and uses SSL/TLS for data in transit; there is no per-column ENCRYPT clause in CREATE TABLE. For example, you can create an encrypted cluster with the AWS CLI:

aws redshift create-cluster \
    --cluster-identifier my-encrypted-cluster \
    --cluster-type single-node \
    --node-type ra3.xlplus \
    --master-username admin \
    --master-user-password 'YourPassword1' \
    --encrypted \
    --kms-key-id your_kms_key_id

Control Access to Encryption Keys 

Access to encryption keys should be restricted to authorized personnel to prevent unauthorized access to sensitive data. You can achieve this by using AWS KMS (Key Management Service) to manage your encryption keys. Here's an example of how to grant limited access to an encryption key using KMS in Python:


import boto3

kms = boto3.client('kms')

key_id = 'your_key_id_here'
grantee_principal = 'arn:aws:iam::123456789012:user/jane'

# Grant the specified principal permission to use this key for decryption only
response = kms.create_grant(
    KeyId=key_id,
    GranteePrincipal=grantee_principal,
    Operations=['Decrypt']
)

print(response)

Regularly Rotate Encryption Keys 

Regular rotation of encryption keys ensures that a compromised key does not provide indefinite access to sensitive data. In AWS KMS, automatic annual rotation is enabled per key (rather than through the key policy). Here's how to turn it on using the AWS CLI:

aws kms enable-key-rotation --key-id your_key_id_here

Turn On Logging

To track user access to sensitive data and identify unwanted access, logging must be enabled. When you activate user activity logging in Amazon Redshift, all SQL commands executed on your cluster are logged, including queries that access sensitive data and data-scrambling operations. You can then examine these logs for unusual access patterns or suspicious activity.

User activity logging is controlled by the enable_user_activity_logging parameter in the cluster's parameter group (it also requires audit logging to be enabled on the cluster), and it is set through the AWS CLI or console rather than SQL. For example:

aws redshift modify-cluster-parameter-group \
    --parameter-group-name your_parameter_group \
    --parameters ParameterName=enable_user_activity_logging,ParameterValue=true

Once query logging has been enabled, the stl_query system table can be used to retrieve the logs. For instance, the SQL query shown below (using the employee table from the earlier examples) displays recent queries that touched a certain table:
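
SELECT query, userid, starttime, TRIM(querytxt) AS query_text
FROM stl_query
WHERE querytxt ILIKE '%employee%'
ORDER BY starttime DESC;

Note that STL tables retain only a few days of log history, so configure audit logging to Amazon S3 if you need to keep the logs longer.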

Monitor Performance 

Data scrambling is often a resource-intensive practice, so it's good to monitor CPU usage, memory usage, and disk I/O to ensure your cluster isn't being overloaded. In Redshift, you can use the svl_query_summary and svl_query_report system views to monitor query performance. You can also use Amazon CloudWatch to monitor metrics such as CPU utilization and disk space.
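
For example, a check along these lines (the query ID is a placeholder) shows whether a long-running scrambling statement spilled to disk:

SELECT query, label, rows, workmem, is_diskbased
FROM svl_query_summary
WHERE query = 12345  -- replace with the ID of your scrambling query
ORDER BY workmem DESC;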


Establishing Backup and Disaster Recovery

In order to prevent data loss in the case of a disaster, backup and disaster recovery mechanisms should be put in place. Amazon Redshift offers several backup and recovery methods, including automated backups and manual snapshots. By default, automated snapshots are taken roughly every eight hours or after about 5 GB per node of data changes, whichever comes first.

Moreover, you can always take a manual snapshot of your cluster. In the case of a failure or disaster, your cluster can be restored from these backups and snapshots. Snapshots are created through the AWS console or CLI rather than SQL; for example:

aws redshift create-cluster-snapshot \
    --cluster-identifier my-cluster \
    --snapshot-identifier my-manual-snapshot

To restore a snapshot, you create a new cluster from it:

aws redshift restore-from-cluster-snapshot \
    --cluster-identifier new-cluster-name \
    --snapshot-identifier my-manual-snapshot

Frequent Review and Updates

To ensure that data scrambling procedures remain effective and up-to-date with the latest security requirements, it is crucial to consistently review and update them. This process should include examining backup and recovery procedures, encryption techniques, and access controls.

In Amazon Redshift, you can assess access controls by inspecting users, groups, and their associated permissions in system catalog tables such as pg_user and pg_group. It is essential to confirm that only authorized individuals have access to sensitive information.
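
For instance, a quick check of which users hold superuser or database-creation privileges (a minimal sketch; extend it to group membership as needed):

SELECT usename, usesuper, usecreatedb
FROM pg_user
ORDER BY usename;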

To review column-level settings, you can use system catalog tables such as pg_table_def, which shows the data type and compression encoding of each column in your tables. Ensure that sensitive data fields are protected with robust methods, such as AES-256 encryption at the cluster level.
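
For example, assuming the employee table from earlier (note that pg_table_def only lists tables in schemas on your search_path):

SELECT "column", type, encoding
FROM pg_table_def
WHERE tablename = 'employee';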

The AWS CLI commands aws backup list-backup-plans and aws backup list-backup-vaults let you review your backup plans and vaults and evaluate backup and recovery procedures. Make sure your backup and recovery procedures are properly configured and up to date.

Decrypting Data in Redshift

There are different options for decrypting data, depending on the encryption method used and the tools available. The decryption process mirrors encryption, and usually a custom UDF is used to decrypt the data. Let's look at one example: decrypting data scrambled with a substitution cipher.

Step 1: Create a UDF with decryption logic for substitution


CREATE FUNCTION decrypt_substitution(ciphertext varchar) RETURNS varchar
IMMUTABLE AS $$
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    substitution = 'ijklmnopqrstuvwxyzabcdefgh'
    plaintext = ''
    for ch in ciphertext:
        # Map each substituted character back to its original alphabet position
        index = substitution.find(ch)
        if index == -1:
            # Characters outside the substitution alphabet pass through unchanged
            plaintext += ch
        else:
            plaintext += alphabet[index]
    return plaintext
$$ LANGUAGE plpythonu;

Step 2: Move the data back after truncating and applying the decryption function


TRUNCATE original_table;
INSERT INTO original_table (column1, decrypted_column2, column3)
SELECT column1, decrypt_substitution(encrypted_column2), column3
FROM temp_table;

In this example, encrypted_column2 is the encrypted version of column2, previously copied into temp_table. The decrypt_substitution function is applied to encrypted_column2, and the result is inserted into decrypted_column2 in original_table. Replace column1, column2, and column3 with your actual column names, and adjust the INSERT INTO statement accordingly if your table has more or fewer columns.

Conclusion

Redshift data scrambling is an effective tool for additional data protection and should be considered as part of an organization's overall data security strategy. In this blog post, we looked at the importance of data protection and how it can be integrated effectively into the data warehouse. We then covered the difference between data scrambling and data masking before diving into how to set up Redshift data scrambling.

Once you become accustomed to Redshift data scrambling, you can build on it with the different scrambling options and best practices covered above, including encryption, logging, and performance monitoring. By adhering to these recommendations and adopting an efficient strategy, organizations can improve their data security posture management (DSPM) and reduce the risk of breaches.
