10 Best Practices Essential for Your Data Loss Prevention (DLP) Policy

Every organization, regardless of size or industry, needs a data loss prevention (DLP) strategy to prevent data from being improperly accessed or deleted. The strategy should focus on the protection of valuable, sensitive or regulated data, such as medical records, financial data and intellectual property. DLP typically involves both technologies and policies. For example, common techniques include configuring user workstations to block the use of USB devices and having formal policies regarding sharing confidential data via email.

For more comprehensive protection, many organizations deploy a dedicated data loss prevention system to help them discover where sensitive data resides, monitor how it is used and block risky activity before data leaves the organization.

Creating a data loss prevention policy

To create your data loss prevention policy, you need to determine the level of protection required. For example, is your goal to spot unauthorized access attempts or to monitor all user actions?

Basic parameters

Start by defining the basic parameters of your policy, including which data needs protection, where it resides and how it may be used.

Data states

As you shape your policy, remember to consider data in all its states: data at rest (stored in databases, file shares, endpoints and backups), data in motion (traveling across the network or leaving the organization via email, web uploads and other channels) and data in use (being actively accessed and processed by applications and users).

Apply this knowledge to help define the data flows in your organization and the permitted transmission paths for different types of documents. Create rules that specify the conditions for the processing, modification, copying, printing and other use of this data. Be sure to include business processes performed within applications and programs that access confidential data.

Legal and related concerns

Be sure to evaluate the potential legal ramifications of your DLP policy. In particular, recording employees’ actions on video could be seen as a violation of their constitutional rights, and false alerts from your DLP system can generate conflicts with employees whose legitimate actions are flagged as suspicious. Options for addressing these concerns might include modifying employment agreements and training employees about security policies.

How DLP solutions work

Once you have created a DLP policy on paper, you can focus on configuring appropriate policies in your DLP system. Typically, a DLP system enforces a set of specific rules. Each rule consists of a condition and the action to be taken when that condition is met. The rules are ranked by priority, and the system processes them in that order. Some solutions also include machine learning technologies that generate or refine rules.

For example, the process might proceed like this:
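
As a minimal sketch, not tied to any particular product: suppose each rule pairs a condition with an action, and the engine checks an intercepted event against the rules in priority order, applying the first one that matches. The rule names, channels and fields below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    priority: int                      # lower number = evaluated first
    condition: Callable[[dict], bool]  # returns True if the rule applies to the event
    action: str                        # e.g. "block", "alert", "allow"

# Hypothetical rules; the conditions and channels are illustrative only.
rules = [
    Rule("Block card data over email", 10,
         lambda e: e.get("channel") == "email" and "card_number" in e.get("matches", []), "block"),
    Rule("Alert on large web uploads", 20,
         lambda e: e.get("channel") == "web" and e.get("size_mb", 0) > 100, "alert"),
    Rule("Default allow", 999, lambda e: True, "allow"),
]

def evaluate(event: dict) -> str:
    """Process rules in priority order and return the action of the first match."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.condition(event):
            return rule.action
    return "allow"

print(evaluate({"channel": "email", "matches": ["card_number"], "size_mb": 2}))  # prints "block"
```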

Detection techniques

The core functionality of a DLP system is detecting confidential information in a data stream. Different systems use different detection methods, including keyword and dictionary matching, regular expressions and other pattern matching, exact data matching against structured records, document fingerprinting, and statistical or machine learning analysis.

Naturally, accuracy is critical. False negatives (failure to spot information that is actually sensitive) can lead to undetected leaks. False positives (alerts on data that isn't actually sensitive) waste the security team's resources and lead to conflict with users falsely accused of improper behavior. Therefore, you should look for a DLP solution that minimizes both false negatives and false positives.
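
As an illustration of that trade-off, here is a sketch (in Python, with a deliberately crude card-number pattern) of how pattern matching alone produces false positives and how a validation step such as a Luhn checksum filters many of them out:

```python
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude 13-16 digit pattern

def luhn_valid(number: str) -> bool:
    """Luhn checksum: weeds out random digit strings that only look like card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

text = "Invoice 4111 1111 1111 1111, tracking id 1234 5678 9012 3456"
for match in CARD_PATTERN.finditer(text):
    candidate = match.group()
    print(candidate, "-> sensitive" if luhn_valid(candidate) else "-> likely false positive")
```

Here the first number passes the checksum and is flagged, while the tracking number that merely looks like a card is discarded.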

Data Loss Prevention Best Practices

Data loss prevention (DLP) and auditing techniques should be used to continuously enforce data usage policies. The goal is to know how data is actually being used, where it is going or has gone, and whether that use complies with regulations and standards such as the GDPR. When a suspicious event is detected, real-time notifications should be sent to administrators so they can investigate. Violators should face the consequences defined in the data security policy.

The following data loss prevention best practices will help you protect your sensitive data from internal and external threats:

1. Identify and classify sensitive data.

To protect data effectively, you need to know exactly what types of data you have. Data discovery technology will scan your data repositories and report on the findings, giving you visibility into what content you need to protect. Data discovery engines usually use regular expressions for their searches; they are very flexible but quite complicated to create and fine-tune.
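
A very rough sketch of what such an engine does under the hood, using a couple of simple, hypothetical patterns and a hypothetical share path (production tools ship with large, tuned pattern libraries and understand many file formats):

```python
import os
import re

# Hypothetical patterns; real discovery engines ship with extensive, tuned pattern sets.
PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def discover(root: str) -> list:
    """Walk a repository and report which files contain which types of sensitive content."""
    findings = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    content = f.read()
            except OSError:
                continue  # unreadable file; skip it
            for label, pattern in PATTERNS.items():
                if pattern.search(content):
                    findings.append((path, label))
    return findings

for path, label in discover(r"\\fileserver\shared"):  # hypothetical file share
    print(f"{path}: contains {label}")
```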

Using data discovery and data classification technology helps you control user data access and avoid storing sensitive data in insecure locations, thus reducing the risk of data leaks and data loss. All critical or sensitive data should be clearly labeled with a digital signature that denotes its classification, so you can protect it in accordance with its value to the organization. Third-party tools, such as Netwrix Data Classification, can make data discovery and classification easier and more accurate.

As data is created, modified, stored or transmitted, the classification can be updated. However, controls should be in place to prevent users from falsifying classification levels. For example, only privileged users should be able to downgrade the classification of data.
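
One way to express that control in code, as a sketch with hypothetical role and label names (in a real deployment this logic would live in the classification or DLP tool itself):

```python
# Classification levels in ascending order of sensitivity (hypothetical labels).
LEVELS = ["Public", "Internal", "Confidential", "Restricted"]

def can_relabel(user_roles: set, current: str, proposed: str) -> bool:
    """Anyone may raise a label; only privileged users may lower one."""
    if LEVELS.index(proposed) >= LEVELS.index(current):
        return True  # upgrade or no change
    return "classification-admin" in user_roles  # downgrade requires a privileged role

print(can_relabel({"employee"}, "Confidential", "Restricted"))            # True
print(can_relabel({"employee"}, "Confidential", "Internal"))              # False
print(can_relabel({"classification-admin"}, "Confidential", "Internal"))  # True
```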

Follow these guidelines to create a strong data classification policy. And don’t forget to perform data discovery and classification as part of your IT risk assessment process.

Access control lists

An access control list (ACL) is a list of who can access what resource and at what level. It can be an internal part of an operating system or application. For example, a custom application might have an ACL that lists which users have what permissions in that system.

ACLs can be based on whitelists or blacklists. A whitelist is a list of items that are allowed, such as a list of websites that users are allowed to visit using company computers or a list of third-party software solutions that are authorized to be installed on company computers. Blacklists are lists of things that are prohibited, such as specific websites that employees are not permitted to visit or software that is forbidden to be installed on client computers.

Whitelists are used more commonly, and they are often configured at the file system level. For example, in Microsoft Windows, you can configure NTFS permissions and build NTFS access control lists from them. You can find more information about how to properly configure NTFS permissions in this list of NTFS permissions management best practices. Remember that access controls should also be implemented in every application that offers role-based access control (RBAC); examples include Active Directory groups and delegation.
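
To review who can touch a sensitive share, you can dump its ACLs and look for overly broad entries. Below is a sketch that calls the built-in Windows icacls utility from Python against a hypothetical folder; PowerShell's Get-Acl cmdlet would work just as well:

```python
import subprocess

SENSITIVE_PATH = r"D:\Finance"  # hypothetical folder holding regulated data

# icacls prints one access control entry per line, e.g. "BUILTIN\Users:(OI)(CI)(RX)"
result = subprocess.run(["icacls", SENSITIVE_PATH], capture_output=True, text=True)

for line in result.stdout.splitlines():
    # Flag entries that grant broad groups full (F) or modify (M) rights for review.
    if ("Everyone" in line or "Authenticated Users" in line) and ("(F)" in line or "(M)" in line):
        print("Review this entry:", line.strip())
```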

2. Use data encryption.

All critical business data should be encrypted while at rest or in transit. Portable devices should use encrypted disk solutions if they will hold any important data.

Encrypting the hard drives of computers and laptops will help avoid the loss of critical information even if attackers gain access to the device. The most basic way to encrypt data on your Windows systems is Encrypting File System (EFS) technology. When an authorized user opens an encrypted file, EFS decrypts the file in the background and provides an unencrypted copy to the application. Authorized users can view or modify the file, and EFS saves changes transparently as encrypted data. Unauthorized users, however, cannot view a file's contents even if they have full access to the device; they will receive an "Access denied" error, which helps prevent a data breach.

Another encryption technology from Microsoft is BitLocker. BitLocker complements EFS by providing an additional layer of protection for data stored on Windows devices. BitLocker protects endpoint devices that are lost or stolen against data theft or exposure, and it offers secure data disposal when you decommission a device.
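
As a quick sanity check, an administrator might verify that volumes are actually protected. The sketch below calls the built-in manage-bde utility from Python; it must run elevated on the Windows endpoint, and the parsing assumes the default English output:

```python
import subprocess

# "manage-bde -status" lists each volume with lines such as "Protection Status: Protection On"
result = subprocess.run(["manage-bde", "-status"], capture_output=True, text=True)

unprotected = [
    line.strip()
    for line in result.stdout.splitlines()
    if "Protection Status" in line and "Protection On" not in line
]

if unprotected:
    print("Volumes without BitLocker protection found:")
    for line in unprotected:
        print(" ", line)
else:
    print("All reported volumes have BitLocker protection enabled.")
```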

Hardware-based encryption

In addition to software-based encryption, hardware-based encryption can be applied. Within the advanced settings of some BIOS configuration menus, you can enable or disable a trusted platform module (TPM), a chip, typically installed on the motherboard, that can store cryptographic keys, passwords or certificates. A TPM can assist with hash and key generation, can help protect devices other than PCs, such as smartphones, and can generate and protect the keys used by whole disk encryption solutions such as BitLocker.

3. Harden your systems.

Any place where sensitive data could reside, even temporarily, should be secured based on the types of information that system could potentially have access to. This includes all external systems that could get internal network access via remote connection with significant privileges, since a network is only as secure as the weakest link. However, usability must still be a consideration, and a suitable balance between functionality and security must be determined.

OS baselining

The first step to securing your systems is making sure the operating system's configuration is as secure as possible. Out of the box, most operating systems come with unneeded services that give attackers additional avenues of compromise. The only programs and listening services that should be enabled are those that are essential for your employees to do their jobs. If something doesn't have a business purpose, it should be disabled. It can also be beneficial to create a secure baseline OS image for the typical employee workstation. If anyone needs additional functionality, enable those services or programs on a case-by-case basis. Windows and Linux systems will each have their own baseline configurations.
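
As a sketch of the idea, the snippet below compares the services currently running on a Linux host (assuming systemd) against a hypothetical approved baseline and reports anything unexpected; a Windows equivalent could query Get-Service instead:

```python
import subprocess

# Hypothetical baseline: services approved for the standard employee image.
APPROVED_SERVICES = {"sshd.service", "cron.service", "rsyslog.service", "systemd-journald.service"}

result = subprocess.run(
    ["systemctl", "list-units", "--type=service", "--state=running", "--no-legend"],
    capture_output=True, text=True,
)

# The unit name is the first column of each output line.
running = {line.split()[0] for line in result.stdout.splitlines() if line.strip()}

for service in sorted(running - APPROVED_SERVICES):
    print("Not in baseline, review or disable:", service)
```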

4. Implement a rigorous patch management strategy.

Ensuring that all operating systems and applications in your IT environment are up to date is essential for data protection and cybersecurity. While some things, such as updates to the signatures for antivirus tools, can be automated, patches for critical infrastructure need to be thoroughly tested to ensure that no functionality is compromised and no vulnerabilities are introduced into the system.
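
For example, a simple inventory step might list the packages awaiting updates on a Debian or Ubuntu host so they can be scheduled into a test cycle; this is only a sketch, and Windows environments would typically rely on WSUS or similar patch management tooling instead:

```python
import subprocess

# "apt list --upgradable" prints one pending package per line after a header line.
result = subprocess.run(["apt", "list", "--upgradable"], capture_output=True, text=True)

pending = [
    line.split("/")[0]
    for line in result.stdout.splitlines()
    if "/" in line and not line.startswith("Listing")
]

print(f"{len(pending)} packages have pending updates to stage for testing:")
for package in pending:
    print(" ", package)
```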

5. Allocate roles.

Clearly define the role of each individual involved in the data loss prevention strategy. Specify who owns which data, which IT security officers are responsible for which aspects of security incident investigations and so on.

6. Automate as much as possible.

The more DLP processes are automated, the more broadly you’ll be able to deploy them across the organization. Manual DLP processes are inherently limited in scope and cannot scale to meet the needs of any but the smallest IT environments.

7. Use anomaly detection.

To identify abnormal user behavior, some modern DLP solutions supplement simple statistical analysis and correlation rules with machine learning and behavioral analytics. Generating a model of the normal behavior of each user and group of users enables more accurate detection of suspicious activity that could result in data leakage.
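
A toy sketch of the statistical end of that spectrum: build a baseline of a user's daily file-access counts and flag a day that deviates sharply from it (real user behavior analytics engines model many more signals and use far richer models):

```python
from statistics import mean, stdev

# Hypothetical history: files accessed per day by one user over recent weeks.
history = [34, 41, 29, 38, 35, 40, 33, 36, 31, 39]
today = 240  # today's count for the same user

baseline_mean = mean(history)
baseline_std = stdev(history)
z_score = (today - baseline_mean) / baseline_std

# A threshold of 3 standard deviations is a common, if crude, starting point.
if z_score > 3:
    print(f"Anomaly: {today} file accesses is {z_score:.1f} standard deviations above normal")
else:
    print("Activity within normal range")
```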

8. Educate stakeholders.

Putting a DLP policy in place is not enough. Invest in making stakeholders and users of data aware of the policy, its significance and what they need to do to safeguard the organization’s data.

9. Establish metrics.

Measure the effectiveness of your DLP strategy using metrics like the percentage of false positives, the number of incidents and the mean time to incident response.
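
These metrics are straightforward to compute from incident records. A sketch with made-up numbers, assuming each record notes whether the alert turned out to be a real incident and how long the response took:

```python
from datetime import timedelta

# Hypothetical incident log: (was it a real incident?, time from alert to response)
incidents = [
    (True,  timedelta(minutes=45)),
    (False, timedelta(minutes=10)),   # false positive
    (True,  timedelta(hours=2)),
    (False, timedelta(minutes=5)),    # false positive
    (True,  timedelta(minutes=30)),
]

false_positive_rate = sum(1 for real, _ in incidents if not real) / len(incidents)
real_incidents = [t for real, t in incidents if real]
mean_time_to_respond = sum(real_incidents, timedelta()) / len(real_incidents)

print(f"Incidents logged:     {len(incidents)}")
print(f"False positive rate:  {false_positive_rate:.0%}")
print(f"Mean time to respond: {mean_time_to_respond}")
```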

10. Don’t save unnecessary data.

An organization should store only information that is essential. Data that you don’t have cannot go missing.

Jeff is a former Director of Global Solutions Engineering at Netwrix. He is a long-time Netwrix blogger, speaker, and presenter. In the Netwrix blog, Jeff shares lifehacks, tips and tricks that can dramatically improve your system administration experience.