All About Data Encryption in the Ping Identity Directory Server

Directory servers are often used to store and interact with sensitive and/or personally identifiable information, so data security and privacy are critical. Encryption is important both for data as it goes over the wire and for data at rest. TLS (also known by the more outdated term SSL) is the best way to secure data in transit, but it’s also important to have the data encrypted on disk: not only in the database, but also in backups, LDIF exports, sensitive log files, files containing secrets, and a variety of other areas.

We’re very committed to security in the Ping Identity Directory Server, and data encryption is tightly ingrained into the product. You can (and should) enable data encryption when setting up the server for the best level of protection. But there are a lot of components involved in our support for data encryption, and it can be a lot to take in for someone who is new to the product. So I thought I’d write up an overview of all of the components and how they work together.

The information provided here reflects the 9.3.0.0 release. Although much of the foundational information is the same for older versions as well, some of the features I cover were only just introduced in the 9.3 release and aren’t available in older versions.

The encryption-settings Tool

Aside from allowing the setup process to create or import encryption settings definitions, the encryption-settings tool is one of the primary means through which you’ll manage the set of encryption settings definitions and the encryption settings database. It offers the following subcommands:

  • list — Displays a list of the encryption settings definitions that reside in the encryption settings database.
  • create — Creates a new encryption settings definition.
  • delete — Removes an encryption settings definition.
  • export — Exports one or more encryption settings definitions to a passphrase-protected file.
  • import — Imports the encryption settings definitions contained in an export file.
  • set-preferred — Specifies which encryption settings definition should be preferred for subsequent encryption operations.
  • get-data-encryption-restrictions — Displays information about the set of data encryption restrictions that are available for use and which are currently in effect.
  • set-data-encryption-restrictions — Updates the set of data encryption restrictions that are currently in effect.
  • is-frozen — Indicates whether the encryption settings database is currently frozen.
  • freeze — Freezes the encryption settings database with a passphrase.
  • unfreeze — Unfreezes the encryption settings database with the freeze passphrase.
  • supply-passphrase — Supplies a passphrase needed to unlock the encryption settings database in conjunction with the wait-for-passphrase cipher stream provider.

These subcommands will be discussed in more detail below in conjunction with the functions that they provide.

Encryption Settings Definitions

One of the most fundamental components of the Directory Server’s data encryption framework is the encryption settings definition. Encryption settings definitions encapsulate two primary pieces of information:

  • A symmetric encryption key, which is used to actually encrypt and decrypt the data.
  • The cipher transformation, which specifies the algorithm used to perform the encryption and decryption.

Rather than storing the encryption key itself, an encryption settings definition stores the information needed to derive it. Each definition is backed by a password/passphrase, and we use an algorithm (PBKDF2WithHmacSHA256) to generate the key, along with a salt and an iteration count. When creating a new definition (whether during setup or after the fact using the encryption-settings create command), you have the option of specifying the passphrase directly or allowing the server to generate one at random. When creating a definition after the fact, you can also specify the PBKDF2 iteration count, the cipher transformation, and the key length.
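As a rough sketch of that derivation, here’s how a key can be derived from a passphrase with PBKDF2 using HMAC-SHA-256 in Python’s standard library. The salt and the specific values shown are illustrative; only the algorithm comes from the description above:

```python
import hashlib
import os

def derive_encryption_key(passphrase: str, salt: bytes,
                          iteration_count: int, key_length_bits: int) -> bytes:
    """Derive a symmetric key from a passphrase, PBKDF2WithHmacSHA256-style."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        passphrase.encode("utf-8"),
        salt,
        iteration_count,
        dklen=key_length_bits // 8,
    )

# Illustrative values only; the server chooses its own salt and defaults.
salt = os.urandom(16)
key = derive_encryption_key("correct horse battery staple", salt, 600_000, 256)
assert len(key) == 32  # a 256-bit key
```

Because the derivation is deterministic, the same passphrase, salt, iteration count, and key length always produce the same key, which is what makes it possible to reproduce a definition on another server.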

Each encryption settings definition has an identifier that is used as a unique name for that definition. This identifier is deterministically generated so that if you create a definition with the same underlying passphrase and other settings (like the cipher transformation, iteration count, and key length) on two different servers, it should result in two definitions with the same identifier and the same underlying encryption key.

Whenever the server encrypts some data, it includes the ID of the definition used to perform that encryption (or a compact token that is tied to the ID) as part of the encrypted output so that we know which key was used to encrypt it, and therefore which key needs to be used to decrypt it.
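To make the idea concrete, here’s a hypothetical sketch (not the server’s actual format or ID scheme) of deriving a deterministic identifier from a definition’s settings and framing encrypted output with it:

```python
import hashlib

def definition_id(passphrase: str, cipher_transformation: str,
                  iterations: int, key_length_bits: int) -> str:
    """Hypothetical: derive a deterministic ID from the definition's settings,
    so identical settings on two servers yield the same identifier."""
    material = f"{passphrase}|{cipher_transformation}|{iterations}|{key_length_bits}"
    return hashlib.sha256(material.encode("utf-8")).hexdigest()[:16]

def frame_ciphertext(def_id: str, ciphertext: bytes) -> bytes:
    """Prepend the definition ID so the decryptor knows which key to fetch."""
    id_bytes = def_id.encode("ascii")
    return bytes([len(id_bytes)]) + id_bytes + ciphertext

def parse_frame(data: bytes) -> tuple[str, bytes]:
    """Split framed output back into (definition ID, ciphertext)."""
    id_len = data[0]
    return data[1:1 + id_len].decode("ascii"), data[1 + id_len:]
```

At decrypt time, the ID recovered by `parse_frame` is what would be used to look up the right definition in the encryption settings database.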

The Encryption Settings Database

The encryption settings database holds the set of encryption settings definitions that are available for use in the Directory Server. One of those definitions is marked as the preferred definition, which is the one used for encrypting new data by default. Whenever the server needs to encrypt some data, it can request a specific encryption settings definition by its identifier, but in most cases, it will just fall back to using the preferred definition.

Whenever the server encounters some encrypted data that it needs to decrypt, it will extract the identifier to determine which encryption settings definition was used to encrypt it, retrieve that definition from the encryption settings database, and use it to decrypt the data.

As of the 9.3 release of the Directory Server, the encryption settings database also stores the set of data encryption restrictions that are in effect for the server, and it may optionally be frozen. I’ll cover data encryption restrictions and freezing the encryption settings database a little bit later.

Cipher Stream Providers

The encryption settings database is just a file that contains the encryption settings definitions and some other encryption-related information. Obviously, we don’t want to just leave it sitting around in the clear, because anyone who can access that file can use the information it contains to access any of the encrypted data that the server stores or generates. So we need a way to protect the encryption settings database.

To do that, we use a component called a cipher stream provider. In retrospect, we probably should have chosen a better name for this component, but we had originally thought we might use it for a variety of purposes. But really, we just use it for protecting the contents of the encryption settings database.

The Directory Server includes support for several different types of cipher stream providers, including:

  • A file-based cipher stream provider, which encrypts the encryption settings database with a key derived from a password stored in a file. This isn’t the most secure option, because anyone who can see that file and figure out how to use the password to derive the key used to encrypt the database can get access to the definitions, but it is the one we use by default because it doesn’t require any additional configuration or access to an external service.
  • An Amazon Key Management Service (KMS) cipher stream provider, which generates a strong key that is used to encrypt the encryption settings database, and then encrypts that key with a key stored in KMS. Whenever the server needs to open the encryption settings database, it sends the encrypted key to KMS to be decrypted, and then uses the decrypted key to decrypt the encryption settings database.
  • An Amazon Secrets Manager cipher stream provider, which generates a strong key that is used to encrypt the encryption settings database. It then retrieves a secret password from the Amazon Secrets Manager service and uses that to encrypt the generated key. Whenever the server needs to open the encryption settings database, it retrieves the same secret from Secrets Manager and uses it to decrypt the key so that it can decrypt the encryption settings database.
  • An Azure Key Vault cipher stream provider, which operates in basically the same way as the Amazon Secrets Manager provider, except that it retrieves the secret password from Azure Key Vault rather than Amazon Secrets Manager.
  • A CyberArk Conjur cipher stream provider, which operates in basically the same way as the Amazon Secrets Manager provider, except that it retrieves the secret password from a CyberArk Conjur instance rather than Amazon Secrets Manager.
  • A HashiCorp Vault cipher stream provider, which operates in basically the same way as the Amazon Secrets Manager provider, except that it retrieves the secret password from a HashiCorp Vault instance rather than Amazon Secrets Manager.
  • A PKCS #11 cipher stream provider, which generates a strong key that is used to encrypt the encryption settings database, and then uses a certificate contained in a PKCS #11 token (for example, a hardware security module, or HSM for short) to encrypt that key. Whenever the server needs to open the encryption settings database, it uses the same certificate to decrypt the key, and then uses the decrypted key to decrypt the encryption settings database.
  • A wait-for-passphrase cipher stream provider, which derives an encryption key from a password/passphrase supplied by an administrator, and uses that key to encrypt the contents of the encryption settings database. When the server needs to open the encryption settings database, it will wait for the administrator to interactively supply that passphrase so that it can use it to re-derive the encryption key, which it will then use to decrypt the encryption settings database.
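Most of the external-service providers above follow the same envelope pattern: generate a strong random key for the encryption settings database, then wrap that key with a secret held elsewhere. Here’s a rough sketch, with a dict standing in for the external service and an XOR pad standing in for the real wrapping cipher (both purely illustrative):

```python
import hashlib
import os

# A dict standing in for an external secrets service (purely illustrative).
secrets_service = {"escrow/esdb-wrapping-secret": os.urandom(32)}

def wrap_key(db_key: bytes, secret: bytes) -> bytes:
    """Illustrative wrap: XOR with a pad derived from the external secret.
    A real provider would use an actual cipher here."""
    pad = hashlib.sha256(secret).digest()
    return bytes(a ^ b for a, b in zip(db_key, pad))

unwrap_key = wrap_key  # XOR is its own inverse

# Setup time: generate a strong key for the encryption settings database,
# and store only its wrapped form alongside the database on disk.
db_key = os.urandom(32)
secret = secrets_service["escrow/esdb-wrapping-secret"]
wrapped = wrap_key(db_key, secret)

# Startup time: fetch the secret again and unwrap to recover the database key.
recovered = unwrap_key(wrapped, secret)
assert recovered == db_key
```

The important property is that the wrapped key on disk is useless without the external secret, so compromising the filesystem alone isn’t enough to open the encryption settings database.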

If you’d rather protect the contents of the encryption settings database in some other way, you also have the option of using the Server SDK to create your own custom cipher stream provider.

By default, when you set up the server with data encryption enabled, it will use the file-based cipher stream provider because it’s the only one that can be used without requiring any additional setup. If you want to use an alternative cipher stream provider to protect the encryption settings database, then you have two options:

  • Make an appropriate set of configuration changes with the server online to create the desired cipher stream provider and then activate it by changing the encryption-settings-cipher-stream-provider global configuration property to use it. The server will use the former cipher stream provider to read the encryption settings database, and then it will rewrite it (and therefore re-encrypt it) using the new cipher stream provider. Attempting to change the cipher stream provider while the server is offline won’t work because the new cipher stream provider won’t be able to read the existing encryption settings database.
  • Set up the server with a pre-existing encryption settings database that is already protected with the appropriate cipher stream provider.

Each of these options will be discussed in later sections.

Data Encryption Restrictions

Data encryption restrictions can be used to prevent administrators (or attackers who may gain access to the server system) from performing actions that could potentially grant them access to encrypted data. Restrictions that can be imposed include:

  • prevent-disabling-data-encryption — Prevents updating the configuration to disable data encryption. If you were to disable data encryption, then subsequent writes made to the server would not be encrypted. With this restriction in effect, if you try to make a configuration change to disable data encryption while the server is running, then the server will reject the attempt. If you disable data encryption with the server offline, then it will refuse to start.
  • prevent-changing-cipher-stream-provider — Prevents updating the configuration to change which cipher stream provider is used to protect the encryption settings database. If you can control which cipher stream provider is in effect, then you could potentially decrypt the encryption settings database and access the definitions that it contains.
  • prevent-encryption-settings-export — Prevents using the encryption-settings export command to export the encryption settings definitions to a passphrase-protected file. If you can export the encryption settings definitions, then you could create a new encryption settings database with the same definitions, but without any restrictions in place.
  • prevent-unencrypted-ldif-export — Prevents exporting the data in any backend to an unencrypted LDIF file, as that would grant unprotected access to that data.
  • prevent-passphrase-encrypted-ldif-export — Prevents exporting the data in any backend to an LDIF file that is encrypted with a specified passphrase rather than an encryption settings definition. If you can export the data to a file that is encrypted with a passphrase that you know, then that would allow you to decrypt its contents.
  • prevent-unencrypted-backup — Prevents creating an unencrypted backup. Note that even if the backup itself is unencrypted, if the backend contains encrypted data, then it will remain encrypted in the backup.
  • prevent-passphrase-encrypted-backup — Prevents creating a backup that is encrypted with a specified passphrase rather than an encryption settings definition.
  • prevent-decrypt-file — Prevents using the encrypt-file --decrypt command to decrypt a file that has been encrypted, regardless of whether it was encrypted with an encryption settings definition or a passphrase.

For maximum security, we recommend enabling all data encryption restrictions in the server, as that can significantly hamper an attacker’s ability to gain access to encrypted data. But we strongly recommend creating an export of the encryption settings definitions (as will be described in a later section) so that you have a passphrase-protected backup of the definitions that can be used for disaster recovery if you have a really bad day, since you won’t be able to do that once the prevent-encryption-settings-export restriction is in effect.

You can use the encryption-settings set-data-encryption-restrictions command to add and remove data encryption restrictions from the server, whether individually or all at once. For example, if you want to enable all data encryption restrictions, then you can use the command:

$ bin/encryption-settings set-data-encryption-restrictions \
     --add-all-restrictions

You can use the encryption-settings get-data-encryption-restrictions command to display a list of all available restrictions and which are currently in effect.

If you have activated any data encryption restrictions in the server, then we strongly recommend freezing the encryption settings database to prevent those restrictions from being removed. This is covered in the next section.

Freezing the Encryption Settings Database

Freezing the encryption settings database places it in read-only mode. The server and its associated tools will still have access to the definitions that it contains, but you won’t be able to make any changes to the encryption settings database. This includes:

  • Creating new definitions
  • Deleting definitions
  • Importing definitions from an export file
  • Changing which definition is preferred for encrypting new data
  • Making changes to the active set of data encryption restrictions

When you freeze the encryption settings database, you need to provide a freeze passphrase, and that same passphrase will be required to unfreeze the database. If the database is frozen and an attacker gains access to the system, then they won’t be able to make any changes to the set of encryption settings definitions or get around any data encryption restrictions, as long as they don’t know and can’t guess the freeze passphrase.
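Conceptually, a freeze acts like a write gate protected by a password check. The server’s actual mechanism isn’t described here, so treat the following as a purely illustrative toy model:

```python
import hashlib
import os

class EncryptionSettingsDB:
    """Toy model of freezing: store a salted PBKDF2 hash of the freeze
    passphrase and refuse writes while it is set."""

    def __init__(self):
        self._definitions = {}
        self._freeze_salt = None
        self._freeze_hash = None

    def _hash(self, passphrase: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

    def freeze(self, passphrase: str) -> None:
        self._freeze_salt = os.urandom(16)
        self._freeze_hash = self._hash(passphrase, self._freeze_salt)

    def unfreeze(self, passphrase: str) -> None:
        if self._freeze_hash is None:
            return  # not frozen
        if self._hash(passphrase, self._freeze_salt) != self._freeze_hash:
            raise PermissionError("incorrect freeze passphrase")
        self._freeze_salt = self._freeze_hash = None

    def add_definition(self, def_id: str, definition: object) -> None:
        if self._freeze_hash is not None:
            raise PermissionError("encryption settings database is frozen")
        self._definitions[def_id] = definition
```

Reads remain available the whole time; only the mutating operations check the frozen flag.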

You can freeze the encryption settings database with the following command:

  $ bin/encryption-settings freeze

You will be interactively prompted for the freeze passphrase (and a second time to confirm it), and the encryption settings database will be placed in read-only mode. It will remain that way until it is unfrozen, which you can do with the command:

  $ bin/encryption-settings unfreeze

You will be prompted for the same passphrase that you used to freeze the database. In either case, you can have the tool obtain the freeze passphrase from a file (via the --passphrase-file argument) rather than having the tool interactively prompt for it.

Enabling Data Encryption During Setup

The best time to enable data encryption is when setting up the server. Although it’s possible to enable data encryption for an existing instance, this doesn’t necessarily offer the same level of protection. In particular:

  • New writes will be encrypted, but any entries that are already in the server will remain unencrypted until they are updated or until the backend is exported to LDIF and re-imported.
  • Indexes will remain unencrypted until the backend is exported to LDIF and re-imported.
  • New records added to the replication database and LDAP changelog will be encrypted, but any existing records in the replication database and LDAP changelog will remain unencrypted.
  • Merely enabling data encryption won’t automatically enable encryption for backups and LDIF exports, although you can turn that on at the same time that you enable data encryption.
  • Any password files written to the filesystem during setup (for example, the PIN files needed to access certificate key and trust stores) won’t be encrypted.

As such, we strongly recommend enabling data encryption during setup if you’re going to enable it at all. If you want to enable data encryption for an existing instance, you may wish to consider setting up a new instance and migrating the data over to ensure that the maximum amount of protection is in place.

To enable data encryption during setup, provide one of the following arguments:

  • --encryptDataWithRandomPassphrase — Indicates that the server should enable data encryption using an encryption settings definition created using a very strong randomly generated passphrase that won’t be divulged by the server. This is a good option for setting up the first instance in a topology (or a standalone instance to use for testing), but it’s not recommended for setting up multiple instances because they’ll end up with different encryption settings definitions, and you want all servers in the topology to have the same definitions. You can use this argument with either setup or manage-profile setup, and in the latter case, there are no special requirements for the server profile (aside from including this argument in the setup-arguments.txt file, which is true of any of these arguments).

  • --encryptDataWithPassphraseFromFile — Indicates that the server should enable data encryption using an encryption settings definition created using a passphrase that you specify. If you set up all instances with the same passphrase, then they will all end up with the same encryption settings definitions, so this is a suitable option for setting up a multi-instance topology. Also, if you know the passphrase used to create an encryption settings definition, you can use that passphrase to decrypt files that were encrypted with that definition. You can use this argument with either setup or manage-profile setup, and in the case of manage-profile setup, you need to make sure that the server profile contains the passphrase file (for example, in the misc-files directory).

  • --encryptDataWithSettingsImportedFromFile — Indicates that the server should enable data encryption using one or more encryption settings definitions contained in a file created using the encryption-settings export command. This is a good option to use if you set up the first instance with a randomly generated passphrase and want to ensure that subsequent instances have the same definition. It’s also a good option if you want to include multiple encryption settings definitions, or if you want to use definitions created with settings that differ from the default settings that setup would have used (e.g., a different cipher transformation or PBKDF2 iteration count). When using this argument, you also need to provide the --encryptionSettingsExportPassphraseFile argument to specify the passphrase used to protect the export. This argument can be used with either setup or manage-profile setup, and if you use manage-profile setup, then the server profile will need to contain both the export and passphrase files (probably in the misc-files directory).

  • --encryptDataWithPreExistingEncryptionSettingsDatabase — Indicates that the server should enable data encryption using an encryption settings database that you’ve already set up as desired on another instance. This provides the greatest degree of flexibility, as the provided database can be protected with a cipher stream provider other than the default file-based provider, and it can also be locked down with data encryption restrictions and frozen with a passphrase (although you can change the cipher stream provider, enable restrictions, and freeze the database after setup). This option should only be used with manage-profile setup, and the server profile will need to include the following:

    • The encryption settings database itself should be included in the profile in the server-root/pre-setup/config/encryption-settings/encryption-settings-db file.
    • You will need to include any configuration changes needed to configure and activate the cipher stream provider in one or more dsconfig batch files placed in the pre-setup-dsconfig directory.
    • Any additional metadata files that the cipher stream provider needs to access the encryption settings database must be included in the appropriate locations beneath the server-root/pre-setup directory.
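Putting that together, a server profile for this option might look something like the following. Only the encryption-settings-db path and the pre-setup-dsconfig directory come from the requirements above; the other file names are illustrative:

```
my-server-profile/
  setup-arguments.txt              # includes --encryptDataWithPreExistingEncryptionSettingsDatabase
  pre-setup-dsconfig/
    01-cipher-stream-provider.dsconfig   # creates and activates the provider
  server-root/
    pre-setup/
      config/
        encryption-settings/
          encryption-settings-db   # the pre-existing encryption settings database
      # plus any provider metadata files, in their expected locations
```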

Managing Encryption Settings Definitions

The server setup process can generate encryption settings definitions for you when using either the --encryptDataWithRandomPassphrase or --encryptDataWithPassphraseFromFile arguments. However, if you want to create additional definitions (presumably using different settings than the one created by default), then the encryption-settings tool can be used to accomplish that. The tool can also be used to manage encryption settings definitions and the encryption settings database in other ways. I’ve already covered using it to set data encryption restrictions and to freeze the encryption settings database, but this section will cover using it to manage encryption settings definitions.

Exporting Encryption Settings Definitions

One of the most important ways that you can use the encryption-settings tool is to create a passphrase-protected export of the definitions in the encryption settings database. This is vital, because an export is the best way to back up the encryption settings definitions, and if you lose your encryption settings definitions (or lose access to them), then you lose access to any data encrypted with them.

An encryption settings export is better than backing up the encryption settings database for a couple of reasons:

  • A backup of the encryption settings database is tied to the cipher stream provider used to protect it. If the cipher stream provider relies on an external service, and if that service (or the information that the cipher stream provider relies on inside that service) becomes unavailable, then the encryption settings database becomes unusable, and the definitions inside it are inaccessible. An encryption settings export is not tied to the cipher stream provider implementation, so it is more portable and not potentially reliant on an external service.

  • While you can use the backup tool to create a backup of the encryption settings database, that backup will only contain the encryption settings database itself and won’t include any metadata files that the associated cipher stream provider needs to interact with the database (although it will tell you which additional files need to be backed up separately). An encryption settings export doesn’t rely on any other files, but only on the passphrase you chose to protect it.

To create an encryption settings export, use a command like:

$ bin/encryption-settings export \
     --output-file /path/to/output/file

This will interactively prompt you for the passphrase to use to protect the export, and then will write all definitions in the encryption settings database into the specified file. You can also use the --passphrase-file argument to have it obtain the export passphrase from a file rather than interactively, or the --id argument if you only want to back up specific definitions.

This export file (and the passphrase needed to access its contents) should be carefully protected and reliably archived, as it may be the last line of defense that can allow you to access your encrypted data in a worst-case scenario. You shouldn’t need to create another export unless you make changes to the set of encryption settings definitions, but it’s probably a good idea to periodically verify that the archive is still valid and hasn’t succumbed to bit rot.
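One simple way to periodically verify that the archived export hasn’t bit-rotted is to record a digest when you create the archive and re-check it later. A minimal sketch:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Compute a SHA-256 digest of the file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_digest(export_path: Path) -> None:
    """Store the digest in a sidecar file next to the export."""
    Path(str(export_path) + ".sha256").write_text(file_digest(export_path))

def verify_digest(export_path: Path) -> bool:
    """Re-compute the digest and compare it against the recorded value."""
    expected = Path(str(export_path) + ".sha256").read_text().strip()
    return file_digest(export_path) == expected
```

Of course, the digest only detects corruption; the passphrase and the export file itself still need to be protected and archived separately.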

Importing Encryption Settings Definitions

When setting up a new instance with data encryption, you can use the --encryptDataWithSettingsImportedFromFile argument to use the encryption settings definitions contained in an export file. But if you have created additional definitions after setup and want to make them available in all instances, you can export the definitions from the instance in which they were created and import them into the remaining instances. This can be done with encryption-settings import, using a command like the following:

$ bin/encryption-settings import \
     --import-file /path/to/export/file

You will be prompted for the passphrase used to protect the export, or you can provide it non-interactively with the --passphrase-file argument.

In addition, the --set-preferred argument can be used to make the definition that is marked as preferred in the export the new preferred definition in the encryption settings database. If the --set-preferred argument is not provided, and if the encryption settings database into which the new definitions are being imported already has one or more existing definitions, then the existing preferred definition will remain the preferred definition.

Note that unlike the import-ldif command, which is used to replace all data in a backend with data loaded from a specified LDIF file (or set of LDIF files), the encryption-settings import command merges new definitions into the encryption settings database, and any existing definitions that aren’t in the file being imported will be retained.

Creating a New Encryption Settings Definition

If you want to create a new encryption settings definition, use the encryption-settings create command. The most important arguments offered by this command include:

  • --cipher-algorithm — A required argument that specifies the name of the base encryption algorithm that should be used when encrypting data with this definition. Although you can technically use any cipher algorithm that the JVM supports, the only one that we currently recommend for use is “AES”.
  • --cipher-transformation — An optional argument that specifies the full cipher transformation (including the mode and padding algorithm) that should be used when encrypting data with this definition. When using the AES cipher algorithm, we recommend either “AES/CBC/PKCS5Padding” or “AES/GCM/NoPadding” (the latter of which offers somewhat better security, as it offers better integrity protection). If you don’t specify a cipher transformation when using the AES algorithm, a default of “AES/CBC/PKCS5Padding” will be used.
  • --key-length-bits — A required argument that specifies the length of the encryption key to be generated. When using AES, allowed key lengths are 128, 192, and 256 bits.
  • --key-factory-iteration-count — An optional argument that specifies the number of PBKDF2 iterations that should be used when deriving the encryption key from the passphrase that backs the definition. If this is not specified, then the tool will use a default value that depends on whether the definition is backed by a known passphrase or a randomly generated one. If it’s backed by a randomly generated passphrase, then we use the OWASP-recommended 600,000 iterations. If it’s backed by a known passphrase, then we use a smaller iteration count of 16,384 to preserve backward compatibility with older versions, so that supplying the same passphrase and all other settings will consistently reproduce the same definition. As such, if you’re using a known passphrase and don’t need to worry about definitions created in older versions, we recommend that you explicitly specify the iteration count for better protection of the derived key.
  • --prompt-for-passphrase — An optional argument that indicates that the tool should interactively prompt for the passphrase to use to create the definition. At most one of the --prompt-for-passphrase and --passphrase-file arguments may be provided, and if neither is provided, then the definition will be backed by a randomly generated passphrase.
  • --passphrase-file — An optional argument that indicates that the definition should be backed by a passphrase read from the specified file. At most one of the --prompt-for-passphrase and --passphrase-file arguments may be provided, and if neither is provided, then the definition will be backed by a randomly generated passphrase.
  • --set-preferred — An optional argument that indicates that the newly created definition should be set as the preferred definition for new encryption operations. If this is not specified, then the new definition will only be set as preferred if it’s the first one in the database; if there’s already an existing preferred definition, then it will remain the preferred definition.
  • --description — An optional argument that specifies a human-readable description to use for the definition. If this is not specified, then the definition will not have a description.

For example:

$ bin/encryption-settings create \
     --cipher-algorithm AES \
     --cipher-transformation AES/GCM/NoPadding \
     --key-length-bits 256 \
     --key-factory-iteration-count 600000 \
     --prompt-for-passphrase \
     --set-preferred

After creating a new definition, we strongly recommend using encryption-settings export to back up the resulting definitions to a passphrase-protected file so that you have a backup of that and any other definitions, and then import those definitions into the other servers in the topology. Alternatively, if you created the new definition with a known passphrase, then you should be able to issue the same command with the same passphrase on the other instances to generate the same definition in those servers.

Deleting an Encryption Settings Definition

My first recommendation for deleting an encryption settings definition is: don’t. There’s no harm in keeping a definition around that isn’t being used. On the other hand, if you delete a definition, then anything encrypted with it (whether data in the database, the LDAP changelog or replication changelog, or encrypted files) will become inaccessible. If the server tries to interact with data that was encrypted with an encryption settings definition and that definition is no longer available, then it will at best encounter an error, and in some cases the server may not be able to start.

As long as you create a new preferred encryption settings definition, the server should start using it to encrypt new data. If you export and re-import any backends containing encrypted data, then that data will be automatically re-encrypted with the new definition. This includes the LDAP changelog as well, although any existing records encrypted with the old definition will eventually be purged as appropriate based on the server configuration. Old records in the replication database will also be purged over time, but if you want to start fresh with a new definition, then you can disable and re-enable replication. If there are any files that are encrypted with the old definition, then you can decrypt and re-encrypt them with the new definition.

If you really want an old encryption settings definition gone, then the best way to do that safely would probably be to set up a new instance with the desired new definition and migrate the data over. In particular:

  1. Add the new definition to the existing instance and make it preferred.
  2. Export the data to an LDIF file that is encrypted with the new definition.
  3. Set up a new server instance with data encryption enabled and only using the new definition (or all definitions you want to preserve but excluding those you want to get rid of).
  4. Import the data from LDIF.
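As a rough sketch of that migration flow: the set-preferred --id usage is documented above, but the export-ldif and import-ldif argument names shown are assumptions based on common conventions for the server's tools, and the definition ID is a hypothetical placeholder; verify everything against your version's documentation.

```shell
# 1. On the existing instance: make the new definition preferred
#    (the ID shown is a hypothetical placeholder taken from
#    "bin/encryption-settings list").
bin/encryption-settings set-preferred --id 0123456789abcdef

# 2. Export the data, encrypted with the (new) preferred definition.
#    Argument names are assumptions; check "bin/export-ldif --help".
bin/export-ldif \
     --backendID userRoot \
     --ldifFile /path/to/data.ldif \
     --encryptLDIF

# 3. Set up the new instance with only the definitions you want to
#    preserve, then import the data there.
bin/import-ldif \
     --backendID userRoot \
     --ldifFile /path/to/data.ldif
```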

But if you are absolutely confident that an encryption settings definition isn’t in use anymore and want to remove it from an encryption settings database, then we first strongly recommend ensuring that you have a backup of that definition created with the encryption-settings export command so that you can restore it if necessary. Then, you can get rid of it with the encryption-settings delete command, using the --id argument to specify the ID of the definition that you want to remove. I strongly recommend doing this on a test instance first and verifying that everything still works after a restart before trying it on anything in production.
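If you do go ahead with the removal, the sequence might look like the following. The --id argument to the delete subcommand is documented above, but the export argument names are assumptions, and the definition ID is a hypothetical placeholder.

```shell
# Back up the definition before removing it (export argument names are
# assumptions; verify with "bin/encryption-settings export --help").
bin/encryption-settings export \
     --output-file /secure/old-def.export \
     --prompt-for-passphrase

# Then remove the definition (hypothetical placeholder ID shown).
bin/encryption-settings delete --id 0123456789abcdef
```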

Changing the Preferred Encryption Settings Definition

If you already have multiple encryption settings definitions in the database and want to change which one is preferred for new encryption operations, then you can use the encryption-settings set-preferred command with the --id argument to specify the ID of the existing definition that you want to make preferred.
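For example (the definition ID shown is a hypothetical placeholder):

```shell
# Make an existing definition the preferred one for new encryption
# operations; obtain the ID from "bin/encryption-settings list".
bin/encryption-settings set-preferred --id 0123456789abcdef
```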

Note that while you can create a new definition and make it preferred in a single command, it may be better to create a new definition that is initially non-preferred, and make sure it is defined across all of the instances before making it the new preferred definition. Although most forms of data encryption only protect the data locally and not when it’s replicated to other instances (we use TLS to protect the data in transit, and then the recipient server’s data encryption configuration to protect it when it’s stored there), there are some cases in which we store encrypted data within the entry itself. For example, if you use the AES256 password storage scheme, the encoded representation of the password will be encrypted with an encryption settings definition, and it won’t be possible to authenticate in other instances until they have been updated with the new definition. By ensuring that the definition is available in all instances before setting it preferred, and then setting it as preferred in all instances, you won’t have to worry about the possibility of instances encountering data encrypted with definitions they don’t have.

Managing Data Encryption in the Server Configuration

As previously stated, the best time to set up data encryption is when you set up the server. However, it is possible to enable encryption after the fact, and there are also other options that you can configure. Some of the encryption-related configuration properties include:

  • The encrypt-data property in the global configuration controls whether data encryption is enabled in the server. Note that you won’t be able to enable data encryption if you haven’t created any encryption settings definitions, and you won’t be able to disable data encryption if the prevent-disabling-data-encryption restriction is in effect.
  • The encryption-settings-cipher-stream-provider property in the global configuration controls which cipher stream provider is used to protect the encryption settings database. If you’re going to change the cipher stream provider with data encryption enabled, then you need to do so with the server online so that it can automatically re-encrypt the database with the new provider. You won’t be able to change the active cipher stream provider if the prevent-changing-cipher-stream-provider restriction is in effect.
  • The encrypt-backups-by-default property in the global configuration controls whether the server will automatically encrypt backups, even if you don’t use the --encrypt argument. This will be set to true by default if you enable data encryption during setup, and you’ll have to use the --doNotEncrypt argument to create an unencrypted backup (which won’t be allowed if the prevent-unencrypted-backup restriction is in effect).
  • The backup-encryption-settings-definition-id property in the global configuration allows you to explicitly specify which definition should be used to encrypt backups by default. If this is not specified, the server’s preferred definition will be used.
  • The encrypt-ldif-exports-by-default property in the global configuration allows you to indicate whether LDIF exports will be encrypted by default (in which case you need to use the --doNotEncrypt argument to create an unencrypted export, which won’t be allowed if the prevent-unencrypted-ldif-export restriction is in effect). This will be set to true by default if data encryption is enabled during setup.
  • The ldif-export-encryption-settings-definition-id property in the global configuration allows you to specify which definition should be used to encrypt LDIF exports by default. If this is not specified, the server’s preferred definition will be used.
  • The automatically-compress-encrypted-ldif-exports property in the global configuration can be used to control whether LDIF exports should also be gzip-compressed if they are encrypted. This is set to true by default.
  • The AES256 password storage scheme is a reversible scheme that encrypts user passwords with the passphrase that backs an encryption settings definition (even if that definition doesn’t normally use 256-bit AES). We strongly recommend using non-reversible schemes to encode user passwords for better security, but if you have a legitimate need to store passwords in a reversible form, then the AES256 scheme is currently the best option. By default, it will use the preferred encryption settings definition, but you can specify an alternative definition with the encryption-settings-definition-id configuration property.
  • The backup recurring task provides a number of options that allow you to control whether recurring backups are encrypted, and if so whether they are encrypted with an encryption settings definition or a passphrase.
  • The signing-encryption-settings-id property in the crypto manager configuration can be used to indicate which encryption settings definition should be used to generate digital signatures if signing is enabled (e.g., for signed log files). By default, digital signatures will be generated using the preferred encryption settings definition.
  • The encrypt attribute values plugin provides a way of encrypting the values for a specified set of attributes, and those values will appear in encoded form when retrieved by clients. Note that this is only useful for a limited subset of attributes that may be used to hold secret information that the server needs to have in the clear, but that shouldn’t be exposed to clients (e.g., one-time passwords).
  • The LDIF export recurring task provides essentially the same encryption-related options as the backup recurring task.
  • Loggers that write to files provide an option for encrypting the log file (and if so, with either a specified encryption settings definition or the server’s preferred definition). They also support signed logging, and signatures are also generated using an encryption settings definition.
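Several of the global configuration properties above can be adjusted with dsconfig. The sketch below uses property names taken from the list above; the set-global-configuration-prop subcommand follows the server's usual dsconfig conventions, but verify it against your version, and the definition ID shown is a hypothetical placeholder.

```shell
# Require encrypted backups and LDIF exports by default, and name an
# explicit definition for backups (hypothetical placeholder ID shown).
bin/dsconfig set-global-configuration-prop \
     --set encrypt-backups-by-default:true \
     --set encrypt-ldif-exports-by-default:true \
     --set backup-encryption-settings-definition-id:0123456789abcdef
```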

The encrypt-file Tool

In many cases, the Directory Server and its associated tools support reading data from encrypted files. We don’t support encrypting the configuration itself, since we need to be able to read it to get the information needed to instantiate the cipher stream provider, but most other files can be encrypted. This includes things like:

  • Files containing the passphrase needed to access certificate key and trust stores
  • Files containing the passphrase to use in LDAP bind requests
  • Properties files used to provide default values for command-line tool arguments (e.g., tools.properties)
  • LDIF files for use with tools like import-ldif, ldapmodify, ldifsearch, ldifmodify, and ldif-diff

In addition, the server may write encrypted files for a number of purposes, including encrypted backups, LDIF exports, and log files.

Some of these files are automatically encrypted when you set up the server with data encryption enabled. This includes the config/ads-truststore.pin, config/keystore.pin, and config/truststore.pin files that contain the passphrases needed to access certificate key and trust stores. And although it doesn’t automatically encrypt the config/tools.properties file, it will encrypt the config/tools.pin file if you use the --populateToolPropertiesFile argument with a value of bind-password.

If you would like to encrypt other files for use by the server, then you can use the encrypt-file tool. Files can be encrypted with either an encryption settings definition or a passphrase, and you can also use the tool to decrypt files (although that won’t be allowed if the prevent-decrypt-file restriction is in effect).

Some of the arguments supported by the encrypt-file tool include:

  • --decrypt — Indicates that the input data should be decrypted rather than encrypted.
  • --input-file — Specifies the path to the plaintext file whose contents are to be encrypted (or the path to the encrypted file to be decrypted). If this isn’t specified, then the input data will be read from standard input.
  • --output-file — Specifies the path to which the encrypted (or decrypted) output should be written. If this isn’t provided, then the output data will be written to standard output.
  • --encryption-settings-id — The ID of the encryption settings definition that should be used to encrypt the file. At most one of the --encryption-settings-id, --prompt-for-passphrase, and --passphrase-file arguments may be provided, and if none of them are given, then the file will be encrypted with the preferred encryption settings definition. Note that this argument should not be provided in conjunction with the --decrypt argument, because the encryption header of an encrypted file will indicate which definition was used to encrypt it.
  • --prompt-for-passphrase — Indicates that the encryption key should be generated from a provided passphrase rather than an encryption settings definition, and that the tool should interactively prompt for that passphrase.
  • --passphrase-file — Indicates that the encryption key should be generated from a provided passphrase rather than an encryption settings definition, and that passphrase should be read from a specified file.
  • --decompress-input — Indicates that the input file contains gzip-compressed data. When encrypting data, decompression will be performed before encryption. When decrypting data, decompression will be performed after decryption.
  • --compress-output — Indicates that the output file should be gzip-compressed. When encrypting data, compression will be performed before encryption. When decrypting data, compression will be performed after decryption.

For example, to encrypt a file named “clear.input” to “encrypted.output” using the server’s preferred encryption settings definition, you can use a command like the following:

$ bin/encrypt-file \
     --input-file /path/to/clear.input \
     --output-file /path/to/encrypted.output

And then to decrypt it:

$ bin/encrypt-file \
     --decrypt \
     --input-file /path/to/encrypted.output \
     --output-file /path/to/clear.input
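Both examples above use the server’s preferred encryption settings definition. Using only the arguments described earlier, you could instead encrypt with a passphrase read from a file and gzip-compress the output:

```shell
# Encrypt with a passphrase from a file rather than an encryption
# settings definition, and gzip-compress the output.
bin/encrypt-file \
     --input-file /path/to/clear.input \
     --output-file /path/to/encrypted.output.gz \
     --passphrase-file /path/to/passphrase.txt \
     --compress-output
```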

Monitoring Cipher Stream Provider Availability

Protecting the encryption settings database with a cipher stream provider that relies on an external service can add a layer of security to the Directory Server in that it makes it more difficult for an attacker who gains access to the system to get at the underlying encryption keys. However, it also introduces a risk because if that external service, or necessary information within that service, becomes unavailable, then it could adversely affect the server’s ability to function properly.

For example, if you’re using the Amazon Key Management Service (KMS) cipher stream provider, then you need to be aware of at least the following potential risks:

  • An outage in the KMS service itself, or in your ability to reach it
  • Loss of access to your AWS account
  • Removal or revocation of the KMS key that the cipher stream provider relies on

Any of those issues will prevent the server from being able to open the encryption settings database. This will prevent the server from starting, and it will also prevent you from running tools that require access to the encryption settings database (e.g., to interact with encrypted data in the database, or to read or write an encrypted file). It won’t directly interfere with a server that’s already running, because the server caches the information it needs to interact with the encryption settings database at startup, although it can inhibit the server’s ability to perform certain administrative tasks that involve spawning a separate process, like an LDIF export task.

You probably want to be made aware of any outages that might affect the ability to access the encryption settings database as quickly as possible. To help with that, we offer a monitor provider that will periodically verify that the server can open the encryption settings database without relying on any cached data. This is the Encryption Settings Database Accessibility monitor provider, and it offers the following configuration properties:

  • check-frequency — This indicates how frequently the server should check the accessibility of the encryption settings database. By default, it will check every five minutes.
  • prolonged-outage-duration — This specifies the length of time required for an outage to be considered prolonged. By default, an outage will be considered prolonged once it has lasted for at least twelve hours.
  • prolonged-outage-behavior — This specifies the behavior that the server should take once it decides that the outage is prolonged. You may wish to have the server take an additional action in the event of a prolonged outage, as will be discussed below. Supported values include:

    • none — Don’t take any additional action when the outage becomes prolonged
    • issue-alert — Generate one additional encryption-settings-database-prolonged-outage administrative alert when the outage becomes prolonged
    • enter-lockdown-mode — Place the server in lockdown mode once the outage becomes prolonged
    • shut-down-server — Shut down the server once the outage becomes prolonged

When the monitor provider is active and an outage is detected, the server will generate an encryption-settings-database-inaccessible administrative alert and raise an alarm. Once the outage has been resolved and the database is accessible again, then the server will clear the alarm and issue an encryption-settings-database-access-restored alert.

The primary purpose behind the monitor’s support for taking action after a prolonged outage is to support a case in which the encryption settings database is protected by a service that is managed by a different set of people than those that manage the Directory Server itself, and there may be a legitimate reason for revoking access to the encrypted data. This use case is covered in the next section.

Maintaining a Separation of Duties Between Data Encryption Management and Server Management

As hinted at in the end of the previous section, there may be cases in which the people who manage the Directory Server are different from the people who are responsible for the data contained in it. For example, this may be the case if one organization is hosting the server on behalf of another. Alternatively, it may be the case that the data contained in the server is considered sensitive and access to it should be limited. In such cases, there may be a good reason to limit the amount of access that those responsible for administering the server have to the data contained in that server.

A substantial portion of this can be achieved through a combination of four features that were introduced in the 9.3 release:

  • The ability to impose data encryption restrictions
  • The ability to freeze the encryption settings database
  • The ability to set up the server with a pre-existing encryption settings database
  • The ability to monitor encryption settings database accessibility and take action if that access is revoked

In particular, the organization responsible for the data could do the following:

  1. Set up a temporary Directory Server instance and use it to create an encryption settings database that has an appropriate set of definitions and that is protected with the desired cipher stream provider.
  2. Create a passphrase-protected export of those definitions so that they are backed up for disaster recovery purposes.
  3. Impose a complete set of data encryption restrictions on the encryption settings database.
  4. Freeze the encryption settings database with a passphrase.

At that point, they could provide the following files to the server administrators:

  • The locked-down encryption settings database (the config/encryption-settings/encryption-settings-db file)
  • A dsconfig batch file to use to set up and activate the cipher stream provider used to protect the encryption settings database
  • Any additional metadata files that the cipher stream provider might need to access the encryption settings database (e.g., for the KMS cipher stream provider, this would be the config/encryption-settings-passphrase.kms-encrypted file).

That temporary Directory Server instance can then be destroyed, as it is no longer needed. However, the encryption settings definition export must be reliably and securely backed up, making sure to take note of the passphrase used to protect the export and the passphrase used to freeze the encryption settings database.

The Directory Server administrators can then create a server profile that will set up the server with that encryption settings database. Among all of the other things that would normally go in the server profile (e.g., to apply the desired configuration, include files in the server root, define JVM arguments, etc.), that profile would need to include the following:

  • In addition to all other appropriate arguments to use when setting up the server, the setup-arguments.txt file needs to include the --encryptDataWithPreExistingEncryptionSettingsDatabase argument.
  • The dsconfig batch file(s) needed to set up the cipher stream provider should go in the pre-setup-dsconfig directory. At present, the only configuration changes that should go in this directory are those needed to set up the cipher stream provider. Any other configuration changes that may need to be applied should go in the dsconfig directory so that they are applied after setup has completed.
  • The encryption settings database should be included as the server-root/pre-setup/config/encryption-settings/encryption-settings-db file.
  • Any metadata files needed by the cipher stream provider should also go in the appropriate locations below the server-root/pre-setup directory structure. For example, for the KMS cipher stream provider, that would be server-root/pre-setup/config/encryption-settings-passphrase.kms-encrypted.
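Putting those pieces together, the relevant portion of the server profile might be laid out as follows. The .dsconfig batch file names are hypothetical; the other paths are those described above.

```
server-profile/
├── setup-arguments.txt      (includes --encryptDataWithPreExistingEncryptionSettingsDatabase)
├── pre-setup-dsconfig/
│   └── cipher-stream-provider.dsconfig      (hypothetical file name)
├── dsconfig/
│   └── post-setup-config.dsconfig           (hypothetical file name)
└── server-root/
    └── pre-setup/
        └── config/
            ├── encryption-settings/
            │   └── encryption-settings-db
            └── encryption-settings-passphrase.kms-encrypted
```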

If it is desirable to monitor the accessibility of the encryption settings database and potentially take action if it becomes unavailable, then the dsconfig directory should include a batch file with the necessary configuration to set up that monitor. For example:

dsconfig create-monitor-provider \
     --provider-name "Encryption Settings Database Accessibility" \
     --type encryption-settings-database-accessibility \
     --set enabled:true \
     --set "prolonged-outage-duration:8 h" \
     --set prolonged-outage-behavior:shut-down-server

After using manage-profile setup to set up the server with this profile, the server will have data encrypted with the definitions created by the first organization, but in a way that prevents server administrators from exporting those definitions, disabling data encryption, changing the cipher stream provider, exporting the data in the clear or with a known passphrase, or decrypting an encrypted LDIF export.

Note that these restrictions don’t have any effect on an administrator’s ability to access the data over LDAP. However, that could potentially be restricted through a number of other mechanisms (e.g., access controls, client connection policy restrictions, sensitive attribute configuration, etc.), and at the very least, such access could be audited through access logs.