Ping Identity Directory Server 10.0.0.0

We have just released version 10.0.0.0 of the Ping Identity Directory Server. See the release notes for a complete overview of changes, but here’s my summary:

Important Notices

  • As of the 10.0 release, the Directory Server only supports Java versions 11 and 17. Support for Java 8 has been removed, as a critical component (the embedded web container we use to support HTTP requests, including the Directory REST API, SCIM, and the Administration Console) no longer supports Java 8.
  • As of the 10.0 release, we are no longer offering the Metrics Engine product as part of the Directory Server suite (the Directory Proxy Server and Synchronization Server are still included, as is the Server SDK for developing custom extensions). You should instead rely on the server’s ability to integrate with other monitoring software, through mechanisms like our support for OpenMetrics (used by Prometheus and other software), StatsD, and the Java Management Extensions (JMX).

Summary of New Features and Enhancements

  • Added support for inverted static groups [more information]
  • Added support for post-LDIF-export task processors, which can be used to perform custom processing after successfully exporting an LDIF file, including the option to upload the resulting file to an Amazon S3 bucket [more information]
  • Added a new log file rotation listener that can be used to upload newly rotated log files to a specified Amazon S3 bucket [more information]
  • Added a new amazon-s3-client command-line tool [more information]
  • Added authentication support to the Directory REST API [more information]
  • Added support for a generate access token request control [more information]
  • Added support for configuring the server with a single database cache that may be shared across all local DB backends [more information]
  • Added an option to automatically re-encode passwords after changing the configuration of the associated password storage scheme [more information]
  • Exposed a request-handler-per-connection configuration property in the LDAP connection handler configuration [more information]
  • Updated the encrypt-file tool to add a --re-encrypt argument [more information]
  • Updated the encrypt-file tool to add a --find-encrypted-files argument [more information]
  • Updated the replication server and replication domain configuration to add a new missing-changes-policy property that can be used to customize the server's behavior when missing changes are detected in the environment; the server will now also remain available by default under a wider range of circumstances that may not represent actual problems
  • Significantly improved performance for creating a backup, restoring a backup, or performing online replica initialization
  • Significantly improved static group update performance
  • Improved performance for the process of validating the server state immediately after completing an update
  • Added a split-ldif tool that can be used to split a single LDIF file into multiple sets for use in setting up an entry-balanced deployment with the Directory Proxy Server
  • Updated the bcrypt password storage scheme to include support for the 2b variant (in addition to the existing 2y, 2a, and 2x variants)
  • Updated the HTTP connection handler to add an option for performing SNI hostname validation during TLS negotiation
  • Updated the backup tool to display a warning when creating a compressed backup of an encrypted backend, since encrypted backends cannot be effectively compressed, but attempting to do so will make the backup process take longer
  • Updated the dsreplication command so that it uses a separate log file per subcommand, and so that log files representing failed runs of the tool are archived rather than overwritten by subsequent runs
  • Removed the dsreplication remove-defunct-server subcommand, whose functionality is better provided through the dedicated remove-defunct-server tool
  • Removed the dsreplication cleanup-local-server subcommand, whose functionality is better provided through the remove-defunct-server --performLocalCleanup command
  • Updated dsreplication initialize-with-static-topology to add an --allowServerInstanceDelete argument that can be used to remove servers from the topology if they are not included in the provided JSON file
  • Updated dsreplication initialize-with-static-topology to add an --allowDomainIDReuse argument that can be used to allow domain IDs to be used with different base DNs
  • Updated the check-replication-domains tool so that it no longer requires the --serverRoot argument
  • Updated the replication server configuration to add an option that controls whether monitor messages include information about all remote servers, since in large topologies that information can constitute a large amount of data
  • Added support for an access log field request control that can be used to include arbitrary fields in the log message for the associated operation
  • Updated the configuration API to treat patch operations with empty arrays as a means of resetting the associated configuration property
  • Added the ability to configure connect and response timeouts when connecting to certain external services over HTTP, including CyberArk Conjur instances, HashiCorp Vault instances, the Pwned Passwords service, and YubiKey OTP validation servers
  • Updated the Synchronization Server to improve performance when setting the startpoint to the end of the changelog for an Active Directory server
  • Reduced the default amount of memory allocated for the export-ldif and backup tools

Summary of Bug Fixes

  • Fixed an issue in which the Directory REST API could fail to strip out certain kinds of encoded passwords in responses to clients (although only to clients that were authorized to access those attributes)
  • Improved the way that the replication generation ID is computed, which can help ensure the same ID is generated across replicas when they are populated by LDIF import instead of online replica initialization
  • Fixed an issue that could cause an error while trying to initialize aggregate pass-through authentication handlers
  • Fixed an issue that could cause “invalid block type” errors when interacting with compressed files
  • Fixed an issue that could prevent the server from properly including an encrypted representation of the new password in the changelog entry for a password modify extended operation when the server was configured with the changelog password encryption plugin
  • Fixed an issue in which the server could fail to update a user’s password history on a password change that included a password update behavior request control indicating that the server should ignore password history violations
  • Fixed an issue that could cause the server to add two copies of the current password in the password history when changing a password with the password modify extended operation
  • Fixed an issue in which the server could incorrectly allow a user to set an empty password. Even though that password could not be used to authenticate, the server should not have allowed it to be set
  • Fixed an issue that could cause the dictionary password validator to incorrectly accept certain passwords that contained a dictionary word as a substring that was larger than the maximum allowed percentage of the password
  • Fixed an issue in which the server could be unable to properly interpret the value of the allow-pre-encoded-passwords configuration property in password policies defined in user data that were created prior to the 9.3 release of the server
  • Fixed an issue in which the server may not have properly applied replace modifications for attributes with options
  • Fixed an issue in which the first unsuccessful bind attempt after a temporary failure lockout had expired may not be counted as a failed attempt toward a new failure lockout
  • Fixed an issue in which running manage-profile generate-profile against an updated server instance could result in a profile that may not be usable for setting up new instances
  • Fixed an issue in which dsreplication initialize could suggest using the --force argument in cases where that wouldn’t help, like when attempting to authenticate with invalid credentials
  • Fixed an issue with dsreplication enable-with-static-topology in which the server could report an error when trying to connect to a remote instance
  • Fixed an issue with dsreplication enable-with-static-topology in which case sensitivity in base DNs was not handled properly
  • Fixed an issue in which the remove-defunct-server command could fail in servers configured with the AES256 password storage scheme
  • Fixed an issue that could cause a replication error if missing changes were found for an obsolete replica that is not configured in all servers
  • Fixed an issue in which the server did not check the search time limit often enough during very expensive index processing, which could allow the server to process a search request for substantially longer than the maximum time limit for that operation
  • Fixed an issue that caused the server to incorrectly include client certificate messages in the expensive operations access log
  • Fixed an internal error that could be encountered if an administrative alert or alarm is raised at a specific point in the shutdown process
  • Fixed an issue with synchronizing Boolean attributes (e.g., “enabled”) to PingOne
  • Fixed an issue in which the Synchronization Server could fail to properly synchronize changes involving the unicodePwd attribute to Active Directory if the sync class was not configured with a DN map
  • Fixed an issue that could cause the create-sync-pipe-config command to improperly generate correlated attribute definitions for generic JDBC sync destinations
  • Fixed an error that could prevent manage-topology add-server from adding a Synchronization Server instance to a topology that already had at least two Synchronization Server instances
  • Fixed an issue in which the server did not properly log an alternative authorization DN for multi-update operations that used proxied authorization
  • Fixed an issue in which dsjavaproperties --initialize could result in duplicate arguments in the java.properties file
  • Fixed an issue that could cause a spurious message to be logged to the server’s error log when accessing the status page in the Administration Console

Inverted Static Groups

In the 10.0 release, we’re introducing support for inverted static groups, which try to combine the primary benefits of traditional static groups and dynamic groups without their most significant disadvantages.

Traditional static groups contain an attribute (either member or uniqueMember, depending on the group’s object class) explicitly listing the DNs of the members of that group. They are pretty straightforward to use and are widely supported by LDAP-enabled applications, but as the number of members in the group increases, so does the size of the entry and the cost of reading and writing that entry and updating group membership.

Traditional static groups also support nesting, but it’s not necessarily easy to distinguish between members that are users and those that are nested groups. The server has to maintain an internal cache so that it can handle nested memberships efficiently, and this requires both extra memory consumption and a processing overhead when the group is updated.

Dynamic groups, on the other hand, don’t have an explicit list of members, but instead are defined with one or more LDAP URLs whose criteria will be used for membership determinations. Because there is no member list to maintain, dynamic groups don’t have the same scalability issues as traditional static groups, and the number of members in the group isn’t a factor when attempting to determine whether a specific user is a member. However, dynamic groups aren’t as widely supported as traditional static groups among LDAP-enabled applications, there’s no way to directly add or remove members in a dynamic group (at least, not without altering the entries in a way that causes them to match or stop matching the membership criteria, which varies on a group-by-group basis), and they don’t support nesting.

Inverted static groups provide a way to explicitly manage group membership like with traditional static groups, but with the scalability of dynamic groups. Rather than storing the membership as a list of DNs in the group entry itself, each user entry has a list of the DNs of the inverted static groups in which they’re a member (in the ds-member-of-inverted-static-group-dn operational attribute). This means that the number of members doesn’t affect the performance of many group-related operations, like adding a new member to the group, removing an existing member from the group, or determining whether a user is a member of the group.

The only way in which the size of the group does impact performance is if you want to retrieve the entire list of members for the group (which you can do by performing a subtree search to find entries whose isMemberOf attribute has a value that matches the DN of the target group). While this is slower than simply retrieving a traditional static group entry and reading its list of member DNs, it's not really an analogous comparison for a couple of key reasons:

  • Retrieving the list of member DNs from a traditional static group only gives you the DNs of the member entries. That isn’t enough if you need the values of any other attributes from the member entries.
  • Retrieving the list of member DNs from a traditional static group doesn’t work well if that group includes one or more nested groups. There’s no good way to tell which of the member DNs reference users and which represent nested groups, and the member DN list won’t include members of the nested groups.

As such, the best way to retrieve a list of all members of a traditional static group is also to perform a subtree search that targets the isMemberOf attribute, and it should be at least as fast to do that for an inverted static group as it is for a traditional static group.
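As a sketch of what that membership search might look like (the hostname, base DN, and group DN below are illustrative placeholders; the long-form arguments follow the style of the server's bundled ldapsearch tool):

```shell
# Find all members of a group (traditional or inverted static) by
# searching on the isMemberOf virtual attribute. The hostname, base DN,
# group DN, and requested attributes are illustrative placeholders.
ldapsearch --hostname ds.example.com --port 636 --useSSL \
     --baseDN "ou=People,dc=example,dc=com" \
     "(isMemberOf=cn=Engineering,ou=Groups,dc=example,dc=com)" \
     uid cn mail
```

Because this returns the member entries themselves (with whatever attributes you request), it also sidesteps the two problems described above.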

The other key difference between inverted static groups and traditional static groups lies in the way that we handle nested membership. As previously noted, traditional static groups can include both user DNs and group DNs in their membership attribute, and there's not a good way to distinguish between them. Inverted static groups distinguish between user members and nested groups. Rather than adding a nested group to the inverted static group as a regular member, you need to add the DN of the nested group to the ds-nested-group-dn attribute of the inverted static group entry. This makes it possible to distinguish between user entries and nested groups, and it allows the server to handle nesting for these types of groups without a separate cache or expensive processing.
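To illustrate the relationship, here is a hypothetical LDIF sketch of an inverted static group with one nested group, and a user who belongs to it. The two attribute names are as described above, but the object class name and all of the DNs are illustrative assumptions rather than the server's actual schema:

```shell
# Write a hypothetical example of the inverted static group structure to
# a file. The ds-inverted-static-group object class name is an assumption;
# the two ds-* attributes are as described in the text above.
cat <<'EOF' > inverted-group-example.ldif
# The group entry holds nested group DNs, but no regular member list.
dn: cn=Engineering,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: ds-inverted-static-group
cn: Engineering
ds-nested-group-dn: cn=QA,ou=Groups,dc=example,dc=com

# Each member's entry lists the groups it belongs to.
dn: uid=jdoe,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
uid: jdoe
cn: John Doe
sn: Doe
ds-member-of-inverted-static-group-dn: cn=Engineering,ou=Groups,dc=example,dc=com
EOF
```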

The main disadvantage that inverted static groups have in comparison to traditional static groups is that because they are a new feature, existing applications don’t directly support them. If an application only cares about making group membership determinations, makes those determinations using the isMemberOf attribute, and doesn’t need to alter group membership, then it should work just as well with inverted static groups as it does with traditional static groups. However, if it does need to alter group membership, or if it doesn’t support using the isMemberOf attribute, then that’s a bigger hurdle to overcome. To help with that, we’re including a “Traditional Static Group Support for Inverted Static Groups” plugin that can be used to allow clients to interact with inverted static groups in some of the same ways they might try to interact with traditional static groups. This includes:

  • The plugin will intercept attempts to modify the group entry to add or remove member or uniqueMember values, and instead make the corresponding updates to the ds-member-of-inverted-static-group-dn attribute in the target user entries.
  • The plugin can generate a virtual member or uniqueMember attribute for the group entry. It can do this in a few different ways, which may have different performance characteristics:

    • It can do it in a way that works for compare operations or equality search operations that target the membership attribute but don’t actually attempt to retrieve the membership list. This is the most efficient way to determine if a traditional static group has a specific DN in its list of members, and it should be about as fast to make this determination for an inverted static group as it is for a traditional static group.
    • It can do it in a way that attempts to populate the attribute with a list of all of the direct members of the group (excluding nested members). The performance of this does depend on the number of direct members in the group.
    • It can do it in a way that attempts to populate the attribute with a list of all of the direct and nested members of the group. The performance of this depends both on the number of direct members in the group as well as the types and sizes of the nested groups.

Support for the Amazon S3 Service

The data stored in the Directory Server is often absolutely critical to the organizations that use it, so it’s vital to have a good backup strategy, and that must include some kind of off-site mechanism. Amazon’s S3 (Simple Storage Service) is a popular cloud-based mechanism that is often used for this kind of purpose, and in the 10.0 release, we’re introducing a couple of ways to have the Directory Server take advantage of it. In particular, you can now easily use it as off-site storage for LDIF exports and log files. We also include a new amazon-s3-client tool that can be used to interact with the S3 service from the command line.

Post-LDIF-Export Task Processors

We’ve introduced a new API in the server that can be used to have the server perform additional processing after successfully exporting data to an LDIF file as part of an administrative task (including those created by a recurring task). You can use the Server SDK to create custom post-LDIF-export task processors that do whatever you want, but we’re including an “Upload to S3” implementation that can copy the resulting export file (which will ideally have already been compressed and encrypted during the export process) to a specified S3 bucket. This processor includes retention support, so you can have it automatically remove previous export files created more than a specified length of time ago, or you can have it keep a specified number of the newest files in the bucket.

The export-ldif command-line tool now has a new --postExportProcessor argument that you can use to indicate which processors should be invoked after the export file has been successfully written, and you can also specify which processors to use when creating the tasks programmatically (for example, using the task support in the UnboundID LDAP SDK For Java) or by simply adding an appropriately formatted entry to the server's tasks backend. We've also updated the configuration for the LDIF export recurring task to include a new post-ldif-export-task-processor property to specify which processor(s) should be invoked for LDIF exports created by that recurring task.
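For example, an invocation might look something like the following. The --postExportProcessor argument is as described above, but the processor name, connection details, and backend ID are illustrative assumptions:

```shell
# Run export-ldif as an administrative task and invoke a post-LDIF-export
# task processor once the export completes. The processor name "Upload to
# S3" is an assumption; use whatever name the processor has in your
# configuration. Connection details and backend ID are placeholders.
export-ldif --task \
     --hostname ds.example.com --port 636 --useSSL \
     --bindDN "cn=Directory Manager" \
     --backendID userRoot \
     --ldifFile ldif/userRoot-export.ldif \
     --compress --encrypt \
     --postExportProcessor "Upload to S3"
```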

Note that the post-LDIF-export task processor functionality is only available for LDIF exports invoked as administrative tasks (including recurring tasks), and not for those created by running the export-ldif tool in its offline, standalone mode. This is because post-LDIF-export task processors may need access to a variety of server components, and it's difficult to ensure that all necessary components would be available outside of the running server process.

The Upload to S3 Log File Rotation Listener

The Directory Server already had an API for performing custom processing whenever a log file is rotated out of service, including copying the file to an alternative location on the server filesystem or invoking the summarize-access-log tool on it. In the 10.0 release, we’re including a new “Upload to S3” log file rotation listener that can be used to upload the rotated log file to a specified S3 bucket. This is available for all of the following types of log files:

  • Access
  • Error
  • Audit
  • Data recovery
  • HTTP operations
  • Sync
  • Debug
  • Periodic stats

Although there are obvious benefits to copying all of these types of log files to an external service, I want to specifically call out the importance of having off-site backups for the data recovery log. The data recovery log is a specialized type of audit log that keeps track of all write operations processed by the server in a form that allows them to be easily replayed or reverted should the need arise. The data recovery log can be used as a kind of incremental backup mechanism for keeping track of changes made since the most recent backup or LDIF export. In a worst-case scenario in which all server instances are lost and you need to start over from scratch, you can restore the most recent backup or import the most recent LDIF export, and then replay any additional changes from the data recovery log that were made after the backup or export was created.

As with the Upload to S3 post-LDIF-export task processor, the new log file rotation listener also includes retention support so that you can choose to keep either a specified number of previous log files, or those uploaded less than a specified length of time in the past.
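A configuration sketch for attaching the new listener to a logger might look roughly like the following. The dsconfig subcommand pattern is the server's usual one, but the listener type and property names shown here are illustrative assumptions; dsconfig's interactive mode will show the exact names for your server version:

```shell
# Hypothetical sketch: create an Upload to S3 log file rotation listener
# and attach it to the file-based access logger. The listener type and
# property names are assumptions, not the documented configuration.
dsconfig create-log-file-rotation-listener \
     --listener-name "Upload Access Logs to S3" \
     --type upload-to-s3 \
     --set enabled:true \
     --set s3-bucket-name:example-ds-logs
dsconfig set-log-publisher-prop \
     --publisher-name "File-Based Access Logger" \
     --set "rotation-listener:Upload Access Logs to S3"
```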

The amazon-s3-client Command-Line Tool

The new amazon-s3-client tool allows you to interact with the S3 service from the command line, including in shell scripts or batch files. It supports the following types of operations:

  • List the existing buckets in the S3 environment
  • Create a new bucket
  • Remove an existing bucket (optionally removing the files that it contains)
  • List the files in a specified bucket
  • Upload a file to a specified bucket
  • Download a specified file from a bucket
  • Download one or more of the newest files from a specified bucket, based on the number of files to download, the age of files to download, or files created after a specified time
  • Remove a file from a bucket

This allows you to perform a number of functions, including manually uploading additional files that the server doesn't support uploading automatically, or downloading files for use in bootstrapping new instances. It can generate output as either human-readable text or machine-parsable JSON.
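As a rough sketch of how a few of the operations listed above might be invoked from a script. The subcommand and argument names here are assumptions rather than the tool's documented usage; run amazon-s3-client --help (or its interactive mode) for the actual syntax:

```shell
# Hypothetical invocations of the new amazon-s3-client tool. Subcommand
# and argument names are illustrative assumptions; bucket and file names
# are placeholders.
amazon-s3-client list-buckets
amazon-s3-client upload-file --bucket example-ds-backups \
     --file ldif/userRoot-export.ldif.gz.enc
amazon-s3-client download-newest-files --bucket example-ds-backups \
     --count 1 --path /restore
```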

Authentication Support in the Directory REST API

The Directory REST API allows you to submit requests and retrieve data from the server using a JSON-formatted HTTP-based API. It’s always had support for all of the typical operations needed for interacting with the data, like:

  • Creating new entries
  • Updating existing entries
  • Removing existing entries
  • Retrieving individual entries
  • Searching for all entries matching a given set of criteria

Within the last few releases, we’ve also introduced support for a wide variety of request controls, and also certain extended operations. But one of the big gaps between what the server offered over the Directory REST API versus what you could get via LDAP was in its support for authentication. The Directory REST API has always supported authorizing individual requests using either HTTP basic authorization or OAuth 2 bearer tokens, but it didn’t really provide any good way to authenticate clients and verify client credentials. And if you wanted to authorize requests with stronger authentication than just a DN and password, you had to have an external service configured for issuing OAuth tokens.

This is being addressed in the 10.0 release with a new /authenticate endpoint, which currently supports the following authentication methods:

  • password — Username or bind DN and a static password
  • passwordPlusTOTP — Username or bind DN, a static password, and a time-based one-time password (TOTP)
  • passwordPlusDeliveredOTP — Username or bind DN, a static password, and a one-time password delivered through some out-of-band mechanism like email or SMS
  • passwordPlusYubiKeyOTP — Username or bind DN, a static password, and a one-time password generated by a YubiKey device
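A request to the new endpoint might look something like the following sketch. The /authenticate endpoint and the passwordPlusTOTP method name are as described above, but the URL path prefix and the JSON field names are assumptions; consult the Directory REST API reference for the precise request format:

```shell
# Hypothetical sketch of authenticating via the Directory REST API's new
# /authenticate endpoint. The URL path and JSON field names are
# assumptions; the credentials are placeholders.
curl --request POST "https://ds.example.com:1443/directory/v1/authenticate" \
     --header "Content-Type: application/json" \
     --data '{
       "authenticationMethod": "passwordPlusTOTP",
       "username": "jdoe",
       "password": "example-static-password",
       "totpPassword": "123456"
     }'
```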

We’re also adding other new endpoints in support of these mechanisms, including:

  • One for generating a TOTP secret, storing it in the user’s entry, and returning it to the client so that it can be imported into an app (like Authy or Google Authenticator) for generating time-based one-time passwords for use with the passwordPlusTOTP authentication method.
  • One for revoking a TOTP secret so that it can no longer be used to generate time-based one-time passwords that will be accepted by the passwordPlusTOTP authentication method.
  • One for generating a one-time password and delivering it to the user through some out-of-band mechanism so that it can be used to authenticate with the passwordPlusDeliveredOTP authentication method.
  • One for registering a YubiKey device with a user’s entry so that it can be used to authenticate with the passwordPlusYubiKeyOTP method.
  • One for deregistering a YubiKey device from a user’s entry so that it can no longer be used to authenticate with the passwordPlusYubiKeyOTP method.

If the authentication is successful, the response may include the following content:

  • An access token that can be used to authorize subsequent requests as the user via the Bearer authorization method.
  • An optional set of attributes from the authenticated user’s entry.
  • If applicable, the length of time until the user’s password expires.
  • A flag that indicates whether the user is required to choose a new password before they will be allowed to do anything else.
  • An optional array of JSON-formatted response controls.

Generated Access Tokens in the Directory Server

We’ve added support for a new “generate access token” request control that can be included in a bind request to indicate that if the bind succeeds, the server should return a corresponding response control with an access token that can be used to authenticate subsequent connections via the OAUTHBEARER SASL mechanism. While this control is used behind the scenes in the course of implementing the new authentication support in the Directory REST API, it can also be very useful in certain LDAP-only contexts.

For example, this ability may be especially useful in cases where you want to authenticate a client with a mechanism that relies on single-use credentials, like the UNBOUNDID-TOTP, UNBOUNDID-DELIVERED-OTP, or UNBOUNDID-YUBIKEY-OTP SASL mechanisms. In such cases, the credentials can only be used once, which means you can’t use them to authenticate multiple connections (for example, as part of a connection pool), or to re-establish a connection if the initial one becomes invalidated.

Shared Database Cache for Local DB Backends

You can configure the Directory Server with multiple local DB backends. You should do this if you want to have multiple naming contexts for user data, and you can also do it for different portions of the same hierarchy if you want to maintain them separately for some reason (e.g., to have them replicated differently, as in an entry-balanced configuration where some of the DIT needs to be replicated everywhere while the entry-balanced portion needs to be replicated only to a subset of servers).

Previously, each local DB backend had its own separate database cache, which had to be sized independently. This gives you the greatest degree of control over caching for each of the backends, which may be particularly important if you don’t have enough memory to hold everything based on the current caching configuration, but it can also be a hassle in some deployments. And if you don’t keep track of how much you’ve allocated to each backend, you could potentially oversubscribe the available memory.

In the 10.0 release, we’re adding the ability to share the same cache across all local DB backends. To do this, set the use-shared-database-cache-across-all-local-db-backends global configuration property to true, and set the shared-local-db-backend-database-cache-percent property to the percentage of JVM memory to allocate to the cache. Note that this doesn’t apply to either the LDAP changelog or the replication database, both of which intentionally use very small caches because their sequential access patterns don’t really require caching for good performance.
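Putting the two properties named above together, enabling the shared cache might look like this (the dsconfig subcommand follows the server's usual pattern, and the 25 percent figure is just an example value):

```shell
# Enable a single database cache shared across all local DB backends,
# sized at 25 percent of the JVM heap. The percentage is an example;
# choose a value appropriate for your deployment.
dsconfig set-global-configuration-prop \
     --set use-shared-database-cache-across-all-local-db-backends:true \
     --set shared-local-db-backend-database-cache-percent:25
```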

Re-Encoding Passwords on Scheme Configuration Changes

The Directory Server has always had support for declaring certain password storage schemes to be deprecated as a way of transparently migrating passwords from one scheme to another. If a user successfully authenticates in a manner that provides the server with access to the clear-text password (which includes a number of SASL mechanisms in addition to regular LDAP simple binds), and their password is currently encoded with a scheme that is configured as deprecated, then the server will automatically re-encode the password with the currently configured default scheme.

Deprecated password storage schemes can only be used to migrate users from one scheme to another, but there may be legitimate reasons to want to re-encode a user’s password without changing the scheme. For example, several schemes use multiple rounds of encoding to make it more expensive for attackers to attempt to crack passwords, and you may want to have passwords re-encoded if you change the number of rounds that the server uses.

In the 10.0 release, we’re adding a new re-encode-passwords-on-scheme-config-change property to the password policy configuration. If this property is set to true, and a client authenticates in a manner that provides the server with access to their clear-text password, and that password is currently encoded with different settings than are currently configured for the associated scheme, then the server will automatically re-encode the password using the scheme’s current settings. This functionality is available for the following schemes:

  • AES256 — If there is a change in the definition used to encrypt passwords.
  • ARGON2, ARGON2D, ARGON2I, ARGON2ID — If there is a change in the iteration count, parallelism factor, memory usage, salt length, or derived key length.
  • BCRYPT — If there is a change in the cost factor.
  • PBKDF2 — If there is a change in the digest algorithm, iteration count, salt length, or derived key length.
  • SCRYPT — If there is a change in the CPU/memory cost factor exponent, the block size, or the parallelization parameter.
  • SSHA, SSHA256, SSHA384, SSHA512 — If there is a change in the salt length.

It is also possible to enable this functionality for custom password storage schemes created using the Server SDK by overriding some new methods added to the API.
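Enabling the property in a password policy might look like the following (the dsconfig subcommand follows the server's usual pattern, and the policy name is whatever policy applies in your deployment):

```shell
# Enable automatic password re-encoding after a change to the associated
# storage scheme's configuration. The policy name is a common default and
# may differ in your deployment.
dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set re-encode-passwords-on-scheme-config-change:true
```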

Separate Request Handlers for Each LDAP Client Connection

When the Directory Server accepts a new connection from an LDAP client, it hands that connection off to a request handler, which will be responsible for reading requests from that client and adding them to the work queue so that they can be picked up for processing by worker threads. By default, the server automatically determines the number of request handlers to use (although you can explicitly configure the number if you want), and a single request handler may be responsible for reading requests from several connections.

The vast majority of the time, having request handlers responsible for multiple connections isn’t an issue. Just about the only thing that a request handler has to do is wait for requests to arrive, read them, decode them, and add them to the work queue so that they will be processed. While it’s doing this for a request from one client, any other clients that are sending requests at the same time will need to wait, but because the entire process of reading, decoding, and enqueueing a request is very fast, it’s rare that processing for one client will have any noticeable impact on the request handler’s ability to process other clients. However, there are a couple of instances in which this might not be the case:

  • If a client is submitting a very large number of asynchronous requests at the same time.
  • If the server needs to perform TLS negotiation on the connection to set up a secure communication channel, and some part of that negotiation is taking a long time to complete.

In practice, neither of these is typically an issue. Even if there are a ton of asynchronous requests submitted all at once, it’s still pretty unlikely that it will cause any noticeable starvation in the server’s ability to read requests from other clients. And the individual steps of performing TLS negotiation also tend to be processed very quickly. However, there have been exceptional cases in which these kinds of processing may have had a noticeable impact. In such instances, the best way to deal with that possibility is to have the server use a separate request handler for each connection that is established so that the process of reading, decoding, and enqueueing requests from one client cannot impact the server’s ability to do the same for other clients that may be sending requests at exactly the same time. In the unlikely event that the need arises in your environment, you can now use the request-handler-per-connection property in the connection handler configuration to cause the server to allocate a new request handler for every client connection that is established.
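Setting the new property might look like this (the dsconfig subcommand follows the server's usual pattern, and the handler name is the common default for the LDAP connection handler):

```shell
# Allocate a dedicated request handler for every LDAP client connection,
# so that slow reads on one connection cannot delay others. The handler
# name may differ in your deployment.
dsconfig set-connection-handler-prop \
     --handler-name "LDAP Connection Handler" \
     --set request-handler-per-connection:true
```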

Updates to the encrypt-file Tool

As its name implies, the encrypt-file tool can be used to encrypt (or decrypt) the contents of files, either using a definition from the server’s encryption settings database or with an explicitly-provided passphrase. In many cases, if a file used by the server (or a command-line tool) is encrypted with an encryption settings definition, the server can detect that and automatically decrypt it as it’s reading the contents of that file.

If an administrator wishes to retire an encryption settings definition for some reason, and especially if they want to remove it from the encryption settings database, they need to ensure that it is no longer needed to decrypt any existing encrypted data. In the past, some customers have overlooked encrypted files when ensuring that a definition is no longer needed. To help avoid that, we’ve added two new arguments to the encrypt-file tool:

  • --find-encrypted-files {path} — This argument can be used to search for encrypted files below the specified path on the server filesystem. By default, it will find files that have been encrypted with any encryption settings definition or with a passphrase, but you can also provide the --encryption-settings-id argument to indicate that you only want it to find files encrypted with the specified definition.
  • --re-encrypt {path} — This argument can be used to re-encrypt an existing encrypted file, using either a different encryption settings definition or a new passphrase.

If you do plan to retire an existing encryption settings definition, then you should use the encrypt-file --find-encrypted-files command to identify any files that have been encrypted with that definition, and then use encrypt-file --re-encrypt to re-encrypt them with a different definition so that the server can still access them even if you remove the retired definition from the encryption settings database.
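The retirement workflow described above might look roughly like this. The --find-encrypted-files, --encryption-settings-id, and --re-encrypt arguments are as described above, but the paths and definition IDs are placeholders, and using --encryption-settings-id to select the new definition for re-encryption is an assumption:

```shell
# Find all files under the server root encrypted with the definition
# being retired. The path and definition IDs are placeholders.
encrypt-file --find-encrypted-files /opt/ping/ds \
     --encryption-settings-id old-definition-id

# Re-encrypt one of the identified files with a different definition.
# Using --encryption-settings-id to pick the new definition here is an
# assumption; check the tool's --help output for the actual argument.
encrypt-file --re-encrypt /opt/ping/ds/ldif/export.ldif.enc \
     --encryption-settings-id new-definition-id
```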