Ping Identity Directory Server 9.3.0.1 (and others) addressing a security issue

We have just released several new versions of the Ping Identity Directory Server to address a security issue that we discovered. The issue is in a component of the server that is only enabled when setting up the Delegated Admin product, and customers who are using that product are strongly advised to upgrade. Customers who are not using the Delegated Admin product should not be affected by the issue.

The following new versions are now available and contain the fix for this issue:

  • 9.3.0.1
  • 9.2.0.2
  • 9.1.0.3
  • 8.3.0.9

The security issue was discovered internally, and we have no reason to believe that it has been independently discovered or exploited. Ping is not prepared to provide additional information about the vulnerability at this time, but we expect to release a security advisory with additional details in the future.

Ping Identity Directory Server 9.3.0.0

We have just released version 9.3.0.0 of the Ping Identity Directory Server. See the release notes for a complete overview of changes, but here’s my summary:

Summary of New Features and Enhancements

  • Added support for data encryption restrictions [more information]
  • Added the ability to freeze the encryption settings database [more information]
  • Added the ability to set up the server with a pre-existing encryption settings database [more information]
  • Added support for monitoring the availability of the encryption settings database [more information]
  • Added other data encryption improvements [more information]
  • Added an aggregate pass-through authentication handler [more information]
  • Added a PingOne pass-through authentication handler [more information]
  • Improved dsreplication performance in topologies with a large number of servers and/or high network latency between some of the servers
  • Added more options for allowing pre-encoded passwords [more information]
  • Added the ability to use the proxied authorization v1 or v2 request control in password modify extended requests
  • Updated the Directory REST API to provide support for the password modify, get password quality requirements, and suggest password extended operation types [more information]
  • Added a disallowed characters password validator [more information]
  • Added a UTF-8 password validator [more information]
  • Added the ability to include ds-pwp-modifiable-state-json in add operations [more information]
  • Added the ability to automatically apply changes to TLS protocol and cipher suite configuration [more information]
  • Added new account-authenticated and account-deleted account status notification types [more information]
  • Added configuration properties for managing the configuration archive [more information]
  • Added a new replication-missing-changes-risk alert type [more information]
  • Added a new replication-not-purging-obsolete-replicas alert type [more information]
  • Added a new check-replication-domains tool that can list known replication domains and identify any that may be obsolete
  • Added a --showPartialBacklog argument to dsreplication status
  • Added the ability to synchronize Boolean-valued attributes to the PingOne sync destination
  • Updated replace-certificate to support obtaining new certificate information from PEM files
  • Added support for encrypted PKCS #8 private keys [more information]
  • Added caching support to the PKCS #11 key manager provider [more information]
  • Added the ability to specify the start and end times for the range of log messages to include in collect-support-data archives when invoking the tool as an administrative task

Summary of Bug Fixes

  • Fixed an issue when modifying ds-pwp-modifiable-state-json with other attributes [more information]
  • Fixed an issue that could prevent the server from properly building indexes with very long names
  • Fixed an issue that could cause the server to omit matching entries when configuring compact-common-parent-dn values [more information]
  • Fixed an issue in which failover may not work properly after updating a Synchronization Server instance with manage-profile replace-profile
  • Fixed an issue with replace modifications for attributes containing variants with options [more information]
  • Improved support for passwords containing characters with multiple encodings [more information]
  • Fixed an issue that could prevent obsolete replicas from being automatically purged in certain circumstances
  • Fixed an issue that could prevent the servers in a replication topology from being able to select the authoritative server for maintaining information in the topology registry
  • Increased timeouts used by the dsreplication tool to reduce the chance that they would be incorrectly encountered when interacting with a large replication topology
  • Fixed an issue that caused the Directory REST API to always include the permissive modify request control when updating entries
  • Improved access control behavior for the password policy state extended operation [more information]
  • Fixed an issue in which subtree searches based at the server’s root DSE could omit entries from backends with base DNs subordinate to those of other backends
  • Fixed an issue that could prevent a user from using grace logins to change their own password in a modify request that contained the proxied authorization request control
  • Fixed an issue with substring filters containing logically empty substrings [more information]
  • Improved error handling when using automatic authentication with client certificates [more information]
  • Improved Directory Proxy Server error handling when using the rebind authorization method [more information]
  • Fixed an issue that prevented including permit-export-reversible-passwords privilege in the default set of root privileges
  • Fixed an issue that could cause manage-profile setup to complain about being unable to find certain utilities used by the collect-support-data tool
  • Fixed an error that could occur if an archived configuration file was removed in the middle of an attempt to back up the config backend
  • Fixed an issue that prevented the Directory Proxy Server from logging search result entry messages for entries passed through from a backend server
  • Fixed an issue when synchronizing account state from Active Directory when using modifies-as-creates
  • Suppressed servlet information in HTTP error messages by default
  • Restricted the RSA key size for inter-server certificates to a maximum of 3072 bits
  • Fixed an issue with base DN case sensitivity when enabling replication with a static topology
  • Changed the result code used when rejecting an attempt to change a password that is within the minimum age from 49 (invalidCredentials) to 53 (unwillingToPerform)
  • Fixed an issue that could cause the server to return multiple password validation details response controls in the response to a password modify extended request
  • Fixed an issue that could prevent the server from returning a generated password for a password modify extended operation processed with the no-operation request control

Encryption Settings Database Improvements

We have made a set of changes to the way that the server manages and interacts with the encryption settings database. When used in combination, these changes allow for a separation of duties between those responsible for managing the Directory Server itself and those responsible for managing data encryption, which can limit the access that server administrators have to encrypted data. Even in environments where this strict separation of duties is not required, these changes can still substantially improve the overall security of the directory environment.

These changes come in the form of four key enhancements:

  • We have introduced the ability to impose restrictions on the ways that administrators can interact with encrypted data. These restrictions are defined in the encryption settings database itself and can prevent administrators from doing any or all of the following:

    • Disabling data encryption
    • Changing the cipher stream provider used to protect the encryption settings database
    • Exporting the encryption settings definitions
    • Exporting or backing up backend data in unencrypted form
    • Exporting or backing up backend data in a form that is encrypted with a supplied passphrase rather than an encryption settings definition
    • Using the encrypt-file tool to decrypt files

  • We have added the ability to freeze the encryption settings database with a passphrase. If the encryption settings database has been frozen, then no changes may be made to it, including creating, importing, or removing definitions, changing the preferred definition, or altering the set of data encryption restrictions. If any changes are needed, then the encryption settings database may be un-frozen using the same passphrase that was used to freeze it.

  • We have added the ability to set up the server with a pre-existing encryption settings database. This database should already have the desired set of definitions, and it may be configured with data encryption restrictions and frozen so that no changes will be allowed. This database will also be tied to a specific cipher stream provider used to protect its contents. To set up the server with a pre-existing encryption settings database, you should use manage-profile setup with a server profile that has the following characteristics:

    • The setup-arguments.txt file must contain the new --encryptDataWithPreExistingEncryptionSettingsDatabase argument.
    • The configuration changes needed to set up the associated cipher stream provider must be included in dsconfig batch files contained in the pre-setup-dsconfig directory.
    • The encryption settings database itself, along with any metadata files that the cipher stream provider might need, should be included in the appropriate locations below the server-root/pre-setup directory.

  • We have added support for a new monitor provider that can periodically verify that the server can read the contents of the encryption settings database without relying on any caching that the cipher stream provider may normally use to improve performance and reliability. Not only does this offer better overall monitoring of the server’s health, but it can also be used to take action if the information that the cipher stream provider needs to unlock the encryption settings database is no longer available. For example, if the cipher stream provider relies on an external key or secret (e.g., from AWS Key Management Service, AWS Secrets Manager, Azure Key Vault, CyberArk Conjur, or HashiCorp Vault) to unlock the encryption settings database, and that key or secret has been intentionally removed or revoked, then it may be desirable to limit the server’s ability to access encrypted data, like entering lockdown mode or shutting down entirely.

When all of these are combined, and the encryption settings database is protected by a cipher stream provider that relies on an external service, server administrators can maintain the server with data encryption enabled while having substantially restricted access to the data that it contains. Although it’s not yet available as an option, this could be useful in environments like PingOne Advanced Services, where Ping personnel manage a Directory Server deployment on behalf of an organization with limited access to that organization’s data.

Other Data Encryption Improvements

We have made a number of other improvements to the server’s support for data encryption. These include:

  • We added a new --key-factory-iteration-count argument to the encryption-settings create command to make it possible to specify the PBKDF2 iteration count to use for the new definition. When creating a new encryption settings definition that is backed by a randomly generated secret, the server will now default to using an iteration count of 600,000 in accordance with the latest OWASP guidelines.
  • We updated most cipher stream providers to make it possible to explicitly specify the PBKDF2 iteration count that they use when deriving the key used to protect the encryption settings database. They also now use a higher default iteration count of 600,000 when creating a new database.
  • We updated the file-based cipher stream provider to support using a separate metadata file with additional details about the encryption that it uses to protect the encryption settings database. When setting up a new instance of the server in a manner that uses this cipher stream provider, a metadata file will be automatically generated to allow it to use stronger encryption to protect the database than has been used in the past.
  • We have improved the strength of the encryption used to protect encryption settings exports, backups, LDIF exports, encrypted log files, and other types of file encryption. We now prefer 256-bit AES over 128-bit AES when it’s available, and we use a higher PBKDF2 iteration count to protect the key.
  • We have improved the performance of file encryption and decryption operations performed by the server in the common case in which the encryption uses an encryption settings definition rather than a separate passphrase. Although the server (or standalone tools that need to access encryption settings definitions) may take a little longer to start up with the stronger encryption settings that are in place, the use of caching should dramatically reduce the cost of subsequent encryption and decryption operations.
  • We updated the encryption settings backend to provide additional information about the definitions contained in the encryption settings database. The base entry for the backend will also indicate whether the encryption settings database is frozen and/or configured with any data encryption restrictions.
  • We updated setup so that if it is configured to generate a tools.pin file with the default password that command-line tools may use to authenticate to the server, that password will now be automatically encrypted if data encryption is enabled.
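To put the new 600,000-iteration default in perspective, here is a minimal Python sketch of deriving a key with PBKDF2 at that iteration count. This is purely illustrative and is not the server’s actual key-derivation code; the function name and parameters are my own.

```python
import hashlib
import os

# The server now defaults to 600,000 PBKDF2 iterations,
# in line with current OWASP password storage guidance.
OWASP_PBKDF2_ITERATIONS = 600_000

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a 256-bit key (e.g., for AES-256) from a passphrase."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        passphrase.encode("utf-8"),
        salt,
        OWASP_PBKDF2_ITERATIONS,
        dklen=32,  # 256 bits
    )

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
print(len(key))  # 32
```

The high iteration count makes each derivation deliberately expensive, which is why the caching improvements described above matter: the key only needs to be derived once, after which encryption and decryption operations are cheap.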

New Pass-Through Authentication Handlers

The Directory Server has long supported the ability to use pass-through authentication, in which a bind attempt against a local entry may ultimately be forwarded to an external service to verify the credentials, optionally setting those credentials if they are confirmed to be correct.

Initially, pass-through authentication was only supported for other LDAP servers. Later, we introduced support for passing through authentication attempts to PingOne. In the 9.0 release, we introduced a new pluggable pass-through authentication plugin that made it possible to use the Server SDK to develop custom authentication handlers that could be used to target other types of services.

Previously, only a single instance of a pass-through authentication plugin (of any type) could be active in the server at once. In the 9.3 release, we are introducing a new aggregate pass-through authentication handler for use with the pluggable pass-through authentication plugin. This aggregate handler allows you to configure pass-through authentication so that it can support multiple external services, of the same or different types. You can now configure each of the pass-through authentication handlers to indicate which types of bind requests they support, and you can optionally attempt to use multiple handlers for the same bind operation under certain circumstances.

We have also added a new PingOne pass-through authentication handler that works in conjunction with the pluggable pass-through authentication plugin to allow passing through bind requests to PingOne. This handler offers essentially the same functionality as the PingOne pass-through authentication plugin, but the new handler can be used with the aggregate pass-through authentication handler so that pass-through authentication attempts can use PingOne in addition to other services.
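To illustrate the idea, here is a rough Python sketch of how an aggregate handler can route a bind attempt across multiple subordinate handlers. Everything here is hypothetical: the real handlers are server configuration objects (and optionally Server SDK extensions), not Python classes, and all names and credentials below are invented for the example.

```python
# Conceptual sketch only -- not actual server APIs.
class PassThroughHandler:
    def __init__(self, name, supports, authenticate):
        self.name = name
        self.supports = supports          # predicate: does this handler apply to the bind?
        self.authenticate = authenticate  # returns True/False, or None if the service is unavailable

class AggregateHandler:
    """Routes a bind attempt across multiple subordinate handlers."""

    def __init__(self, handlers):
        self.handlers = handlers

    def bind(self, dn, password):
        for handler in self.handlers:
            if not handler.supports(dn):
                continue
            result = handler.authenticate(dn, password)
            if result is not None:  # fall through only if the service was unavailable
                return result
        return False

ldap_handler = PassThroughHandler(
    "external-ldap",
    supports=lambda dn: dn.endswith("ou=People,dc=example,dc=com"),
    authenticate=lambda dn, pw: pw == "ldap-secret",
)
pingone_handler = PassThroughHandler(
    "pingone",
    supports=lambda dn: True,
    authenticate=lambda dn, pw: pw == "pingone-secret",
)

aggregate = AggregateHandler([ldap_handler, pingone_handler])
print(aggregate.bind("uid=jdoe,ou=People,dc=example,dc=com", "ldap-secret"))  # True
```

The key point is the per-handler criteria: each subordinate handler declares which binds it applies to, and the aggregate tries them in order, optionally falling through to another handler when a service cannot give a definitive answer.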

More Options for Allowing Pre-Encoded Passwords

By default, the server does not allow clients to set pre-encoded passwords. You can change this behavior via the allow-pre-encoded-passwords configuration property in the password policy, but we strongly discourage doing so because the server can’t perform any validation for pre-encoded passwords. A client could use this to set a password that doesn’t meet the server’s password strength requirements. And because most password storage schemes have many different ways of encoding the same password (which is important to protect against attacks using precomputed password dictionaries), this could also be exploited to allow a user to continue using the same password indefinitely, in spite of password expiration or other related settings.

The biggest risk in allowing pre-encoded passwords lies in self password changes rather than administrative password resets. Previously, administrators could override the server’s prohibition against pre-encoded passwords in one of two ways:

  • If they are authorized to use the password update behavior request control, then they can use it to allow or reject pre-encoded passwords on a per-operation basis.
  • If they have the bypass-pw-policy privilege, then they will be allowed to set pre-encoded passwords for other users (and do other things that the password policy configuration may prevent by default).

Previously, the allow-pre-encoded-passwords configuration property only offered two values: false (the default setting, in which the server would not allow clients to set pre-encoded passwords) or true (in which the server would allow any client to set pre-encoded passwords). In the 9.3 release, we are making it possible to use three additional values for this property:

  • add-only — Indicates that the server will allow administrators to include pre-encoded passwords in add requests, but will continue to reject pre-encoded passwords for both self password changes and administrative password resets.
  • admin-reset-only — Indicates that the server will allow administrators to perform an administrative password reset with a pre-encoded password, but will continue to reject pre-encoded passwords in add requests and for self password changes.
  • add-and-admin-reset-only — Indicates that the server will allow administrators to include pre-encoded passwords in add requests and when performing administrative password resets, but will continue to reject pre-encoded passwords in self password changes.

In cases where an application has a legitimate need to set pre-encoded passwords for users, but it’s not feasible to update the application to use the password update behavior request control or to give its account the bypass-pw-policy privilege, these new options may make it possible for that application to set pre-encoded passwords for users while still preventing them in self password changes.
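The reason the server can’t validate pre-encoded passwords is that it only ever sees the encoded value, never the plaintext. A pre-encoded password is easy to recognize because it arrives already wrapped in a storage scheme prefix. Here is a minimal sketch of that distinction (the scheme names and values are just examples, not an exhaustive list of what the server supports):

```python
import re

# An LDAP password value like "{SSHA256}..." is pre-encoded: the server
# receives only the digest, so it cannot check length, character classes,
# dictionary words, or history against the plaintext.
SCHEME_PREFIX = re.compile(r"^\{[A-Za-z0-9_-]+\}")

def is_pre_encoded(value: str) -> bool:
    return SCHEME_PREFIX.match(value) is not None

print(is_pre_encoded("{SSHA256}AbCdEfGh12345678=="))      # True
print(is_pre_encoded("correct horse battery staple"))     # False
```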

Directory REST API Support for More Password Operations

We have updated the Directory REST API to add support for equivalents to the following LDAP extended operations:

  • The standard password modify extended operation, which makes it possible to perform a self password change or an administrative password reset. Although you could previously change a user’s password by updating the password attribute, this operation offers a number of advantages, including:

    • You don’t need to know which attribute is used to store passwords in the target user’s entry.
    • There’s a dedicated field for supplying the user’s current password, which the password policy may require for self password changes.
    • You can optionally omit a new password and have the server automatically generate a new one and return it in the response.
    • It can be used in conjunction with a password reset token to allow a user to perform a self password change in cases where their account may be otherwise unusable.
  • The proprietary get password quality requirements extended operation, which can be used to obtain a list of the requirements that new passwords will be required to satisfy, in both machine-parsable and human-readable forms.
  • The proprietary generate password extended operation (which is called “suggest passwords” in the Directory REST API), which can be used to cause the server to generate suggested new passwords for a user.

New Password Validators

We have added a new disallowed characters password validator that makes it possible to reject passwords that contain any of a specified set of characters. You can define characters that are not allowed to appear anywhere in a password, as well as characters that are not allowed to appear at the beginning and/or the end of a password.

We have also added a new UTF-8 password validator that can be used to ensure that only passwords provided as valid UTF-8 strings will be allowed. You can optionally choose to limit passwords to only ASCII characters or to allow non-ASCII characters, and you can also specify which classes of characters (e.g., letters, numbers, punctuation, symbols, or spaces) should be allowed.
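To make the behavior of these two validators concrete, here is a small Python sketch of the underlying checks. The specific disallowed character sets are invented for the example, and the real validators are configured through server properties rather than function arguments:

```python
# Sketch of the disallowed characters validator: reject passwords that
# contain globally disallowed characters, or that begin/end with
# characters disallowed in those positions. (Character sets are examples.)
def validate_password(password,
                      disallowed=frozenset("@#$"),
                      disallowed_initial=frozenset("-"),
                      disallowed_final=frozenset(".")):
    if any(c in disallowed for c in password):
        return False
    if password and password[0] in disallowed_initial:
        return False
    if password and password[-1] in disallowed_final:
        return False
    return True

# Sketch of the UTF-8 validator: accept only byte sequences that decode
# as valid UTF-8 (with an optional ASCII-only restriction).
def is_valid_utf8(raw: bytes, ascii_only=False) -> bool:
    try:
        decoded = raw.decode("utf-8")
    except UnicodeDecodeError:
        return False
    return decoded.isascii() if ascii_only else True

print(validate_password("str0ng-Passw0rd"))   # True
print(validate_password("bad@password"))      # False
print(is_valid_utf8("contraseña".encode("utf-8")))  # True
```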

Support for ds-pwp-modifiable-state-json in Add Operations

If you enable the Modifiable Password Policy State Plugin in the server, then you can use the ds-pwp-modifiable-state-json operational attribute to set certain aspects of a user’s password policy state, including:

  • Whether the account is administratively disabled
  • Whether the account is locked as a result of too many failed authentication attempts
  • Whether the account is in a “must change password” state
  • The password changed time
  • The account activation time
  • The account expiration time
  • The password expiration warned time

Previously, the ds-pwp-modifiable-state-json attribute could only be used in a modify operation to alter the password policy state for an existing user. As of the 9.3 release, we now also allow it to be used in an add operation to specify state information for the account being created.
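As a rough illustration, an add request could now carry the JSON-formatted state alongside the rest of the entry. The JSON field names below are only illustrative of the kinds of state properties listed above; consult the server documentation for the exact supported field names and value formats.

```python
import json

# Hypothetical field names -- check the server docs for the real ones.
state = {
    "account-is-disabled": False,
    "must-change-password": True,
    "account-expiration-time": "20240101000000.000Z",
}
value = json.dumps(state)

# An LDIF-style add that includes the password policy state for the
# new account, rather than requiring a follow-up modify operation.
ldif = "\n".join([
    "dn: uid=jdoe,ou=People,dc=example,dc=com",
    "objectClass: inetOrgPerson",
    "uid: jdoe",
    "cn: John Doe",
    "sn: Doe",
    "userPassword: initialPassword",
    f"ds-pwp-modifiable-state-json: {value}",
])
print(ldif)
```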

Fix Database Ordering With compact-common-parent-dn Values

The compact-common-parent-dn configuration property can be used to reduce the amount of disk space consumed by the database by tokenizing portions of entry DNs that lots of entries have in common. For example, in a database in which most of the entries are below “ou=People,dc=example,dc=com”, defining that as a compact-common-parent-dn value could reduce the size needed to store DNs of entries below that by up to 26 bytes. And this compaction can happen not only in the encoded representation of the entry, but also in an internal index that we use to map the DNs of entries to the identifier of the record that contains the data for that entry, which can double the space savings.
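The idea behind the compaction can be sketched in a few lines of Python. The server’s actual on-disk encoding is internal and more involved than this; the sketch only shows where the “up to 26 bytes” figure for the example parent DN comes from.

```python
# Illustrative only: replace a configured common parent DN with a
# one-byte token in the stored representation of each subordinate DN.
COMMON_PARENT = "ou=People,dc=example,dc=com"
TOKEN = "\x01"  # stand-in for the real encoding's compact token

def compact_dn(dn: str) -> str:
    suffix = "," + COMMON_PARENT
    if dn.endswith(suffix):
        return dn[: -len(suffix)] + "," + TOKEN
    return dn

dn = "uid=jdoe,ou=People,dc=example,dc=com"
compacted = compact_dn(dn)
print(len(dn) - len(compacted))  # 26 bytes saved for this parent DN
```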

Unfortunately, we discovered a bug in the way that the server maintained that DN-to-ID database when custom compact-common-parent-dn values were specified. This bug could only appear under very specific circumstances, including:

  • When one or more compact-common-parent-dn values were specified that are two or more levels below the base DN for the backend.
  • When processing an unindexed search in which the base DN for the search is below the base DN for the backend, but above one or more of the compact-common-parent-dn values.

In such cases, the search could have incorrectly omitted entries from the search results that were below those compact-common-parent-dn values. This was due to an issue with the way that we ordered records in that DN-to-ID database.

We have fixed the problem in the 9.3 release. However, because the issue relates to the order in which records were stored in the database, if you are affected by this problem, then you will need to export the contents of the database to LDIF and re-import it. This isn’t something that you need to do unless you have configured compact-common-parent-dn values that are at least two levels below the base DN for the backend.

A Fix for Modifying ds-pwp-modifiable-state-json With Other Attributes

When we initially introduced support for the ds-pwp-modifiable-state-json operational attribute, we did not allow altering it in conjunction with any other attribute in the user’s entry. We also included an optimization in the plugin that handles that attribute so that if the requested change did not actually alter the user’s password policy state (e.g., because the new value only attempted to set state properties to values that they already had), the plugin would tell the server to skip much of the normal processing for that modify operation.

We have since updated the server to allow you to alter other attributes (except the password attribute) in the same request as one that modifies ds-pwp-modifiable-state-json. However, in doing so, we neglected to update the plugin so that it no longer skipped the remainder of the core modify operation processing if the ds-pwp-modifiable-state-json update did not result in any password policy state changes but there were still other, unrelated changes that should be applied. In such cases, the server could have failed to apply changes to those other attributes. This has been fixed, and other modifications in the request will still be processed even if a change to ds-pwp-modifiable-state-json does not alter the user’s password policy state.

Automatically Apply Configuration Changes to TLS Protocols and Cipher Suites

By default, the server automatically selects an appropriate set of TLS protocols and cipher suites that it will use for secure communication. This default configuration should provide a good level of security, avoiding options with known weaknesses or without support for forward secrecy, while still remaining compatible with virtually any client from the last fifteen years.

However, the server does allow you to explicitly configure the set of TLS protocols and/or cipher suites if you have a need to do so. Previously, making any such changes required you to either restart the affected connection handler or the entire server for them to actually take effect. As of the 9.3 release, these changes will now automatically take effect for the LDAP connection handler so that any new secure sessions that are negotiated after the change is made will use the updated settings.

Note that this is currently only supported for the LDAP connection handler. It is still necessary to perform a restart to make the change take effect for other types of connection handlers (like the HTTP or JMX handlers), or to make the change take effect for replication.
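As an aside, the effect of explicitly restricting protocols and cipher suites can be pictured with Python’s ssl module. This is only an analogy to what the connection handler does when you configure its TLS protocol and cipher suite properties, not the server’s own implementation:

```python
import ssl

# Restrict a TLS server context to TLS 1.2+ and forward-secret
# AES-GCM cipher suites. New sessions negotiated after this point
# use the updated settings -- which is the behavior the LDAP
# connection handler now applies automatically on config changes.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM")

print(ctx.minimum_version)
```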

New Account Status Notification Types

We have added a new account-authenticated account status notification type that can be used to generate an account status notification any time a user authenticates with a bind operation that matches the criteria contained in an account status notification handler’s account-authentication-notification-result-criteria configuration property.

We have added a new account-deleted account status notification type that can be used to generate an account status notification any time a user’s account is removed with a delete operation that matches the criteria contained in an account status notification handler’s account-deletion-notification-request-criteria configuration property.

Fix a Replace Issue for Modifies of Attributes With Options

We have fixed an issue that could cause the server to behave incorrectly when processing a replace modification for an attribute that has some variants with attribute options in the target entry. If the replace modification does not include any values for the target attribute, the server would have previously removed all variants of the attribute from the entry, including those with and without attribute options. It will now correctly only remove the variant without any attribute options.

Improved Support for Passwords Containing Characters With Multiple Encodings

Unicode is an international standard that defines the set of characters that computers are intended to support. This includes a wide range of characters encompassing most written languages on Earth: not only the core set of ASCII characters used in English, but also characters with diacritical marks used in many Latin-based languages, non-Latin characters like Chinese hanzi and Japanese kanji, and even emoji.

In some cases, Unicode supports multiple ways of encoding the same character. For example, the “ñ” (Latin small letter N with tilde) character can be represented in two different ways:

  • As the single Unicode character U+00F1
  • As a lowercase ASCII letter n followed by Unicode character U+0303, which is a special combining mark that basically means “put a tilde over the previous character”

Previously, when encoding passwords, the Directory Server would always encode them using the exact set of UTF-8 bytes provided by the client in the request used to set that password, and the server would always use the exact set of UTF-8 bytes included in a bind request as a way of attempting to verify whether the provided password was correct. This approach works just fine in the vast majority of cases, but it has the potential to fail in circumstances where the password contains characters that have multiple Unicode representations, and the encoding used in the bind request is different from the encoding used when the password was originally set.

As of the 9.3 release, we have improved our support for authenticating with passwords that contain characters with multiple Unicode representations. If a password storage scheme indicates that a given plaintext password could not have been used to create the encoded stored password, but if the provided plaintext password contains one or more non-ASCII characters, then we will check to see if that password may have alternative encodings that could have been used to generate the stored password.
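The “ñ” example above is easy to reproduce with Python’s unicodedata module. The server’s actual matching logic lives inside its password storage schemes, so this only demonstrates the equivalence that it now accounts for, not how the server implements it:

```python
import unicodedata

precomposed = "\u00f1"   # ñ as the single code point U+00F1
decomposed = "n\u0303"   # n followed by the combining tilde U+0303

# The two forms render identically but are different strings, so a
# byte-for-byte comparison of the UTF-8 encodings would fail.
print(precomposed == decomposed)  # False

# Unicode normalization maps both to a single canonical form.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
```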

Improved Access Control Behavior for the Password Policy State Extended Operation

The password policy state extended operation provides the ability to retrieve and update a wide range of properties that are part of a user’s password policy state. In the 9.3 release, we have improved the way that it operates to avoid a common pitfall that has caused issues in the past for clients that are subject to access control restrictions.

Previously, whenever the extended operation handler retrieved the entry for the target user, it did so only under the authority of the authenticated user. If that user was subject to access control restrictions and didn’t have permission to retrieve some or all of the operational attributes used to maintain password policy state information in a user’s entry, then the retrieved version of the entry would not contain those attributes, even if they were present. When that entry was used to construct a view of the user’s password policy state, the result might differ from what the server would actually use when interacting with that account.

For example, if the target account has been locked as a result of too many failed authentication attempts, but the requester doesn’t have permission to see the attribute used to maintain information about those failed attempts, then the password policy state extended operation could report that the account was not locked even though it was. Note that this did not affect the server’s behavior when actually enforcing the account lockout, but it could still provide misleading or unexpected behavior for the client that issued the password policy state extended request.

As of the 9.3 release, we have changed the extended operation handler’s behavior to avoid this kind of problem. The server will still verify that the requester has permission to retrieve the target user’s account, but it will now re-retrieve that account using an internal root user that is not subject to access control restrictions. This ensures that the extended operation handler will have access to all of the operational attributes that are needed to construct an accurate representation of the user’s password policy state.

Note that this new behavior really only affects attempts to retrieve information about a user’s password policy state. It does not have any effect on attempts to update that state. If the client attempts to use the password policy state extended operation to make a change to a user’s password policy state, but the requester does not have permission to write to the necessary operational attribute(s) in the target user’s entry, then the update attempt will continue to fail.
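
As an illustration, the manage-account command-line tool (which uses the password policy state extended operation under the covers) can retrieve a user’s password policy state. This is a minimal sketch; the connection arguments and DNs are assumptions for the example:

```shell
# Retrieve all password policy state information for a user. The
# host name, bind DN, and target DN below are illustrative only.
bin/manage-account get-all \
     --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --bindDN uid=admin,dc=example,dc=com \
     --targetDN uid=jdoe,ou=People,dc=example,dc=com
```

With the 9.3 behavior, the reported state (including account lockout information) should match what the server actually enforces, even if the requester cannot read the underlying operational attributes directly.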

Configuration Properties for Managing the Configuration Archive

We have updated the config backend to add support for a few new configuration properties that can help better manage the configuration archive and reduce unnecessary bloat that it may cause. The new properties include:

  • maintain-config-archive — Indicates whether the server should maintain a configuration archive at all. The configuration archive is maintained by default, and disabling it will not remove existing archived configurations.
  • max-config-archive-size — Specifies the maximum number of archived configurations that the server should maintain. By default, this is unlimited. If a limit is set and the number of archived configurations exceeds it, then the oldest configurations will be removed to make room for newer ones.
  • insignificant-config-archive-base-dn — Specifies base DNs for configuration changes that should not be preserved in the configuration archive. If a configuration change affects only entries below one of these base DNs, then it will not be maintained separately in the configuration archive. By default, we have included a value of “cn=topology,cn=config” so that changes to entries in the topology registry are excluded from the configuration archive. Certain updates to the topology registry, like adding a new replica into the topology, may result in a large number of changes, which previously added substantial bloat to the configuration archive.
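
These properties can be managed with dsconfig. A sketch, assuming the config backend is identified by the backend ID “config” (verify the object and property names against your server’s dsconfig reference):

```shell
# Keep the configuration archive enabled, but cap it at 100 archived
# configurations. The backend ID "config" is an assumption.
dsconfig set-backend-prop \
     --backend-name config \
     --set maintain-config-archive:true \
     --set max-config-archive-size:100
```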

Improved Substring Filter Handling for Logically Empty Substrings

Substring search filters are not allowed to contain empty (zero-length) substrings, and if a client attempted a search with a substring filter containing an empty substring, the server properly rejected it. However, there were cases in which the server did not properly handle substring filters containing non-empty substrings that normalized to empty strings (for example, a substring filter targeting the telephoneNumber attribute in which one of the substrings contained only characters that are considered insignificant when matching telephone number values). The server incorrectly treated such a normalized-to-empty substring as matching anything rather than nothing, and in some cases, that could cause the search to return unexpected matches. This has been corrected, and substring filters with substrings that normalize to empty values will now correctly never match anything.

Improved Error Handling With Automatic Certificate-Based Authentication

By default, whenever a client presents its own certificate chain to the server during TLS negotiation and wants to use that certificate chain to authenticate, it needs to send a SASL EXTERNAL bind request to cause the server to perform the appropriate authentication processing. However, LDAP connection handlers offer an auto-authenticate-using-client-certificate configuration property that can cause them to attempt to automatically authenticate a client that presented its own certificate chain as soon as the TLS negotiation completes. Because this automatic authentication attempt happens without any explicit request from the client, there’s no way for the server to indicate whether it completed successfully.

Previously, if an automatic authentication attempt failed, the server would keep the connection alive, but in an unauthenticated state. This could yield unexpected behavior in applications that issued requests with the expectation that they had authenticated, only to find those requests rejected because of access control restrictions. As of the 9.3 release, the server will immediately terminate any client connection that presented its own certificate chain if auto-authenticate-using-client-certificate is set to true but the server is unable to successfully authenticate that client for some reason.
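
Automatic certificate-based authentication is enabled on a per-connection-handler basis. A minimal sketch (the handler name is an assumption):

```shell
# Automatically authenticate clients that present a certificate chain
# during TLS negotiation, without requiring a SASL EXTERNAL bind.
dsconfig set-connection-handler-prop \
     --handler-name "LDAPS Connection Handler" \
     --set auto-authenticate-using-client-certificate:true
```

With the 9.3 behavior, any client whose automatic authentication attempt fails will have its connection terminated immediately rather than left established in an unauthenticated state.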

Improved Rebind Error Handling in the Directory Proxy Server

The Directory Proxy Server allows you to set an authorization-method value of rebind, which may be useful in cases where the backend server doesn’t support the intermediate client or proxied authorization request controls (e.g., Active Directory). In such cases, the Directory Proxy Server will remember the credentials that the client used to authenticate, and it will re-bind with those credentials before sending a request that should be authorized as that user.

Previously, if the rebind attempt failed for any reason, the Directory Proxy Server would immediately report a failure for the request that should have been authorized as that user. We have updated this behavior so that if the failure suggests that the underlying connection may no longer be valid, the server may retry the attempt against a different server or on a newly established connection to the same server.

New Replication-Related Alert Types

We have defined a couple of new administrative alert types that the server can use to notify administrators of replication-related concerns:

  • replication-missing-changes-risk — This alert type will be used if the server has developed a replication backlog that is large enough that the server is at risk of missing changes if it can’t catch up.
  • replication-not-purging-obsolete-replicas — This alert type will be used when bringing replication online (most likely during server startup) if the replication-purge-obsolete-replicas configuration property is not set to true. This property is set to true by default as of the 9.2 release, so this primarily applies to older servers that have been updated. We strongly recommend enabling automatic purging of obsolete replicas to reduce unnecessary overhead in replication storage and network traffic, and this alert can be used to ensure that administrators are aware of this recommendation.

Encrypted PKCS #8 Private Keys

We have introduced support for encrypted PKCS #8 private key PEM files. This allows you to make use of private key files that don’t expose the key in the clear. Encrypted private keys require a password to access their contents.

It should be possible to use encrypted private key files anywhere that you can use an unencrypted private key file, including:

  • When importing a certificate chain with manage-certificates import
  • When exporting a private key with manage-certificates export-private-key
  • When setting up the server with a certificate chain and private key obtained from PEM files
  • When using replace-certificate to replace a listener or inter-server certificate with a certificate chain and private key obtained from PEM files
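
If you need to produce an encrypted PKCS #8 PEM file to use with these tools, one generic way is with OpenSSL (the key algorithm, file names, and passphrase below are illustrative):

```shell
# Generate a 2048-bit RSA key (openssl genpkey writes it in unencrypted
# PKCS #8 form), then re-encode it as an encrypted PKCS #8 PEM file
# protected with AES-256-CBC and a passphrase.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem
openssl pkcs8 -topk8 -in key.pem -out encrypted-key.pem \
     -v2 aes-256-cbc -passout pass:example-passphrase
```

The resulting file begins with a “-----BEGIN ENCRYPTED PRIVATE KEY-----” header, and tools that read it will need to be given the passphrase to access the key.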

PKCS #11 Caching

The PKCS #11 key manager provider can be used to allow the server to obtain a listener certificate chain from a PKCS #11 token, like a hardware security module (HSM). Whenever the server needs to negotiate a new TLS session with a client, it can access the PKCS #11 token to identify the listener certificate chain that should be used for that processing. This processing may require multiple accesses to the PKCS #11 token. In cases where the PKCS #11 token is remotely accessed over a network, and especially when there is a notable network latency involved in that access, this can have a significant impact on the performance of the TLS negotiation process.

In the 9.3 release, we have introduced a new pkcs11-max-cache-duration property in the PKCS #11 key manager provider configuration. By setting this to a nonzero value, the server can use a degree of caching to eliminate the need for some of the interaction with the HSM, which can dramatically reduce the number of requests that need to be made to the PKCS #11 token.

Note that the use of caching does carry a risk of incorrect or unexpected behavior if the contents of the PKCS #11 token are altered so that the cached results are no longer accurate. As such, if you decide to enable caching, we recommend temporarily disabling it when making changes to the contents of the PKCS #11 token, and then re-enabling it once the changes are complete.
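
Enabling the cache is a single configuration change. A sketch, assuming a PKCS #11 key manager provider named “PKCS11” (the provider name and duration value format are assumptions; check your version’s dsconfig reference):

```shell
# Allow the server to cache the results of PKCS #11 token interactions
# for up to five minutes.
dsconfig set-key-manager-provider-prop \
     --provider-name PKCS11 \
     --set "pkcs11-max-cache-duration:5 m"
```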

UnboundID LDAP SDK for Java 6.0.9

We have just released version 6.0.9 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository.

As announced in the previous release, the LDAP SDK source code is now maintained only at GitHub. The SourceForge repository is still available for its discussion forum, mailing lists, and release downloads, but the source code is no longer available there.

You can find the release notes for the 6.0.9 release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We made it possible to customize the set of result codes that the LDAP SDK uses to determine whether a connection may no longer be usable. Previously, we used a hard-coded set of result codes, and that is still the default, but you can now override that using the ResultCode.setConnectionNotUsableResultCodes method.
  • We added a new HTTPProxySocketFactory class that can be used to establish LDAP and LDAPS connections through an HTTP proxy server.
  • We added a new SOCKSProxySocketFactory class that can be used to establish LDAP and LDAPS connections through a SOCKSv4 or SOCKSv5 proxy server.
  • We updated the ldap-diff tool to add a --byteForByte argument that can be used to indicate that it should use a byte-for-byte comparison when determining whether two attribute values are equivalent rather than using a schema-aware comparison (which may ignore insignificant differences in some cases, like differences in capitalization or extra spaces). Previously, the tool always used byte-for-byte matching, but we decided to make it a configurable option, and we determined that it is better to use schema-aware comparison by default.
  • We fixed an issue in which a non-default channel binding type was not preserved when duplicating a GSSAPI bind request. We also added a GSSAPIBindRequest.getChannelBindingType method to retrieve the selected channel binding type for a GSSAPI bind request.
  • We added a ResultCode.getStandardName method that can be used to retrieve the name for the result code in a form that is used to reference it in standards documents. Note that this may not be available for result codes that are not defined in known specifications.
  • We added a mechanism for caching the derived secret keys used for passphrase-encrypted input and output streams so that it is no longer necessary to re-derive the same key each time it is used. This can dramatically improve performance when the same key is used multiple times.
  • We updated the StaticUtils.isLikelyDisplayableCharacter method to consider additional character types to be displayable, including modifier symbols, non-spacing marks, enclosing marks, and combining spacing marks.
  • We added a new StaticUtils.getCodePoints method that can be used to retrieve an array of the code points that comprise a given string.
  • We added a new StaticUtils.unicodeStringsAreEquivalent method that can be used to determine whether two strings represent an equivalent string of Unicode characters, even if they use different forms of Unicode normalization.
  • We added a new StaticUtils.utf8StringsAreEquivalent method that can be used to determine whether two byte arrays represent an equivalent UTF-8 string of Unicode characters, even if they use different forms of Unicode normalization.
  • We added a new StaticUtils.isValidUTF8WithNonASCIICharacters method that can be used to determine whether a given byte array represents a valid UTF-8 string that contains at least one non-ASCII character.
  • We updated the client-side support for the collect-support-data administrative task to make it possible to specify the start and end times for the set of log messages to include in the support data archive.
  • We updated the documentation so that the latest versions of draft-melnikov-sasl2 and draft-melnikov-scram-sha-512 are included in the set of LDAP-related specifications.

UnboundID LDAP SDK for Java 6.0.8

We have just released version 6.0.8 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository.

Note that this is the last release for which the LDAP SDK source code will be maintained in both the GitHub and SourceForge repositories. The LDAP SDK was originally hosted in a subversion repository at SourceForge, but we switched to GitHub as the primary repository a few years ago. We have been relying on GitHub’s support for accessing git repositories via subversion to synchronize changes to the legacy SourceForge repository, but that support is being discontinued. The SourceForge project will continue to remain available for the discussion forum, mailing lists, and release downloads, but up-to-date source code will only be available on GitHub.

You can find the release notes for the 6.0.8 release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We added a DN.getDNRelativeToBaseDN method that can be used to retrieve the portion of a DN that is relative to a given base DN (that is, the portion of a DN with the base DN stripped off). For example, if you provide it with a DN of “uid=test.user,ou=People,dc=example,dc=com” and a base DN of “dc=example,dc=com”, then the method will return “uid=test.user,ou=People”.
  • We added LDAPConnectionPool.getServerSet and LDAPThreadLocalConnectionPool.getServerSet methods that can be used to retrieve the server set that the connection pool uses to establish new connections for the pool.
  • We updated the Filter class to add alternative methods with shorter names for constructing search filters from their individual components. For example, as an alternative to calling the Filter.createANDFilter method for constructing an AND search filter, you can now use Filter.and, and as an alternative to calling Filter.createEqualityFilter, you can now use Filter.equals. The older versions with longer method names will remain available for backward compatibility.
  • We added support for encrypted PKCS #8 private keys, which require a password to access the private key. The PKCS8PrivateKey class now provides methods for creating the encrypted PEM representation of the key, and the PKCS8PEMFileReader class now has the ability to read encrypted PEM files. We also updated the manage-certificates tool so that the export-private-key and import-certificate subcommands now support encrypted private keys.
  • We updated PassphraseEncryptedOutputStream to use a higher key factory iteration count by default. When using the strongest available 256-bit AES encryption, it now follows the latest OWASP recommendation of 600,000 PBKDF2 iterations. You can still explicitly specify the iteration count programmatically when creating a new output stream if desired, and we have also added system properties that can override the default iteration count without any code changes.
  • We added a PassphraseEncryptedOutputStream constructor that allows you to provide a PassphraseEncryptedStreamHeader when creating a new instance of the output stream. This will reuse the secret key that was already derived for the provided stream header (although with a newly generated initialization vector), which can be significantly faster than deriving a new secret key from the same passphrase.
  • We added a new ObjectTrio utility class that can be useful in cases where you need to reference three typed objects as a single object (for example, if you want a method to be able to return three objects without needing to define a new class that encapsulates those objects). This complements the existing ObjectPair class that supports two typed objects.
  • We updated the documentation to include RFC 9371 in the set of LDAP-related specifications. This RFC formalizes the process for requesting a private enterprise number (PEN) to use as the base object identifier (OID) for your own definitions (e.g., for use in defining custom attribute types or object classes). The OID-related documentation has also been updated to provide a link to the IANA site that you can use to request an official base OID for yourself or your organization.

Ping Identity Directory Server 9.2.0.0

We have just released version 9.2.0.0 of the Ping Identity Directory Server. See the release notes for a complete overview of changes, but here’s my summary:

Potential Backward Compatibility Issues

Summary of New Features and Enhancements for All Products

  • Added support for Java 17 [more information]
  • Added support for accessing external services through an HTTP proxy server [more information]
  • Added a Prometheus monitoring servlet extension [more information]
  • Added support for authenticating to Amazon AWS using an IRSA role [more information]
  • Added support for generating digital signatures with encryption settings definitions [more information]
  • Updated replace-certificate when running in interactive mode so that it can re-prompt for a certificate file if the initial file existed but did not contain valid certificate data

Summary of New Features and Enhancements for the Directory Server

  • Improved support for data security auditors [more information]
  • Added new secure, connectioncriteria, and requestcriteria access control keywords [more information]
  • Added support for defining resource limits for unauthenticated clients [more information]
  • Added Argon2i, Argon2d, and Argon2id password storage schemes to supplement the existing Argon2 scheme [more information]
  • Changed the default value of the replication-purge-obsolete-replicas global configuration property from false to true
  • Updated migrate-ldap-schema to support migrating attribute type definitions from Active Directory in spite of their non-standards-compliant format
  • Improved the usage text for the dsreplication enable command

Summary of New Features and Enhancements for the Directory Proxy Server

  • Exposed the maximum-attributes-per-add-request and maximum-modifications-per-modify-request properties in the global configuration

Summary of New Features and Enhancements for the Synchronization Server

  • Added support for synchronizing to SCIMv2 destinations [more information]
  • Added a sync-pipe-view tool that can display information about the set of sync pipes configured in the server
  • Added sync pipe monitor attributes related to account password policy state when synchronizing to a Ping Identity Directory Server

Summary of Bug Fixes

  • Fixed an issue that could cause replication protocol messages to be dropped, potentially resulting in paused replication
  • Fixed an issue in which a timeout could prevent adding servers to a large topology
  • Fixed an issue in which an unexpected error could cause a replication server to stop accepting new connections
  • Fixed an issue that prevented resource limits from being set properly for the topology administrator
  • Fixed an issue in which the dsreplication tool incorrectly handled DNs in a case-sensitive manner
  • Fixed an issue that could cause dsreplication enable to fail if there were any topology administrators without passwords
  • Fixed an issue that could cause a configured idle timeout to interfere with replica initialization
  • Fixed an issue that could prevent the server from generating an administrative alert when clearing an alarm that triggered an alert when it was originally raised
  • Fixed an issue that could cause degraded performance to a PingOne sync destination
  • Fixed an issue that could prevent users from changing their own passwords with the password modify extended operation if their account was in a “must change password” state and the request passed through the Directory Proxy Server
  • Fixed an issue in which dsconfig would always attempt to use simple authentication when applying changes to servers in a group, regardless of the type of authentication used when launching dsconfig
  • Fixed an issue that could cause certain kinds of Directory REST API requests to fail if they included the uniqueness request control
  • Fixed an issue in which an unclean shutdown could cause the server to create exploded index databases
  • Disabled by default the index cursor entry limit, which could cause certain types of indexed searches to be incorrectly considered unindexed
  • Fixed an issue that could adversely affect performance in servers with a large number of virtual static groups

Removed Support for Incremental Backups

We have removed support for incremental backups. This feature was deprecated in the 8.3.0.0 release after repeated issues that could interfere with the ability to properly restore those backups. These issues do not affect full backups, which continue to be supported.

As an alternative to full or incremental backups, we recommend using LDIF exports, which are more useful and more portable than backups. They are also typically very compressible and can be taken more frequently than backups without consuming as much disk space. Further, the extract-data-recovery-log-changes tool can be used in conjunction with either LDIF exports or backups to replay changes recorded in the data recovery log since the time the LDIF export or backup was created.

Updated the Groovy Language Version

To facilitate support for Java 17, we have updated the library that provides support for the Groovy scripting language from version 2.x to 3.x. While this should largely preserve backward compatibility, some existing Groovy scripted extensions may not continue to work without changes.

The only compatibility issue that we have noticed is that the 3.x version of the Groovy support library cannot parse Java import statements that are broken up across multiple lines, like:

import java.util.concurrent.atomic.
            AtomicLong;

This was properly handled in Groovy 2.x, but the Groovy 3.x library does not appear to support this. To address the problem, you will need to update the script to put the entire import statement on a single line, like:

import java.util.concurrent.atomic.AtomicLong;

If you have any Groovy scripted extensions, we strongly recommend verifying them in a test environment before attempting to update production servers.

Java 17 Support

We have updated the server to support running on Java 17, the latest LTS release of the Java platform. Java versions 8 and 11 also continue to be supported.

Note that Java 17 support is limited to the Directory Server, Directory Proxy Server, and Synchronization Server. Java 17 is not supported for the Metrics Engine, although it continues to be supported on Java 8 and 11.

The best way to enable Java 17 support is to have the JAVA_HOME environment variable set to the path of the Java 17 installation when installing the server using either the setup or manage-profile setup commands. It’s more complicated to switch to Java 17 for an existing instance that was originally set up on Java 8 or 11 because there are changes in the set of JVM arguments that should be used with Java 17. As such, if you want to switch to Java 17, then we recommend installing new instances and migrating the data to them.
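
For example, a fresh installation can be pointed at a Java 17 JVM simply by exporting JAVA_HOME before running setup (the JDK path below is an assumption; use the path of your own Java 17 installation):

```shell
# Point the installer at a Java 17 JDK so that the new instance is
# configured with Java 17-appropriate JVM arguments from the start.
export JAVA_HOME=/usr/lib/jvm/jdk-17
./setup
```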

By default, installations using Java 17 will use the garbage first garbage collection algorithm (G1GC), which is the same default as Java 11. We also support using the Z garbage collector (ZGC) on Java 17, although we have observed that it tends to consume a significantly greater amount of memory than the garbage first algorithm. While ZGC can exhibit better garbage collection performance than G1GC, if you wish to use it, we recommend configuring a smaller JVM heap size and thoroughly testing the server under load and at scale before enabling it in production environments.

HTTP Forward Proxy Support

We have updated several server components to provide support for issuing outbound HTTP and HTTPS requests through a proxy server. Updated components include:

  • The Amazon Key Manager cipher stream provider
  • The Amazon Secrets Manager cipher stream provider, passphrase provider, and password storage scheme
  • The Azure Key Vault cipher stream provider, passphrase provider, and password storage scheme
  • The PingOne pass-through authentication plugin
  • The PingOne sync source and destination
  • The Pwned Passwords password validator
  • The SCIMv1 sync destination
  • The SCIMv2 sync destination
  • The Twilio alert handler and OTP delivery mechanism
  • The UNBOUNDID-YUBIKEY-OTP SASL mechanism handler

To enable HTTP forward proxy support for any of these components, first, create an HTTP proxy external server configuration object with a command like:

dsconfig create-external-server \
     --server-name "Example HTTP Proxy Server" \
     --type http-proxy \
     --set server-host-name:proxy.example.com \
     --set server-port:3128

You can also optionally use the basic-authentication-username and basic-authentication-passphrase-provider properties if the HTTP proxy server requires authentication.

Once the HTTP proxy external server has been created, update the target component to reference that server. For example:

dsconfig set-password-validator-prop \
     --validator-name "Pwned Passwords" \
     --set "http-proxy-external-server:Example HTTP Proxy Server"

Prometheus Monitoring Servlet Extension

We have added support for a new HTTP servlet extension that can be used to expose certain server metrics in a format that can be consumed by Prometheus or other monitoring systems that support the OpenMetrics data format. To enable it, add the servlet extension to the desired HTTP connection handlers and either restart the server or disable and re-enable those connection handlers. For example:

dsconfig set-connection-handler-prop \
     --handler-name "HTTPS Connection Handler" \
     --add "http-servlet-extension:Prometheus Monitoring" \
     --set enabled:false

dsconfig set-connection-handler-prop \
     --handler-name "HTTPS Connection Handler" \
     --set enabled:true

By default, the server is preconfigured to expose a variety of metrics. You can customize this to remove metrics that you don’t care about, or to add additional metrics that we didn’t include by default. Any single-valued numeric monitor attribute can be exposed as a metric. You can also customize the set of labels included in metric definitions, on both a server-wide and per-metric basis.
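
Once the servlet extension is enabled, you can verify that metrics are being exposed by requesting its endpoint directly. This sketch assumes the server’s HTTPS port and a /metrics context path; check the servlet extension’s configured context path in your deployment:

```shell
# Fetch the OpenMetrics-formatted output that Prometheus would scrape.
# The host, port, and path are illustrative assumptions.
curl -k https://ds.example.com:8443/metrics
```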

Improved AWS Authentication Support

The server offers a number of components that can interact with Amazon Web Services, including:

  • A cipher stream provider that can use the Key Management Service
  • A cipher stream provider, passphrase provider, and password storage scheme that can use the Secrets Manager

In the past, you could authenticate to AWS using either a secret access key or an IAM role that is associated with the EC2 instance or EKS container in which the server is running. In the 9.2.0.0 release, we’re introducing support for authenticating with an IRSA (IAM role for service accounts) role. We are also adding support for a default credentials provider chain that can attempt to automatically identify an appropriate authentication method for cases in which the server is running in an AWS environment, or in cases where information about a secret access key is available through either environment variables or Java system properties.

To use the new authentication methods, first create an AWS external server that specifies the desired value for the authentication-method property. Then, reference that external server when creating the desired component. For example:

dsconfig create-external-server \
     --server-name AWS \
     --type amazon-aws \
     --set authentication-method:irsa-role \
     --set aws-region-name:us-east-2

dsconfig create-cipher-stream-provider \
     --provider-name KMS \
     --type amazon-key-management-service \
     --set enabled:true \
     --set aws-external-server:AWS \
     --set kms-encryption-key-arn:this-is-the-key-arn

Data Security Auditor Improvements

The server offers a data security auditor framework that can be used to iterate across entries in a number of backends and examine them for potential security-related issues or items of note. In the past, we’ve offered auditors that can do the following:

  • Identify entries that define access control rules
  • Identify accounts that have been administratively disabled
  • Identify accounts that have passwords that are expired, are about to expire, or that have not been changed in longer than a given length of time
  • Identify accounts that are locked as a result of too many authentication failures, because it’s been too long since the user last authenticated, or because they did not choose a new password in a timely manner after an administrative reset
  • Identify accounts with multiple passwords
  • Identify accounts with privileges assigned by real or virtual attributes
  • Identify accounts with passwords encoded with a variety of weak password storage schemes, including 3DES, AES, BASE64, BLOWFISH, CLEAR, MD5, RC4, and the default variant of the CRYPT scheme

In the 9.2 release, we’ve introduced support for several new types of data security auditors, including those that can do the following:

  • Identify accounts with account usability errors, warnings, and/or notices
  • Identify accounts that have an activation time in the future, an expiration time in the past, or an expiration time in the near future
  • Identify accounts that have passwords encoded with a deprecated password storage scheme
  • Identify accounts that have not authenticated in longer than a specified period of time, or that have not ever authenticated
  • Identify accounts that reference a nonexistent password policy
  • Identify entries that match a given search filter

We have also updated the Server SDK so that you can create your own data security auditors to use whatever logic you want.

In addition, we have updated the locked account data security auditor so that it can identify accounts that are locked as a result of attempting to authenticate with a password that fails password validator criteria, and we have updated the weakly encoded password data security auditor so that the following schemes are also considered weak: SMD5, SHA, SSHA, and the MD5 variant of the CRYPT scheme.

Finally, we’ve introduced support for a new audit data security recurring task that you can use to have the server automatically perform an audit on a regular basis.
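
For an on-demand audit, the server also provides the audit-data-security command-line tool, which invokes the enabled data security auditors and writes their findings to a report directory. A minimal sketch (see the tool’s --help output for connection and filtering arguments):

```shell
# Run all enabled data security auditors and write the results
# beneath the server's reports directory.
bin/audit-data-security
```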

New Access Control Keywords

We have introduced three new access control keywords.

The secure bind rule can be used to make access control decisions based on whether the client is using a secure connection (e.g., LDAPS or LDAP with StartTLS) to communicate with the server. Using a bind rule of secure="true" indicates that the ACI only applies to clients communicating with the server over a secure connection, while secure="false" indicates that the ACI only applies to clients communicating with the server over an insecure connection.

The connectioncriteria bind rule can be used to make access control decisions based on whether the client connection matches a specified set of connection criteria. The value of the bind rule can be either the name or the DN of the desired connection criteria.

The requestcriteria target can be used to make access control decisions based on whether the operation matches a specified set of request criteria. The value of the target can be either the name or the DN of the desired request criteria.

Note that because the Server SDK provides support for creating custom types of connection and request criteria, the introduction of these last two keywords makes it possible to define custom access control logic if the server’s existing access control framework doesn’t support what you want.
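To illustrate the secure bind rule, here is a hypothetical ACI (the attribute list, ACL name, and permissions are placeholders, and the value is wrapped across lines for readability):

```
aci: (targetattr="*")
  (version 3.0; acl "Allow self reads only over secure connections";
   allow (read,search,compare)
   userdn="ldap:///self" and secure="true";)
```

A connectioncriteria or requestcriteria clause would slot into the same position as the secure clause, referencing the name or DN of the desired criteria object.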

Resource Limits for Unauthenticated Clients

The server’s global configuration includes the following configuration properties that can be used to set default resource limits that will apply to all users that don’t have specific limits set for them:

  • size-limit — Specifies the maximum number of entries that can be returned for a search operation
  • time-limit — Specifies the maximum length of time the server should spend processing a search operation
  • idle-time-limit — Specifies the maximum length of time that a client connection may remain established without any operations in progress
  • lookthrough-limit — Specifies the maximum number of entries that the server can examine in the course of processing a search operation

These properties set global defaults for all clients, including those that aren’t authenticated. However, you may want to set lower limits for unauthenticated connections than for authenticated users. To make that easier to accomplish, we have added the following new properties that apply specifically to unauthenticated clients:

  • unauthenticated-size-limit
  • unauthenticated-time-limit
  • unauthenticated-idle-time-limit
  • unauthenticated-lookthrough-limit

By default, these properties don’t have any values, which will cause the server to inherit the value from the property that doesn’t specifically apply to unauthenticated clients (for example, if unauthenticated-size-limit is not set, then the server will use the size-limit value as the default for both authenticated and unauthenticated clients).
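For example, to cap unauthenticated clients at tighter limits while leaving the general defaults alone, a dsconfig change along these lines could be used (the values and duration syntax are illustrative):

```
dsconfig set-global-configuration-prop \
     --set unauthenticated-size-limit:100 \
     --set "unauthenticated-time-limit:30 seconds" \
     --set "unauthenticated-idle-time-limit:2 minutes" \
     --set unauthenticated-lookthrough-limit:1000
```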

Improved Signature Generation

The server supports cryptographically signing log messages, backups, and LDIF exports. Previously, those signatures were always generated with MAC keys shared among the servers in the topology. These keys are difficult to back up and restore, and the resulting signatures cannot be verified outside of the topology.

In the 9.2.0.0 release, we have updated the server so that it now generates digital signatures with encryption settings definitions. The server’s preferred definition will be used by default, but you can specify an alternative definition with the signing-encryption-settings-id property in the crypto manager configuration.

If digital signing is enabled but no encryption settings definitions are available, then a legacy topology key will continue to be used as a fallback.
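If you want signatures generated with a specific definition rather than the preferred one, the change might look like the following (the definition ID shown is a placeholder):

```
dsconfig set-crypto-manager-prop \
     --set signing-encryption-settings-id:0A1B2C3D4E5F
```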

Additional Argon2 Password Storage Schemes

The Argon2 key derivation function is a popular mechanism for encoding passwords, especially after it was selected as the winner of the Password Hashing Competition in 2015. We introduced support for an ARGON2 password storage scheme in the 8.0.0.0 release.

There are actually three variants of the Argon2 algorithm:

  • Argon2i — Provides better protection against side-channel attacks. The existing ARGON2 scheme uses this variant.
  • Argon2d — Provides better protection against GPU-accelerated attacks.
  • Argon2id — Mixes the strategies used in the Argon2i and Argon2d variants to provide a degree of protection against both types of attacks.

In the 9.2.0.0 release, we are introducing three new password storage schemes, ARGON2I, ARGON2D, and ARGON2ID, which provide explicit support for each of these variants.

Note that if you want to use the Argon2 algorithm to encode passwords, and you need to run in an environment that contains pre-9.2.0.0 servers, then you should use the existing ARGON2 scheme. The newer schemes should only be used in environments containing only servers running version 9.2.0.0 or later.
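As a sketch, enabling the new Argon2id variant and making it the default for a password policy might look like the following (the policy name is illustrative, tuning properties are left at their defaults, and exact property names may vary by version):

```
dsconfig set-password-storage-scheme-prop \
     --scheme-name ARGON2ID \
     --set enabled:true

dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set default-password-storage-scheme:ARGON2ID
```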

SCIMv2 Sync Destination

The Synchronization Server has included support for SCIMv1 servers as a sync destination since the 3.2.2.0 release. This support relies on an XML-based configuration to map LDAP source attributes to SCIM destination attributes.

In the 9.2.0.0 release, we’re introducing support for SCIMv2 servers as a sync destination. For this destination, all of the necessary configuration is held in the server’s configuration framework, so there is no need for a separate file with mapping information. This implementation introduces several new types of configurable components, including:

  • HTTP authorization methods, which provide support for a variety of mechanisms for authenticating to HTTP-based services, including basic authentication and OAuth 2 bearer tokens (and in the latter case, you may configure either a static bearer token or have the server obtain one from an OAuth authorization server using the client_credentials grant type).
  • A SCIM2 external server, which provides the SCIM service URL, authorization method, and other settings to use when interacting with the SCIMv2 service.
  • SCIM2 attribute mappings, which describe how to generate SCIM attributes from the LDAP representation of a source entry.
  • SCIM2 endpoint mappings, which associate a set of attribute mappings with an endpoint in the SCIMv2 server.
  • The SCIM2 sync destination, which associates the SCIM2 external server and the SCIM2 endpoint mappings.

The documentation describes the process for configuring the Synchronization Server to synchronize changes to a SCIMv2 server. In addition, the config/sample-dsconfig-batch-files/configure-synchronization-to-scim2.dsconfig file provides an example that illustrates a set of changes that can be used to synchronize inetOrgPerson LDAP entries to urn:ietf:params:scim:schemas:core:2.0:User SCIMv2 entries.
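At a very high level, the new components fit together something like the following abbreviated sketch. All of the type names and properties here are illustrative placeholders; the sample batch file shipped with the server shows a real, working set of changes.

```
# Hypothetical sketch; see config/sample-dsconfig-batch-files/
# configure-synchronization-to-scim2.dsconfig for a working example.
dsconfig create-external-server \
     --server-name scim2-dest \
     --type scim2 \
     --set base-url:https://scim.example.com/scim/v2

dsconfig create-sync-destination \
     --destination-name scim2-users \
     --type scim2 \
     --set scim2-external-server:scim2-dest
```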

UnboundID LDAP SDK for Java 6.0.7

We have just released version 6.0.7 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes included in this version:

  • We fixed a bug in the SearchResultEntry.equals method that could prevent a SearchResultEntry from matching other types of Entry objects.
  • We fixed a bug in the Entry.applyModifications method that could cause it to fail with a NOT_ALLOWED_ON_RDN result if the provided entry was missing one or more of the attribute values used in its RDN.
  • We fixed a bug in the argument parser’s support for mutually dependent arguments with a set containing more than two arguments. Previously, the constraint would have been satisfied if at least two of the arguments were provided, rather than requiring all of them to be provided.
  • We added JSONObject methods for retrieving fields by name using case-insensitive matching (by default, JSON field names are treated in a case-sensitive manner). Because it is possible that a JSON object will have multiple fields with the same name when using case-insensitive matching, there are a few options for indicating how such conflicts should be handled, including only returning the first match, returning a map with all matching fields, or throwing an exception if there are multiple matches.
  • We updated the set of LDAP-related specifications to include the latest version of the draft-schmaus-kitten-sasl-ht proposal.

UnboundID LDAP SDK for Java 6.0.6

We have just released version 6.0.6 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes included in this version:

General Updates

  • We fixed an issue that could cause request failures when closing a connection operating in asynchronous mode with outstanding operations.
  • We fixed an issue that could interfere with the ability to get a default SSLContext on Java 17 when running in FIPS 140-2-compliant mode.
  • We updated LDAPConnectionOptions to add support for a new system property that can enable certificate hostname verification by default without any code changes.
  • We updated the LDAP command-line tool framework to add a new --verifyCertificateHostnames argument to enable hostname verification when performing TLS negotiation.
  • We improved the class-level Javadoc documentation for the SSLUtil class to provide a better overview of TLS protocol versions, TLS cipher suites, key managers, trust managers, and certificate hostname verification, and to provide better examples that illustrate best practices for establishing secure connections.
  • We fixed an issue in the JNDI compatibility support for controls, as well as extended requests and responses. Even though the implementation was based on the JNDI documentation, it appears that at least OpenJDK implementations do not abide by that documentation. The LDAP SDK is now compatible with the observed behavior rather than the documentation, although a system property can be used to revert to the former behavior.
  • We updated the SearchRequest class to add constructors that allow you to provide the search base DN with a DN object (as an alternative to existing constructors that allow you to specify it as a String).
  • We fixed an issue in the command-line tool framework in which an Error (for example, OutOfMemoryError) could cause the tool to report a NullPointerException rather than information about the underlying error.
  • We fixed an issue in the IA5 argument value validator that could allow it to accept argument values with non-ASCII characters.
  • We fixed an issue in the DNS hostname argument value validator that could prevent it from properly validating the last component of a fully qualified domain name, or the only component of an unqualified name.
  • We updated the identify-references-to-missing-entries tool to provide an option to generate an LDIF file with changes that can be used to remove identified references.
  • We updated the SelfSignedCertificateGenerator class to perform better validation for the subject alternative DNS names that it includes in a certificate.
  • We updated the manage-certificates generate-self-signed-certificate command to rename the --replace-existing-certificate argument to be --use-existing-key-pair. The former argument name still works, but it is hidden from the usage.
  • We included a native-image/resource-config.json file in the LDAP SDK jar file manifest, which can be used by the GraalVM native-image tool to ensure that appropriate resource files are included in the resulting image.

Updates Specific to Use With the Ping Identity Directory Server

  • We updated the summarize-access-log tool to report on many more things, including the most common IP addresses for failed bind attempts, the most consecutive failed binds, information about work queue wait times, information about request and response controls, the number of components in search filters, and search filters that may indicate injection attempts.
  • We updated support for the audit data security administrative task to make it possible to specify the number and/or age of previous reports to retain.
  • We fixed issues that prevented specifying the criticality of the administrative operation and join request controls.

UnboundID LDAP SDK for Java 6.0.5

We have just released version 6.0.5 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes included in this version:

General Updates:

  • We fixed an issue that could occasionally cause the LDAP SDK to hide the actual cause of a StartTLS failure by using information from a second, less useful exception.
  • We fixed an issue that could cause the ldifsearch tool to display a malformed message when the first unnamed trailing argument was expected to be a search filter but could not be parsed as a valid filter.
  • We improved support for validating and comparing values using the telephone number syntax. Previously, we used a loose interpretation of the specification, which would consider any printable string (including strings without any digits) to be valid, and would only ignore spaces and hyphens when comparing values. You can now configure varying levels of strictness (either programmatically or using system properties), including requiring at least one digit or strict conformance to the X.520 specification. You can also configure it to ignore all non-digit characters when comparing values, and this is now the default behavior.
  • We fixed a bug in which the ldapcompare tool did not properly close its output file if one was configured. The output file does get automatically closed when the tool exits so it’s not an issue when running ldapcompare from the command line, but this can cause problems if the tool is invoked programmatically from another application.
  • We fixed an issue with the tool properties file created using the --generatePropertiesFile argument in command-line tools that support it. The generated properties file did not properly escape backslash, carriage return, line feed, or form feed characters.

Updates Specific to Use With the Ping Identity Directory Server:

  • We added support for encoding controls to JSON objects, and for decoding JSON objects as controls. There is a generic JSON representation that will work for any type of control (in which the value is provided as the base64-encoded representation of the raw value used in the LDAP representation of the control), but most controls provided as part of the LDAP SDK also support a more user-friendly representation in which the components of the value are represented in a nested JSON object.
  • We added client-side support for a new JSON-formatted request control that can be used to send request controls to a Ping Identity Directory Server with the controls encoded as JSON objects rather than a raw LDAP representation. We also added support for a JSON-formatted response control that can be used to receive JSON-encoded response controls from the server.
  • We updated the ldapsearch and ldapmodify command-line tools to add a --useJSONFormattedRequestControls argument that will cause any request controls to be sent using a JSON-formatted request control, and it will cause any response controls returned by the server to be embedded in a JSON-formatted response control.
  • We fixed an issue with the way that the parallel-update tool created assured replication request controls when an explicit local or remote assurance level was specified. Previously, it would only specify a minimum assurance level without specifying a maximum level, which could cause the server to use a higher assurance level than requested by the client.
  • We updated the topology registry trust manager to allow trusting a certificate chain if either the peer certificate or any of its issuers is found in the server’s topology registry. Previously, it would only trust a certificate chain if the peer certificate itself was found in the topology registry, and having an issuer certificate was not sufficient. The former behavior is still available with a configuration option.
  • We updated the topology registry trust manager to make it possible to ignore the certificate validity window for peer and issuer certificates. The validity window is still respected by default, but if the trust manager is configured to ignore it, then a certificate chain may be trusted even if the peer or an issuer certificate is expired or not yet valid.
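The generic JSON representation described in the first bullet above can be sketched without the LDAP SDK at all. The following standalone Java snippet builds such an object for a control, carrying the raw value as base64-encoded bytes; the field names used here (oid, criticality, value-base64) are assumptions about the generic form, so check the LDAP SDK documentation for the authoritative encoding.

```java
import java.util.Base64;

// Standalone sketch of a generic JSON representation of an LDAP control,
// in which the control value is carried as base64-encoded raw bytes.
public final class ControlToJson {

    // Field names ("oid", "criticality", "value-base64") are assumptions
    // based on the generic representation described above.
    static String controlToJson(final String oid, final boolean criticality,
                                final byte[] rawValue) {
        final StringBuilder sb = new StringBuilder();
        sb.append("{ \"oid\":\"").append(oid).append("\", \"criticality\":")
          .append(criticality);
        if (rawValue != null) {
            sb.append(", \"value-base64\":\"")
              .append(Base64.getEncoder().encodeToString(rawValue)).append('"');
        }
        return sb.append(" }").toString();
    }

    public static void main(final String[] args) {
        // A value-less control (the ManageDsaIT OID), marked critical.
        // Prints: { "oid":"2.16.840.1.113730.3.4.2", "criticality":true }
        System.out.println(controlToJson("2.16.840.1.113730.3.4.2", true, null));
    }
}
```

Most controls provided with the LDAP SDK also support the friendlier nested-object form described above, so the base64 fallback is only needed for controls without a dedicated JSON encoding.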

UnboundID LDAP SDK for Java 6.0.4

We have just released version 6.0.4 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes included in this version:

General Updates:

  • We fixed an issue with the Filter.matchesEntry method that could cause it to throw an exception rather than returning an appropriate Boolean result when evaluating an AND or an OR filter in which one of the nested elements used inappropriate matching (for example, if the assertion value did not conform to the syntax for the associated attribute type).
  • We fixed an issue with the way that decodeable controls are registered with the LDAP SDK. Under some circumstances, a thread could become blocked while attempting to create a new control.
  • We updated the JVM-default trust manager to properly check for the existence of a “jssecacerts” trust store file in accordance with the JSSE specification. It had previously only looked for a file named “cacerts”.
  • We updated the logic used to select the default set of supported cipher suites so that it will no longer exclude suites with names starting with “SSL_” by default on JVMs with a vendor string that includes “IBM”. IBM JVMs appear to use the “SSL_” prefix for some or all cipher suites, including those associated with TLS protocols rather than a legacy SSL protocol. We also added a TLSCipherSuiteSelector.setAllowSSLPrefixedSuites method that can be used to override the default behavior.
  • We updated the LDIF reader to support reading modifications with attribute values read from a file referenced by URL. This was previously supported when reading LDIF entries or add change records, but it had been overlooked for LDIF modify change records.
  • We updated the LDIF reader so that it no longer generates comments attempting to clarify the contents of base64-encoded values if the value is longer than 1,000 bytes.
  • We updated the documentation to include the latest versions of the draft-behera-ldap-password-policy, draft-coretta-x660-ldap, and draft-ietf-kitten-scram-2fa specifications.

Updates Specific to Use With the Ping Identity Directory Server:

  • We added a new API for parsing access log messages generated by the server. The new API supports both text-formatted and JSON-formatted log messages, whereas the previous version only supported messages in the default text (“name=value”) format.
  • We updated the summarize-access-log tool (which can be used to perform basic analysis of server access log files) to add support for JSON-formatted log files.
  • We added support for retrieving and parsing X.509 certificate monitor entries.
  • We added client-side support for an administrative task that can cause the server to immediately refresh any cached certificate monitor data. The server will automatically refresh the cache every minute, but the task can be used to cause an immediate refresh.

Ping Identity Directory Server 9.0.0.0

We have just released version 9.0.0.0 of the Ping Identity Directory Server. The release notes provide a pretty comprehensive overview of what’s included, but here’s my summary.

Ping Identity Directory Server Products Do Not Use log4j

Recently, a very serious security issue (CVE-2021-44228) was identified in the Apache log4j library, which is used by many Java applications to provide logging support. None of the Ping Identity Directory Server, Directory Proxy Server, Synchronization Server, and Metrics Engine products make use of this library in any way, and it is not included as part of the server. Some of the libraries that we include with the server do have support for logging to log4j, but that functionality is not used, and the log4j library is not included as part of the server.

The standalone version of the Admin Console does include the log4j library, but it is included only as a transitive dependency of one of the other libraries used by the console, and the log4j library is not used in any way by the Admin Console. Because this vulnerability was disclosed very late in the release cycle for the Ping Identity server products, we have chosen to update to a non-vulnerable version of the library rather than remove it entirely, as that requires less testing. Again, even though the log4j library is included with the standalone Admin Console, it is not used in any way, so even if you are using an older version of the console with an older version of the log4j library, you are not vulnerable to the security issue.

The UnboundID LDAP SDK for Java does not include any third-party dependencies at all (other than a Java SE runtime environment at Java version 7 or later). It does not include or interact with log4j in any way.

Changes Affecting All Server Products

  • Added cipher stream providers for PKCS #11 tokens, Azure Key Vault, and CyberArk Conjur. [more information]
  • Added passphrase providers for Azure Key Vault and CyberArk Conjur. [more information]
  • Added password storage schemes for authenticating with passwords stored in external services, including AWS Secrets Manager, Azure Key Vault, CyberArk Conjur, and HashiCorp Vault. [more information]
  • Added extended operations for managing server certificates. [more information]
  • Added the ability to redact the values of sensitive configuration properties when constructing the dsconfig representation for a configuration change. [more information]
  • Included the original requester DN and client IP address in log messages for mirrored configuration changes. [more information]
  • Added TLS configuration properties for outbound connections. [more information]
  • Updated the Admin Console to support using PKCS #12 and BCFKS trust stores.
  • Updated the file servlet to support authenticating with OAuth 2.0 access tokens and OpenID Connect ID tokens, which makes it possible to download collect-support-data archives and server profiles generated through the Admin Console when authenticated with SSO.
  • Fixed an issue that could cause degraded performance and higher CPU utilization for some clients using TLSv1.3.
  • Fixed an issue that prevented the manage-profile replace-profile tool from working properly for servers running in FIPS 140-2-compliant mode.
  • Updated export-ldif to always base64-encode attribute values containing any ASCII control characters. Previously, only the null, line feed, and carriage return control characters caused values to be base64-encoded.
  • Fixed an issue in which some tools that operate on the server’s configuration did not use the correct matching rule for attribute types configured to use case-sensitive matching. If a config entry had an attribute with multiple values differing only in capitalization, all but one of the values could be lost.
  • Updated the Directory REST API to add support for attribute options.
  • Added the ability to recognize JVM builds from Eclipse Foundation, Eclipse Adoptium, and BellSoft.
  • Removed “-XX:RefDiscoveryPolicy=1” from the default set of options used to launch the JVM. In some cases, this option has been responsible for JVM crashes.

Changes Affecting the Directory Server

  • Added support for pluggable pass-through authentication. [more information]
  • Fixed an issue that could prevent authenticating with certain types of reversibly encrypted passwords that were encrypted on an instance that was subsequently removed from the topology. [more information]
  • Fixed an issue that prevented decoding the value of a proxied authorization v2 request control when the authorization identity had a specific length.
  • Fixed an issue that could cause sporadic failures when attempting to back up a backend with data encryption enabled. In such cases, the backup would likely succeed if re-attempted.
  • Added a replica-partial-backlog attribute to the replication summary monitor entry to provide information about how each replica contributes to the overall replication backlog.
  • Fixed an issue in which the server could use incorrect resource limit values (including size limit, time limit, lookthrough limit, and idle time limit) for users with custom limits who authenticated via pass-through authentication.
  • Fixed an issue in which the server did not properly update certain password policy state information for simple bind attempts targeting users without a password.
  • Fixed an issue in which the server may not handle other controls properly when processing an operation that includes the join request control. The server may have overlooked a control immediately following the join request control in the operation request, and it may have omitted appropriate non-join result controls from the response.
  • Fixed an issue in which a newly initialized server could go into lockdown mode with a warning about missing changes if it was restarted immediately after initialization completed.
  • Fixed an issue that could prevent changes applied to non-RDN attributes in the course of processing a modify DN operation from being replicated.
  • Fixed an issue that could prevent composed attribute values from being properly updated for operations that are part of a multi-update extended operation.
  • Improved performance for modify operations that need to update a composite index to add an entry ID to the middle of a very large ID set.
  • Added limits for the maximum number of attributes in an add request and the maximum number of modifications in a modify request. [more information]
  • Updated the dsreplication initialize-all command to support initializing multiple replicas in parallel.
  • Updated remove-defunct-server to add a --performLocalCleanup option that can be used to remove replication metadata from a server that is offline.
  • Added an option to the mirror virtual attribute provider to make it possible to bypass access control evaluation for the internal searches that it performs to retrieve data from other entries.
  • Fixed an issue in which an entry added with a createTimestamp attribute could lose the original formatting for that attribute when replicated to other servers.
  • Fixed an issue that could lead to long startup times in large topologies with data encryption enabled.
  • Updated the ldap-diff tool to add several new features. [more information]
  • Updated the migrate-ldap-schema tool to add several new features. [more information]

Changes Affecting the Directory Proxy Server

  • Fixed an issue that could cause certain internal operations initiated in the Directory Proxy Server to fail when forwarded to a backend Directory Server whose default password policy was configured in a way that interfered with the account used to authorize internal operations.
  • Improved the logic used to select the best error result to return to the client for operations broadcast to all backend sets. Previously, the server could have incorrectly returned a result indicating that the target entry did not exist when the operation failed for some other reason.
  • Updated the entry counter, hash DN, and round-robin placement algorithms to support excluding specific backend sets.

Changes Affecting the Synchronization Server

  • Added the ability to synchronize certain password policy state information from Active Directory to the Ping Identity Directory Server, including account disabled state and the password changed time.
  • Fixed an issue that could prevent synchronizing changes to entries that have multiple attributes with the same base attribute type but different sets of attribute options, particularly if any of the attributes have more values than the replace-all-attr-values limit defined in the associated Sync Class.
  • Added the ability to apply rate limiting when synchronizing changes to PingOne.
  • Fixed an issue in which the max-rate-per-second property was not properly applied when running the resync tool.

Changes Affecting the Metrics Engine

  • Fixed an issue that could prevent dashboard icons from being properly displayed.

New Cipher Stream Providers

The encryption settings database holds a set of definitions that include the keys used for data encryption. The encryption settings database is itself encrypted, and we use a component called a cipher stream provider for reading and writing that encrypted content. We already offered several cipher stream provider implementations, including:

  • Generate the encryption key with a passphrase read from a file.
  • Generate the encryption key with a passphrase provided interactively during server startup.
  • Protect the encryption key with AWS Key Management Service (KMS).
  • Generate the encryption key with a passphrase retrieved from AWS Secrets Manager.
  • Generate the encryption key with a passphrase retrieved from a HashiCorp Vault instance.
  • Use the Server SDK to develop your own custom cipher stream providers.

In the 9.0.0.0 release, we are introducing support for three new types of cipher stream providers:

  • Wrap the encryption key with a certificate read from a PKCS #11 token, like a Hardware Security Module (HSM). Note that because of the limitations in Java’s support for key wrapping, only certificates with RSA key pairs can be used for this purpose.
  • Generate the encryption key with a passphrase retrieved from Azure Key Vault.
  • Generate the encryption key with a passphrase retrieved from a CyberArk Conjur instance.
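Configuring one of the new providers might look roughly like the following sketch for the CyberArk Conjur variant. The type name and every property shown are illustrative assumptions; consult dsconfig for the actual names.

```
# Hypothetical sketch; type and property names are assumptions.
dsconfig create-cipher-stream-provider \
     --provider-name "Conjur Cipher Stream Provider" \
     --type conjur \
     --set enabled:true \
     --set conjur-external-server:conjur1 \
     --set conjur-secret-path:prod/ds/encryption-settings-passphrase
```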

New Passphrase Providers

Passphrase providers offer a means of obtaining clear-text secrets that the server may need for things like accessing protected content in a certificate key store or authenticating to an external service. We already offered several passphrase provider implementations, including:

  • Read the secret from a file, which may optionally be encrypted with a key from the server’s encryption settings database.
  • Read the secret from an obscured value stored in the server’s configuration.
  • Read the secret from an environment variable.
  • Read the secret from AWS Secrets Manager.
  • Read the secret from a HashiCorp Vault instance.
  • Use the Server SDK to develop your own custom passphrase providers.

In the 9.0.0.0 release, we are introducing support for two new types of passphrase providers:

  • Read the secret from Azure Key Vault.
  • Read the secret from a CyberArk Conjur instance.
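A hypothetical configuration for the Azure Key Vault variant might look like the following; the type name and properties are assumptions, so consult dsconfig for the actual names.

```
# Hypothetical sketch; type and property names are assumptions.
dsconfig create-passphrase-provider \
     --provider-name "Key Store PIN From Azure" \
     --type azure-key-vault \
     --set enabled:true \
     --set key-vault-uri:https://example.vault.azure.net \
     --set secret-name:keystore-pin
```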

Password Storage Schemes for External Services

Password storage schemes are used to protect passwords held in the server. We already offered a variety of password storage schemes, including:

  • Schemes using salted 256-bit, 384-bit, and 512-bit SHA-2 digests. SHA-1 support is also available for legacy purposes, but is not recommended.
  • Schemes using more resource-intensive, brute-force-resistant algorithms like PBKDF2, bcrypt, scrypt, and Argon2.
  • A scheme that reversibly encrypts passwords with a 256-bit AES key obtained from the encryption settings database.
  • Schemes that reversibly encrypt passwords with legacy keys stored in the topology registry.

In the 9.0.0.0 release, we are introducing support for new password storage schemes that allow users to authenticate with passwords stored in external secret stores, including:

  • AWS Secrets Manager
  • Azure Key Vault
  • CyberArk Conjur
  • HashiCorp Vault

In these cases, the storage scheme is configured with the information needed to connect and authenticate to the external service, and the encoded representation of the password contains a JSON object with the information needed to identify the specific secret in that service to use as the password for the associated user.

These password storage schemes can be used to authenticate with both LDAP simple authentication and SASL mechanisms that use a password. However, these schemes are read-only: users can authenticate with a password stored in the associated external service, but password changes need to be made through that service rather than over LDAP.
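While the exact encoding is defined by each scheme, the general shape is a scheme-name prefix followed by a JSON object that points at the secret. A purely illustrative example for a Vault-backed password might look like the following, wrapped across lines for readability:

```
{VAULT}{ "vault-external-server":"vault1",
         "vault-secret-path":"secret/users/jdoe",
         "vault-field":"password" }
```

All of the field names above are hypothetical; the authoritative format is whatever the scheme itself produces when you configure a user’s password.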

Extended Operations for Certificate Management

We have added support for a set of extended operations that can be used to remotely manage certificates in server instances, including replacing listener and inter-server certificates and purging information about retired certificates from the topology registry. These operations are especially useful for managing certificates in instances running in Docker or in other cases where command-line access may not be readily available to run the replace-certificate tool.

When replacing certificates, the new key store can be obtained in several ways:

  • It can be read from a file that is already available to the server (for example, one that has been copied to the server or placed on a shared filesystem).
  • The raw bytes that make up the new key store file can be included directly in the extended request.
  • The individual certificates and private key can be provided in the extended request, in either PEM or DER form.
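The relationship between the PEM and DER forms mentioned in the last bullet is simple: PEM is just base64-encoded DER wrapped in header and footer lines. A small Python illustration:

```python
import base64
import textwrap

# Illustrative only: PEM wraps base64-encoded DER in BEGIN/END lines, so the
# two forms accepted by the extended request carry the same bytes.
def der_to_pem(der_bytes, label="CERTIFICATE"):
    b64 = base64.b64encode(der_bytes).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))
    return "-----BEGIN %s-----\n%s\n-----END %s-----\n" % (label, body, label)

def pem_to_der(pem_text):
    lines = [l for l in pem_text.splitlines() if not l.startswith("-----")]
    return base64.b64decode("".join(lines))

der = b"\x30\x82\x01\x0a" + b"\x00" * 16  # stand-in bytes, not a real cert
assert pem_to_der(der_to_pem(der)) == der  # lossless round trip
```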

Many safeguards are in place to prevent these extended operations from being inappropriately used. These include:

  • The extended operation handler providing support for these operations is not enabled by default. It must be enabled before they can be used.
  • The extended operations will only be allowed over secure connections.
  • The extended operations can only be requested by a user with the permit-replace-certificate-request privilege. No users have this privilege by default (not even root users or topology administrators).
  • You can indicate which of the individual types of operations are allowed, and you can define connection and request criteria to further restrict the circumstances under which they may be used.
  • By default, the server will only allow reading certificates from a file on the server filesystem. You have to specifically enable the option to allow a remote client to provide the new certificate information directly.
  • The server will generate administrative alerts for all successful and failed attempts to process these operations.

These extended operations can be invoked programmatically (support for them is included in the UnboundID LDAP SDK for Java). They can also be used through new subcommands in the replace-certificate command-line tool.

Redacting Sensitive Values in Configuration Changes

We have added a new redact-sensitive-values-in-config-logs global configuration property. It can be used to indicate that the server should redact the values of sensitive configuration properties when generating the dsconfig representation of a configuration change, including the representation that is written to the config-audit.log file and included in alerts that notify administrators of the change.

By default, the values of sensitive configuration properties are obscured in a way that allows the server to obtain the clear-text value, but that is not readily apparent to an observer. This helps protect the values of these secrets while still allowing the config-audit.log file to be replayed. However, a determined user with access to this obfuscated representation may be able to determine the clear-text value that it represents.

If the redact-sensitive-values-in-config-logs property is set to true, then the values of sensitive configuration properties will be redacted rather than obscured. This prevents someone with access to the dsconfig representation of the change from being able to obtain the clear-text value of the secret, but it does mean that the config-audit.log file may no longer be replayable.
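The trade-off can be illustrated with a toy sketch, where base64 stands in for the server’s actual obscuring algorithm (which is not base64):

```python
import base64

# Conceptual sketch only: base64 is a stand-in for "reversible but not
# clear text"; the server's real obscuring algorithm is different.
def obscure(secret):
    return "{OBSCURED}" + base64.b64encode(secret.encode()).decode()

def unobscure(value):
    return base64.b64decode(value[len("{OBSCURED}"):]).decode()

def redact(secret):
    return "{REDACTED}"  # irreversible: the change can no longer be replayed

s = "hunter2"
assert unobscure(obscure(s)) == s   # obscured values support replay
assert redact(s) == "{REDACTED}"    # redacted values do not
```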

Original Requester Details for Mirrored Configuration Changes

When making configuration changes, log messages (including those written to the server’s access log and the config-audit.log file) include the DN of the user that requested the change and the IP address of the client system. However, for changes affecting mirrored configuration (including in the topology registry or cluster configuration), these values do not accurately reflect the DN and address of the original requester. Instead, they reflect either the details of an internal connection or of a connection from another server instance that has forwarded the change to the topology master.

To address this, we have updated the server so that the DN and IP address of the original requester are included as part of changes to mirrored configuration. Records for these configuration changes that are written to config-audit.log and the server’s access log will now provide these values in the original-requester-dn and original-requester-ip fields.
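For illustration, here is how a client might pull those two fields out of a log message. The field names come from the release notes, but the surrounding log line format is invented:

```python
import re

# Hypothetical log line: only the original-requester-dn and
# original-requester-ip field names are from the release notes; the rest
# of the format is invented for this example.
line = ('# Modify operation; '
        'original-requester-dn="cn=Admin,cn=Root DNs,cn=config" '
        'original-requester-ip="10.1.2.3"')

fields = dict(re.findall(r'(original-requester-[a-z]+)="([^"]*)"', line))
print(fields["original-requester-ip"])  # → 10.1.2.3
```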

New TLS Configuration Properties

We have updated the crypto manager configuration to add support for four new properties for configuring TLS communication:

  • outbound-ssl-protocol — Can be used to specify the set of TLS protocols that may be used for outbound connections (e.g., those used for pass-through authentication or for synchronization with remote servers).
  • outbound-ssl-cipher-suite — Can be used to specify the set of TLS cipher suites that may be used for outbound connections.
  • enable-sha-1-cipher-suites — Can be used to enable the use of TLS cipher suites that rely on the SHA-1 digest algorithm, which is no longer considered secure and is disabled by default.
  • enable-rsa-key-exchange-cipher-suites — Can be used to enable the use of TLS cipher suites that rely on the RSA key exchange algorithm, which does not provide support for forward secrecy and is disabled by default.
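An analogous configuration in Python’s standard ssl module shows the intent of these properties. Note that the property names above belong to the server’s crypto manager, not to this API:

```python
import ssl

# Analogous idea in Python's stdlib: restrict protocol versions and cipher
# suites for outbound TLS.  (The crypto manager property names in the text
# are the server's, not this API's.)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # like outbound-ssl-protocol

# Exclude SHA-1-based suites and RSA key exchange, mirroring the defaults
# behind enable-sha-1-cipher-suites and enable-rsa-key-exchange-cipher-suites:
ctx.set_ciphers("ECDHE+AESGCM:!SHA1:!kRSA")

for cipher in ctx.get_ciphers():
    print(cipher["name"])
```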

Pluggable Pass-Through Authentication

We have updated the Directory Server to add support for pluggable pass-through authentication. Previously, the server provided support for passing through simple bind attempts to another LDAP server or to PingOne. It is now possible to support pass-through authentication to other types of services, and the Server SDK has been updated to add support for creating custom pass-through authentication handlers.

This implementation includes an LDAP pass-through authentication handler that allows the new pluggable pass-through authentication plugin to be used as an alternative to the earlier LDAP-specific pass-through authentication plugin. The new implementation offers several advantages over the old one, including:

  • Better default configuration properties (especially for the override-local-password property).
  • The ability to indicate whether to attempt pass-through authentication for accounts in an unusable password policy state (for example, those that are locked or that have expired passwords).
  • The ability to set timeouts for interaction with the external LDAP servers.
  • Improved diagnostic information about pass-through authentication attempts, including support for the password policy request control and password expired response control.
  • A new monitor entry with metrics about the processing performed by the plugin.
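The pluggable model can be sketched in a few lines of Python. All names here are invented; a real handler (such as the LDAP or PingOne one, or one built with the Server SDK) would make a network call to the external service:

```python
# Sketch of the pluggable idea (all names invented): each handler knows how
# to authenticate against one kind of external service, and the plugin
# simply delegates the bind attempt to whichever handler is configured.
class PassThroughAuthenticationHandler:
    def authenticate(self, user_dn, password):
        raise NotImplementedError

class InMemoryHandler(PassThroughAuthenticationHandler):
    """Stand-in for an LDAP or PingOne handler; real implementations would
    contact the external service instead of a local dictionary."""
    def __init__(self, accounts):
        self.accounts = accounts

    def authenticate(self, user_dn, password):
        return self.accounts.get(user_dn) == password

handler = InMemoryHandler({"uid=jdoe,ou=People,dc=example,dc=com": "s3cret"})
assert handler.authenticate("uid=jdoe,ou=People,dc=example,dc=com", "s3cret")
```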

Preserving Secret Keys for Instances Removed From the Topology

Previously, when a server was removed from the topology (for example, by using the remove-defunct-server tool), secret keys associated with that instance could be lost. This is unlikely to cause any problems in most cases because these keys are no longer used for most purposes. However, it could be an issue if the server is configured to use a legacy password storage scheme that protects passwords with reversible encryption. These schemes encrypt passwords with keys from the topology registry, and if a server was removed from the topology, then keys specific to that instance were also removed. This could prevent remaining servers from being able to decrypt passwords that were initially encrypted by the instance that was removed. To address this, we now preserve any secret keys that are associated with an instance before removing that instance from the topology.

Affected password storage schemes include AES, Blowfish, RC4, and 3DES. The newer AES256 password storage scheme is not affected by this issue.

Size Limits for Add and Modify Requests

We have added new maximum-attributes-per-add-request and maximum-modifications-per-modify-request properties to the global configuration. The former can be used to limit the number of attributes that may be included in an add request, and the latter can be used to limit the number of modifications that may be included in a modify request. Neither of these properties affects the number of values that individual attributes may have.

These limits can help avoid potential denial-of-service attacks that use specially crafted add and modify requests. By default, add requests are limited to 1000 attributes, and modify requests are limited to 1000 modifications, which should be plenty for virtually all real-world use cases.
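A toy sketch of the enforcement logic follows. The property names are from the release notes, but this checking code is illustrative, not the server’s:

```python
# Illustrative only: the property names are real, but this enforcement
# code is not the server's implementation.
MAX_ATTRIBUTES_PER_ADD_REQUEST = 1000
MAX_MODIFICATIONS_PER_MODIFY_REQUEST = 1000

def check_add_request(attributes):
    # The limit counts attributes, not the values within each attribute.
    if len(attributes) > MAX_ATTRIBUTES_PER_ADD_REQUEST:
        raise ValueError("exceeds maximum-attributes-per-add-request")

def check_modify_request(modifications):
    if len(modifications) > MAX_MODIFICATIONS_PER_MODIFY_REQUEST:
        raise ValueError("exceeds maximum-modifications-per-modify-request")

# Two attributes, one of which has two values: well under the limit.
check_add_request({"objectClass": ["person"], "cn": ["Jane Doe", "J. Doe"]})
```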

New ldap-diff Features

We have updated the ldap-diff tool to provide several new features. These include:

  • We have added an option to perform byte-for-byte comparisons when identifying differences. By default, the tool uses schema-aware matching, which may not flag differences in values that are logically equivalent but not identical (for example, values that differ only in capitalization for an attribute configured to use case-insensitive matching).
  • You can now use a properties file to provide default values for some or all of the command-line arguments.
  • We improved support for SASL authentication.
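The difference between the two comparison modes can be illustrated in Python, with caseIgnoreMatch standing in for whatever matching rule the schema assigns to an attribute:

```python
# Sketch of schema-aware vs. byte-for-byte comparison; the real tool uses
# the server's schema, and caseIgnoreMatch is just one matching rule.
def schema_aware_equal(a, b, matching_rule="caseIgnoreMatch"):
    if matching_rule == "caseIgnoreMatch":
        return a.lower() == b.lower()
    return a == b

def byte_for_byte_equal(a, b):
    return a.encode("utf-8") == b.encode("utf-8")

a, b = "Jane Doe", "JANE DOE"
assert schema_aware_equal(a, b)       # logically equivalent under the rule
assert not byte_for_byte_equal(a, b)  # but not identical byte-for-byte
```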

New migrate-ldap-schema Features

We have updated the migrate-ldap-schema tool to provide several new features. These include:

  • We have added more flexibility when securing communication with servers over TLS, including the ability to use different key and trust managers for the source and destination servers.
  • We have added support for SASL authentication.
  • We have added support for using a properties file to obtain default values for some or all of the command-line arguments.
  • We have added better validation for migrated attribute types and object classes.