Ping Identity Directory Server 8.2.0.0

We have just released version 8.2.0.0 of the Ping Identity Directory Server, with several big new features, as well as a number of other enhancements and fixes. The release notes provide a pretty comprehensive overview of the changes, but I’ll summarize them here, starting with a TL;DR list and then going into more detail about some of the more significant updates.

Summary of New Features and Enhancements

  • Added single sign-on support to the Administration Console (more information)
  • Added a new ds-pwp-modifiable-state-json operational attribute (more information)
  • Added support for password validation during bind (more information)
  • Added support for a recent login history (more information)
  • Added sample dsconfig batch files (more information)
  • Added JSON-formatted variants for the audit, HTTP operation, and synchronization log files (more information)
  • Added support for logging to standard output or standard error (more information)
  • Added support for rotating the logs/server.out log file (more information)
  • Improved support for logging to syslog servers (more information)
  • Added support for the OAUTHBEARER SASL mechanism (more information)
  • Added support for the ($attr.attrName) macro ACI (more information)
  • Added a remove-attribute-type-from-schema tool (more information)
  • Added a validate-ldap-schema tool (more information)
  • Added a number of security-related improvements to setup (more information)
  • Added a number of improvements to the manage-profile tool (more information)
  • Added a number of improvements to the parallel-update tool (more information)
  • Added various improvements to several other command-line tools, including ldappasswordmodify, ldapcompare, ldifsearch, ldifmodify, ldif-diff, ldap-diff, and collect-support-data (more information)
  • Added various usability improvements to command-line tools (more information)
  • Added several improvements to the dictionary password validator (more information)
  • Added a new AES256 password storage scheme (more information)
  • Added an export-reversible-passwords tool (more information)
  • Created a 256-bit AES encryption settings definition in addition to the 128-bit AES definition
  • Added information about password quality requirements to the ds-pwp-state-json virtual attribute (more information)
  • Added the ability to augment the default set of crypto manager cipher suites (more information)
  • Improved support for delaying bind responses (more information)
  • Require a minimum ephemeral Diffie-Hellman key size of 2048 bits
  • Switched to using /dev/urandom as a source of secure random data (more information)
  • Improved the way we generate self-signed certificates and certificate signing requests (more information)
  • Added support for using elliptic curve keys in JWTs
  • Added new administrative alert types for account status notification events, privilege assignment, and rejection of insecure requests (more information)
  • Added improvements to monitor data and the way it is requested by the status tool (more information)
  • Added identity mapper improvements, including filter support, an aggregate identity mapper, and an out-of-the-box mapper for administrative users (more information)
  • Improved uniqueness control conflict prevention and detection (more information)
  • Added support for re-sending an internal replication message if there is no response to a dsreplication initialize request
  • Added a --force argument to dsreplication initialize that can be used to force initialization even if the source server is in lockdown mode
  • Added options to customize the response code and body in availability servlets
  • Increased the number of RDN components that a DN may have from 50 to 100
  • Updated the SCIM servlet to leverage a VLV index (if available) to support paging through search result sets larger than the lookthrough limit
  • Added an --adminPasswordFile argument to the manage-topology add-server command
  • Added a password policy configuration property to indicate whether the server should return the password expiring or password expired response control based on whether the client also provided the password policy request control
  • Updated support for the CRAM-MD5 and DIGEST-MD5 SASL mechanisms so they are no longer considered secure
  • Improved Directory Proxy Server support for several SASL mechanisms (more information)
  • Improved Directory Proxy Server support for the LDAP join control
  • Updated manage-topology add-server to add support for configuring failover between Synchronization Server instances
  • Added a Synchronization Server configuration property for customizing the sync pipe queue size
  • Added a Synchronization Server configuration property for processing changes with REPLACE modifications rather than ADD and DELETE modifications

Summary of Bug Fixes

  • Fixed an issue that could prevent the installer from removing information about the instance from the topology registry
  • Fixed an issue that could cause replication to miss changes if a backend was reverted to an earlier state without reverting the replication database
  • Fixed an issue in which a replica could enter lockdown mode after initialization
  • Fixed an issue that could allow some non-LDAP clients to inappropriately issue requests while the server is in lockdown mode
  • Fixed an issue in which restoring an incremental backup could cause dependencies to be restored out of order, leading to an incomplete intermediate database file
  • Fixed a backup retention issue in which the process of purging old backups could cause old backups to be removed out of order
  • Fixed an issue in which the server could leak a small amount of memory upon closing a JMX connection
  • Fixed an issue that could cause the server.status file to be corrupted on Windows systems after an unplanned reboot if the server is configured to run as a Windows service
  • Fixed an issue that could cause the server to return a password expired response control in a bind response when the user’s account is expired but the client provided incorrect credentials
  • Fixed an issue in which a search that relied on a virtual attribute provider for efficient processing could omit object classes from search result entries
  • Fixed an issue in which the server did not properly handle the matched values control that used an extensible match filter with both an attribute type and a matching rule (for example, in conjunction with the jsonObjectFilterExtensibleMatch matching rule)
  • Fixed an issue in which the server could incorrectly log an error message at startup if it was configured with one or more ACIs that grant or deny permissions based on the use of SASL mechanisms
  • Fixed an issue that could cause the remove-defunct-server tool to fail to remove certain replication attributes when the tool was run with a topology JSON file
  • Fixed an issue in which manage-profile replace-profile could fail to apply changes if the profile included dsconfig batch files without a “.dsconfig” extension
  • Fixed an issue in which the server could raise an internal error and terminate the connection if a client attempted to undelete a non-soft-deleted entry
  • Fixed an issue that could cause the REST API to fail to decode certain types of credentials when using basic authentication
  • Fixed an issue in which the encryption-settings tool could leave the server without a preferred definition after importing a set of definitions with the --set-preferred argument when none of the imported definitions was marked preferred
  • Fixed an issue in which the manage-profile generate-profile command could run out of memory when trying to generate a profile containing large files
  • Fixed an issue in which the manage-profile generate-profile command could display a spurious message when generating the profile in an existing directory
  • Fixed an issue that could interfere with cursoring through paged search results when using the REST API and the results included entries with long DNs
  • Fixed an issue that could cause an exception in SCIM 1.1 processing as a result of inappropriate DN escaping
  • Fixed an issue that could cause the isMemberOf and isDirectMemberOf virtual attributes to miss updates if the same group is updated concurrently by multiple clients
  • Fixed an issue that could cause the server to return an objectClassViolation result code instead of the more appropriate attributeOrValueExists result code when attempting to add an object class value to an entry that already has that object class
  • Fixed an issue that could cause loggers to consume more CPU processing time than necessary in an idle server
  • Fixed an issue in which the stats collector plugin could generate unnecessary I/O when it is used exclusively for sending metrics to a StatsD endpoint
  • Fixed an issue in which the periodic stats logger could include duplicate column headers
  • Fixed an issue that could cause the server to periodically log an error message if certain internal backends are disabled
  • Fixed a typo in the default template that the multi-part email account status notification handler uses to warn about an upcoming password expiration
  • Fixed an issue in which the dsconfig list command could omit certain requested properties
  • Fixed an issue in which the dsreplication tool could incorrectly suppress LDAP SDK debug messages even if debugging was requested
  • Fixed an issue that could cause the Directory Proxy Server to log information about an internal exception if an entry-balanced search encountered a timeout when processing one or more backend sets
  • Fixed an issue in which the Synchronization Server could get stuck when attempting to retry failed operations while it already had too many other operations queued up for processing
  • Fixed an issue in which Synchronization Server loggers were not properly closed during the server shutdown process
  • Fixed an issue in which the Synchronization Server could fail to synchronize certain delete operations from an Oracle Unified Directory because of variations in the format of the targetUniqueID attribute

Single Sign-On for the Administration Console

The Administration Console now supports authenticating with an OpenID Connect token, and it can provide single sign-on across instances when using that authentication mechanism. At present, this is only supported when using a token minted by the PingOne service, and there must be corresponding records for the target user in both the PingOne service and the local server. General support for SSO with ID tokens from any OpenID Connect provider should be available in an upcoming release.

The new ds-pwp-modifiable-state-json Operational Attribute

We have added a new “Modifiable Password Policy State” plugin that provides support for the ds-pwp-modifiable-state-json operational attribute. The value of this attribute is a JSON object with fields that provide information about a limited set of password policy state properties, including:

  • password-changed-time — The time the user’s password was last changed
  • account-activation-time — The user’s account activation time
  • account-expiration-time — The user’s account expiration time
  • password-expiration-warned-time — The time the user was first warned about an upcoming password expiration
  • account-is-disabled — Whether the account is administratively disabled
  • account-is-failure-locked — Whether the account is locked as a result of too many failed authentication attempts
  • must-change-password — Whether the user will be required to choose a new password before they are permitted to request any operations

You can now update these elements of the password policy state by replacing the value of the ds-pwp-modifiable-state-json attribute with a JSON object that holds the desired new values for the appropriate fields. Any fields not included in the provided object will remain unchanged.
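
To make that concrete, here is a minimal sketch that uses the UnboundID LDAP SDK for Java to clear a failure lockout and require a password change via a generic modify operation. The connection details, credentials, and entry DNs are placeholders, and the SDK also offers dedicated helper classes for these attributes if you’d rather not assemble the JSON object yourself:

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.Modification;
    import com.unboundid.ldap.sdk.ModificationType;
    import com.unboundid.ldap.sdk.ModifyRequest;
    import com.unboundid.util.json.JSONField;
    import com.unboundid.util.json.JSONObject;

    public final class ClearLockoutExample
    {
      public static void main(final String... args) throws Exception
      {
        // Placeholder connection details and credentials.
        final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
        try
        {
          conn.bind("uid=admin,dc=example,dc=com", "admin-password");

          // Only the fields included in this object will be updated; all
          // other password policy state properties remain unchanged.
          final JSONObject newState = new JSONObject(
               new JSONField("account-is-failure-locked", false),
               new JSONField("must-change-password", true));

          conn.modify(new ModifyRequest(
               "uid=jdoe,ou=People,dc=example,dc=com",
               new Modification(ModificationType.REPLACE,
                    "ds-pwp-modifiable-state-json",
                    newState.toSingleLineString())));
        }
        finally
        {
          conn.close();
        }
      }
    }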

This operational attribute is intended to work in conjunction with the existing ds-pwp-state-json virtual attribute, which provides read-only access to a wide range of information about the user’s password policy state and the configuration of the password policy that governs their account.

Both of these attributes can be accessed over either LDAP or the Directory REST API, and the UnboundID LDAP SDK for Java provides convenient support for both the ds-pwp-state-json and ds-pwp-modifiable-state-json attributes. The password policy state extended operation is still available and provides support for manipulating a broader set of password policy state properties, but it can only be used over LDAP (either programmatically or through the manage-account command-line tool).

Password Validation During Bind

The server has always provided a lot of options for validating passwords at the time that they’re set, whether in a self-change or an administrative reset. It now also provides the ability to perform validation in the course of processing bind operations in which the server has access to the user’s clear-text password.

This feature can help provide better support for detecting issues with passwords that were acceptable at the time they were set but have since become undesirable (for example, if someone uses the same password across multiple sites, and one of them suffers a data breach that exposes the password). To catch such cases, you may want to periodically check passwords against services like Pwned Passwords or against a regularly updated dictionary of banned passwords.

Although you can configure the server to perform password validation for every bind attempt, the server also offers the ability to only check them periodically (for example, if it’s been at least a week since the last validation). This may be useful if it is necessary to interact with external services while performing the validation.

You can also configure the behavior that the server should exhibit when the password provided to a bind operation fails validation, including:

  • The server can allow the bind to proceed but place the user’s account in a “must change password” state so all other requests from that user will fail until they choose a new password.
  • The server can reject the bind attempt, requiring an administrative password reset before they can use their account.
  • The server can generate an account status notification that may be used to notify the user about the need to choose a new password (or to take some other action as determined by a custom account status notification handler).

Recent Login History

The server has always provided support for maintaining a last login time and IP address for each user, but that only reflects information about the most recent successful authentication attempt. And if account lockout is enabled, then the server can maintain a list of failed authentication attempts. However, that will only be updated until the account is locked, and it will be cleared by a successful login, so this is really only useful for the purpose of account lockout.

In the 8.2.0.0 release, we are adding support for maintaining a recent login history, stored in the password policy state for each account. For both successful and failed authentication attempts, the server will store the time, client IP address, and the type of authentication attempted. For a failed attempt, the server will also record a general reason for the failure.

The recent login history is available in a few different ways:

  • It is available through the ds-pwp-state-json operational attribute.
  • It is available through the password policy state extended operation and through the manage-account command-line tool that uses that operation behind the scenes.
  • Information about previous login attempts can be obtained on a successful bind by including the new “get recent login history” request control in the bind request. The ldapsearch and ldapmodify tools have been updated with a --getRecentLoginHistory argument that can be used to include this control in the bind request, and the UnboundID LDAP SDK for Java provides support for using this control programmatically.
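
To show what that might look like programmatically, here is a hedged sketch using the UnboundID LDAP SDK for Java. The control classes shown here live in the SDK’s unboundidds packages, and the exact class and method names are worth verifying against the Javadoc for your SDK version; the connection details and credentials are placeholders:

    import com.unboundid.ldap.sdk.BindResult;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.SimpleBindRequest;
    import com.unboundid.ldap.sdk.unboundidds.controls.GetRecentLoginHistoryRequestControl;
    import com.unboundid.ldap.sdk.unboundidds.controls.GetRecentLoginHistoryResponseControl;

    public final class RecentLoginHistoryExample
    {
      public static void main(final String... args) throws Exception
      {
        // Placeholder connection details and credentials.
        final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
        try
        {
          // Attach the request control to an ordinary simple bind.
          final BindResult bindResult = conn.bind(new SimpleBindRequest(
               "uid=jdoe,ou=People,dc=example,dc=com", "correct-password",
               new GetRecentLoginHistoryRequestControl()));

          // If the server included the corresponding response control,
          // print the recent login history that it carries.
          final GetRecentLoginHistoryResponseControl c =
               GetRecentLoginHistoryResponseControl.get(bindResult);
          if (c != null)
          {
            System.out.println(c.getRecentLoginHistory());
          }
        }
        finally
        {
          conn.close();
        }
      }
    }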

The recent login history can be enabled in the password policy configuration, and you can configure it to keep up to a specified number of attempts or to keep records for up to a specified duration. This can be configured separately for successful and failed attempts. You can also configure the server to collapse information about similar authentication attempts (that is, attempts from the same IP address with the same result, authentication type, and failure reason) into a single record so that the history doesn’t fill up too quickly from clients that frequently try to authenticate to the server. When collapsing multiple similar attempts into a single record, the timestamp of that record will reflect the most recent of those attempts, and the record will also include the number of additional similar attempts that have been collapsed into it.

Sample dsconfig Batch Files

We have added a config/sample-dsconfig-batch-files directory with several well-commented dsconfig batch files that demonstrate how to enable, disable, or configure various functionality in the server. Many of these batch files are for changes that can help improve server security, including configuring TLS protocols and cipher suites, managing password policy functionality, disabling insecure communication, enabling two-factor authentication options, and managing administrative users. They serve as additional documentation and, in many cases, describe changes that we recommend but that are not part of the out-of-the-box configuration for one reason or another.

In many cases, these batch files require a certain amount of editing before they can be used (for example, to indicate which TLS protocols to use, which password policy to update, etc.). In such cases, you shouldn’t edit the provided batch file directly but should instead make a copy and customize that as needed. This will help ensure that if we make changes to the existing batch files in the future, those changes will be applied when you update the server to that new version.

Additional JSON-Formatted Loggers

Our default access and error loggers use a legacy space-delimited text format that matches what is used by other types of directory servers. However, in the 6.0.0.0 release, we added support for formatting access and error log messages as JSON objects so that they can be more easily parsed by a wider variety of log management and analysis software.

In the 8.2.0.0 release, we are adding JSON support for additional types of loggers. These include:

  • A JSON-formatted audit log publisher, which provides a record of all changes to data in the server. In the past, audit logging has always used the standard LDIF format to provide the contents of the change and LDIF comments to capture information about other metadata (including the message timestamp, the connection and operation ID, the requester DN and IP address, etc.). The JSON-formatted audit logger still uses the standard LDIF format for representing information about changes, but all of the other metadata is now available through separate JSON fields to make it easier to access in a generic manner.
  • A JSON-formatted HTTP operation log publisher, which provides a record of the HTTP requests received by the server and the HTTP responses it returns. We already offered HTTP operation log publishers using the W3C common log format and a space-delimited format similar to our default access and audit loggers, but we now offer support for these messages formatted as JSON objects.
  • JSON-formatted Sync log publisher and Sync failed ops log publisher options, which provide information about processing performed by the Synchronization Server. In the past, these messages have used a less-structured format that is primarily intended for reading by a person. When attempting to parse these messages in an automated manner, the new JSON format is a much better option.

Each of these JSON-formatted logger types (including the existing JSON-formatted access and error loggers) includes a logType field in each log message that indicates the type of logger that generated the message. This makes it easier to identify the type of log message in cases where log messages from different sources are intermingled (for example, writing their log files to the same directory). While this is useful when writing these messages to files, it is even more important when logging to the console or to syslog, as will be described below.

We also fixed an issue in our existing support for JSON-formatted access and error log messages that caused the timestamps to be generated in a format that was not completely compliant with the ISO 8601 format described in RFC 3339. These timestamps incorrectly omitted the colon between the hour and minute components of the time zone offset.

Also, while the stats logger plugin has provided an option to write messages as JSON objects since the 8.0.0.0 release, you had to specifically create an instance of the plugin if you wanted to enable that support. In the 8.2.0.0 release, we have added a definition in the out-of-the-box configuration that can generate JSON-formatted output, so you merely need to enable it if you want to take advantage of this feature. This provides better parity with the CSV-formatted output that the plugin can also generate.

Logging to Standard Output and Standard Error

In container-based environments like Docker, it is relatively common to have applications write log messages to the container’s standard output or standard error stream, which will then be captured and funneled into log management software. We are now providing first-class support for that capability. In the 8.2.0.0 release, we are providing console-based variants of each of the following:

  • JSON-formatted access log messages
  • JSON-formatted error log messages
  • JSON-formatted audit log messages
  • JSON-formatted HTTP operation log messages
  • JSON-formatted Sync log messages
  • JSON-formatted Sync failed ops log messages

Each of these can be sent to either standard output or standard error, and you can mix messages from different types of loggers in the same stream. For example, if desired, you could send all log messages for all of these types of loggers to standard output and none of them to standard error. Because these JSON objects include the logType field, you can easily identify the type of logger that generated each message.

You can also now configure the server to prevent it from logging messages in non-JSON format during startup. By default, the server will continue to write error log messages to standard error in the legacy space-delimited text format. It will also write information about administrative alerts to standard error, and you can either disable this entirely or have those messages formatted as JSON objects. With these changes, it is now possible to have all of the data written to standard output or standard error formatted as JSON objects, without having to worry about JSON-formatted output being intermingled with non-JSON output.

Note that we only recommend using console-based logging when running the server with the --nodetach argument, which prevents it from detaching from the console (and is the preferred mode for running in a container like Docker anyway). When running in the default daemon mode, any data written by these console-based loggers will be sent to the logs/server.out file, which is less efficient and less customizable than using the file-based JSON loggers. When running in --nodetach mode, the console-based loggers will only write to the JVM’s original standard output or standard error stream.

Rotation Support for server.out

With the exception of the new console-based loggers when the server is running in --nodetach mode, anything that would have normally gone to standard output or standard error is written into the logs/server.out file. Previously, the server would create a new server.out file when it started up and would continue writing to that same file for the entire life of the server. It would also only keep one prior copy of the server.out file (in server.out.previous).

The server generally doesn’t write much to standard output or standard error, so it’s really not much of a problem to have all data sent to the same file, but it can accumulate over time, and it’s possible that the file could grow large if the server runs for a very long time. To address that, we have added support for rotating this file. Rotation is only available based on the amount of data written, and retention is only available based on the number of files. By default, the server will rotate the file when it reaches 100 megabytes, and it will retain up to 10 rotated copies of the file. This behavior can be customized through the maximum-server-out-log-file-size and maximum-server-out-log-file-count properties in the global configuration.

Syslog Logging Improvements

The server has provided support for writing access and error log messages (in the legacy text-based formats) to syslog since the 2.1.0 release, but there were a number of limitations. Logging used an old version of the syslog protocol (as described in RFC 3164), and messages could only be sent as unencrypted UDP packets.

In the 8.2.0.0 release, we are significantly updating our support for logging over syslog. The former syslog-based access and error loggers are still available for the purpose of backward compatibility, but we have also added several new types of syslog-based loggers with new capabilities, including:

  • Access loggers using either JSON or the legacy text-based format
  • Error loggers using either JSON or the legacy text-based format
  • Audit loggers using the JSON format
  • HTTP operation loggers using the JSON format
  • Sync loggers using the JSON format
  • Sync failed ops loggers using the JSON format

These new loggers will use an updated version of the syslog protocol (as described in RFC 5424), and they also offer the option to communicate over UDP or TCP. When using TCP, you can optionally encrypt the data with TLS so that it can’t be observed over the network. When using TCP, you can also configure the logger with information about multiple syslog servers for the purpose of redundancy (this is not available when using the UDP-based transport, as the server has no way of knowing whether the messages were actually received).

The OAUTHBEARER SASL Mechanism

We have added support for the OAUTHBEARER SASL mechanism as described in RFC 7628. This makes it possible to authenticate to the server with an OAuth 2.0 bearer token, using an access token validator to verify its authenticity, map it to a local user in the server, and identify the associated set of OAuth scopes. The oauthscope ACI bind rule can be used to make access control decisions based on the scopes associated with the access token.

This ultimately makes it possible to authenticate to the server in a variety of ways that had not previously been available, especially when users are interacting with Web-based applications that use the Directory Server behind the scenes. For example, this could be used to authenticate clients with a FIDO security key.

In addition to the support for OAuth 2.0 bearer tokens as described in the official specification, our OAUTHBEARER implementation also supports authenticating with an OpenID Connect ID token, either instead of or in addition to an OAuth 2.0 bearer token. To assist with this, we have added support for ID token validators, including streamlined support for validating ID tokens issued by the PingOne service. If you want to authenticate with only an OpenID Connect token, you can do that with a standard OAUTHBEARER bind request by merely providing the OpenID Connect ID token in place of the OAuth access token. If you want to provide both an OAuth access token and an OpenID Connect ID token, you’ll need to include a proprietary pingidentityidtoken key-value pair in the encoded SASL credentials. The UnboundID LDAP SDK for Java provides an easy way to do this.
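
Here is a minimal sketch of what an OAUTHBEARER bind with just an access token might look like with the UnboundID LDAP SDK for Java (the class name reflects my reading of the SDK and should be checked against the Javadoc; the connection details and token are placeholders):

    import com.unboundid.ldap.sdk.BindResult;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.OAUTHBEARERBindRequest;

    public final class OAuthBearerBindExample
    {
      public static void main(final String... args) throws Exception
      {
        // Placeholder connection details and token.
        final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
        try
        {
          // Authenticate with an OAuth 2.0 bearer token. The server uses a
          // configured access token validator to verify the token and map
          // it to a local user entry.
          final BindResult result =
               conn.bind(new OAUTHBEARERBindRequest("the-access-token"));
          System.out.println(result.getResultCode());
        }
        finally
        {
          conn.close();
        }
      }
    }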

The ($attr.attrName) Macro ACI

We have updated our access control implementation to add support for the ($attr.attrName) macro ACI. This macro can be included in the value of the userdn and groupdn bind rules to indicate that the macro should be dynamically replaced with the value of the specified attribute in the authorized user’s entry.

This macro is particularly useful in multi-tenant deployments, in which a single server holds information about multiple different organizations. If a similar rule needs to be enforced across all tenants, you can create one rule for the entire server rather than a separate rule that is customized for each tenant. For example, if a user requesting an operation has an o attribute with a value of “XYZ Corp”, a bind rule of groupdn="ldap:///cn=Administrators,ou=Groups,o=($attr.o),dc=example,dc=com" would be interpreted as groupdn="ldap:///cn=Administrators,ou=Groups,o=XYZ Corp,dc=example,dc=com".

Our Delegated Administration application has been updated to support this macro ACI to provide an improved experience in multi-tenant environments.

Safely Removing Attribute Types From the Server Schema

In general, we don’t recommend removing attribute types (or other types of elements) from the server schema. It’s safe to do for elements that have never been used, but if an attribute has been used in the past (even if it’s not currently being used in any entries), there may still be references to it in the database (for example, in the entry compaction dictionary used to minimize the amount of space required to store it). In that case, the server will prevent you from removing the attribute type from the schema unless you first export and re-import the data to clear any references to it from the database. Further, if the database contains a reference to an attribute type that is not defined in the server schema, it will log a warning message on startup to notify administrators of the problem.

In the 8.2.0.0 release, we’re adding a new remove-attribute-type-from-schema tool that can be used to safely remove a previously used attribute type from the server schema without requiring an LDIF export and re-import. This tool will first ensure that the attribute type is not currently in use (including referenced by other schema elements or in the server configuration), and it will then clean up any lingering references to that attribute from the database before removing it from the schema.

This processing can also be automated through the new “remove attribute type” administrative task. The UnboundID LDAP SDK for Java provides convenient support for this task.

Validating Server Schema Definitions

We have added a new validate-ldap-schema tool that can be used to examine the server schema and report on any issues that it finds. This includes things like:

  • Definitions that can’t be parsed
  • Elements that refer to other elements that aren’t defined in the schema
  • Multiple definitions with the same name or OID
  • Schema elements with an invalid name or OID
  • Schema elements without a name
  • Schema elements with an empty description
  • Attribute types without a syntax or an equality matching rule
  • Schema characteristics that some servers do not support (collective attributes, object classes with multiple superior classes)

The UnboundID LDAP SDK for Java also offers a SchemaValidator class to allow you to perform this validation programmatically.

setup Security Improvements

We’ve updated the server setup process to make it easier and more convenient to create a more secure installation.

It was already possible to use non-interactive setup to create an instance that only allowed TLS-encrypted connections (by simply not specifying a port for unencrypted LDAP connections), but this was not an option for interactive setup: the resulting server would always accept unencrypted connections from LDAP clients. Now, the interactive setup process allows you to create an instance that only accepts TLS-encrypted connections from LDAP clients. Alternatively, it is possible to enable support for accepting unencrypted LDAP connections but require those clients to use the StartTLS extended operation to secure the communication before they are allowed to issue any other requests. We have also redesigned the flow so that you’ll still get the same number of prompts when running setup.

We have also added a couple of new arguments to non-interactive setup (also available in manage-profile setup) to make it possible to impose additional security-related constraints for clients:

  • We have added a new --rejectInsecureRequests argument that can be used to ensure that the server will reject requests from clients communicating with the server over an unencrypted connection. This makes it possible to enable support for unencrypted LDAP connections but require that those clients use the StartTLS extended operation to secure those connections before any other requests are allowed.
  • We have added a new --rejectUnauthenticatedRequests argument to configure the server to reject any request received from unauthenticated clients.

We have also updated non-interactive setup (and manage-profile setup) so that you can provide the password for the initial root user in pre-encoded form (using the PBKDF2, SSHA256, SSHA384, or SSHA512 storage scheme). This eliminates the need to have access to the clear-text password when setting up the server. The pre-encoded password can be provided directly through the --rootUserPassword argument or in a file specified through the --rootUserPasswordFile argument.

manage-profile Tool Improvements

We have made several improvements to the manage-profile tool that can be used to set up or update a server instance. They include:

  • We updated manage-profile replace-profile to better support updating the server’s certificate key and trust store files. When the profile includes the --generateSelfSignedCertificate argument in the setup-arguments.txt file, the original key and trust store files will be retained. Otherwise, manage-profile will replace the server’s key and trust stores with those from the new profile.
  • When applying configuration changes from a dsconfig batch file with the server online, manage-profile replace-profile can now obtain connection arguments from the target server’s config/tools.properties file.
  • We updated manage-profile replace-profile so that if the new profile includes new encryption settings definitions, the new profile’s preferred definition will be used as the resulting instance’s preferred definition.
  • We fixed an issue in which manage-profile setup could complete with an exit code of zero even if the attempt to start the server failed.
  • We fixed an issue in which manage-profile replace-profile could fail when applying changes from a dsconfig batch file when using Boolean connection arguments like --useSSL.
  • We updated the tool to ensure that each instance gets a unique cluster name. We do not recommend having multiple servers in the same cluster for instances managed by manage-profile.
  • We made it possible to obtain debug information about how long each step of the setup or replace-profile process takes.

parallel-update Tool Improvements

The parallel-update tool can be used to apply changes read from an LDIF file in parallel with multiple concurrent threads. It behaves much like ldapmodify, but the parallelism can make it significantly faster to process large numbers of changes. The ldapmodify tool does offer a broader range of functionality than parallel-update, but we have added new capabilities to parallel-update to help narrow that gap. Some of these improvements include:

  • We have added support for several additional controls, including proxied authorization, manage DSA IT, ignore NO-USER-MODIFICATION, password update behavior, operation purpose, name with entryUUID, assured replication, replication repair, suppress operational attribute update, and suppress referential integrity updates. We have also added support for including arbitrary controls in add, bind, delete, modify, and modify DN requests.
  • We have made its communication more robust. The tool previously established its connections only when it was first started, and if any of those connections became invalid over the course of processing requests, then subsequent operations attempted on that connection would fail. The tool will now attempt to detect when a connection is no longer valid, establish a new connection to replace it, and re-attempt any failed operation on that new connection.
  • We have added support for specifying multiple server addresses and ports. If this option is used, then the tool can automatically fail over to an alternative server if the first server becomes unavailable.
  • We have improved the ability to determine whether all operations were processed successfully. Previously, the tool would always use an exit code of zero if it was able to attempt all of the changes, even if some of them did not complete successfully. This is still the default behavior to preserve backward compatibility, but you can now use the --useFirstRejectResultCodeAsExitCode argument to indicate that, if any operation fails, the result code from the first rejected operation should be used as the parallel-update exit code.
  • It is now possible to apply changes from an LDIF file that has been encrypted with a user-supplied passphrase or with a definition from the server’s encryption settings database.
  • Previously, if you didn’t provide a value for the --numThreads argument, the tool would use one thread by default. It now defaults to using eight threads.
  • The tool now offers a --followReferrals argument to allow it to automatically follow any referrals that are returned.
  • If you run the tool without any arguments, it will now start in an interactive mode instead of exiting with an error.

Other Command-Line Tool Improvements

We have made a variety of other command-line tool improvements.

We replaced the ldappasswordmodify tool with a new version that offers more functionality, including support for additional controls, support for multiple password change methods (the password modify extended operation, a regular LDAP modify operation, or an Active Directory-specific modify operation), and the ability to generate the new password on the client.

We replaced the ldapcompare tool with a new version that offers more functionality, including support for multiple compare assertions, following referrals, additional controls, and multiple output formats (including tab-delimited text, CSV, and JSON).

We replaced the ldifsearch, ldifmodify, and ldif-diff tools with more full-featured and robust implementations.

We updated the ldap-diff tool so that it no longer wraps long lines by default, as that interfered with certain types of text processing that clients may wish to perform on the output. However, the tool does offer a --wrapColumn argument that can be used to indicate that long lines should be wrapped at the specified column if that is desired.

We updated the collect-support-data tool to make it possible to specify how much data should be captured from the beginning and end of each log file to be included in the support data archive. This option is also available when invoking the tool through an administrative task or an extended operation.

Usability Improvements in Command-Line Tools

We have updated the server’s command-line framework to make it easier and more convenient to communicate over a secure connection when no trust-related arguments are provided. Most non-interactive tools will now see if the peer certificate can be trusted using the server’s default trust store or the topology registry before prompting the user about whether to trust it.

We have also updated interactive tools like dsconfig and dsreplication so that they will prefer establishing secure connections over insecure ones. The new default behavior is to use the most secure option available, with a streamlined set of options that requires no more prompts than choosing to use an insecure connection.

We have also updated setup (whether operating in interactive or non-interactive mode) to make it possible to generate a default tools.properties file that tools can use to provide default values for arguments that were not provided on the command line. If requested, this will include at least the arguments needed to establish a connection (secure, if possible), but it may also include properties for authentication if desired.

Dictionary Password Validator Improvements

The dictionary password validator can be used to reject passwords that are found in a dictionary file of prohibited passwords. We have always offered this password validator in the server, and we include a couple of dictionaries: one with words from various languages (over 730,000 words in the latest release) and another with known commonly used passwords taken from data breaches and other sources (over 640,000 passwords).

In the 8.2.0.0 release, we have made several improvements to the dictionary password validator. They include:

  • We now offer the ability to ignore non-alphabetic characters that appear at the beginning or end of the proposed password before checking the dictionary. This can help detect things like a simple dictionary word preceded or followed by digits or symbols. For example, if the dictionary contains the word “password”, then the validator could reject proposed passwords like “password123!” or “2020password”.
  • We now offer the ability to strip diacritical marks (accents, cedillas, circumflexes, diaereses, tildes, umlauts, etc.) from characters before checking the dictionary. For example, if a user proposes a password of “áçcèñt”, then the server would check to see if the provided dictionary contains the word “accent”.
  • We now offer the ability to define character substitutions that can be used when checking for alternative versions of the provided password. For example, you could indicate that the number “0” might map to the letter “o”, that the number “1” or the symbol “!” might map to the letters “l” or “i”, that the number “7” might map to the letter “t”, and that the number “3” might map to the letter “e”. In that case, the validator could reject a proposed password of “pr0h1b!73d” if the dictionary contains the word “prohibited”.
  • We now offer the ability to reject a proposed password if a word from the provided dictionary makes up more than a specified percentage of that password. For example, if you configure the validator to ban passwords in which over 70% of the value matches a word from the dictionary, then the server could reject “mypassword” if the dictionary contains “password” because the word “password” makes up 80% of “mypassword”.

The AES256 Password Storage Scheme

In general, we strongly recommend encoding passwords with non-reversible schemes, like those using a salted SHA-2 digest, or better yet, a more expensive algorithm like PBKDF2 or Argon2. However, in some environments, there may be a legitimate need to store passwords in a reversible form so the server can access the clear-text value (which may be needed for processing binds for some legacy SASL mechanisms like CRAM-MD5 or DIGEST-MD5), or so the clear-text password can be used for other purposes (e.g., for sending it to some other external service).

In the past, the best reversible storage scheme that we offered was AES, which used a 128-bit AES cipher with a key that was shared privately among instances of the replication topology but that was otherwise not available outside the server (including to the Directory Proxy Server, which might need access to the password for processing certain types of SASL bind requests). This meant that the scheme could be used for cases in which the server itself needed the clear-text password, but not when the password might be needed outside the server. It was also not a simple matter to rotate the encryption key if the need should arise.

To address these issues, and also to provide support for a stronger cipher, we are adding a new AES256 password storage scheme. As the name implies, it uses AES with a 256-bit key (rather than the 128-bit key for the older cipher). It also generates the encryption key from an encryption settings definition, and if a client knows the password used to create that encryption settings definition, and if it has access to the encoded password, then it can decrypt that encoded password to get the clear-text representation. The UnboundID LDAP SDK for Java provides a convenient way to decrypt an AES256-encoded password given the appropriate passphrase, and the class-level Javadoc provides enough information for clients using other languages or APIs to implement their own support for decrypting the password.
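
As a rough sketch of what client-side decryption might look like with the UnboundID LDAP SDK for Java: the AES256EncodedPassword class name and method signatures below are recalled from the SDK’s unboundidds package, so treat them as assumptions and consult the class-level Javadoc for the authoritative API:

    import com.unboundid.ldap.sdk.unboundidds.AES256EncodedPassword;

    public final class DecryptAES256Example
    {
      // ASSUMPTION: both the class name and the method signatures used here
      // are recalled from the SDK and should be verified against its Javadoc.
      public static byte[] decryptPassword(final String encodedPassword,
                                           final String passphrase)
             throws Exception
      {
        // encodedPassword is the encoded value read from the user's entry,
        // and passphrase is the one used to create the encryption settings
        // definition with which the password was encrypted.
        final AES256EncodedPassword encoded =
             AES256EncodedPassword.decode(encodedPassword);
        return encoded.decrypt(passphrase);
      }
    }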

The export-reversible-passwords Tool

While the new AES256 password storage scheme gives you much better control over the key that is used when encrypting passwords, it does not provide direct support for rotating that encryption key should the need arise. You can reconfigure the storage scheme to change the encryption settings definition to use when encoding new passwords, but that does not affect existing passwords that have already been encoded with the earlier definition.

To help facilitate encryption key rotation for AES256-encoded passwords, or to facilitate migrating passwords that use a reversible encoding to use a different scheme, we have provided a new export-reversible-passwords command-line tool. It allows you to export data to LDIF with clear-text representations of any reversibly encoded passwords.

In most environments, you probably won’t actually need or want to use this tool, and you might be concerned about its potential for misuse. To help protect against that, there are a number of safeguards in place to help ensure that it cannot be used inappropriately or covertly. These include:

  • If you don’t encode passwords in a reversible form, then you don’t have anything to worry about. The server can’t get the clear-text representation of passwords encoded using non-reversible schemes.
  • The tool initiates the export by communicating with the server over a TLS-encrypted LDAP connection established over a loopback interface. It cannot be invoked with the server offline, and it cannot be invoked from a remote system.
  • The export can only be requested by someone who has the new permit-export-reversible-passwords privilege. This privilege is not granted to anyone (not even root users or topology administrators) by default, and an administrative alert will be generated whenever this privilege is assigned (as with any other privilege assignment) so that other administrators will be aware of it.
  • The export can only be requested if the server is configured with an instance of the “export reversible passwords” extended operation handler. This extended operation handler is not available as part of the out-of-the-box configuration, and an administrative alert will be generated whenever it is created and enabled (as with any other type of configuration change).
  • The output will be written to a file on the server file system, and that file must be placed beneath the server instance root. As such, it can only be accessed by someone with permission to access other files associated with the server instance.
  • The entire output file will itself be encrypted, either using a key generated from a user-supplied passphrase or from a server encryption settings definition. The encrypted file may be directly imported back into the server using the import-ldif tool, or it may be used as an input file for the ldapmodify or parallel-update tools. It can also be decrypted with the encrypt-file tool or accessed programmatically using the PassphraseEncryptedInputStream class in the UnboundID LDAP SDK for Java (see the sketch at the end of this section).
  • The server will generate an administrative alert whenever it begins the export process.

The output will be formatted as LDIF entries. By default, it will only include entries that contain reversibly encoded passwords, and the entries that are exported will only include the entry’s DN and the clear-text password. You can also use it to perform a full LDIF export, including entries that do not have passwords or that have passwords with a non-reversible encoding, and including other attributes in those entries.
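
As noted above, the encrypted export can also be consumed programmatically. Here is a minimal sketch that uses the UnboundID LDAP SDK for Java to read such a file, assuming it was encrypted with a user-supplied passphrase (the file name and passphrase are placeholders):

    import java.io.FileInputStream;

    import com.unboundid.ldap.sdk.Entry;
    import com.unboundid.ldif.LDIFReader;
    import com.unboundid.util.PassphraseEncryptedInputStream;

    public final class ReadEncryptedExportExample
    {
      public static void main(final String... args) throws Exception
      {
        // Placeholder file name and passphrase.
        final LDIFReader reader = new LDIFReader(
             new PassphraseEncryptedInputStream("export-passphrase",
                  new FileInputStream("exported-passwords.ldif")));
        try
        {
          Entry entry;
          while ((entry = reader.readEntry()) != null)
          {
            // By default, each exported entry contains just the DN and the
            // clear-text password for a reversibly encoded account.
            System.out.println(entry.toLDIFString());
          }
        }
        finally
        {
          reader.close();
        }
      }
    }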

Password Quality Requirements Information in the ds-pwp-state-json Virtual Attribute

We have updated the ds-pwp-state-json virtual attribute provider to add support for a new password-quality-requirement field whose value is a JSON object that provides information about the requirements that passwords will be required to satisfy. This information was already available through the “get password quality requirements” extended request or the “password validation details” request control, but including it in the ds-pwp-state-json attribute makes it easier to access by generic LDAP clients that aren’t written with the UnboundID LDAP SDK for Java, and it also makes it available to non-LDAP clients (for example, those using the Directory REST API).

The value of the password-quality-requirement field will be an array of JSON objects, each of which describes a requirement that passwords will be required to satisfy. These JSON objects may include the following fields:

  • description — A human-readable description for the requirement.
  • client-side-validation-type — A string that identifies the type of validation being performed. In some cases, clients may be able to use this field, along with information in the client-side-validation-properties field, to pre-validate a proposed password and determine whether it is acceptable before sending it to the server, in order to give a better end-user experience. For example, if passwords are required to meet certain length requirements, the client-side-validation-type value will be “length”, and the client can use the min-password-length and max-password-length properties to determine the minimum and maximum (if defined) lengths for the password. Of course, some types of validation (for example, checking a server-side dictionary of prohibited passwords) can’t readily be performed by clients.
  • client-side-validation-properties — An array of properties that provide additional information about the requirement that may be useful in client-side validation. Each element of the array will be a JSON object with the following fields:

    • name — The name for the property.
    • value — The value for the property. Note that this will always be formatted as a JSON string, even if the value actually represents some other data type, like a number or Boolean value.
  • applies-to-add — Indicates whether this requirement will be imposed for new entries that are added with the same password policy as the entry to which the ds-pwp-state-json value is associated.
  • applies-to-self-change — Indicates whether this requirement will be imposed when the user is changing their own password.
  • applies-to-administrative-reset — Indicates whether this requirement will be imposed when the user’s password is being reset by an administrator.
  • applies-to-bind — Indicates whether this requirement may be checked during bind operations.
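
To make this concrete, here is a minimal sketch that uses the UnboundID LDAP SDK for Java to retrieve the ds-pwp-state-json attribute and print the description of each password quality requirement. The connection details, credentials, and entry DN are placeholders, and this uses the SDK’s generic JSON support rather than its dedicated password policy state helper classes:

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.SearchResultEntry;
    import com.unboundid.util.json.JSONArray;
    import com.unboundid.util.json.JSONObject;
    import com.unboundid.util.json.JSONValue;

    public final class ShowPasswordRequirementsExample
    {
      public static void main(final String... args) throws Exception
      {
        // Placeholder connection details and credentials.
        final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
        try
        {
          conn.bind("uid=jdoe,ou=People,dc=example,dc=com", "password");

          // The attribute is operational, so it must be requested explicitly.
          final SearchResultEntry entry = conn.getEntry(
               "uid=jdoe,ou=People,dc=example,dc=com", "ds-pwp-state-json");
          final JSONObject state =
               new JSONObject(entry.getAttributeValue("ds-pwp-state-json"));

          // Iterate across the array of requirement objects.
          final JSONValue requirements =
               state.getField("password-quality-requirement");
          if (requirements instanceof JSONArray)
          {
            for (final JSONValue v : ((JSONArray) requirements).getValues())
            {
              System.out.println(((JSONObject) v).getField("description"));
            }
          }
        }
        finally
        {
          conn.close();
        }
      }
    }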

Augmenting Default Crypto Manager Cipher Suites

By default, the server automatically chooses an appropriate set of TLS cipher suites to enable. It tries to maintain a good balance between making it possible to use the strongest available security for clients that support it but still allowing reasonably secure alternatives for older clients that may not support the most recent options. However, it is also possible to override this default configuration in case you want to customize the set of enabled cipher suites (for example, if you know that you won’t need to support older clients, you can have only the strongest suites enabled).

The set of enabled cipher suites can be customized for each connection handler (for example, if you want to have different suites available for LDAP clients than for those using HTTP). For any other secure communication that the server performs (for example, replication between instances), the suites may be customized in the crypto manager configuration.

It has always been possible to explicitly state which TLS cipher suites should be enabled, using the ssl-cipher-suite property in either the connection handler or crypto manager configuration. In that case, you can simply list the names of the suites that should be enabled. For connection handlers, we have also offered the ability to augment the default set of suites to mostly use those values, but to also enable or disable certain suites by prefixing the name with a plus or minus sign. For example, specifying a value of “-TLS_RSA_WITH_AES_256_CBC_SHA256” indicates that the server should omit the “TLS_RSA_WITH_AES_256_CBC_SHA256” suite from the default set of cipher suites that would otherwise be available. However, we overlooked providing the ability to add or remove individual suites in the crypto manager configuration in previous releases. That has now been fixed.

Improvements in Delaying Bind Responses

In the 7.2.0.0 release, we made it possible to delay the response to any failed LDAP bind request as a means of rate-limiting password guessing attacks. This provides a great alternative to locking an account after too many failed bind attempts because it can dramatically impede how quickly clients can make guesses while also preventing the opportunity for malicious clients to intentionally lock out legitimate users by using up all of the allowed failures.

In the 8.1.0.0 release, we added the ability to configure the password policy to take some alternative action instead of locking the account after the specified number of failed attempts, and one of the available options is to delay the bind response. Although similar to the earlier feature, this is a little better for a couple of reasons:

  • It will only start delaying bind responses after a few failed attempts. This can help avoid any visible impact for users who simply mistype their password once or twice. The delay will only kick in after the configured failure count threshold has been reached.
  • It will also delay the response to the first successful bind after the failure count threshold has been reached. This makes the delay even more effective for deterring malicious clients undertaking guessing attacks because it prevents them from detecting that the response has been delayed, terminating the connection, and establishing a new one to try again without waiting for the full configured delay period to elapse.

In the 8.1.0.0 release, this ability was limited to LDAP clients because the server can efficiently impose this delay for LDAP clients without holding up the worker thread being used to process the operation (thereby making it available to immediately start processing another request). This is unfortunately not possible for non-LDAP clients, like those using HTTP (for example, via SCIM or the Directory REST API).

In the 8.2.0.0 release, we are adding support for delaying the responses to non-LDAP clients. This provides better protection for servers in which clients may communicate through means other than LDAP, but we still can’t do it without tying up the thread processing the request, so you have to opt into it by setting the allow-blocking-delay property to true in the configuration for the delay bind response failure lockout action. As a means of mitigating the potential for tying up a worker thread, we recommend bumping up the number of HTTP request handler threads so that it’s less likely that they will all be consumed by delayed responses.

To improve security for administrative accounts, we are also updating the password policy that is used by default for root users and topology administrators so that it will impose a default delay of one second after five failed authentication attempts. This can help rate-limit password guessing attacks for these high-value accounts. This default will not be imposed for other users by default, but it is possible to enable this feature in other password policies if desired.

Using /dev/urandom for Secure Random Data

Being able to obtain secure random data (that is, data that is as unpredictable as possible) is critical to many types of cryptographic operations. By default, on Linux and other UNIX-based systems, the Java Virtual Machine relies on /dev/random to get this secure random data. This is indeed an excellent source of secure random data, and it relies on a pool of entropy that is managed by the underlying operating system and can be replenished by unpredictable inputs from a variety of sources.

Unfortunately, attempts to read from /dev/random will block if the operating system doesn’t have enough entropy available. This means that if the server experiences a surge of cryptographic processing (for example, if it needs to conduct TLS negotiation for a lot of clients all at once), it can run out of entropy, and attempts to retrieve additional secure random data can block until the pool is replenished, causing the associated cryptographic operations to stall for a noticeable period of time.

To address this, we are updating the server to use /dev/urandom instead of /dev/random as the source of secure random data. /dev/urandom behaves much like /dev/random in that it can be used to obtain secure random data, but attempts to read from /dev/urandom will never block if the pool of entropy runs low, and therefore cryptographic operations that rely on access to secure random data will never block.

While there is a common misconception that using /dev/urandom is notably less secure than /dev/random, this is really not true in any meaningful way. See https://www.2uo.de/myths-about-urandom/ for a pretty thorough refutation of this myth.
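
The server now takes care of this automatically, but for reference, the underlying JVM mechanism is the java.security.egd system property (or the equivalent securerandom.source property in the JVM’s java.security file). For example, any Java process can be pointed at /dev/urandom like this (the jar name is just a placeholder):

  # Tell the JVM to seed its SecureRandom instances from /dev/urandom.
  # The "/dev/./urandom" spelling works around a historical JVM quirk in
  # which "file:/dev/urandom" was special-cased and effectively ignored.
  java -Djava.security.egd=file:/dev/./urandom -jar your-application.jar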

Improvements to Self-Signed Certificates and Certificate Signing Requests

If you enable secure communication when setting up the server, you’re given the option to use an existing certificate key store or to generate a self-signed certificate. When creating a self-signed certificate, we previously used a twenty-year lifetime so that administrators wouldn’t need to worry about it expiring. However, some TLS clients (and especially web browsers, as may be used to access the Administration Console, Delegated Administration, or other HTTP-based content) are starting to balk at certificates with long lifetimes and are making it harder to access content on servers that use them. As such, we are reducing the default lifetime for client-facing self-signed certificates from twenty years to one year.

When an installed certificate is nearing expiration, or if you need to replace it for some other reason, you can use the replace-certificate tool. If you choose to replace the certificate with a new self-signed certificate, the new certificate will also have a default validity of one year. However, you can also generate a certificate signing request (CSR) so that you can get a certificate issued by a certification authority (CA), and we have updated that process, too.

When generating a CSR, you need to specify the subject DN for the desired certificate. The tool will list a number of attributes that are commonly used in subject DNs, but it previously stated only that the subject DN should include at least the CN attribute. In accordance with CA/Browser Forum guidelines, we have updated this recommendation, and we now suggest including at least the CN, O, and C attributes in the requested subject DN (a hedged example appears after the list below). We have also updated the logic used to select host names and IP addresses for inclusion in the subject alternative name extension, including:

  • When obtaining the default set of DNS names to include in the subject alternative name extension, we no longer include unqualified names, and we will warn about attempts to add any unqualified host names.
  • When obtaining the default set of DNS names to include in the subject alternative name extension, we no longer include names that are associated with a loopback address.
  • When obtaining the default set of IP addresses to include in the subject alternative name extension, we no longer include any IP addresses from IANA-reserved ranges (including loopback addresses or those reserved for private-use networks) and will warn about attempts to add any such addresses.
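
For illustration, here is roughly what generating such a CSR might look like with the manage-certificates tool that ships with the server (the argument names here are from memory and should be verified against the tool’s --help output; the host names, key store path, and alias are placeholders):

  manage-certificates generate-certificate-signing-request \
       --keystore config/keystore \
       --keystore-password-file config/keystore.pin \
       --alias server-cert \
       --subject-dn "CN=ldap.example.com,O=Example Corp,C=US" \
       --subject-alternative-name-dns-name ldap.example.com \
       --subject-alternative-name-dns-name ldap1.example.com \
       --output-file server-cert.csr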

New Administrative Alert Types

We have defined new types of administrative alerts that can be used to notify administrators of certain events that occur within the server. These include:

  • We have added a new admin alert account status notification handler. If this is enabled and associated with a password policy, then the server can generate an administrative alert any time one of the selected account status notification events occurs. This is primarily intended for use with high-value accounts, like root users or topology administrators. For example, you can have the server generate an alert if a root user’s password is updated or if there are too many consecutive failed authentication attempts. This account status notification handler will use a different alert type for each type of account status notification (for example, it will use an alert type of password-reset-account-status-notification for the alert generated when a user’s password is reset by an administrator). A hedged configuration sketch appears after this list.
  • We have added a new privilege-assigned alert type that will be raised whenever any privileges are assigned. This includes creating a new entry with one or more explicitly defined privileges, updating an existing entry to add one or more explicitly defined privileges, creating a new root user or topology administrator that inherits a default set of root privileges, updating the default set of root privileges, or creating a new virtual attribute that assigns values for the ds-privilege-name attribute.
  • We have added a new insecure-request-rejected alert type that will be raised if the server rejects a request received over an insecure connection as a result of the reject-insecure-requests global configuration property.
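
As a sketch of the first of these (the handler type, property names, and notification type values are assumptions based on the description above), enabling admin alerts for the root password policy might look something like:

  # Create an account status notification handler that generates
  # administrative alerts for selected events.
  dsconfig create-account-status-notification-handler \
       --handler-name "Admin Alert Handler" \
       --type admin-alert \
       --set enabled:true \
       --set account-status-notification-type:password-reset \
       --set account-status-notification-type:account-temporarily-locked

  # Associate the handler with the password policy for root users.
  dsconfig set-password-policy-prop \
       --policy-name "Root Password Policy" \
       --set account-status-notification-handler:"Admin Alert Handler"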

Monitor Improvements

We have updated the system information monitor entry to restrict the set of environment variables that it may contain. Previously, this monitor entry would include the names and values of all environment variables available to the server process, but this has the potential to leak sensitive information because some deployments use environment variables to hold sensitive information (for example, credentials or secret keys used by automated processes). To avoid leaking this information to clients with permission to access server monitor data, we now only include a predefined set of environment variables that are expected to be useful for troubleshooting purposes and are not expected to contain sensitive information.

We have also added a new isDocker attribute to the server information monitor entry to indicate whether the server is running in a Docker container.
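
For example, you can retrieve this monitor data over LDAP with a search like the following (the entry DN shown is the conventional one under cn=monitor; verify it against your own server):

  # Retrieve the system information monitor entry, including the
  # restricted set of environment variables described above.
  ldapsearch --hostname ds.example.com --port 636 --useSSL \
       --bindDN "cn=Directory Manager" --promptForBindPassword \
       --baseDN "cn=System Information,cn=monitor" \
       --searchScope base "(objectClass=*)"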

The JVM memory usage monitor entry has been updated to fix an issue that could prevent it from reporting the total amount of memory held by all memory consumers (that is, components of the server that may be expected to consume a significant amount of memory), and to fix an issue that could cause it to use an incomplete message for consumers without a defined maximum size. We have also added a memory-consumer-json attribute whose values are JSON objects with data that can be more easily consumed by automated processes.

We have updated the status tool to optimize some of the searches that it uses to retrieve monitor data.

Identity Mapper Improvements

Identity mappers can be used to map a username or other identifier string to the entry for the user to which it refers. We have made the following identity mapper-related improvements in the server:

  • We have updated the exact match and regular expression identity mappers to make it possible to include only entries that match a given filter.
  • We have added an aggregate identity mapper that can be used to merge the results of other identity mappers with Boolean AND or OR combinations (see the hedged sketch after this list).
  • We have added new identity mapper instances in the out-of-the-box configuration for matching only root users, only topology administrators, and either root users or topology administrators.
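
As a sketch of the aggregate mapper (the property names here are assumptions based on the description above, and the referenced mapper names are placeholders), a mapper that succeeds if either of two other mappers can map the identifier might be defined like this:

  dsconfig create-identity-mapper \
       --mapper-name "Root or Topology Admin Mapper" \
       --type aggregate \
       --set enabled:true \
       --set any-included-identity-mapper:"Root User Mapper" \
       --set any-included-identity-mapper:"Topology Admin Mapper"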

Uniqueness Control Improvements

The uniqueness request control may be included in an add, modify, or modify DN request to ask that the server not allow the operation if it would result in multiple entries with the same value for a specified attribute (or the same set of values for a set of attributes). It operates in two main phases:

  • In the pre-commit phase, the server can search for entries that already exist and conflict with the requested change. If a conflict is found in the pre-commit phase, then the operation will be rejected so that it is not allowed to create the conflict.
  • In the post-commit phase, the server can once again search for conflicts. This can help detect any conflicts that may have arisen as a result of multiple changes being processed concurrently on the same or different servers.

In the 8.2.0.0 release, we have added two enhancements to our support for the uniqueness request control:

  • It is now possible to indicate that the server should create a temporary conflict prevention details entry before beginning pre-commit processing. This is a hidden entry that won’t be visible to most clients, but it can make the server more likely to detect conflicts arising from concurrent operations so that the offending requests can be rejected before a conflict actually arises. While this dramatically increases the chance that concurrent conflicts will be prevented, it does incur an additional performance penalty, and it is disabled by default.
  • It is now possible to indicate that the server should raise an administrative alert if a uniqueness conflict is detected during post-commit processing. If a conflict is identified in the post-commit phase, then it is too late to prevent it, but raising an alert makes it much more likely that the conflict will be brought to the attention of server administrators so they can take any corrective action that may be necessary. This is enabled by default. A hedged command-line sketch of requesting uniqueness enforcement follows this list.
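
For reference, command-line tools built on the UnboundID LDAP SDK expose this control through uniqueness-related arguments to ldapmodify; a sketch (the argument names should be checked against your version’s --help output, and the attribute, base DN, and LDIF file are placeholders) might look like:

  # Add or modify entries while asking the server to enforce uniqueness
  # of the uid attribute beneath the specified base DN.
  ldapmodify --hostname ds.example.com --port 636 --useSSL \
       --bindDN "cn=Directory Manager" --promptForBindPassword \
       --uniquenessAttribute uid \
       --uniquenessBaseDN "ou=People,dc=example,dc=com" \
       --ldifFile new-entries.ldif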

Improved Directory Proxy Server SASL Support

We have updated the Directory Proxy Server to provide better support for the PLAIN, UNBOUNDID-DELIVERED-OTP, UNBOUNDID-TOTP, and UNBOUNDID-YUBIKEY-OTP SASL mechanisms.

Previously, the Directory Proxy Server attempted to process these operations in the same way as the backend Directory Servers: it would retrieve the target user’s entry from the backend server, including the encoded password and any additional data needed in the course of processing the bind (for example, the user’s TOTP secret when processing an UNBOUNDID-TOTP bind). However, this would fail if the backend server was configured to block external access to this information. The Directory Proxy Server has now been updated to simply forward the bind request to the appropriate backend server for processing, which eliminates the potential for problems in environments in which the Directory Proxy Server cannot access all of the necessary data in the target user’s entry.