Ping Identity Directory Server 9.1.0.0

We have just released version 9.1.0.0 of the Ping Identity Directory Server and related products, including Directory Proxy Server, Synchronization Server, and Metrics Engine. See the release notes for a complete overview of changes, but here’s my summary:

Known Issues

  • When updating an 8.3 or 9.0 server to the 9.1 release, the Bouncy Castle library that the server uses for certain cryptographic processing will be updated. If it later becomes necessary to revert the update and return to the older version, the newer version of the Bouncy Castle libraries will remain in place. Although this will not cause any actual problems with the server, it may cause the server to log a message during startup about multiple versions of the library in the classpath.

Summary of New Features and Enhancements

  • Updated the Directory REST API to add support for a variety of request and response controls. [more information]
  • Added the ability to streamline the process for replacing a listener certificate if the new certificate is signed by the same issuer as the current certificate. [more information]
  • Updated the replace-certificate tool to make it possible to replace the server’s listener certificate even after it has expired. [more information]
  • Added support for sanitizing access log messages as they are logged. [more information]
  • Added support for generifying message strings in access and error log messages. [more information]
  • Updated the Synchronization Server to improve support for synchronizing changes to and from PingOne, including synchronizing custom attributes, such as multi-valued attributes and JSON-formatted attributes.
  • Improved the result that the server returns when using assured replication if it detects that the operation would ultimately be reverted as a result of a replication conflict.
  • Updated the sanitize-log tool so that its sanitization processing better aligns with the server’s new support for sanitized access logging. [more information]
  • Updated the summarize-access-log tool to support JSON-formatted access log files.
  • Added support for JSON-formatted controls in LDAP requests and responses. [more information]
  • Added a docker-pre-start-config command-line tool that can be used to speed up the startup process when running in a Docker container.
  • Updated manage-profile replace-profile to add a --skipValidation argument that can be used to skip the final validation process to reduce the time required for the update to complete.
  • Updated manage-profile generate-profile to add an --excludeSetupArguments argument that can generate a profile without the setup-arguments.txt file.
  • Improved the way that the Directory REST API processes PUT operations that alter an entry’s DN in conjunction with changes to other attributes in the entry.
  • Updated the active operation monitor provider to use millisecond precision in operation timestamps and to enable parsing the string representations of the operations using the LDAP SDK’s access log API.
  • Updated the status --fullVersion output to include the version of the collect-support-data tool.
  • Updated several libraries shipped with the server to improve functionality, address defects, and improve security.

Summary of Bug Fixes

  • Fixed an issue that could cause certain replication protocol messages to be dropped.
  • Fixed an issue that could cause a server to report missing changes and go into lockdown mode if it was restarted immediately after completing dsreplication initialize processing.
  • Fixed an issue that could prevent certain password policy functionality (including account status notification handlers) from being applied to an add operation in which an alternative policy should have been assigned using a virtual attribute rather than a real attribute.
  • Fixed an issue that could cause privileges assigned by virtual attribute to be overlooked in some cases (for example, when accessing topology-related functionality in the Admin Console).
  • Updated the server to create the esTokenizer.ping file if it does not exist but is needed. This file would not have been automatically created when upgrading a server with data encryption enabled from a pre-7.0 version to a later release with support for encrypted indexes.
  • Fixed an issue that could have incorrectly applied minimum and maximum password age constraints to users without a password.
  • Updated the JSON-formatted access logger to include the requester IP address field in disconnect, security negotiation, and client certificate log messages when appropriate.
  • Fixed an issue that prevented the server from refreshing monitor data used to detect and warn about upcoming certificate expiration. This could cause the server to continue to warn about an expiring certificate even after that certificate had been replaced. [more information]
  • Fixed issues that could prevent using the Amazon Secrets Manager, CyberArk Conjur, or HashiCorp Vault passphrase providers to obtain key and trust store PINs.
  • Fixed an issue that could cause the server to report a negative processing time in the access log for certain types of operations.
  • Updated the server to prevent inappropriate updates to the ds-pwp-modifiable-state-json operational attribute when the Modifiable Password Policy State plugin is not enabled.
  • Updated the server to prevent a user from updating their own password policy state using the ds-pwp-modifiable-state-json operational attribute.
  • Updated the server to prevent using the ds-pwp-modifiable-state-json attribute to alter a user’s password policy state in the same operation that also reset that user’s password.
  • Fixed an issue in which the dsreplication tool failed to properly normalize base DN values.
  • Fixed an issue that could prevent the Directory REST API from retrieving entries containing an attribute using the generalized time syntax whose value did not match an expected format.
  • Fixed an issue that could cause manage-profile replace-profile to fail with an error about merging configuration.
  • Updated manage-profile setup and manage-profile replace-profile to ensure that a pre-populated encryption settings database can only be provided in the post-setup files rather than in the pre-setup files, which fixes issues when using a customized cipher stream provider.
  • Updated the manage-topology add-server command to be more consistent when adding additional Synchronization Servers into a failover topology.
  • Fixed an issue in which the server could ignore certain indexes that it incorrectly believed to be redundant when evaluating search criteria.
  • Fixed an issue in which a SCIM request could return a less-appropriate error code in cases where an update violated a unique attribute constraint.
  • Fixed an issue that could cause the server to incorrectly reject a request containing a non-critical control that the requester was not allowed to use. The server will now process the operation as if that control had not been requested.
  • Fixed an issue that could allow the password policy state extended operation to create duplicate authentication failure time or grace login use time values in a user’s entry.
  • Fixed an issue that could adversely affect backward compatibility when attempting to use the legacy --useSSL or --useStartTLS arguments with the migrate-ldap-schema tool (as opposed to the newer arguments allowing you to independently specify security options for the source and target servers).
  • Fixed an issue that could prevent the server from generating an administrative alert to indicate that an outstanding alarm condition had been resolved.
  • Fixed an issue that could cause the server to report an internal error when attempting to obtain database statistics for a read-only backend.
  • Fixed an issue in the export-reversible-passwords tool that could cause it to report a timeout error when waiting for a response from the server.
  • Updated the export-reversible-passwords tool to automatically cancel an in-progress export if the tool used to invoke the export is terminated.
  • Fixed an issue that prevented the encode-password tool from working properly if the AES256 password storage scheme is enabled.
  • Updated the server to disable the index cursor entry limit by default. This limit is very unlikely to be needed and may interfere with the ability to effectively process certain requests.

Controls in the Directory REST API

It is now possible to use certain LDAP request and response controls through the Directory REST API. Controls should be provided as an array of JSON objects, and the following fields may be used in the JSON representation of the control:

  • oid — A mandatory string field that holds the object identifier for the control.
  • control-name — An optional string field that holds a user-friendly name for the control. Note that this is only intended for informational purposes and won’t be used in the course of actually decoding the control.
  • criticality — A mandatory Boolean field that indicates whether the control should be considered critical.
  • value-base64 — An optional string field whose value is the base64-encoded representation of the raw LDAP encoding for the value. This may be used for any type of control, including controls that the server supports but for which it doesn’t offer a JSON-specific value encoding.
  • value-json — An optional JSON object field whose value contains zero or more fields that are specific to the type of control being represented.

At most one of the value-base64 and value-json fields may be present, and both may be omitted if the control doesn’t take a value.

For example, the following is a JSON representation of the Ping-proprietary join request control:

{ "oid":"1.3.6.1.4.1.30221.2.5.9",
  "control-name":"Join Request Control",
  "criticality":false,
  "value-json":{ "join-rule":{ "type":"dn",
                               "attribute":"manager" },
                 "base-dn-type":"use-custom-base-dn",
                 "base-dn-value":"ou=People,dc=example,dc=com",
                 "scope":"wholeSubtree",
                 "size-limit":10,
                 "attributes":[ "givenName",
                                "sn",
                                "cn",
                                "mail" ],
                 "require-match":false } }

For controls that take a value, the specific JSON encoding for that value varies from one control to another, and you should consult the documentation for a given control to understand how to properly encode and parse it.

Controls that may now be used in the Directory REST API include:

Improvements in Certificate Management

We have updated the replace-certificate tool and the corresponding support in the topology registry to add new options that make it easier to periodically replace certificates, especially the listener certificates used for secure communication with clients.

First, we have added a few new subcommands to the tool, including:

  • list-topology-registry-listener-certificates — Displays a list of the listener certificates associated with a specified instance in the topology registry.
  • list-topology-registry-inter-server-certificates — Displays a list of the inter-server certificates associated with a specific instance in the topology registry.
  • add-topology-registry-listener-certificate — Adds a new certificate to the set of listener certificates for a specified instance in the topology registry, but does not alter any certificate trust stores.

Next, we have updated the server’s use of topology registry listener certificates so that a server’s listener certificate may be trusted for inter-server communication if either the listener certificate itself or one of its issuers is found in the topology registry. Previously, the listener certificate itself was required to be present in the topology registry for that certificate to be trusted, but we have now added support for trusting issuer certificates. This makes it easier to replace an existing certificate with a new one signed by the same issuer: you can just update the key store without needing to update the topology registry, and there is less chance that a temporary communication problem between instances could arise if a server starts using a new listener certificate before the topology registry updates have propagated to all other instances in the topology.

If you decide to use this issuer-based trust, then there are two ways that you can get the issuer certificate into the topology registry:

  • Using the new replace-certificate add-topology-registry-listener-certificate command referenced above.
  • Using the replace-certificate replace-listener-certificate command with the new --topology-registry-update-type and --trust-store-update-type arguments, which make it possible to configure which certificate(s) in the trust chain will be added to the topology registry and the local trust store, respectively.

We also addressed an oversight in the replace-certificate replace-listener-certificate command that prevented it from being used to replace a listener certificate that had already expired, as the tool would no longer trust that expired certificate and would therefore not establish a connection to the server to make the necessary updates. While we strongly recommend always replacing certificates before they actually expire, you can now use the --ignore-current-listener-certificate-validity-window argument to indicate that processing should proceed even if the presented listener certificate is outside its validity time frame. The listener certificate or one of its issuers must still be found in the topology registry.
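Putting these pieces together, replacing an already-expired listener certificate with one signed by the same issuer might look something like the following sketch. The subcommand and the three arguments shown are the ones described above; the remaining arguments (identifying the new certificate, the key store, and the server connection) are elided here, and the valid values for the two update-type arguments are described in the tool’s help:

replace-certificate replace-listener-certificate \
     --topology-registry-update-type {update-type} \
     --trust-store-update-type {update-type} \
     --ignore-current-listener-certificate-validity-window \
     {arguments identifying the new certificate and the server}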

Finally, we fixed an issue that prevented certificate information presented in the server’s monitor data from being updated. Since there is a gauge in place that uses this monitor information to warn about an upcoming certificate expiration, this bug prevented it from detecting when the certificate had been replaced, and the server would continue to warn about the upcoming expiration even after that certificate was no longer in use. That issue has been fixed, and certificate rotation should now be automatically detected.

Sanitizing Access Log Messages

The server’s access and error logs provide an invaluable resource for understanding how the server is used and for troubleshooting problems. However, they may also sometimes contain sensitive or personally identifiable information that you don’t want to have lying around on the filesystem. In the past, we have offered a sanitize-log tool that can be used to examine log files and produce a scrubbed copy that redacts or obscures potentially sensitive information, and that tool is still available (and improved). However, we have also added the ability to sanitize log messages as they are being written to help ensure that this information isn’t recorded in the logs in the first place.

For access logging, the new sanitization processing is based largely around log field syntaxes. Each log field is now associated with one of the following syntaxes:

  • String
  • Boolean
  • DN
  • Filter
  • Integer
  • Floating-point number
  • Generalized time
  • RFC 3339 timestamp
  • JSON object
  • Comma-delimited string list

We also offer a variety of sanitization types that may be used when determining how to handle a given log field, including:

  • preserve — The field will be included as-is, without any alteration or obfuscation.
  • omit — The field will be omitted from the log message entirely, without either the field name or its value present.
  • redact-entire-value — The field name will be included in the log message, but the entire value will be redacted such that all values for a given syntax will be the same. Where possible, we try to preserve the original syntax when doing this, using the following behavior:

    • String, Boolean, and string list fields will have their values replaced with the text “{REDACTED}”.
    • DN fields will have their values replaced with the text “redacted={REDACTED}”.
    • Filter fields will have their values replaced with the text “(redacted={REDACTED})”.
    • Integer fields will have their values replaced with the text “-999999999999999999”.
    • Floating-point fields will have their values replaced with the text “-999999.999999”.
    • Generalized time fields will have their values replaced with the text “99990101000000.000Z”.
    • RFC 3339 timestamp fields will have their values replaced with the text “9999-01-01T00:00:00.000Z”.
    • JSON object fields will have their values replaced with the text “{ "redacted":"{REDACTED}" }”.
  • redact-value-components — This behavior is similar to redact-entire-value, but it applies specifically to DN, filter, and JSON object values. In such cases, the redaction will be applied only to components within the value, rather than to the entire value itself. For DNs and filters, this means that only attribute values will be redacted while the rest of the text (including attribute names and punctuation) will be preserved, and for JSON objects it means that only field values will be redacted (and field names and punctuation will be preserved). For example, a DN of “uid=jdoe,ou=People,dc=example,dc=com” would become “uid={REDACTED},ou={REDACTED},dc={REDACTED},dc={REDACTED}”. You can even configure this on a per-attribute or per-JSON-field basis, so that only certain attributes or fields (or all but a specified set of attributes or fields) will have their values redacted, and other attributes or fields will have their values preserved. (See the example following this list.)
  • tokenize-entire-value — The field name will be included in the log message, but the value will be replaced with a token that is generated from the value but cannot be directly reversed back to its original value. This is similar to redaction, but different values will result in different tokens, and the same value will consistently result in the same token. This means that it’s possible to identify cases in which the same value appears across multiple log messages, even if you can’t tell what that value actually is. As with redaction, we attempt to preserve the original field syntax when possible, as follows:

    • String, Boolean, and string list fields will have their values tokenized with the text “{TOKENIZED:tokenValue}”, where tokenValue will be generated from the original value.
    • DN fields will have their values replaced with “tokenized={TOKENIZED:tokenValue}”.
    • Filter fields will have their values replaced with “(tokenized={TOKENIZED:tokenValue})”.
    • Integer fields will have their values replaced with a number that starts with “-999999999” and is followed by a sequence of nine additional digits generated from the original value.
    • Floating-point fields will have their values replaced with a number that starts with “-999999.” and is followed by a sequence of six additional digits generated from the original value.
    • Generalized time and RFC 3339 timestamp values will have their values replaced with a timestamp that uses the year 8888, but with other components of the timestamp generated from the original timestamp.
    • JSON object fields will have their values replaced with the text “{ "tokenized":"{TOKENIZED:tokenValue}" }”.
  • tokenize-value-components — This behavior is similar to redact-value-components, but the individual components will be tokenized rather than redacted. As with redact-value-components, this primarily applies to DNs, filters, and JSON objects.
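To make the component-level behaviors more concrete, consider a filter field whose original value is:

(&(uid=jdoe)(mail=jdoe@example.com))

With redact-value-components, only the attribute values are obscured, while the attribute names and punctuation are preserved:

(&(uid={REDACTED})(mail={REDACTED}))

With tokenize-value-components, each value is instead replaced with a token generated from it (the token values shown here are just placeholders):

(&(uid={TOKENIZED:1a2b3c4d5e})(mail={TOKENIZED:6f7g8h9i0j}))

Because a given value consistently produces the same token, you can still correlate operations involving the same user across log messages without being able to tell who that user actually is.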

The server now provides a log field syntax configuration object for each of the aforementioned syntaxes, and that object will include a default-behavior field that allows you to indicate that all values with that syntax should be treated in a given way. For example, if you configure the DN log field syntax with a default-behavior value of tokenize-value-components, then all DNs written to a text-formatted or JSON-formatted access log will automatically have their attribute values tokenized while the rest of the DN will remain intact.
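For example, a dsconfig invocation to set that behavior might look something like the following. This is an assumption based on the usual dsconfig naming conventions rather than a confirmed command; the exact subcommand and argument names can be verified with dsconfig’s interactive mode:

dsconfig set-log-field-syntax-prop \
     --syntax-name DN \
     --set default-behavior:tokenize-value-components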

For even more control over how individual log fields are treated, you can use log field behavior configuration objects. These objects allow you to choose the behavior to use on a field-by-field basis, and you can configure different behaviors for different loggers if desired.

If you’d rather have access log messages written with all of the information intact, but still want the ability to sanitize those logs after the fact, then you can continue to use the sanitize-log tool to accomplish this. We have updated the tool to provide improved behavior that is more closely aligned with the server’s support for sanitized logging. This includes:

  • It now knows about all predefined log fields that the server may use, and we have preselected default behaviors for each of those fields in a manner that we feel provides the best balance between privacy and usefulness.
  • It now uses the same syntax-aware redaction and tokenization logic that the server uses so that field values are more likely to conform to their original syntax.
  • You can customize the behavior that the tool exhibits on a per-syntax basis, or you can point it at a log field behavior configuration object.

Generified Log Message Strings

Sanitized logging and the sanitize-log tool provide very powerful mechanisms for removing or obscuring sensitive or identifiable information in log messages, but there are some access log fields that may have such a wide range of values that it’s not necessarily feasible to treat them in a “one size fits all” kind of way, especially while retaining the utility of that field. These kinds of fields include:

  • The diagnostic message that is returned to the client in an LDAP response.
  • An additional info message that is meant to be recorded in the access log for an operation but not returned to the client.
  • An authentication failure reason that is meant to explain why an authentication attempt failed.
  • A disconnect reason that is meant to provide the reason that a client connection was closed.

It’s possible that the values of these fields could occasionally include sensitive or identifiable information, but their content can also be very helpful in troubleshooting problems or understanding the server’s behavior in specific situations. As such, you would lose substantial benefit by configuring those fields to be redacted.

Further, while a field-based approach works well for sanitizing access log messages for the most part, it doesn’t really make any sense for error log messages, as the core content of an error log message appears in the same field for all messages.

To address these concerns, we have introduced the ability to log generified values of these strings, using the generify-message-strings-when-possible property in the configuration for each logger. If this property is set to true, then the server will log the generic format string for the associated message rather than the processed message that it would normally include.

For example, if a simple bind attempt fails because the target user doesn’t exist, the access log message for that operation might include an authentication failure reason like:

Unable to bind to the Directory Server as user uid=jdoe,ou=People,dc=example,dc=com because no such user exists in the server.

However, if the logger is configured to report the generic version of the message, then it would instead log:

Unable to bind to the Directory Server as user %s because no such user exists in the server.

The generic version of the message doesn’t specifically identify the user, thereby offering better privacy than the un-generified version, while remaining useful enough to understand why the authentication attempt failed.

JSON-Formatted LDAP Request and Response Controls

The Ping Identity Directory Server provides support for a very wide range of request and response controls. This includes not only standard controls defined in RFCs and Internet Drafts, but also many controls that we’ve defined ourselves for enhanced functionality. These controls can be easily accessed through the UnboundID LDAP SDK for Java, but they aren’t as readily available for use in applications written with other LDAP APIs, especially in languages that don’t run on the JVM. Even though the LDAP SDK documentation describes the format of the encoded values for these controls, generating or decoding those values may still be a substantial challenge, typically requiring a library for working with ASN.1 BER and knowing how to use it properly.

To help address that, we’ve introduced the ability to encode LDAP request controls as JSON objects, and to indicate that LDAP response controls should also be encoded as JSON objects. JSON is much easier to work with than ASN.1 BER across a variety of programming languages, and it doesn’t require as much expertise to understand how to properly generate or parse JSON objects.

The primary interface to this functionality is through the JSON-formatted request control, which has an OID of “1.3.6.1.4.1.30221.2.5.64” and a value that is the string representation of a JSON object that encapsulates the JSON representations of the request controls to send to the server (or you can omit the value if you don’t have any request controls to send but want all response controls to be returned using a JSON encoding). If the request includes a JSON-formatted request control, and if there are any response controls to be returned, then the server will return them in a JSON-formatted response control, which has an OID of “1.3.6.1.4.1.30221.2.5.65” and a value that uses the same format as the request control. Note that while you don’t need to use the UnboundID LDAP SDK for Java to generate or parse these controls, its Javadoc documentation (especially the documentation for the toJSONControl method for the controls in question) may be useful in understanding what fields may be used in these objects.

The value of a JSON-formatted request or response control is simply a JSON object that has a single controls field whose value is an array of the JSON objects that make up the individual request or response controls. For example, the following JSON object represents the encoded value for a JSON-formatted request control that embeds both a retain identity request control and a get authorization entry request control:

{ "controls":[
  { "oid":"1.3.6.1.4.1.30221.2.5.3",
    "control-name":"Retain Identity Request Control",
    "criticality":true },
  { "oid":"1.3.6.1.4.1.30221.2.5.6",
    "control-name":"Get Authorization Entry Request Control",
    "criticality":false,
    "value-json":{
      "include-authentication-entry":true,
      "include-authorization-entry":true,
      "attributes":[ "uid", "givenName", "sn", "cn", "mail" ] } } ] }

For controls that are supported in both LDAP and the Directory REST API, the JSON encoding that we use is the same in either case. However, the Directory Server supports a number of additional controls for LDAP requests and responses that it doesn’t currently support through the Directory REST API. In addition to all of the controls we currently support in the Directory REST API, we provide JSON encodings for each of the following additional controls in LDAP requests:

  • Account usable request and response controls
  • Administrative operation request control
  • Authorization identity request and response controls
  • Extended schema info request control
  • Get authorization entry request and response controls
  • Get backend set ID request and response controls
  • Get password policy state issues request and response controls
  • Get recent login history request and response controls
  • Get server ID request and response controls
  • Get user resource limits request and response controls
  • Hard delete request control
  • Override search limits request control
  • Password policy request and response controls
  • Replication repair request control
  • Retain identity request control
  • Return conflict entries request control
  • Route to backend set request control
  • Route to server request control
  • Server-side sort request and response controls
  • Simple paged results control
  • Soft delete request control
  • Soft-deleted entry access request control
  • Subentries request control
  • Subtree delete request control
  • Suppress operational attribute update request control
  • Undelete request control
  • Virtual list view request and response controls

Ping Identity Directory Server 9.0.0.0

We have just released version 9.0.0.0 of the Ping Identity Directory Server. The release notes provide a pretty comprehensive overview of what’s included, but here’s my summary.

Ping Identity Directory Server Products Do Not Use log4j

Recently, a very serious security issue (CVE-2021-44228) was identified in the Apache log4j library, which many Java applications use to provide logging support. None of the Ping Identity Directory Server, Directory Proxy Server, Synchronization Server, or Metrics Engine products uses this library in any way, and it is not included as part of the server. Some of the libraries that we include with the server do have support for logging to log4j, but that functionality is not used.

The standalone version of the Admin Console does include the log4j library, but it is included only as a transitive dependency of one of the other libraries used by the console, and the log4j library is not used in any way by the Admin Console. Because this vulnerability was disclosed very late in the release cycle for the Ping Identity server products, we have chosen to update to a non-vulnerable version of the library rather than remove it entirely, as that requires less testing. Again, even though the log4j library is included with the standalone Admin Console, it is not used in any way, so even if you are using an older version of the console with an older version of the log4j library, you are not vulnerable to the security issue.

The UnboundID LDAP SDK for Java does not include any third-party dependencies at all (other than a Java SE runtime environment at Java version 7 or later). It does not include or interact with log4j in any way.

Changes Affecting All Server Products

  • Added cipher stream providers for PKCS #11 tokens, Azure Key Vault, and CyberArk Conjur. [more information]
  • Added passphrase providers for Azure Key Vault and CyberArk Conjur. [more information]
  • Added password storage schemes for authenticating with passwords stored in external services, including AWS Secrets Manager, Azure Key Vault, CyberArk Conjur, and HashiCorp Vault. [more information]
  • Added extended operations for managing server certificates. [more information]
  • Added the ability to redact the values of sensitive configuration properties when constructing the dsconfig representation for a configuration change. [more information]
  • Included the original requester DN and client IP address in log messages for mirrored configuration changes. [more information]
  • Added TLS configuration properties for outbound connections. [more information]
  • Updated the Admin Console to support using PKCS #12 and BCFKS trust stores.
  • Updated the file servlet to support authenticating with OAuth 2.0 access tokens and OpenID Connect ID tokens, which makes it possible to download collect-support-data archives and server profiles generated through the Admin Console when authenticated with SSO.
  • Fixed an issue that could cause degraded performance and higher CPU utilization for some clients using TLSv1.3.
  • Fixed an issue that prevented the manage-profile replace-profile tool from working properly for servers running in FIPS 140-2-compliant mode.
  • Updated export-ldif to always base64-encode attribute values containing any ASCII control characters. Previously, only the null, line feed, and carriage return control characters caused values to be base64-encoded.
  • Fixed an issue in which some tools that operate on the server’s configuration did not use the correct matching rule for attribute types configured to use case-sensitive matching. If a config entry had an attribute with multiple values differing only in capitalization, all but one of the values could be lost.
  • Updated the Directory REST API to add support for attribute options.
  • Added the ability to recognize JVM builds from Eclipse Foundation, Eclipse Adoptium, and BellSoft.
  • Removed “-XX:RefDiscoveryPolicy=1” from the default set of options used to launch the JVM. In some cases, this option has been responsible for JVM crashes.

Changes Affecting the Directory Server

  • Added support for pluggable pass-through authentication. [more information]
  • Fixed an issue that could prevent authenticating with certain types of reversibly encrypted passwords that were encrypted on an instance that was subsequently removed from the topology. [more information]
  • Fixed an issue that prevented decoding the value of a proxied authorization v2 request control when the authorization identity had a specific length.
  • Fixed an issue that could cause sporadic failures when attempting to back up a backend with data encryption enabled. In such cases, the backup would likely succeed if re-attempted.
  • Added a replica-partial-backlog attribute to the replication summary monitor entry to provide information about how each replica contributes to the overall replication backlog.
  • Fixed an issue in which the server could use incorrect resource limit values (including size limit, time limit, lookthrough limit, and idle time limit) for users with custom limits who authenticated via pass-through authentication.
  • Fixed an issue in which the server did not properly update certain password policy state information for simple bind attempts targeting users without a password.
  • Fixed an issue in which the server may not handle other controls properly when processing an operation that includes the join request control. The server may have overlooked a control immediately following the join request control in the operation request, and it may have omitted appropriate non-join result controls from the response.
  • Fixed an issue in which a newly initialized server could go into lockdown mode with a warning about missing changes if it was restarted immediately after initialization completed.
  • Fixed an issue that could prevent changes applied to non-RDN attributes in the course of processing a modify DN operation from being replicated.
  • Fixed an issue that could prevent composed attribute values from being properly updated for operations that are part of a multi-update extended operation.
  • Improved performance for modify operations that need to update a composite index to add an entry ID to the middle of a very large ID set.
  • Added limits for the maximum number of attributes in an add request and the maximum number of modifications in a modify request. [more information]
  • Updated the dsreplication initialize-all command to support initializing multiple replicas in parallel.
  • Updated remove-defunct-server to add a --performLocalCleanup option that can be used to remove replication metadata from a server that is offline.
  • Added an option to the mirror virtual attribute provider to make it possible to bypass access control evaluation for the internal searches that it performs to retrieve data from other entries.
  • Fixed an issue in which an entry added with a createTimestamp attribute could lose the original formatting for that attribute when replicated to other servers.
  • Fixed an issue that could lead to long startup times in large topologies with data encryption enabled.
  • Updated the ldap-diff tool to add several new features. [more information]
  • Updated the migrate-ldap-schema tool to add several new features. [more information]

Changes Affecting the Directory Proxy Server

  • Fixed an issue that could cause certain internal operations initiated in the Directory Proxy Server to fail when forwarded to a backend Directory Server whose default password policy was configured in a way that interfered with the account used to authorize internal operations.
  • Improved the logic used to select the best error result to return to the client for operations broadcast to all backend sets. Previously, the server could have incorrectly returned a result indicating that the target entry did not exist when the operation failed for some other reason.
  • Updated the entry counter, hash DN, and round-robin placement algorithms to support excluding specific backend sets.

Changes Affecting the Synchronization Server

  • Added the ability to synchronize certain password policy state information from Active Directory to the Ping Identity Directory Server, including account disabled state and the password changed time.
  • Fixed an issue that could prevent synchronizing changes to entries that have multiple attributes with the same base attribute type but different sets of attribute options, particularly if any of the attributes have more values than the replace-all-attr-values limit defined in the associated Sync Class.
  • Added the ability to apply rate limiting when synchronizing changes to PingOne.
  • Fixed an issue in which the max-rate-per-second property was not properly applied when running the resync tool.

Changes Affecting the Metrics Engine

  • Fixed an issue that could prevent dashboard icons from being properly displayed.

New Cipher Stream Providers

The encryption settings database holds a set of definitions that include the keys used for data encryption. The encryption settings database is itself encrypted, and we use a component called a cipher stream provider for reading and writing that encrypted content. We already offered several cipher stream provider implementations, including:

  • Generate the encryption key with a passphrase read from a file.
  • Generate the encryption key with a passphrase provided interactively during server startup.
  • Protect the encryption key with AWS Key Management Service (KMS).
  • Generate the encryption key with a passphrase retrieved from AWS Secrets Manager.
  • Generate the encryption key with a passphrase retrieved from a HashiCorp Vault instance.
  • Use the Server SDK to develop your own custom cipher stream providers.

In the 9.0.0.0 release, we are introducing support for three new types of cipher stream providers:

  • Wrap the encryption key with a certificate read from a PKCS #11 token, like a Hardware Security Module (HSM). Note that because of the limitations in Java’s support for key wrapping, only certificates with RSA key pairs can be used for this purpose.
  • Generate the encryption key with a passphrase retrieved from Azure Key Vault.
  • Generate the encryption key with a passphrase retrieved from a CyberArk Conjur instance.

New Passphrase Providers

Passphrase providers offer a means of obtaining clear-text secrets that the server may need for things like accessing protected content in a certificate key store or authenticating to an external service. We already offered several passphrase provider implementations, including:

  • Read the secret from a file, which may optionally be encrypted with a key from the server’s encryption settings database.
  • Read the secret from an obscured value stored in the server’s configuration.
  • Read the secret from an environment variable.
  • Read the secret from AWS Secrets Manager.
  • Read the secret from a HashiCorp Vault instance.
  • Use the Server SDK to develop your own custom passphrase providers.

In the 9.0.0.0 release, we are introducing support for two new types of passphrase providers:

  • Read the secret from Azure Key Vault.
  • Read the secret from a CyberArk Conjur instance.

Password Storage Schemes for External Services

Password storage schemes are used to protect passwords held in the server. We already offered a variety of password storage schemes, including:

  • Schemes using salted 256-bit, 384-bit, and 512-bit SHA-2 digests. SHA-1 support is also available for legacy purposes, but is not recommended.
  • Schemes using more resource-intensive, brute-force-resistant algorithms like PBKDF2, bcrypt, scrypt, and Argon2.
  • A scheme that reversibly encrypts passwords with a 256-bit AES key obtained from the encryption settings database.
  • Schemes that reversibly encrypt passwords with legacy keys stored in the topology registry.

In the 9.0.0.0 release, we are introducing support for new password storage schemes that allow users to authenticate with passwords stored in external secret stores, including:

  • AWS Secrets Manager
  • Azure Key Vault
  • CyberArk Conjur
  • HashiCorp Vault

In these cases, the storage scheme is configured with the information needed to connect and authenticate to the external service, and the encoded representation of the password contains a JSON object with the information needed to identify the specific secret in that service to use as the password for the associated user.
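For illustration only, an encoded userPassword value for one of these schemes might conceptually look like the following. The scheme name prefix and JSON field name here are hypothetical; the actual encoding is defined by each scheme:

{VAULT}{ "secret-path":"secret/data/directory/jdoe-password" }

The important point is that the stored value is a reference to a secret held in the external service, not the password itself (or a hash of it).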

These password storage schemes can be used to authenticate with both LDAP simple authentication and SASL mechanisms that use a password. However, these schemes are read-only: users can authenticate with a password stored in the associated external service, but password changes need to be made through that service rather than over LDAP.

Extended Operations for Certificate Management

We have added support for a set of extended operations that can be used to remotely manage certificates in server instances, including replacing listener and inter-server certificates and purging information about retired certificates from the topology registry. These operations are especially useful for managing certificates in instances running in Docker or in other cases where command-line access may not be readily available to run the replace-certificate tool.

When replacing certificates, the new key store can be obtained in several ways:

  • It can be read from a file that is already available to the server (for example, one that has been copied to the server or placed on a shared filesystem).
  • The raw bytes that make up the new key store file can be included directly in the extended request.
  • The individual certificates and private key can be provided in the extended request, in either PEM or DER form.

Many safeguards are in place to prevent these extended operations from being inappropriately used. These include:

  • The extended operation handler providing support for these operations is not enabled by default. It must be enabled before they can be used.
  • The extended operations will only be allowed over secure connections.
  • The extended operations can only be requested by a user with the permit-replace-certificate-request privilege. No users have this privilege by default (not even root users or topology administrators).
  • You can indicate which of the individual types of operations are allowed, and you can define connection and request criteria to further restrict the circumstances under which they may be used.
  • By default, it will only allow reading certificates from a file on the server filesystem. You have to specifically enable the option to allow providing the new certificate information from a remote client.
  • The server will generate administrative alerts for all successful and failed attempts to process these operations.

These extended operations can be invoked programmatically (support for them is included in the UnboundID LDAP SDK for Java). They can also be used through new subcommands in the replace-certificate command-line tool.

Redacting Sensitive Values in Configuration Changes

We have added a new redact-sensitive-values-in-config-logs global configuration property that can be used to indicate that the server should redact the values of sensitive configuration properties when generating the dsconfig representation for that configuration change, including the representation that is written to the config-audit.log file and included in alerts to notify administrators of the change.

By default, the values of sensitive configuration properties are obscured in a way that allows the server to obtain the clear-text value, but that is not readily apparent to an observer. This helps protect the values of these secrets while still allowing the config-audit.log file to be replayed. However, a determined user with access to this obfuscated representation may be able to determine the clear-text value that it represents.

If the redact-sensitive-values-in-config-logs property is set to true, then the values of sensitive configuration properties will be redacted rather than obscured. This prevents someone with access to the dsconfig representation of the change from being able to obtain the clear-text value of the secret, but it does mean that the config-audit.log file may no longer be replayable.
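Enabling the new behavior is a single global configuration change, using the property name given above with the standard dsconfig subcommand for the global configuration:

dsconfig set-global-configuration-prop \
     --set redact-sensitive-values-in-config-logs:true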

Original Requester Details for Mirrored Configuration Changes

When making configuration changes, log messages (including those written to the server’s access log and the config-audit.log file) include the DN of the user that requested the change and the IP address of the client system. However, for changes affecting mirrored configuration (including the topology registry or cluster configuration), these values do not accurately reflect the DN and address of the original requester; instead, they reflect either the details of an internal connection or of a connection from another server instance that has forwarded the change to the topology master.

To address this, we have updated the server so that the DN and IP address of the original requester are included as part of changes to mirrored configuration. Records for these configuration changes that are written to config-audit.log and the server’s access log will now provide these values in the original-requester-dn and original-requester-ip fields.

New TLS Configuration Properties

We have updated the crypto manager configuration to add support for four new properties for configuring TLS communication:

  • outbound-ssl-protocol — Can be used to specify the set of TLS protocols that may be used for outbound connections (e.g., those used for pass-through authentication or for synchronization with remote servers).
  • outbound-ssl-cipher-suite — Can be used to specify the set of TLS cipher suites that may be used for outbound connections.
  • enable-sha-1-cipher-suites — Can be used to enable the use of TLS cipher suites that rely on the SHA-1 digest algorithm, which is no longer considered secure and is disabled by default.
  • enable-rsa-key-exchange-cipher-suites — Can be used to enable the use of TLS cipher suites that rely on the RSA key exchange algorithm, which does not provide support for forward secrecy and is disabled by default.
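For example, to restrict outbound connections to TLSv1.3, you could set the corresponding crypto manager property with dsconfig (a sketch assuming the usual set-crypto-manager-prop subcommand for this configuration object):

dsconfig set-crypto-manager-prop \
     --set outbound-ssl-protocol:TLSv1.3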

Pluggable Pass-Through Authentication

We have updated the Directory Server to add support for pluggable pass-through authentication. Previously, the server provided support for passing through simple bind attempts to another LDAP server or to PingOne. It is now possible to support pass-through authentication to other types of services, and the Server SDK has been updated to add support for creating custom pass-through authentication handlers.

This implementation includes an LDAP pass-through authentication handler that allows the new pluggable pass-through authentication plugin to be used as an alternative to the former LDAP-specific pass-through authentication plugin. The new implementation offers several advantages over the former one, including:

  • Better default configuration properties (especially for the override-local-password property).
  • The ability to indicate whether to attempt pass-through authentication for accounts in an unusable password policy state (for example, those that are locked or that have expired passwords).
  • The ability to set timeouts for interaction with the external LDAP servers.
  • Improved diagnostic information about pass-through authentication attempts, including support for the password policy request control and password expired response control.
  • A new monitor entry with metrics about the processing performed by the plugin.

Preserving Secret Keys for Instances Removed From the Topology

Previously, when a server was removed from the topology (for example, by using the remove-defunct-server tool), secret keys associated with that instance could be lost. This is unlikely to cause any problems in most cases because these keys are no longer used for most purposes. However, it could be an issue if the server is configured to use a legacy password storage scheme that protects passwords with reversible encryption. These schemes encrypt passwords with keys from the topology registry, and if a server was removed from the topology, then keys specific to that instance were also removed. This could prevent remaining servers from being able to decrypt passwords that were initially encrypted by the instance that was removed. To address this, we now preserve any secret keys that are associated with an instance before removing that instance from the topology.

Affected password storage schemes include AES, Blowfish, RC4, and 3DES. The newer AES256 password storage scheme is not affected by this issue.

Size Limits for Add and Modify Requests

We have added new maximum-attributes-per-add-request and maximum-modifications-per-modify-request properties to the global configuration. The former can be used to limit the number of attributes that may be included in an add request, and the latter can be used to limit the number of modifications that may be included in a modify request. Neither of these properties affects the number of values that individual attributes may have.

These limits can help avoid potential denial-of-service attacks that use specially crafted add and modify requests. By default, add requests are limited to 1000 attributes, and modify requests are limited to 1000 modifications, which should be plenty for virtually all real-world use cases.
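If the defaults don’t suit your environment, both limits can be adjusted through the global configuration using the property names given above; for example:

dsconfig set-global-configuration-prop \
     --set maximum-attributes-per-add-request:2000 \
     --set maximum-modifications-per-modify-request:2000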

New ldap-diff Features

We have updated the ldap-diff tool to provide several new features. These include:

  • We have added an option to perform byte-for-byte comparisons when identifying differences. By default, the tool uses schema-aware matching, which may not flag differences in values that are logically equivalent but not identical (for example, values that differ only in capitalization for an attribute configured to use case-insensitive matching).
  • You can now use a properties file to provide default values for some or all of the command-line arguments.
  • We improved support for SASL authentication.

New migrate-ldap-schema Features

We have updated the migrate-ldap-schema tool to provide several new features. These include:

  • We have added more flexibility when securing communication with servers over TLS, including the ability to use different key and trust managers for the source and destination servers.
  • We have added support for SASL authentication.
  • We have added support for using a properties file to obtain default values for some or all of the command-line arguments.
  • We have added better validation for migrated attribute types and object classes.

Ping Identity Directory Server 8.3.0.0

We have just released version 8.3.0.0 of the Ping Identity Directory Server. The release notes provide a pretty comprehensive overview of what’s included, but here’s my summary followed by more detailed explanations for many of the changes.

Summary of Deprecated Functionality

Summary of New Features and Enhancements

Summary of Bug Fixes

  • Fix an issue that could allow users in a “must change password” state to issue requests [more information]
  • Prevent warning messages for unrecognized JVM vendors [more information]
  • Fix an issue that could prevent ds-pwp-modifiable-state-json changes from being replicated right away [more information]
  • Improve the logic for maintaining the entry-balancing global index [more information]
  • Fix an issue that could prevent setting up the server on old JVMs without support for 256-bit AES
  • Fix an issue that could interfere with manage-profile replace-profile when using a StatsD monitoring endpoint [more information]
  • Avoid entering lockdown mode when incorrectly believing that there were missed replication changes [more information]
  • Improve replication for dependent changes that may be received out of order [more information]
  • Fix an issue with incorrectly reporting that certain filters were not indexed [more information]
  • Prevent dsreplication status from listing offline servers under incorrect domains [more information]
  • Allow configuring cipher stream providers in Directory Proxy Server, Synchronization Server, and Metrics Engine [more information]
  • Fix an issue preventing manage-profile replace-profile from updating mirrored configuration [more information]
  • Prevent offline config change warnings when using manage-profile replace-profile [more information]
  • Update manage-profile replace-profile to preserve setup logs [more information]
  • Improve validation and behavior when configuring an explicit set of TLS cipher suites [more information]
  • Improve manage-profile replace-profile detection of changes to files not included in the server profile [more information]
  • Fix an issue when trying to update a topology server group with a server that already exists in that group
  • Fix issues with import-ldif with --addMissingRDNAttributes [more information]
  • Fix an issue with dsjavaproperties with --initialize and --jvmTuningParameter [more information]
  • Fix an issue that could prevent Sync failed ops log publishers from being removed
  • Improve the result code when trying to add an entry through the Directory Proxy Server when no backend servers are available or when adding entries with missing parents [more information]
  • Fix a potentially incorrect warning about duplicate jar files detected during startup
  • Fix an issue that could prevent Server SDK plugins from seeing all content in an add operation [more information]
  • Avoid a potential reverse DNS warning message during setup
  • Fix an issue that could cause the server to provide an incorrect estimate for the number of entries matching a filter using a composite index [more information]
  • Improve prompts when using dsreplication in interactive mode [more information]

Deprecate TLSv1 and TLSv1.1

TLS version 1.0 became a standard over twenty years ago in 1999, and TLSv1.1 became a standard over fifteen years ago (in 2006). While they were undoubtedly improvements over the former SSL protocols, they are pretty ancient in the world of computer security and are no longer considered secure. RFC 8996 officially declared them to be historic protocols that should no longer be used. As such, these protocols will no longer be enabled by default in the Directory Server or related products.

Newer, more secure protocols have been around for many years (TLSv1.2 nearly 13 years ago in 2008, and TLSv1.3 almost 3 years ago in 2018), and these will be the only TLS protocol versions enabled by default. This shouldn’t cause any problems unless you have ancient clients that haven’t been updated in over a decade. You can look at SECURITY-NEGOTIATION messages in the server’s access log to identify which TLS protocol versions clients are using, and we strongly recommend updating any clients still using TLSv1 or TLSv1.1. However, if necessary, you can manually enable support for the legacy protocols in the connection handler configuration.
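If you do need to re-enable a legacy protocol for a specific connection handler, it would be a change along these lines (a sketch only: the handler name and the ssl-protocol property name shown here are assumptions that should be verified against your own configuration):

dsconfig set-connection-handler-prop \
     --handler-name "LDAP Connection Handler" \
     --add ssl-protocol:TLSv1.1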

Changes to TLS Cipher Suite Selection

We have deprecated support for TLS cipher suites that use the SHA-1 message digest algorithm. SHA-1 is no longer considered secure, and we had previously only enabled support for these cipher suites to allow for their use in TLSv1 and TLSv1.1. As we have disabled support for those legacy TLS protocols by default, we have also disabled support for TLS cipher suites that rely on SHA-1.

We have also deprecated support for TLS cipher suites that rely on RSA key exchange. RSA key exchange does not support forward secrecy, which means that if the server certificate’s private key is compromised, then any data transferred over a TLS session negotiated with RSA key exchange using that certificate can be decrypted. Note that this does not prevent using certificates with RSA key pairs, as key agreement algorithms like ECDHE and DHE can still be used with RSA certificates.

Even though we no longer enable support for these cipher suites by default, you can manually enable them if necessary. You can also customize the set of enabled cipher suites on a per-connection-handler basis (for example, if you want to allow different cipher suites for HTTPS connections than LDAPS connections).

And in cases in which you do manually configure the set of TLS cipher suites, we have improved the validation that the server performs for that configuration. Previously, if you manually configured the set of cipher suites but none of those suites was supported by the JVM, the server would log a message for each of the unsupported suites and simply fall back on a default set of suites. It will now reject an attempt to configure custom suites if none of those suites is available in the underlying JVM. For the sake of preserving compatibility (in cases where you update the server, or if you update the JVM to a version with a different set of supported cipher suites), the server will still allow you to include suites that the JVM doesn’t support, as long as at least one of the configured suites is supported, and it will continue to log a message about each unsupported suite.
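
As a sketch of what an explicit per-handler cipher suite configuration might look like (the ssl-cipher-suite property name should be verified against your server version; the suite names are standard JSSE names):

  bin/dsconfig set-connection-handler-prop \
       --handler-name "LDAPS Connection Handler" \
       --set ssl-cipher-suite:TLS_AES_256_GCM_SHA384 \
       --set ssl-cipher-suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384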

Deprecate Incremental Backups

The server has always offered the ability to create incremental backups, which contain only the database files that have changed since the previous backup. Unfortunately, there have been numerous problems with this functionality over the years, and we have decided to deprecate it rather than continue to offer a potentially unreliable backup mechanism.

In the 8.3.0.0 release, the functionality will remain available, and all known issues have been addressed, but if you run the backup command with the --incremental argument, it will display a warning on the command line indicating that the functionality has been deprecated and will likely be removed in a future release. If you have the server itself initiate an incremental backup (through the tasks interface), then it will log a warning message and generate an administrative alert to provide a more visible warning.

As an alternative to performing incremental backups, we recommend LDIF exports. LDIF data compresses very well, and compressed and encrypted LDIF files are substantially smaller than full backups, allowing you to take them more frequently without consuming more disk space. LDIF exports are also more portable and more useful than backups, allowing you to do things like restore individual entries and perform offline analysis of the data.

Another option is to use the server’s data recovery log in addition to LDIF exports or full backups. The data recovery log has been available since the 7.2.0.0 release, and it is a compressed, encrypted audit log that provides a record of recent changes in reversible form so that they can be easily replayed or reverted if necessary. You can use the extract-data-recovery-log-changes tool to write changes matching desired criteria (for example, all changes recorded since the time of a backup or LDIF export) that can then be replayed using a tool like ldapmodify or parallel-update.
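
As a rough sketch of that recovery workflow (the extract-data-recovery-log-changes arguments are elided here because they depend on the criteria you want to apply; consult the tool’s --help output):

  # Extract the changes recorded since the last LDIF export, then replay them.
  bin/extract-data-recovery-log-changes ... > recovered-changes.ldif
  bin/ldapmodify --ldifFile recovered-changes.ldif ...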

Add a FIPS 140-2-Compliant Mode

FIPS 140-2 is a U.S. government specification that defines requirements around the use of cryptography. U.S. government agencies, and organizations that work with those government agencies, may be required to abide by this specification. In the 8.3.0.0 release, we will allow you to set up the server to operate in a FIPS 140-2-compliant mode, in which only cryptographic operations allowed by the specification will be permitted.

Note that servers operating in FIPS 140-2-compliant mode are not directly compatible with servers not running in FIPS-compliant mode. It is not possible to have a mix of compliant and non-compliant servers in the same topology, nor can you directly replicate between them (although you can use the Synchronization Server to keep a FIPS-compliant topology in sync with a non-compliant topology). You will also not be allowed to update an existing non-FIPS-compliant instance to operate in FIPS 140-2-compliant mode.

When installed in this mode, the server uses the FIPS 140-2-certified Bouncy Castle BCFIPS provider in approved-only mode, along with the FIPS-compliant BCJSSE provider for TLS processing. The server will require secure communication, and certificates must be stored in either BCFKS key stores (which the manage-certificates tool now supports) or PKCS #11 tokens. Data encryption must also be enabled during setup, with at least one encryption settings definition. Because of requirements around the use of PBKDF2 in FIPS-compliant mode, passwords for root users and topology administrators will be required to be at least 14 characters long.

Also note that the Bouncy Castle FIPS 140-2-compliant SecureRandom implementation is very entropy-hungry. If the underlying system does not have enough entropy available, attempts to start the server or launch tools requiring the use of secure random numbers may block for extended periods of time. To avoid this, we strongly recommend installing a hardware random number generator or using an entropy-supplementing daemon like rngd.
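
On Linux systems, you can get a quick sense of whether entropy is likely to be a problem before enabling FIPS 140-2-compliant mode:

  # Check the kernel's available entropy estimate. Persistently low values
  # (for example, in the low hundreds) suggest that a hardware random number
  # generator or a daemon like rngd would be worthwhile.
  cat /proc/sys/kernel/random/entropy_avail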

Add Support for Passphrase Providers

In some cases, the server may require access to clear-text secrets for use in its processing. For example, it may need clear-text credentials for authenticating to external services or for accessing certificate key and trust stores. Historically, the server has allowed you to provide those secrets by either storing an obscured version of the secret directly in the configuration or by storing it in a file.

In the 8.3.0.0 release, we are introducing a new, extensible passphrase provider framework for obtaining access to these kinds of secrets. Initially, we will allow obtaining secrets through the following mechanisms:

  • From an obscured representation of the secret stored directly in the configuration
  • From a file contained on the local filesystem, which may optionally be encrypted with a key from the server’s encryption settings database
  • From an environment variable
  • From a HashiCorp Vault server
  • From the Amazon AWS Secrets Manager service

In addition, the Server SDK provides support for creating custom passphrase provider implementations that use other services or methods for obtaining secrets.
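
As a sketch of what configuring one of these might look like, the following creates a provider that reads a secret from an environment variable. The subcommand follows dsconfig’s usual create-<object-type> naming, but the type and property names shown here are assumptions, so check the dsconfig documentation for the actual values:

  bin/dsconfig create-passphrase-provider \
       --provider-name "Bind Password From Environment" \
       --type environment-variable \
       --set environment-variable:BIND_PASSWORD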

Improved Auditability for SCIM2 Requests

By default, the server authorizes SCIM 2 requests through a combination of two mechanisms:

  • With an authorization identity of “cn=SCIM2 Servlet,cn=Root DNs,cn=config”. Although this is a root account, it does not inherit any of the default root privileges, and it is not assigned any privileges by default. Global ACIs grant that account a minimal level of access to use certain request controls (like server-side sort and virtual list view) and operational attributes (like createTimestamp and modifyTimestamp).
  • With any access control rights granted (via the oauthscope ACI bind rule) for scopes included in the OAuth bearer token used to authenticate to the server.
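
For example, an ACI granting access based on an OAuth scope might look something like the following (a sketch; verify the exact oauthscope bind rule syntax against the server’s access control documentation):

  aci: (targetattr="*")
    (version 3.0; acl "Grant read access for the calendar scope";
     allow (read,search,compare) oauthscope="calendar";)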

In the past, the server has not attempted to map the access token used to authorize SCIM2 requests to a local user in the server. As such, the access log would always report that the operations were requested by the “cn=SCIM2 Servlet,cn=Root DNs,cn=config” user, and that DN would also appear in the creatorsName and modifiersName attributes of created and updated entries. This is still the default behavior, but in the 8.3.0.0 release, we’re making it possible to map the access token to a local account and use that account as the authorization identity for any requested operations (along with rights granted to scopes included in the token).

Note that if you use this feature, then you will need to ensure that those mapped users have access to the same request controls and operational attributes that are available to the SCIM2 Servlet user by default. The easiest way to do this is to ensure that the “scim2” scope is included in any access tokens used to authorize the requests, as this scope will be granted the same default access rights as the SCIM2 Servlet user. Otherwise, you will need to define additional access control rules to grant the necessary rights to the mapped users.

Join Virtual Attribute Types

The Directory Server has provided support for an LDAP join request control for several years. This control allows you to request that the server return “related” entries along with entries that match the search criteria. For example, if you use the manager attribute to hold the DN of a user’s manager, then you can use the join request control to return a user’s manager entry along with the user’s own entry. This is a powerful and useful feature, but because it requires an LDAP control, it’s not been available to non-LDAP clients (like those using SCIM or the Directory REST API). Further, because that control uses a proprietary encoding, it’s not all that convenient to use in LDAP clients unless you’re using the UnboundID LDAP SDK for Java.

In the 8.3.0.0 release, we’re adding support for three new virtual attribute types that make it possible to use some of the power afforded by the join request control but without the need to actually use a control. This makes the content accessible from any LDAP client regardless of the API used to implement it, and also to non-LDAP clients.

The new virtual attribute types include:

  • The DN join virtual attribute allows you to join an entry with zero or more other entries whose DNs are contained in the value of a specified attribute. For example, you could use this to join a user with their supervisor through the “manager” attribute in the user’s entry.
  • The reverse DN join virtual attribute allows you to join an entry with zero or more other entries that contain the DN of the source user’s entry in the value of a specified attribute. For example, you could use this to join a manager with their direct reports.
  • The equality join virtual attribute allows you to join an entry with zero or more other entries that are linked by a common attribute value (that is, the source user entry has a value for one attribute that matches the value of the same or a different attribute in other entries).

In each of these cases, each value of the virtual attribute will be a JSON object containing a specified set of attributes from the corresponding joined entry.
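
For example, a search that requests a hypothetical DN join virtual attribute named managerJoin (you choose the attribute name when configuring the virtual attribute) might look like this, with each value being a JSON object built from the joined manager entry:

  bin/ldapsearch --baseDN dc=example,dc=com "(uid=jdoe)" managerJoin

  dn: uid=jdoe,ou=People,dc=example,dc=com
  managerJoin: { "cn" : "Jane Smith", "mail" : "jane.smith@example.com" }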

Admin Console Improvements

In the 8.2.0.0 release, we introduced support for single sign-on authentication for the Admin Console, but this was only supported when authenticating with an OpenID Connect ID token from the PingOne service. In the 8.3.0.0 release, we have expanded that to make it possible to authenticate with tokens from any OpenID Connect provider that the server has been configured to accept.

We have also updated the Admin Console so that it’s possible to invoke either the collect-support-data tool or the manage-profile generate-profile tool against the target server through the web-based interface. The resulting support data archive or server profile will be sent to the client as a zip file download. Note that at present, this is only available when authenticating to the console through basic authentication; it is not yet allowed when authenticating via SSO.

Finally, we have updated the Admin Console so that if it’s running in a separate web application container (rather than as part of the Directory Server itself), it will write its log messages to standard output so that they will appear in the container’s console log by default. When the console is running in the same JVM as the Directory Server, its log output will appear in the logs/webapps/console.log file.

Fixes and Improvements for Accounts in a “Must Change Password” State

When the server receives a request to process an operation under an alternate authorization identity (for example, using the proxied authorization or intermediate client request control), it will make sure that the target account is in a usable state. If the account can’t be used for some reason (for example, because it’s been administratively disabled, the password has expired, or it’s been locked as a result of too many failed attempts), then the server will not allow it to be used as an alternate authorization identity. However, prior to the 8.3.0.0 release, the server did not check to see if an account was in a “must change password” state, which could have incorrectly allowed the server to process an operation as a user via an alternate authorization identity even though it would have been rejected when requested on a connection authenticated as that user. Unfortunately, this also affects requests passing through the Directory Proxy Server, as it uses the intermediate client request control to authorize requests as the end user in backend servers. This has been corrected in the 8.3.0.0 release, and we will be fixing it in updates to older releases in the near future.

We have also added support for a new must-change-password account status notification type. This can be used to trigger some action whenever a user successfully authenticates to the server with an account that is in a “must change password” state. For example, you can have the server send the user an email message indicating that they will need to choose a new password before they will be allowed to request any other operations. While it was already possible to do this in many cases through the existing password-reset account status notification type (which will be triggered if an administrator resets a user’s password), the new notification type allows you to take action even if that previous warning was not heeded, or if the “must change password” state was not triggered by an administrative reset.

In addition, we have updated the server so that when a user successfully binds with an account in a “must change password” state, the bind response will now include a diagnostic message that indicates that the user must choose a new password. This may be beneficial for some clients (although many clients don’t check for or use a diagnostic message in a successful response), but the primary benefit is that it makes it easier to spot this condition in the server’s access log.

Fixes and Improvements for the ds-pwp-modifiable-state-json Operational Attribute

In the 8.2.0.0 release, we introduced a new plugin that makes it possible for clients to change certain aspects of a user’s password policy state through a regular modify operation that provides a JSON-formatted value to the ds-pwp-modifiable-state-json operational attribute. While it was already possible to accomplish this using the password policy state extended operation, this operation uses a proprietary encoding that makes it less convenient for LDAP clients not written with the UnboundID LDAP SDK for Java, and non-LDAP clients (for example, those using SCIM or the Directory REST API) couldn’t request it at all. The server’s support for the ds-pwp-modifiable-state-json attribute addresses all of these issues and makes it easier for appropriately authorized clients to manipulate a user’s password policy state.

Unfortunately, there was a bug in the implementation of this plugin in the 8.2.0.0 release that prevented updates to this operational attribute from being replicated right away. The change would be applied locally, but it would only be replicated to other servers in the topology after another update to the entry. This has been fixed in the 8.3.0.0 release. It should also be addressed in an upcoming 8.2.x patch release, but for now, the best way to work around the problem in 8.2 servers is to ensure that you modify some other attribute in the entry after updating the ds-pwp-modifiable-state-json attribute.
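
For example, the workaround could be applied with a pair of modify operations in a single LDIF file, applied with something like ldapmodify --ldifFile fix-state.ldif. The JSON field name shown is illustrative; see the documentation for the supported fields:

  # fix-state.ldif: the second, unrelated change triggers replication of the first.
  dn: uid=jdoe,ou=People,dc=example,dc=com
  changetype: modify
  replace: ds-pwp-modifiable-state-json
  ds-pwp-modifiable-state-json: {"account-is-disabled":false}

  dn: uid=jdoe,ou=People,dc=example,dc=com
  changetype: modify
  replace: description
  description: trigger replication of the password policy state change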

There was also an issue with the out-of-the-box configuration for the plugin that prevented it from being invoked for operations that are part of a multi-update extended operation. This has been corrected in the 8.3.0.0 release. The best way to address this in the 8.2.0.0 release is to remove and re-add the Modifiable Password Policy State plugin, and it will be automatically created with the appropriate configuration.

Further, the implementation we used for the plugin in the 8.2.0.0 release included a couple of limitations that restricted the kinds of modify operations in which the attribute could be updated. Those restrictions include:

  • The server would not allow the ds-pwp-modifiable-state-json attribute to be updated in the same modify request as any other attributes.
  • The server would not allow the ds-pwp-modifiable-state-json attribute to be updated in a modify operation that was part of an LDAP transaction or an atomic multi-update operation.

Both of those were intentional restrictions based on the way we chose to implement this functionality in the 8.2.0.0 release. But we redesigned the implementation in the 8.3.0.0 release so that these restrictions are no longer imposed.

Cipher Stream Provider Improvements

The server uses cipher stream providers to protect the contents of the encryption settings database. Previous versions of the server included several cipher stream provider implementations, which make it possible to:

  • Read an encryption passphrase from a file contained on the server filesystem
  • Wait for an administrator to interactively supply the passphrase during the server startup process or when launching a tool that required access to the encryption settings database
  • Obtain an encryption passphrase from a HashiCorp Vault instance
  • Derive an encryption key through interaction with the Amazon Key Management Service
  • Use custom logic implemented in a Server SDK extension

In the 8.3.0.0 release, we have added support for an additional cipher stream provider implementation that obtains an encryption passphrase from the Amazon AWS Secrets Manager service.
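
A minimal sketch of creating the new provider (cipher-stream-provider is the real dsconfig object type, but the type name shown here is an assumption; check “dsconfig create-cipher-stream-provider --help” for the actual value and its required properties):

  bin/dsconfig create-cipher-stream-provider \
       --provider-name "AWS Secrets Manager Protection" \
       --type amazon-secrets-manager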

We have also updated the Directory Proxy Server, Synchronization Server, and Metrics Engine to expose options for configuring cipher stream providers in those products. Previously, cipher stream provider configuration options were only exposed in the Directory Server, because encryption settings were initially only used for encrypting entries before storing them in the database. However, their use has expanded to the other products, and it is now possible to better protect the contents of the encryption settings database when using those products.

Improvements in PKCS #11 Support

The Directory Server has always provided support for a PKCS #11 key manager provider that makes it possible to access listener certificates in a PKCS #11 token, like an HSM. In the past, it was necessary to pre-configure the JVM with an appropriate security provider (typically by editing its java.security configuration file) with the information needed to access the token. While this worked just fine, it’s not ideal for cases in which it’s not convenient to alter the JVM configuration (for example, if the JVM is provided by the underlying operating system, or if the same installation is shared across multiple applications rather than being dedicated to the Directory Server).

In the 8.3.0.0 release, we’ve updated the server to make it possible to enable PKCS #11 support in a stock JVM by dynamically loading the provider. In such cases, you will likely need to provide a configuration file that provides the information the JVM needs to interact with the PKCS #11 token (for example, the path to the native library that implements support for interacting with the token, and possibly other configuration like the slot to use). You can provide this configuration file during setup if you want to enable PKCS #11 support out of the box, or you can specify it in the PKCS #11 key manager provider configuration. If you’ve already got a JVM that has been manually configured with a PKCS #11 security provider, then it will continue to work as before.

We have also updated the manage-certificates tool so that it can interact with certificates in PKCS #11 key stores. In such cases, the --keystore argument must be provided and must point to the provider configuration file needed to configure the JVM to interact with the PKCS #11 token, and the --keystore-type argument must be provided with a value of PKCS11. You will likely also need to use one of the --keystore-password, --keystore-password-file, and --prompt-for-keystore-password arguments to supply the user PIN for the PKCS #11 token.
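
For example, listing the certificates in a PKCS #11 token using the arguments described above (with the key store path pointing to the provider configuration file):

  bin/manage-certificates list-certificates \
       --keystore /path/to/pkcs11-provider.conf \
       --keystore-type PKCS11 \
       --prompt-for-keystore-password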

Providing Initial TLS Certificates via PEM Files

When setting up the server, if you want to be able to accept secure client communication, then you will need to have a listener certificate to use during TLS negotiation. In the past, we have provided two options for this:

  • You can provide setup with information about existing key and trust stores to use.
  • You can have setup generate a self-signed certificate.

In the 8.3.0.0 release, we are adding support for an additional option: you can provide the listener certificate chain, the listener certificate’s private key, and any trusted certificates in PEM files using the --certificateChainPEMFile, --certificatePrivateKeyPEMFile, and --trustedCertificatePEMFile arguments. This can be useful when migrating from a server that uses PEM files rather than PKCS #12 files or Java key/trust stores. It can also be useful when setting up the server in FIPS 140-2-compliant mode, as you would otherwise need to have an existing BCFKS key store or PKCS #11 token with the desired certificate.
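
A sketch of what this might look like during setup (a real non-interactive setup requires additional arguments, so adjust for your environment):

  ./setup --acceptLicense \
       --ldapsPort 636 \
       --rootUserDN "cn=Directory Manager" \
       --rootUserPasswordFile /path/to/root-user-password.txt \
       --certificateChainPEMFile /path/to/server-cert-chain.pem \
       --certificatePrivateKeyPEMFile /path/to/server-cert-key.pem \
       --trustedCertificatePEMFile /path/to/issuer-ca.pem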

Fixes and Improvements for StatsD Monitoring

The server has provided support for a StatsD monitoring endpoint since the 8.0.0.0 release. In the 8.3.0.0 release, we’re adding the ability to include custom tags (as comma-separated key-value pairs) in each metric message. This can help better differentiate messages from separate instances that are going to the same StatsD endpoint.

We have also fixed an issue that could cause a failure to occur when using manage-profile replace-profile on an instance that is configured with a StatsD monitoring endpoint.

Command-Line Tool Fixes and Improvements

We have made many improvements to command-line tools in the 8.3.0.0 release. This includes:

  • We have updated the default JVM configuration for tools so that most of them will attempt to use less memory.
  • We have added a new oid-lookup tool that can be used to obtain information about a given object identifier that may be used by the server (for example, to identify a schema element, control, extended operation, SNMP trap, etc.), or to obtain the object identifier for something with a given name. (An example appears after this list.)
  • We have added a new remove-object-class-from-schema tool that can be used to safely remove an object class from the server schema, even if it has previously been (but is no longer) in use. This is similar to the remove-attribute-type-from-schema tool that was introduced in the 8.2.0.0 release, but for object classes rather than attribute types.
  • We made several improvements to manage-profile replace-profile, including:

    • It is now better able to detect changes in files referenced in the setup-arguments.txt file that reside outside the server profile.
    • The server will no longer warn about offline configuration changes the first time it is started after using manage-profile replace-profile to alter the configuration.
    • We fixed an issue that could cause certain setup log files, which might contain useful troubleshooting information about problems encountered during replace-profile processing, to be replaced by the versions in place before the attempt to run replace-profile.
    • We made it more efficient to apply changes that may require administrative actions.
    • We fixed an issue that could prevent replace-profile from updating information in mirrored configuration.
  • We fixed issues with the import-ldif tool when used in conjunction with the --addMissingRDNAttributes argument. In particular, it could fail to add missing RDN values to an entry if the attribute was required by any of the entry’s object classes, and it could have also resulted in entries with multiple values for single-valued attribute types.
  • We fixed an issue with the dsjavaproperties tool that could prevent it from changing the JVM tuning configuration when the --initialize and --jvmTuningParameter arguments were used together.
  • We have made several improvements to the manage-certificates tool, including:

    • We added a new copy-keystore subcommand to make it possible to copy some or all of the certificates in one key store to another key store (creating it if necessary) of the same or a different type.
    • We added support for BCFKS and PKCS11 key store types.
    • We updated the generate-self-signed-certificate subcommand to add support for optional --output-file and --output-format arguments that can be used to write a PEM-formatted or DER-formatted representation of the generated certificate to a specified file.
    • We updated the list-certificates subcommand to display the key store type.
  • We have made several improvements to the dbtest tool, including:

    • We updated the dump-database-container subcommand to provide additional information about an entry’s encoding when dumping the id2entry database, including whether the entry is compressed, whether it is encrypted (and if so, with which encryption settings definition), whether there is a digest, and which attributes (if any) the entry may have. It is also possible to restrict the output to only include entries matching a specified filter.
    • We updated the dump-database-container subcommand to provide more useful information when dumping the state database, including the index name and a user-friendly representation of the trust state.
    • We have updated the dump-database-container subcommand to provide more useful information when dumping the contents of the recent-changes database, including an LDIF representation of the change, the change time, the replication CSN, and information about the original client request.
    • We added new dump-attribute-tokens, dump-object-class-tokens, and dump-dn-tokens subcommands that can display information that the server uses in the course of compacting entry data.
    • We added a new dump-metadata subcommand that can display information from the backend’s metadata database.
  • We updated the ldap-result-code tool to add support for additional output formats, including JSON, CSV, and tab-delimited text.
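
For example, the oid-lookup tool mentioned above can be pointed at either an OID or a name (a sketch; see the tool’s --help output for the exact invocation details):

  # Look up a known OID (this one is the server-side sort request control):
  bin/oid-lookup 1.2.840.113556.1.4.473

  # Or search in the other direction, from a name to its OID:
  bin/oid-lookup server-side-sort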

Index Fixes and Improvements

We also made a number of improvements in index management. These include:

  • We fixed an issue that could cause the server to incorrectly infer that a given search filter might not be indexed in a backend if it was covered by a composite index but not by any attribute indexes. The composite index would actually be used to process applicable searches, but this issue could potentially interfere with certain types of valid configuration changes.
  • We have updated import-ldif to increase the number of threads used to flush intermediate index files to disk, which can dramatically improve import performance. We have also made changes to help reduce the number of intermediate index files that the server generates during an LDIF import, which can help avoid running out of file descriptors when importing very large data sets in deployments with a large number of indexes.
  • We have dramatically improved the performance when removing entry IDs from very large composite indexes.
  • We have dramatically reduced the performance impact incurred when a large exploded index exceeds the index entry limit for a key.
  • We fixed an issue that could cause the server to return an incorrect estimate of the number of entries matching a composite index key with a large ID set after an unclean shutdown.

Performance Improvements

Besides the improvements already mentioned above, we have made the following performance improvements to the server and associated tools:

  • We have updated the server’s support for the get user resource limits request control to make it possible to omit information about the target user’s group membership. The Directory Proxy Server now uses this option when forwarding bind requests to backend servers, which can dramatically improve bind performance in servers with a large number of dynamic groups.
  • We have updated the way the server processes searches that target the isMemberOf virtual attribute when the target entry is a dynamic group. This improvement is particularly significant when the client will be paging through the results rather than retrieving all entries in response to a single request.
  • We have updated setup to provide an --optionCacheDirectory argument that can be used to specify the path to cache files with pre-computed information about which JVM options can be used. This can help improve setup performance, as a significant amount of time during setup is spent automatically selecting an appropriate set of JVM options.
  • We have updated the server to make it possible to minimize the content of conflict prevention details entries that may be generated in response to the uniqueness request control. This can help improve write performance when updating large entries in a request that includes uniqueness constraints.
  • We have updated the purge expired data plugin to allow using multiple concurrent threads when deleting expired entries. The plugin is still single-threaded by default, and we recommend keeping that default configuration unless the plugin isn’t able to keep up with the rate at which entries should be purged.

Reduced DN Escaping

RFC 4514 specifies a minimum set of required escaping that must be used for attribute values used in DNs and RDNs. However, it also allows implementations to optionally escape any other characters as desired. In the Ping Identity Directory Server, we have historically also escaped all ASCII control characters and all non-ASCII characters.

As of the 8.3.0.0 release, we have reduced the amount of escaping that we use for non-ASCII characters. By default, we will no longer escape non-ASCII characters that we believe to be printable. This includes characters from the Unicode letter, number, space, punctuation, and symbol classes. We will continue to escape non-ASCII characters from other Unicode classes, as well as non-ASCII data that does not represent a valid UTF-8 encoding. And we will also continue to escape ASCII control characters.

For example, if an entry has a cn attribute value of “José Núñez 🇩🇴”, it would have previously been encoded as “cn=Jos\c3\a9 N\c3\ba\c3\b1ez \f0\9f\87\a9\f0\9f\87\b4” when returned in search result entries. It will now be returned as “cn=José Núñez 🇩🇴”. Note that LDIF requires all non-ASCII values to be base64-encoded, so if you’re looking at that entry in LDIF (like in the output from ldapsearch), then you’ll now see it as a base64-encoded value, but it will actually be returned by the server in unencoded form.

Improved Logging for Multi-Update Operations

The multi-update extended operation allows you to send multiple add, delete, modify, and modify DN operations in a single request. When you want to apply changes to multiple entries, this can help improve performance by reducing the number of round trips required in communicating with the server, and you can also optionally process the entire multi-update operation as a single atomic unit so that if any of the requests cannot be processed, then no changes will be applied.

In the past, the server’s access logging wasn’t as helpful as it should have been when recording information about operations processed as part of a multi-update extended operation. In particular, messages about the individual operations processed as part of the multi-update were only available if you enabled support for logging internal operations, and the extended result log message did not always include useful information about failures encountered during processing. We have addressed both of these issues in the 8.3.0.0 release so that the server’s processing for the individual operations that are part of a multi-update request will be logged without needing to enable internal operation logging, and if any failure is encountered during processing, then the extended result will provide the result code and diagnostic message from the first failure.

Include the Bouncy Castle Library by Default

When operating in non-FIPS-compliant mode, the Directory Server can use the Bouncy Castle cryptographic library for certain optional functionality, including support for the bcrypt, scrypt, and Argon2 password storage schemes. Previously, the server did not ship with the Bouncy Castle library out of an abundance of caution with regard to U.S. export control laws. If you wanted to use any of this functionality, you needed to download the appropriate jar file from the Bouncy Castle website and copy it into the server’s lib directory.

As of the 8.3.0.0 release, this is no longer necessary. The server now ships with both the FIPS 140-2-compliant and non-FIPS-compliant versions of the Bouncy Castle libraries. Note, however, that these libraries are not compatible with each other, so either one or the other will be activated when setting up the server. When running in FIPS 140-2-compliant mode, features that require the non-FIPS-compliant Bouncy Castle library (namely, the aforementioned bcrypt, scrypt, and Argon2 password storage schemes) will not be available.

Request Control Updates

The 8.3.0.0 release includes the following updates to our support for request controls:

  • The assured replication request control (which can be used to delay the response to a change until the server has received confirmation that the change has been replicated) is now allowed through the Directory Proxy Server by default without requiring any special configuration. It was previously only possible to allow the control to be passed through the Directory Proxy Server by adding its OID to the supported-control-oid property in the proxying request processor configuration.
  • We have updated the server to allow the operation purpose request control (which the client can use to provide additional context about the request) to be included in a transaction or an atomic multi-update operation.
  • We have updated the default set of global ACIs to allow the LDAP assertion and permissive modify controls by default. These controls do not allow clients to do anything that they wouldn’t be able to do without them, but they can make processing safer and more efficient.
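
For example, a client can attach the permissive modify control (OID 1.2.840.113556.1.4.1413) so that adding an already-present value succeeds rather than returning an attributeOrValueExists error (a sketch; most of the server’s LDAP tools accept a --control argument):

  # permissive-modify.ldif:
  dn: uid=jdoe,ou=People,dc=example,dc=com
  changetype: modify
  add: objectClass
  objectClass: inetOrgPerson

  bin/ldapmodify --control 1.2.840.113556.1.4.1413 --ldifFile permissive-modify.ldif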

Unrecognized JVM Vendor Warnings

The server setup process validates the selected JVM to ensure that it is suitable for use. In particular, it examines the Java version (we currently support running on either Java 8 or Java 11) and the Java vendor. setup will refuse to run with an unsupported Java version, but it will merely display a warning message if it encounters a JVM with a supported version but from an unrecognized vendor.

Some Linux distributions, like Red Hat and Ubuntu, have overridden the vendor information for the JVMs included with their operating systems so that they use vendor strings that we did not previously recognize (in at least one case, Ubuntu used a vendor of “Private Build”). We have updated the set of recognized vendors to include these, and setup will no longer display a warning when using one of those JVMs.

Entry Balancing Global Index Updates

When using the Directory Proxy Server with entry balancing (to shard the data across multiple replicated backend sets for improved performance with very large data sets), it maintains a global index that can be used to help it determine which backend set might have a given attribute value.

We have identified and corrected a number of cases in which the server might not correctly update the global index in response to some types of operations. These include:

  • In some cases, the server could have incorrectly populated the global index when processing an add operation. This could have led to a scenario in which the Directory Proxy Server might only return a subset of the entries containing a given attribute value in response to a search. Each global index configuration object now provides a guaranteed-unique configuration property that can be used to indicate that only a single entry is expected to have any one value for the associated attribute type. The global index will now be updated for add operations only for indexes in which guaranteed-unique is set to true.
  • We fixed an issue that could arise for add operations that used an indexed attribute value in the RDN but not in the content for the entry. In such cases, the attribute value from the RDN would not be populated in the global index for the associated attribute type. This has been corrected, and the global index will be populated for such add operations as long as that index is configured with guaranteed-unique set to true.
  • We fixed an issue that prevented the server from updating the global index for operations that were part of a successful transaction or an atomic multi-update operation.
  • We fixed an issue that could prevent global attribute indexes from being updated as a result of modifications used to remove specific attribute values (as opposed to the entire attribute) from an entry. This now will be properly handled for indexes that are configured with guaranteed-unique set to true.
  • Global attribute indexes were previously not updated for delete operations. We have updated the server so that if the server maintains a global index for any attribute used in a deleted entry’s RDN, and if that index is configured with guaranteed-unique set to true, then that global attribute index will be updated to reflect that the value has been removed.

Replication Improvements

The 8.3.0.0 release includes the following replication-related changes:

  • We fixed an issue that could cause the server to infer an incorrect replication enabled time if no value is provided in a message received from a remote replica. That would cause the server to believe that replication had been enabled in 1970 (the beginning of the epoch used for many OS timekeeping systems), and it would cause the server to enter lockdown mode because it believed it had missed changes from that replica.
  • We improved the processing that the server will perform if it receives interdependent changes in an out-of-order sequence. For example, if a client adds an entry and then immediately modifies it, and if a replica receives the add and modify operations from two different other replicas, then it’s possible that it could try to process the modify operation before the add. This could have previously caused replication of the modify operation to fail.
  • We fixed an issue that could cause dsreplication status to display information about offline servers below all replication domains, even if that server only participated in a subset of those domains. It will now only list them under domains in which they participate.
  • We improved the messages displayed when running dsreplication enable and dsreplication initialize in interactive mode to better highlight the servers and base DNs that will be affected by the operation.

Improved Result Codes through the Directory Proxy Server

We fixed a couple of issues that could cause the Directory Proxy Server to return less desirable result codes in certain cases when using an entry-balanced configuration. These include:

  • When attempting to add an entry when all servers in at least one of the backend sets were unavailable, the server could have returned a response with an inappropriate result code of 81 (server down). It will now use a result code of 52 (unavailable).
  • When attempting to add an entry more than one level below the balancing point, and when the ancestor immediately below the balancing point does not exist, then the server could have returned a response with an inappropriate result code of 80 (other) or 81 (server down). It will now use a result code of 32 (no such object).

Fix for Server SDK Plugins Targeting Add Operations

We fixed an issue that could prevent pre-operation add plugins written with the Server SDK from seeing changes made to the add operation by other plugins that are shipped with the server. For example, a Server SDK pre-operation add plugin was previously unable to see any attribute values generated by the composed attribute plugin.

UnboundID LDAP SDK for Java 6.0.0

We have just released version 6.0.0 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository.

One of the biggest changes that we’ve made in this release is that we’ve deprecated support for the TLSv1 and TLSv1.1 protocol versions in accordance with RFC 8996. By default, the LDAP SDK will prefer using TLSv1.3, but it can fall back to using TLSv1.2 if the newer protocol is not supported by the client JVM or by the directory server. The older TLSv1 and TLSv1.1 protocol versions can still be enabled if necessary (either programmatically or by setting system properties), but given that they are no longer considered secure, and given that TLSv1.2 became an official standard over twelve years ago, the far better option would be to use a directory server release from sometime in the last decade.
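
If you do need to temporarily re-enable a legacy protocol for an application you can’t modify, the LDAP SDK’s TLS defaults can be overridden with system properties. The property names below are assumptions based on the constants defined in the SSLUtil class, so verify them against the SSLUtil Javadoc:

  java -Dcom.unboundid.util.SSLUtil.defaultSSLProtocol=TLSv1.2 \
       -Dcom.unboundid.util.SSLUtil.enabledSSLProtocols=TLSv1.2,TLSv1.1 \
       -jar your-ldap-application.jar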

We have also updated the set of TLS cipher suites that the LDAP SDK will use by default. The default set of enabled cipher suites no longer includes those that rely on the SHA-1 message digest algorithm (which is no longer considered secure) or those that rely on RSA key exchange (which doesn’t support forward secrecy and could allow an observer to decrypt the communication if the server certificate’s private key becomes compromised; note that deprecating RSA key exchange doesn’t affect the ability to interact with servers that use certificates with RSA key pairs). If necessary, you can override the set of cipher suites that the LDAP SDK uses by default, either programmatically or with system properties.

You can find the complete release notes at https://docs.ldap.com/ldap-sdk/docs/release-notes.html. Other notable changes in this release include:

  • We fixed an issue that could cause the LDAP SDK to use the set of TLS cipher suites enabled in the JVM by default rather than a recommended set identified by the LDAP SDK itself. This could potentially result in using weaker encryption for secure connections.
  • We updated the logic that the LDAP SDK uses when deciding which characters to escape when generating the string representation of a DN or RDN. Previously, it would always escape all non-ASCII characters. Now, the LDAP SDK will no longer escape non-ASCII characters that it believes are displayable (including the Unicode letter, number, punctuation, and symbol character types). If desired, you can override this behavior either programmatically or with a system property.
  • We updated the logic that the LDAP SDK uses when deciding which data should be base64-encoded when generating the LDIF representation of an entry. Previously, it would not always base64-encode data with ASCII control characters (other than NUL, LF, and CR, which must always be base64-encoded). Now, it will always base64-encode values with ASCII control characters by default. It can also be configured to optionally not base64-encode values with non-ASCII characters (which technically violates the LDIF specification but may be useful when displaying to an end user). You can override the LDAP SDK’s base64-encoding strategy either programmatically or with a system property.
  • We updated the LDIF reader to make it possible to disable support for parsing LDAP controls. By default, the LDAP SDK supports LDIF change records that include LDAP controls as described in RFC 2849. However, this can cause a problem in a rare corner case if a record represents an entry rather than a change record and the first attribute in the LDIF representation of that entry is named “control”. If you attempt to read that record as a generic LDIF record or as a change record with defaultAdd set to true (rather than reading it specifically as an entry), then the LDIF reader will attempt to parse that attribute as an LDIF control. If you have LDIF records that represent entries in which the first attribute may be named “control”, if you are reading them as generic LDIF records or as LDIF change records with defaultAdd set to true, and if you don’t have any LDIF change records that legitimately do include LDAP controls, then you can update the LDIF reader to disable support for controls so that it will interpret a leading “control” element as an attribute rather than a change record with a control.
  • We updated PKCS11KeyManager to make it easier to use certificate chains stored in PKCS #11 tokens without needing to alter the JVM configuration. Previously, if you wanted to use PKCS #11, you either had to modify a configuration file within the JVM installation (which may not always be feasible), or you had to write your own code to load the provider before trying to use the key manager. You can now supply a provider configuration file when creating a PKCS #11 key manager, and it will ensure that the necessary provider is loaded and registered with the JVM.
  • We updated the manage-certificates tool to support interacting with PKCS #11 tokens. Previously, the tool only supported certificates in JKS, PKCS #12, and BCFKS key stores. When using a PKCS #11 token, you must use the --keyStore argument with a value that is the path to the provider configuration file and the --keyStoreFormat argument with a value of PKCS11.
  • We updated the manage-certificates tool to add a new copy-keystore subcommand with support for copying some or all of the information in one key store to another key store of the same or a different type. This can allow you to merge key stores or convert a key store from one type to another (for example, JKS to PKCS #12).
  • We updated the manage-certificates tool to add optional --output-file and --output-format arguments to the generate-self-signed-certificate subcommand. This allows you to generate and export a self-signed certificate in one step rather than requiring a separate command to export a certificate after generating it. (A sketch appears after this list.)
  • We updated the manage-certificates tool to allow interacting with BCFKS key stores even when the LDAP SDK is not operating in FIPS 140-2-compliant mode. Note that the necessary FIPS-compliant Bouncy Castle libraries must already be in the classpath.
  • We updated the manage-certificates tool to display the key store type when using the list-certificates subcommand.
  • We updated the in-memory-directory-server command-line tool to add a new --doNotGenerateOperationalAttributes argument that will prevent the server from maintaining operational attributes like entryDN, entryUUID, subschemaSubentry, creatorsName, createTimestamp, modifiersName, and modifyTimestamp.
  • We updated the FileArgument class to provide better support for interacting with files that are potentially encrypted or compressed. The getFileLines, getNonBlankFileLines, and getFileBytes methods have been updated so that they can transparently handle reading from gzip-compressed files. Further, for tools that are running as part of a Ping Identity Directory Server installation, they can transparently handle reading from files that are encrypted with a key from the server’s encryption settings database. Also, a new getFileInputStream method has been provided that can retrieve an input stream to use when reading from the target file, including cases in which the file is compressed or encrypted.
  • We added a new ThreadLocalSecureRandom class that can be used to maintain a set of per-thread SecureRandom instances that can be used without concerns around synchronization or contention.
  • We updated the documentation to include the latest revisions of the draft-coretta-x660-ldap, draft-ietf-kitten-password-storage, and draft-melnikov-scram-2fa drafts in the set of LDAP-related specifications.
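
As an example of the generate-self-signed-certificate enhancement mentioned above, the following sketch generates a certificate and writes a PEM copy of it in one step (arguments other than --output-file and --output-format are the subcommand’s usual arguments; verify them, and the output format value, against its --help output):

  manage-certificates generate-self-signed-certificate \
       --keystore server-keystore.p12 \
       --keystore-type PKCS12 \
       --prompt-for-keystore-password \
       --alias server-cert \
       --subject-dn "CN=ldap.example.com,O=Example Corp,C=US" \
       --output-file server-cert.pem \
       --output-format PEM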

Changes specific to running in FIPS 140-2-compliant mode include:

  • We have updated the LDAP SDK so that it will use the Bouncy Castle FIPS-compliant SecureRandom instance in hybrid mode, which helps reduce the chance that it will encounter severe performance issues as a result of depleted entropy on the underlying system. However, in some cases, it may still be necessary to either use a hardware random number generator or a software entropy supplementing daemon (like rngd) to prevent blocking due to a lack of entropy.
  • We have updated the LDAP SDK to make it possible to customize the set of providers that will be allowed when running in FIPS 140-2-compliant mode. You can perform this customization programmatically or with a system property.
  • We have updated the command-line tool framework to check whether the LDAP SDK is running in FIPS 140-2-compliant mode upon invoking the tool constructor. This can help prevent cases in which the tool may inadvertently perform operations with a non-FIPS-compliant provider.

Changes specific to using the LDAP SDK in conjunction with the Ping Identity Directory Server include:

  • We updated the collect-support-data tool to allow using the --keyStoreFormat and --trustStoreFormat arguments when invoking the server-side version of the tool. Previously, you could only use these arguments in conjunction with the --useRemoteServer argument. This change only applies when using the 8.3.0.0 or later release of the Ping Identity Directory Server.
  • We added client-side support for a new administrative task that can be used to safely remove an object class definition from the server schema. The task will ensure that the object class is not in use before attempting to remove it, and it will clean up any references to the object class that may no longer be necessary (for example, in a backend’s entry compaction dictionary).

Ping Identity Directory Server 8.2.0.0

We have just released version 8.2.0.0 of the Ping Identity Directory Server, with several big new features, as well as a number of other enhancements and fixes. The release notes provide a pretty comprehensive overview of the changes, but I’ll summarize the changes here. I’ll first provide a TL;DR version of the changes, followed by more detail about some of the more significant updates.

Summary of New Features and Enhancements

  • Added single sign-on support to the Administration Console (more information)
  • Added a new ds-pwp-modifiable-state-json operational attribute (more information)
  • Added support for password validation during bind (more information)
  • Added support for a recent login history (more information)
  • Added sample dsconfig batch files (more information)
  • Added JSON-formatted variants for the audit, HTTP operation, and synchronization log files (more information)
  • Added support for logging to standard output or standard error (more information)
  • Added support for rotating the logs/server.out log file (more information)
  • Improved support for logging to syslog servers (more information)
  • Added support for the OAUTHBEARER SASL mechanism (more information)
  • Added support for the ($attr.attrName) macro ACI (more information)
  • Added a remove-attribute-type-from-schema tool (more information)
  • Added a validate-ldap-schema tool (more information)
  • Added a number of security-related improvements to setup (more information)
  • Added a number of improvements to the manage-profile tool (more information)
  • Added a number of improvements to the parallel-update tool (more information)
  • Added various improvements to several other command-line tools, including ldappasswordmodify, ldapcompare, ldifsearch, ldifmodify, ldif-diff, ldap-diff, and collect-support-data (more information)
  • Added various usability improvements to command-line tools (more information)
  • Added several improvements to the dictionary password validator (more information)
  • Added a new AES256 password storage scheme (more information)
  • Added an export-reversible-passwords tool (more information)
  • Added a 256-bit AES encryption settings definition in addition to the 128-bit AES definition
  • Added information about password quality requirements to the ds-pwp-state-json virtual attribute (more information)
  • Added the ability to augment the default set of crypto manager cipher suites (more information)
  • Improvements in delaying bind responses (more information)
  • Require a minimum ephemeral Diffie-Hellman key size of 2048 bits
  • Switched to using /dev/urandom as a source of secure random data (more information)
  • Improved the way we generate self-signed certificates and certificate signing requests (more information)
  • Added support for using elliptic curve keys in JWTs
  • Added new administrative alert types for account status notification events, privilege assignment, and rejection of insecure requests (more information)
  • Added improvements to monitor data and the way it is requested by the status tool (more information)
  • Added identity mapper improvements, including filter support, an aggregate identity mapper, and an out-of-the-box mapper for administrative users (more information)
  • Improved uniqueness control conflict prevention and detection (more information)
  • Added support for re-sending an internal replication message if there is no response to a dsreplication initialize request
  • Added a --force argument to dsreplication initialize that can be used to force initialization even if the source server is in lockdown mode
  • Added options to customize the response code and body in availability servlets
  • Increased the number of RDN components that a DN may have from 50 to 100
  • Updated the SCIM servlet to leverage a VLV index (if available) to support paging through search result sets larger than the lookthrough limit
  • Added an --adminPasswordFile argument to the manage-topology add-server command
  • Added a password policy configuration property to indicate whether the server should return the password expiring or password expired based on whether the client also provided the password policy request control
  • Updated support for the CRAM-MD5 and DIGEST-MD5 SASL mechanisms so they are no longer considered secure
  • Improved Directory Proxy Server support for several SASL mechanisms (more information)
  • Improved Directory Proxy Server support for the LDAP join control
  • Updated manage-topology add-server to add support for configuring failover between Synchronization Server instances
  • Added a Synchronization Server configuration property for customizing the sync pipe queue size
  • Added a Synchronization Server configuration property for processing changes with REPLACE modifications rather than ADD and DELETE modifications

Summary of Bug Fixes

  • Fixed an issue that could prevent the installer from removing information about the instance from the topology registry
  • Fixed an issue that could cause replication to miss changes if a backend was reverted to an earlier state without reverting the replication database
  • Fixed an issue in which a replica could enter lockdown mode after initialization
  • Fixed an issue that could allow some non-LDAP clients to inappropriately issue requests while the server was in lockdown mode
  • Fixed an issue in which restoring an incremental backup could cause dependencies to be restored out of order, leading to an incomplete intermediate database file
  • Fixed a backup retention issue in which the process of purging old backups could cause old backups to be removed out of order
  • Fixed an issue in which the server could leak a small amount of memory upon closing a JMX connection
  • Fixed an issue that could cause the server.status file to be corrupted on Windows systems after an unplanned reboot if the server is configured to run as a Windows service
  • Fixed an issue that could cause the server to return a password expired response control in a bind response when the user’s account is expired but the client provided incorrect credentials
  • Fixed an issue in which a search that relied on a virtual attribute provider for efficient processing could omit object classes from search result entries
  • Fixed an issue in which the server did not properly handle the matched values control that used an extensible match filter with both an attribute type and a matching rule (for example, in conjunction with the jsonObjectFilterExtensibleMatch matching rule)
  • Fixed an issue in which the server could incorrectly log an error message at startup if it was configured with one or more ACIs that grant or deny permissions based on the use of SASL mechanisms
  • Fixed an issue in which the remove-defunct-server tool could fail to remove certain replication attributes when the tool was run with a topology JSON file
  • Fixed an issue in which manage-profile replace-profile could fail to apply changes if the profile included dsconfig batch files without a “.dsconfig” extension
  • Fixed an issue in which the server could raise an internal error and terminate the connection if a client attempted to undelete a non-soft-deleted entry
  • Fixed an issue that could cause the REST API to fail to decode certain types of credentials when using basic authentication
  • Fixed an issue in which the encryption-settings tool could leave the server without a preferred definition after importing a set of definitions with the --set-preferred argument when none of the imported definitions was marked preferred
  • Fixed an issue in which the manage-profile generate-profile command could run out of memory when trying to generate a profile containing large files
  • Fixed an issue in which the manage-profile generate-profile command could display a spurious message when generating the profile in an existing directory
  • Fixed an issue that could interfere with cursoring through paged search results when using the REST API and the results included entries with long DNs
  • Fixed an issue that could cause an exception in SCIM 1.1 processing as a result of inappropriate DN escaping
  • Fixed an issue that could cause the isMemberOf and isDirectMemberOf virtual attributes to miss updates if the same group is updated concurrently by multiple clients
  • Fixed an issue that could cause the server to return an objectClassViolation result code instead of the more appropriate attributeOrValueExists result code when attempting to add an object class value to an entry that already has that object class
  • Fixed an issue that could cause loggers to consume more CPU processing time than necessary in an idle server
  • Fixed an issue in which the stats collector plugin could generate unnecessary I/O when it is used exclusively for sending metrics to a StatsD endpoint
  • Fixed an issue in which the periodic stats logger could include duplicate column headers
  • Fixed an issue that could cause the server to periodically log an error message if certain internal backends are disabled
  • Fixed a typo in the default template that the multi-part email account status notification handler uses to warn about an upcoming password expiration
  • Fixed an issue in which the dsconfig list command could omit certain requested properties
  • Fixed an issue in which the dsreplication tool could incorrectly suppress LDAP SDK debug messages even if debugging was requested
  • Fixed an issue that could cause the Directory Proxy Server to log information about an internal exception if an entry-balanced search encountered a timeout when processing one or more backend sets
  • Fixed an issue in which the Synchronization Server could get stuck when attempting to retry failed operations when it already has too many other operations queued up for processing
  • Fixed an issue in which Synchronization Server loggers were not properly closed during the server shutdown process
  • Fixed an issue in which the Synchronization Server could fail to synchronize certain delete operations from an Oracle Unified Directory because of variations in the format of the targetUniqueID attribute

Single Sign-On for the Administration Console

The Administration Console now supports authenticating with an OpenID Connect token, and it can provide single sign-on across instances when using that authentication mechanism. At present, this is only supported when using a token minted by the PingOne service, and there must be corresponding records for the target user in both the PingOne service and the local server. General support for SSO with ID tokens from any OpenID Connect provider should be available in an upcoming release.

The new ds-pwp-modifiable-state-json Operational Attribute

We have added a new “Modifiable Password Policy State” plugin that provides support for the ds-pwp-modifiable-state-json operational attribute. The value of this attribute is a JSON object with fields that provide information about a limited set of password policy state properties, including:

  • password-changed-time — The time the user’s password was last changed
  • account-activation-time — The user’s account activation time
  • account-expiration-time — The user’s account expiration time
  • password-expiration-warned-time — The time the user was first warned about an upcoming password expiration
  • account-is-disabled — Whether the account is administratively disabled
  • account-is-failure-locked — Whether the account is locked as a result of too many failed authentication attempts
  • must-change-password — Whether the user will be required to choose a new password before they are permitted to request any operations

These elements of the password policy state can now be updated by replacing the value of the ds-pwp-modifiable-state-json attribute with a JSON object that holds the desired new values for the appropriate fields. Any fields not included in the provided object will remain unchanged.
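
To make this concrete, here is a minimal sketch using the UnboundID LDAP SDK for Java that clears a failure lockout and puts an account into a “must change password” state. The connection details and entry DNs are hypothetical, and while the LDAP SDK also offers dedicated classes for this attribute, the generic JSON support shown here is enough:

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPException;
    import com.unboundid.ldap.sdk.Modification;
    import com.unboundid.ldap.sdk.ModificationType;
    import com.unboundid.util.json.JSONField;
    import com.unboundid.util.json.JSONObject;

    public final class UpdatePasswordPolicyState
    {
      public static void main(final String... args) throws LDAPException
      {
        // Hypothetical connection details; a real client should secure
        // this connection with TLS.
        final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
        try
        {
          conn.bind("cn=Directory Manager", "password");

          // Build a JSON object with only the fields to change.  Fields
          // that are not included will remain unchanged.
          final JSONObject newState = new JSONObject(
               new JSONField("account-is-failure-locked", false),
               new JSONField("must-change-password", true));

          // Replace the attribute value to apply the new state.
          conn.modify("uid=jdoe,ou=People,dc=example,dc=com",
               new Modification(ModificationType.REPLACE,
                    "ds-pwp-modifiable-state-json", newState.toString()));
        }
        finally
        {
          conn.close();
        }
      }
    }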

This operational attribute is intended to work in conjunction with the existing ds-pwp-state-json virtual attribute, which provides read-only access to a wide range of information about the user’s password policy state and the configuration of the password policy that governs their account.

Both of these attributes can be accessed over either LDAP or the Directory REST API, and the UnboundID LDAP SDK for Java provides convenient support for both the ds-pwp-state-json and ds-pwp-modifiable-state-json attributes. The password policy state extended operation is still available and provides support for manipulating a broader set of password policy state properties, but it can only be used over LDAP (either programmatically or through the manage-account command-line tool).

Password Validation During Bind

The server has always provided a lot of options for validating passwords at the time that they’re set, whether in a self-change or an administrative reset. It now also provides the ability to perform validation in the course of processing bind operations in which the server has access to the user’s clear-text password.

This feature can help provide better support for detecting issues with passwords that were acceptable at the time they were set but have since become undesirable (for example, if someone uses the same password across multiple sites, and one of them suffers a data breach that exposes the password). As such, you may want to periodically check the password against services like Pwned Passwords or against a regularly updated dictionary of banned passwords.

Although you can configure the server to perform password validation for every bind attempt, the server also offers the ability to only check them periodically (for example, if it’s been at least a week since the last validation). This may be useful if it is necessary to interact with external services while performing the validation.

You can also configure the behavior that the server should exhibit when the password provided to a bind operation fails validation, including:

  • The server can allow the bind to proceed but place the user’s account in a “must change password” state so all other requests from that user will fail until they choose a new password.
  • The server can reject the bind attempt, requiring an administrative password reset before the user can use their account.
  • The server can generate an account status notification that may be used to notify the user about the need to choose a new password (or to take some other action as determined by a custom account status notification handler).

Recent Login History

The server has always provided support for maintaining a last login time and IP address for each user, but that only reflects information about the most recent successful authentication attempt. And if account lockout is enabled, then the server can maintain a list of failed authentication attempts. However, that will only be updated until the account is locked, and it will be cleared by a successful login, so this is really only useful for the purpose of account lockout.

In the 8.2.0.0 release, we are adding support for maintaining a recent login history, stored in the password policy state for each account. For both successful and failed authentication attempts, the server will store the time, client IP address, and the type of authentication attempted. For a failed attempt, the server will also record a general reason for the failure.

The recent login history is available in a few different ways:

  • It is available through the ds-pwp-state-json operational attribute.
  • It is available through the password policy state extended operation and through the manage-account command-line tool that uses that operation behind the scenes.
  • Information about previous login attempts can be obtained on a successful bind by including the new “get recent login history” request control in the bind request. The ldapsearch and ldapmodify tools have been updated with a --getRecentLoginHistory argument that can be used to include this control in the bind request, and the UnboundID LDAP SDK for Java provides support for using this control programmatically.
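
To illustrate that last option, here is a rough sketch using the UnboundID LDAP SDK for Java. The control class names and the getRecentLoginHistory method are given from memory and should be treated as assumptions, as should the connection details:

    import com.unboundid.ldap.sdk.BindResult;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPException;
    import com.unboundid.ldap.sdk.SimpleBindRequest;
    import com.unboundid.ldap.sdk.unboundidds.controls.GetRecentLoginHistoryRequestControl;
    import com.unboundid.ldap.sdk.unboundidds.controls.GetRecentLoginHistoryResponseControl;

    public final class ShowRecentLoginHistory
    {
      public static void main(final String... args) throws LDAPException
      {
        final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
        try
        {
          // Include the request control in the bind so that a successful
          // response will carry the user's recent login history.
          final BindResult bindResult = conn.bind(new SimpleBindRequest(
               "uid=jdoe,ou=People,dc=example,dc=com", "password",
               new GetRecentLoginHistoryRequestControl()));

          // Decode the corresponding response control, if present.
          final GetRecentLoginHistoryResponseControl c =
               GetRecentLoginHistoryResponseControl.get(bindResult);
          if (c != null)
          {
            System.out.println(c.getRecentLoginHistory());
          }
        }
        finally
        {
          conn.close();
        }
      }
    }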

The recent login history can be enabled in the password policy configuration, and you can configure it to keep up to a specified number of attempts or keep a record for up to a specified duration. This can be configured separately for successful and failed attempts. You can also configure the server to collapse information about similar authentication attempts (that is, attempts from the same IP address with the same result, authentication type, and failure reason) into a single record so that the history doesn’t fill up too quickly from clients that frequently try to authenticate to the server. When collapsing information about multiple similar attempts into a single record, the timestamp of that record will reflect the most recent of those attempts, and the record will also include the number of additional similar attempts that have been collapsed into it.

Sample dsconfig Batch Files

We have added a config/sample-dsconfig-batch-files directory with several well-commented dsconfig batch files that demonstrate how to enable, disable, or configure various functionality in the server. Many of these batch files are for changes that can help improve server security, including configuring TLS protocols and cipher suites, managing password policy functionality, disabling insecure communication, enabling two-factor authentication options, and managing administrative users. They serve as additional documentation and, in many cases, describe changes that we recommend but that, for one reason or another, are not part of the out-of-the-box configuration.

In many cases, these batch files require a certain amount of editing before they can be used (for example, to indicate which TLS protocols to use, which password policy to update, etc.). In such cases, you shouldn’t edit the provided batch file directly but should instead make a copy and customize that as needed. This will help ensure that if we make changes to the existing batch files in the future, those changes will be applied when you update the server to that new version.

Additional JSON-Formatted Loggers

Our default access and error loggers use a legacy space-delimited text format that matches what is used by other types of directory servers. However, in the 6.0.0.0 release, we added support for formatting access and error log messages as JSON objects so that they can be more easily parsed by a wider variety of log management and analysis software.

In the 8.2.0.0 release, we are adding JSON support for additional types of loggers. These include:

  • A JSON-formatted audit log publisher, which provides a record of all changes to data in the server. In the past, audit logging has always used the standard LDIF format to provide the contents of the change and LDIF comments to capture information about other metadata (including the message timestamp, the connection and operation ID, the requester DN and IP address, etc.). The JSON-formatted audit logger still uses the standard LDIF format for representing information about changes, but all of the other metadata is now available through separate JSON fields to make it easier to access in a generic manner.
  • A JSON-formatted HTTP operation log publisher, which provides a record of the HTTP requests received by the server and the HTTP responses it returns. We already offered HTTP operation log publishers using the W3C common log format and a space-delimited format similar to our default access and audit loggers, but we now offer support for these messages formatted as JSON objects.
  • JSON-formatted Sync log publisher and Sync failed ops log publisher options, which provide information about processing performed by the Synchronization Server. In the past, these messages have used a less-structured format that is primarily intended to be read by a person. The new JSON format is a much better option for parsing these messages in an automated manner.

Each of these JSON-formatted logger types (including the existing JSON-formatted access and error loggers) includes a logType field in each log message that indicates the type of logger that generated the message. This makes it easier to identify the type of log message in cases where messages from different sources are intermingled (for example, when multiple loggers write to the same file or stream). While this is useful when writing these messages to files, it is even more important when logging to the console or to syslog, as will be described below.
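
As a sketch of how a consumer might take advantage of the logType field, the following uses the LDAP SDK’s generic JSON support to read a stream of intermingled JSON-formatted log messages and label each one by its logger type. The file path is hypothetical, and the exact set of logType values isn’t shown here:

    import java.io.FileInputStream;

    import com.unboundid.util.json.JSONObject;
    import com.unboundid.util.json.JSONObjectReader;

    public final class LabelLogMessagesByType
    {
      public static void main(final String... args) throws Exception
      {
        // Hypothetical path to a file containing intermingled
        // JSON-formatted log messages, one JSON object per message.
        final FileInputStream in =
             new FileInputStream("/path/to/intermingled.log");
        try
        {
          final JSONObjectReader reader = new JSONObjectReader(in);
          JSONObject message;
          while ((message = reader.readObject()) != null)
          {
            // The logType field identifies the logger that generated the
            // message (access, error, audit, etc.).
            System.out.println(message.getFieldAsString("logType") + ": " +
                 message.toSingleLineString());
          }
        }
        finally
        {
          in.close();
        }
      }
    }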

We also fixed an issue in our existing support for JSON-formatted access and error log messages that caused the timestamps to be generated in a format that was not completely compliant with the ISO 8601 format described in RFC 3339. These timestamps incorrectly omitted the colon between the hour and minute components of the time zone offset.

Also, while the stats logger plugin has provided an option to write messages as JSON objects since the 8.0.0.0 release, you had to specifically create an instance of the plugin if you wanted to enable that support. In the 8.2.0.0 release, we have added a definition in the out-of-the-box configuration that can generate JSON-formatted output, so you merely need to enable it if you want to take advantage of this feature. This provides better parity with the CSV-formatted output that the plugin can also generate.

Logging to Standard Output and Standard Error

In container-based environments like Docker, it is relatively common to have applications write log messages to the container’s standard output or standard error stream, which will then be captured and funneled into log management software. We are now providing first-class support for that capability. In the 8.2.0.0 release, we are providing console-based variants of each of the following:

  • JSON-formatted access log messages
  • JSON-formatted error log messages
  • JSON-formatted audit log messages
  • JSON-formatted HTTP operation log messages
  • JSON-formatted Sync log messages
  • JSON-formatted Sync failed ops log messages

Each of these can be sent to either standard output or standard error, and you can mix messages from different types of loggers in the same stream. For example, if desired, you could send all log messages for all of these types of loggers to standard output and none of them to standard error. Because these JSON objects include the logType field, you can easily identify the type of logger that generated each message.

You can also now configure the server so that it does not log messages in a non-JSON format during startup. By default, the server will continue to write error log messages to standard error in the legacy space-delimited text format. It will also write information about administrative alerts to standard error, and you can either disable this entirely or have those messages formatted as JSON objects. With these changes, it is now possible to have all of the data written to standard output or standard error formatted as JSON objects, without JSON-formatted output being intermingled with non-JSON output.

Note that we only recommend using console-based logging when running the server with the --nodetach argument, which prevents it from detaching from the console (and is the preferred mode for running in a container like Docker anyway). When running in the default daemon mode, any data written by these console-based loggers will be sent to the logs/server.out file, which is less efficient and less customizable than using the file-based JSON loggers. When running in --nodetach mode, the console-based loggers will only write to the JVM’s original standard output or standard error stream.

Rotation Support for server.out

With the exception of the new console-based loggers when the server is running in --nodetach mode, anything that would normally go to standard output or standard error is written into the logs/server.out file. Previously, the server would create a new server.out file when it started up and would continue writing to that same file for the entire life of the server. It would also only keep one prior copy of the server.out file (in server.out.previous).

The server generally doesn’t write much to standard output or standard error, so it’s really not much of a problem to have all data sent to the same file, but it can accumulate over time, and it’s possible that the file could grow large if the server runs for a very long time. To address that, we have added support for rotating this file. Rotation is only available based on the amount of data written, and retention is only available based on the number of files. By default, the server will rotate the file when it reaches 100 megabytes, and it will retain up to 10 rotated copies of the file. This behavior can be customized through the maximum-server-out-log-file-size and maximum-server-out-log-file-count properties in the global configuration.

Syslog Logging Improvements

The server has provided support for writing access and error log messages (in the legacy text-based formats) to syslog since the 2.1.0 release, but there were a number of limitations. Logging used an old version of the syslog protocol (as described in RFC 3164), and messages could only be sent as unencrypted UDP packets.

In the 8.2.0.0 release, we are significantly updating our support for logging over syslog. The former syslog-based access and error loggers are still available for the purpose of backward compatibility, but we have also added several new types of syslog-based loggers with new capabilities, including:

  • Access loggers using either JSON or the legacy text-based format
  • Error loggers using either JSON or the legacy text-based format
  • Audit loggers using the JSON format
  • HTTP operation loggers using the JSON format
  • Sync loggers using the JSON format
  • Sync failed ops loggers using the JSON format

These new loggers will use an updated version of the syslog protocol (as described in RFC 5424), and they also offer the option to communicate over UDP or TCP. When using TCP, you can optionally encrypt the data with TLS so that it can’t be observed over the network. When using TCP, you can also configure the logger with information about multiple syslog servers for the purpose of redundancy (this is not available when using the UDP-based transport, as the server has no way of knowing whether the messages were actually received).

The OAUTHBEARER SASL Mechanism

We have added support for the OAUTHBEARER SASL mechanism as described in RFC 7628. This makes it possible to authenticate to the server with an OAuth 2.0 bearer token, using an access token validator to verify its authenticity, map it to a local user in the server, and identify the associated set of OAuth scopes. The oauthscope ACI bind rule can be used to make access control decisions based on the scopes associated with the access token.

This ultimately makes it possible to authenticate to the server in a variety of ways that had not previously been available, especially when users are interacting with Web-based applications that use the Directory Server behind the scenes. For example, this could be used to authenticate clients with a FIDO security key.

In addition to the support for OAuth 2.0 bearer tokens as described in the official specification, our OAUTHBEARER implementation also supports authenticating with an OpenID Connect ID token, either instead of or in addition to an OAuth 2.0 bearer token. To assist with this, we have added support for ID token validators, including streamlined support for validating ID tokens issued by the PingOne service. If you want to authenticate with only an OpenID Connect token, you can do that with a standard OAUTHBEARER bind request by merely providing the OpenID Connect ID token in place of the OAuth access token. If you want to provide both an OAuth access token and an OpenID Connect ID token, you’ll need to include a proprietary pingidentityidtoken key-value pair in the encoded SASL credentials. The UnboundID LDAP SDK for Java provides an easy way to do this.
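
Here is a rough sketch of an OAUTHBEARER bind with an OAuth 2.0 access token using the UnboundID LDAP SDK for Java. The OAUTHBEARERBindRequest class name is given from memory, and the connection details and token value are placeholders; binding with an ID token (or both tokens) would use the corresponding request properties instead:

    import com.unboundid.ldap.sdk.BindResult;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPException;
    import com.unboundid.ldap.sdk.unboundidds.OAUTHBEARERBindRequest;

    public final class OAuthBearerBindExample
    {
      public static void main(final String... args) throws LDAPException
      {
        // A bearer token obtained from the authorization server (for
        // example, PingOne).  The value shown here is a placeholder.
        final String accessToken = "...";

        final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
        try
        {
          // Authenticate with the OAuth 2.0 bearer token.  The server uses
          // its access token validator to verify the token, map it to a
          // local user, and determine the associated scopes.
          final BindResult result =
               conn.bind(new OAUTHBEARERBindRequest(accessToken));
          System.out.println("Bind result: " + result.getResultCode());
        }
        finally
        {
          conn.close();
        }
      }
    }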

The ($attr.attrName) Macro ACI

We have updated our access control implementation to add support for the ($attr.attrName) macro ACI. This macro can be included in the value of the userdn and groupdn bind rules to indicate that the macro should be dynamically replaced with the value of the specified attribute in the authorized user’s entry.

This macro is particularly useful in multi-tenant deployments, in which a single server holds information about multiple different organizations. If a similar rule needs to be enforced across all tenants, you can create one rule for the entire server rather than a separate rule that is customized for each tenant. For example, if a user requesting an operation has an o attribute with a value of “XYZ Corp”, a bind rule of groupdn="ldap:///cn=Administrators,ou=Groups,o=($attr.o),dc=example,dc=com" would be interpreted as groupdn="ldap:///cn=Administrators,ou=Groups,o=XYZ Corp,dc=example,dc=com".

Our Delegated Administration application has been updated to support this macro ACI to provide an improved experience in multi-tenant environments.

Safely Removing Attribute Types From the Server Schema

In general, we don’t recommend removing attribute types (or other types of elements) from the server schema. It’s safe to do for elements that have never been used, but if an attribute has been used in the past (even if it’s not currently being used in any entries), there may still be references to it in the database (for example, in the entry compaction dictionary used to minimize the amount of space required to store it). In that case, the server will prevent you from removing the attribute type from the schema unless you first export and re-import the data to clear any references to it from the database. Further, if the database contains a reference to an attribute type that is not defined in the server schema, it will log a warning message on startup to notify administrators of the problem.

In the 8.2.0.0 release, we’re adding a new remove-attribute-type-from-schema tool that can be used to safely remove a previously used attribute type from the server schema without requiring an LDIF export and re-import. This tool will first ensure that the attribute type is not currently in use (including referenced by other schema elements or in the server configuration), and it will then clean up any lingering references to that attribute from the database before removing it from the schema.

This processing can also be automated through the new “remove attribute type” administrative task. The UnboundID LDAP SDK for Java provides convenient support for this task.
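
For example, a client might schedule the task along the following lines with the UnboundID LDAP SDK for Java. The RemoveAttributeTypeTask class and its constructor arguments are given from memory, and the attribute type and connection details are hypothetical:

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPException;
    import com.unboundid.ldap.sdk.unboundidds.tasks.RemoveAttributeTypeTask;

    public final class ScheduleRemoveAttributeType
    {
      public static void main(final String... args) throws LDAPException
      {
        final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
        try
        {
          conn.bind("cn=Directory Manager", "password");

          // Schedule a task to safely remove the hypothetical attribute
          // type "myLegacyAttribute" from the server schema.  Adding the
          // task entry causes the server to begin processing the task.
          final RemoveAttributeTypeTask task = new RemoveAttributeTypeTask(
               "remove-myLegacyAttribute", "myLegacyAttribute");
          conn.add(task.createTaskEntry());
        }
        finally
        {
          conn.close();
        }
      }
    }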

Validating Server Schema Definitions

We have added a new validate-ldap-schema tool that can be used to examine the server schema and report on any issues that it finds. This includes things like:

  • Definitions that can’t be parsed
  • Elements that refer to other elements that aren’t defined in the schema
  • Multiple definitions with the same name or OID
  • Schema elements with an invalid name or OID
  • Schema elements without a name
  • Schema elements with an empty description
  • Attribute types without a syntax or an equality matching rule
  • Schema characteristics that some servers do not support (collective attributes, object classes with multiple superior classes)

The UnboundID LDAP SDK for Java also offers a SchemaValidator class to allow you to perform this validation programmatically.
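
A minimal sketch of that programmatic validation might look like the following, with the validateSchema signature given from memory and the schema path hypothetical:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    import com.unboundid.ldap.sdk.schema.SchemaValidator;

    public final class ValidateSchemaExample
    {
      public static void main(final String... args)
      {
        final SchemaValidator validator = new SchemaValidator();

        // Validate the schema definitions found at the (hypothetical)
        // path, collecting any error messages that are identified.
        final List<String> errorMessages = new ArrayList<>();
        validator.validateSchema(new File("/path/to/schema"), null,
             errorMessages);

        for (final String message : errorMessages)
        {
          System.err.println(message);
        }
      }
    }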

setup Security Improvements

We’ve updated the server setup process to make it easier and more convenient to create a more secure installation.

It was already possible to use non-interactive setup to create an instance that only allowed TLS-encrypted connections (by simply not specifying a port to use for non-secure LDAP connections), but this was not an option for interactive setup. When running interactive setup, the resulting server would always accept unencrypted connections from LDAP clients. Now, the interactive setup process will allow you to create an instance that only accepts connections from LDAP clients that are encrypted with TLS. Alternatively, it is possible to enable support for accepting unencrypted LDAP connections but require those clients to use the StartTLS extended operation to secure communication before they are allowed to issue other requests. And we have redesigned the flow so that interactive setup still involves the same number of prompts as before.

We have also added a couple of new arguments to non-interactive setup (also available in manage-profile setup) to make it possible to impose additional security-related constraints for clients:

  • We have added a new --rejectInsecureRequests argument that can be used to ensure that the server will reject requests from clients communicating with the server over an unencrypted connection. This makes it possible to enable support for unencrypted LDAP connections but require that those clients use the StartTLS extended operation to secure those connections before any other requests are allowed.
  • We have added a new --rejectUnauthenticatedRequests argument to configure the server to reject any request received from unauthenticated clients.

We have also updated non-interactive setup (and manage-profile setup) so that you can provide the password for the initial root user in pre-encoded form (using the PBKDF2, SSHA256, SSHA384, or SSHA512 storage scheme). This eliminates the need to have access to the clear-text password when setting up the server. The pre-encoded password can be provided directly through the --rootUserPassword argument or in a file specified through the --rootUserPasswordFile argument.

manage-profile Tool Improvements

We have made several improvements to the manage-profile tool that can be used to set up or update a server instance. They include:

  • We updated manage-profile replace-profile to better support updating the server’s certificate key and trust store files. When the profile includes the --generateSelfSignedCertificate argument in the setup-arguments.txt file, the original key and trust store files will be retained. Otherwise, manage-profile will replace the server’s key and trust stores with those from the new profile.
  • When applying configuration changes from a dsconfig batch file with the server online, manage-profile replace-profile can now obtain connection arguments from the target server’s config/tools.properties file.
  • We updated manage-profile replace-profile so that if the new profile includes new encryption settings definitions, the new profile’s preferred definition will be used as the resulting instance’s preferred definition.
  • We fixed an issue in which manage-profile setup could complete with an exit code of zero even if the attempt to start the server failed.
  • We fixed an issue in which manage-profile replace-profile could fail when applying changes from a dsconfig batch file when using Boolean connection arguments like --useSSL.
  • We updated the tool to ensure that each instance gets a unique cluster name. We do not recommend having multiple servers in the same cluster for instances managed by manage-profile.
  • We made it possible to obtain debug information about how long each step of the setup or replace-profile process takes.

parallel-update Tool Improvements

The parallel-update tool can be used to apply changes read from an LDIF file in parallel with multiple concurrent threads. It behaves much like ldapmodify, but the parallelism can make it significantly faster to process large numbers of changes. The ldapmodify tool does offer a broader range of functionality than parallel-update, but we have added new capabilities to parallel-update to help narrow that gap. Some of these improvements include:

  • We have added support for several additional controls, including proxied authorization, manage DSA IT, ignore NO-USER-MODIFICATION, password update behavior, operation purpose, name with entryUUID, assured replication, replication repair, suppress operational attribute update, and suppress referential integrity updates. We have also added support for including arbitrary controls in add, bind, delete, modify, and modify DN requests.
  • We have made its communication more robust. The tool would previously establish its connections only when it was first started, and if any of those connections became invalid over the course of processing requests, then subsequent operations attempted on that connection would fail. The tool will now attempt to detect when a connection is no longer valid, establish a replacement connection, and re-attempt any failed operation on that new connection.
  • We have added support for specifying multiple server addresses and ports. If this option is used, then the tool can automatically fail over to an alternative server if the first server becomes unavailable.
  • We have improved the ability to determine whether all operations were processed successfully. Previously, the tool would always use an exit code of zero if it was able to attempt all of the changes, even if some of them did not complete successfully. This is still the default behavior to preserve backward compatibility, but you can now use the --useFirstRejectResultCodeAsExitCode argument to indicate that if any operation fails, the result code from the first rejected operation should be used as the parallel-update exit code.
  • It is now possible to apply changes from an LDIF file that has been encrypted with a user-supplied passphrase or with a definition from the server’s encryption settings database.
  • Previously, if you didn’t provide a value for the --numThreads argument, the tool would use one thread by default. It now defaults to using eight threads.
  • The tool now offers a --followReferrals argument to allow it to automatically follow any referrals that are returned.
  • If you run the tool without any arguments, it will now start in an interactive mode instead of exiting with an error.

Other Command-Line Tool Improvements

We have made a variety of other command-line tool improvements.

We replaced the ldappasswordmodify tool with a new version that offers more functionality, including support for additional controls, support for multiple password change methods (the password modify extended operation, a regular LDAP modify operation, or an Active Directory-specific modify operation), and the ability to generate the new password on the client.

We replaced the ldapcompare tool with a new version that offers more functionality, including support for multiple compare assertions, following referrals, additional controls, and multiple output formats (including tab-delimited text, CSV, and JSON).

We replaced the ldifsearch, ldifmodify, and ldif-diff tools with more full-featured and robust implementations.

We updated the ldap-diff tool so that it no longer wraps long lines by default, as that interfered with certain types of text processing that clients may wish to perform on the output. However, the tool does offer a --wrapColumn argument that can be used to indicate that long lines should be wrapped at the specified column if that is desired.

We updated the collect-support-data tool to make it possible to specify how much data should be captured from the beginning and end of each log file to be included in the support data archive. This option is also available when invoking the tool through an administrative task or an extended operation.

Usability Improvements in Command-Line Tools

We have updated the server’s command-line framework to make it easier and more convenient to communicate over a secure connection when no trust-related arguments are provided. Most non-interactive tools will now see if the peer certificate can be trusted using the server’s default trust store or the topology registry before prompting the user about whether to trust it.

We have also updated interactive tools like dsconfig and dsreplication so that they will prefer establishing secure connections over insecure connections. The new default behavior is to use the most secure option available, with a streamlined set of options that does not require any more prompts than choosing an insecure connection would.

We have also updated setup (whether operating in interactive or non-interactive mode) to make it possible to generate a default tools.properties file that tools can use to provide default values for arguments that were not provided on the command line. If requested, this will include at least the arguments needed to establish a connection (secure, if possible), but it may also include properties for authentication if desired.

Dictionary Password Validator Improvements

The dictionary password validator can be used to reject passwords that are found in a dictionary file containing prohibited passwords. We have always offered this password validator in the server, and we include a couple of dictionaries: one with words from various languages (over 730,000 words in the latest release), and another with known commonly used passwords taken from data breaches and other sources (over 640,000 passwords).

In the 8.2.0.0 release, we have made several improvements to the dictionary password validator. They include:

  • We now offer the ability to ignore non-alphabetic characters that appear at the beginning or end of the proposed password before checking the dictionary. This can help detect things like a simple dictionary word preceded or followed by digits or symbols. For example, if the dictionary contains the word “password”, then the validator could reject proposed passwords like “password123!” or “2020password”.
  • We now offer the ability to strip characters of diacritical marks (accents, cedillas, circumflexes, diaereses, tildes, umlauts, etc.) before checking the dictionary. For example, if a user proposes a password of “áçcèñt”, then the server would check to see if the provided dictionary contains the word “accent”.
  • We now offer the ability to define character substitutions that can be used when checking for alternative versions of the provided password. For example, you could indicate that the number “0” might map to the letter “o”, the number “1” or the symbol “!” might map to the letters “l” or “i”, that the number “7” might map to the letter “t”, and that the number “3” might map to the letter “e”. In that case, the validator could reject a proposed password of “pr0h1b!73d” if the dictionary contains the word “prohibited”.
  • We now offer the ability to reject a proposed password if a word from the provided dictionary makes up more than a specified percentage of that password. For example, if you configure the validator to ban passwords in which over 70% of the value matches a word from the dictionary, then the server could reject “mypassword” if the dictionary contains “password” because the word “password” makes up 80% of “mypassword”.

The AES256 Password Storage Scheme

In general, we strongly recommend encoding passwords with non-reversible schemes, like those using a salted SHA-2 digest, or better yet, a more expensive algorithm like PBKDF2 or Argon2. However, in some environments, there may be a legitimate need to store passwords in a reversible form so the server can access the clear-text value (which may be needed for processing binds for some legacy SASL mechanisms like CRAM-MD5 or DIGEST-MD5), or so the clear-text password can be used for other purposes (e.g., for sending it to some other external service).

In the past, the best reversible storage scheme that we offered was AES, which used a 128-bit AES cipher with a key that was shared privately among instances of the replication topology but that was otherwise not available outside the server (including to the Directory Proxy Server, which might need access to the password for processing certain types of SASL bind requests). This meant that it could be used for cases in which the server itself needed the clear-text password, but not when the password might be needed outside the server. It was also not a simple matter to rotate the encryption key if the need arose.

To address these issues, and also to provide support for a stronger cipher, we are adding a new AES256 password storage scheme. As the name implies, it uses AES with a 256-bit key (rather than the 128-bit key for the older cipher). It also generates the encryption key from an encryption settings definition, and if a client knows the password used to create that encryption settings definition, and if it has access to the encoded password, then it can decrypt that encoded password to get the clear-text representation. The UnboundID LDAP SDK for Java provides a convenient way to decrypt an AES256-encoded password given the appropriate passphrase, and the class-level Javadoc provides enough information for clients using other languages or APIs to implement their own support for decrypting the password.

The export-reversible-passwords Tool

While the new AES256 password storage scheme gives you much better control over the key that is used when encrypting passwords, it does not provide direct support for rotating that encryption key should the need arise. You can reconfigure the storage scheme to change the encryption settings definition to use when encoding new passwords, but that does not affect existing passwords that have already been encoded with the earlier definition.

To help facilitate encryption key rotation for AES256-encoded passwords, or to facilitate migrating passwords that use a reversible encoding to use a different scheme, we have provided a new export-reversible-passwords command-line tool. It allows you to export data to LDIF with clear-text representations of any reversibly encoded passwords.

In most environments, you probably won’t actually need or want to use this tool, and you might be concerned about its potential for misuse. To help protect against that, there are a number of safeguards in place to help ensure that it cannot be used inappropriately or covertly. These include:

  • If you don’t encode passwords in a reversible form, then you don’t have anything to worry about. The server can’t get the clear-text representation of passwords encoded using non-reversible schemes.
  • The tool initiates the export by communicating with the server over a TLS-encrypted LDAP connection established over a loopback interface. It cannot be invoked with the server offline, and it cannot be invoked from a remote system.
  • The export can only be requested by someone who has the new permit-export-reversible-passwords privilege. This privilege is not granted to anyone (not even root users or topology administrators) by default, and an administrative alert will be generated whenever this privilege is assigned (as with any other privilege assignment) so that other administrators will be aware of it.
  • The export can only be requested if the server is configured with an instance of the “export reversible passwords” extended operation handler. This extended operation handler is not available as part of the out-of-the-box configuration, and an administrative alert will be generated whenever it is created and enabled (as with any other type of configuration change).
  • The output will be written to a file on the server file system, and that file must be placed beneath the server instance root. As such, it can only be accessed by someone with permission to access other files associated with the server instance.
  • The entire output file will itself be encrypted, either using a key generated from a user-supplied passphrase or from a server encryption settings definition. The encrypted file may be directly imported back into the server using the import-ldif tool, or it may be used as an input file for the ldapmodify or parallel-update tools. It can also be decrypted with the encrypt-file tool or accessed programmatically using the PassphraseEncryptedInputStream class in the UnboundID LDAP SDK for Java.
  • The server will generate an administrative alert whenever it begins the export process.

The output will be formatted as LDIF entries. By default, it will only include entries that contain reversibly encoded passwords, and the entries that are exported will only include the entry’s DN and the clear-text password. You can also use it to perform a full LDIF export, including entries that do not have passwords or that have passwords with a non-reversible encoding, and including other attributes in those entries.
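
For programmatic access, here is a minimal sketch that pairs the LDAP SDK’s PassphraseEncryptedInputStream with an LDIFReader to process the encrypted export; the constructor signature is given from memory, and the file path and passphrase are hypothetical:

    import java.io.FileInputStream;

    import com.unboundid.ldap.sdk.Entry;
    import com.unboundid.ldif.LDIFReader;
    import com.unboundid.util.PassphraseEncryptedInputStream;

    public final class ReadExportedPasswords
    {
      public static void main(final String... args) throws Exception
      {
        // Wrap the encrypted export file in a stream that decrypts it
        // with the user-supplied passphrase.
        final LDIFReader reader = new LDIFReader(
             new PassphraseEncryptedInputStream("export-passphrase",
                  new FileInputStream("/path/to/exported-passwords.ldif")));
        try
        {
          Entry entry;
          while ((entry = reader.readEntry()) != null)
          {
            // By default, each entry includes just the DN and the
            // clear-text password.
            System.out.println(entry.getDN());
          }
        }
        finally
        {
          reader.close();
        }
      }
    }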

Password Quality Requirements Information in the ds-pwp-state-json Virtual Attribute

We have updated the ds-pwp-state-json virtual attribute provider to add support for a new password-quality-requirement field whose value is a JSON object that provides information about the requirements that passwords will be required to satisfy. This information was already available through the “get password quality requirements” extended request or the “password validation details” request control, but including it in the ds-pwp-state-json attribute makes it easier to access by generic LDAP clients that aren’t written with the UnboundID LDAP SDK for Java, and it also makes it available to non-LDAP clients (for example, those using the Directory REST API).

The value of the password-quality-requirement field will be an array of JSON objects, each of which describes a requirement that passwords will be required to satisfy. These JSON objects may include the following fields:

  • description — A human-readable description for the requirement.
  • client-side-validation-type — A string that identifies the type of validation being performed. In some cases, clients may be able to use this field, along with information in the client-side-validation-properties field, to pre-validate a proposed password before sending it to the server, giving a better end-user experience. For example, if passwords are required to meet certain length requirements, the client-side-validation-type value will be “length”, and the client can use the min-password-length and max-password-length properties to determine the minimum and maximum (if defined) lengths for the password. Of course, some types of validation (for example, checking a server-side dictionary of prohibited passwords) can’t readily be performed by clients.
  • client-side-validation-properties — An array of properties that provide additional information about the requirement that may be useful in client-side validation. Each element of the array will be a JSON object with the following fields:

    • name — The name for the property.
    • value — The value for the property. Note that this will always be formatted as a JSON string, even if the value actually represents some other data type, like a number or Boolean value.
  • applies-to-add — Indicates whether this requirement will be imposed for new entries that are added with the same password policy as the entry with which the ds-pwp-state-json value is associated.
  • applies-to-self-change — Indicates whether this requirement will be imposed when the user is changing their own password.
  • applies-to-administrative-reset — Indicates whether this requirement will be imposed when the user’s password is being reset by an administrator.
  • applies-to-bind — Indicates whether this requirement may be checked during bind operations.
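
To give a sense of how a client might consume this, here is a rough sketch that retrieves the ds-pwp-state-json attribute and prints the description of each password quality requirement, using the LDAP SDK’s generic JSON classes. The connection details and entry DN are hypothetical:

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.SearchResultEntry;
    import com.unboundid.util.json.JSONArray;
    import com.unboundid.util.json.JSONObject;
    import com.unboundid.util.json.JSONValue;

    public final class ShowPasswordRequirements
    {
      public static void main(final String... args) throws Exception
      {
        final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
        try
        {
          conn.bind("uid=jdoe,ou=People,dc=example,dc=com", "password");

          // Operational attributes must be requested explicitly.
          final SearchResultEntry entry = conn.getEntry(
               "uid=jdoe,ou=People,dc=example,dc=com", "ds-pwp-state-json");

          final JSONObject state = new JSONObject(
               entry.getAttributeValue("ds-pwp-state-json"));

          // Each array element describes one requirement that passwords
          // must satisfy.
          final JSONValue requirements =
               state.getField("password-quality-requirement");
          if (requirements instanceof JSONArray)
          {
            for (final JSONValue requirement :
                 ((JSONArray) requirements).getValues())
            {
              System.out.println(((JSONObject) requirement).
                   getFieldAsString("description"));
            }
          }
        }
        finally
        {
          conn.close();
        }
      }
    }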

Augmenting Default Crypto Manager Cipher Suites

By default, the server automatically chooses an appropriate set of TLS cipher suites to enable. It tries to maintain a good balance between making it possible to use the strongest available security for clients that support it but still allowing reasonably secure alternatives for older clients that may not support the most recent options. However, it is also possible to override this default configuration in case you want to customize the set of enabled cipher suites (for example, if you know that you won’t need to support older clients, you can have only the strongest suites enabled).

The set of enabled cipher suites can be customized for each connection handler (for example, if you want to have different suites available for LDAP clients than those using HTTP). For any other communication that the server may use (for example, to secure replication communication between instances), that may be customized in the crypto manager configuration.

It has always been possible to explicitly state which TLS cipher suites should be enabled, using the ssl-cipher-suite property in either the connection handler or crypto manager configuration. In that case, you can simply list the names of the suites that should be enabled. For connection handlers, we have also offered the ability to augment the default set of suites to mostly use those values, but to also enable or disable certain suites by prefixing the name with a plus or minus sign. For example, specifying a value of “-TLS_RSA_WITH_AES_256_CBC_SHA256” indicates that the server should omit the “TLS_RSA_WITH_AES_256_CBC_SHA256” suite from the default set of cipher suites that would otherwise be available. However, we overlooked providing the ability to add or remove individual suites in the crypto manager configuration in previous releases. That has now been fixed.

Improvements in Delaying Bind Responses

In the 7.2.0.0 release, we made it possible to delay the response to any failed LDAP bind request as a means of rate-limiting password guessing attacks. This provides a great alternative to locking an account after too many failed bind attempts because it can dramatically impede how quickly clients can make guesses while also preventing the opportunity for malicious clients to intentionally lock out legitimate users by using up all of the allowed failures.

In the 8.1.0.0 release, we added the ability to configure the password policy to take some alternative action instead of locking the account after the specified number of failed attempts, and one of the available options is to delay the bind response. Although similar to the earlier feature, this is a little better for a couple of reasons:

  • It will only start delaying bind responses after a few failed attempts. This can help avoid any visible impact for users who simply mis-type their password one or two times. The delay will only kick in after the configured failure count threshold has been reached.
  • It will also delay the response to the first successful bind after the failure count threshold has been reached. This makes the delay even more effective at deterring malicious clients undertaking guessing attacks because it prevents them from detecting when the response has been delayed, terminating the connection, and establishing a new one to try again without waiting for the full configured delay period to elapse.

In the 8.1.0.0 release, this ability was limited to LDAP clients because the server can efficiently impose this delay for LDAP clients without holding up the worker thread being used to process the operation (thereby making it available to immediately start processing another request). This is unfortunately not possible for non-LDAP clients, like those using HTTP (for example, via SCIM or the Directory REST API).

In the 8.2.0.0 release, we are adding support for delaying the responses to non-LDAP clients. This provides better protection for servers in which clients may communicate through means other than LDAP, but we still can’t do it without tying up the thread processing the request, so you have to opt into it by setting the allow-blocking-delay property to true in the configuration for the delay bind response failure lockout action. As a means of mitigating the potential for tying up a worker thread, we recommend bumping up the number of HTTP request handler threads so that it’s less likely that they will all be consumed by delayed responses.

To improve security for administrative accounts, we are also updating the password policy that is used by default for root users and topology administrators so that it will impose a default delay of one second after five failed authentication attempts. This can help rate-limit password guessing attacks for these high-value accounts. This default will not be imposed for other users by default, but it is possible to enable this feature in other password policies if desired.

Using /dev/urandom for Secure Random Data

Being able to obtain secure random data (that is, data that is as unpredictable as possible) is critical to many types of cryptographic operations. By default, on Linux and other UNIX-based systems, the Java Virtual Machine relies on /dev/random to get this secure random data. This is indeed an excellent source of secure random data, and it relies on a pool of entropy that is managed by the underlying operating system and can be replenished by unpredictable inputs from a variety of sources.

Unfortunately, attempts to read from /dev/random will block if the operating system doesn’t have enough entropy available. This means that if the server experiences a surge of cryptographic processing (for example, if it needs to conduct TLS negotiation for a lot of clients all at once), it can run out of entropy, and attempts to retrieve additional secure random data can block until the pool is replenished, causing the associated cryptographic operations to stall for a noticeable period of time.

To address this, we are updating the server to use /dev/urandom instead of /dev/random as the source of secure random data. /dev/urandom behaves much like /dev/random in that it can be used to obtain secure random data, but attempts to read from /dev/urandom will never block if the pool of entropy runs low, and therefore cryptographic operations that rely on access to secure random data will never block.

While there is a common misconception that using /dev/urandom is notably less secure than /dev/random, this is really not true in any meaningful way. See https://www.2uo.de/myths-about-urandom/ for a pretty thorough refutation of this myth.

Improvements to Self-Signed Certificates and Certificate Signing Requests

If you enable secure communication when setting up the server, you’re given the option to use an existing certificate key store or to generate a self-signed certificate. When creating a self-signed certificate, we previously used a twenty-year lifetime so that administrators wouldn’t need to worry about it expiring. However, some TLS clients (and especially web browsers, as may be used to access the Administration Console, Delegated Administration, or other HTTP-based content) are starting to balk at certificates with long lifetimes and are making it harder to access content on servers that use them. As such, we are reducing the default lifetime for client-facing self-signed certificates from twenty years to one year.

When an installed certificate is nearing expiration, or if you need to replace it for some other reason, you can use the replace-certificate tool. If you choose to replace the certificate with a new self-signed certificate, that will also now have a default validity of one year. However, you can also generate a certificate signing request (CSR) so that you can get a certificate issued by a certification authority (CA), and we have updated that process, too.

When generating a CSR, you need to specify the subject DN for the desired certificate. The tool will list a number of attributes that are commonly used in subject DNs, but it previously only stated the subject DN should include at least the CN attribute. In accordance with CA/Browser Forum guidelines, we have updated this recommendation, and we now suggest including at least the CN, O, and C attributes in the requested subject DN. We have also updated the logic used to select host names and IP addresses for inclusion in the subject alternative name extension, including:

  • When obtaining the default set of DNS names to include in the subject alternative name extension, we no longer include unqualified names, and we will warn about the attempt to add any unqualified host names.
  • When obtaining the default set of DNS names to include in the subject alternative name extension, we no longer include names that are associated with a loopback address.
  • When obtaining the default set of IP addresses to include in the subject alternative name extension, we no longer include any IP addresses from IANA-reserved ranges (including loopback addresses or those reserved for private-use networks) and will warn about attempts to add any such addresses.

New Administrative Alert Types

We have defined new types of administrative alerts that can be used to notify administrators of certain events that occur within the server. These include:

  • We have added a new admin alert account status notification handler. If this is enabled and associated with a password policy, then the server can generate an administrative alert any time one of the selected account status notification events occurs. This is primarily intended for use with high-value accounts, like root users or topology administrators. For example, you can have the server generate an alert if a root user’s password is updated or if there are too many consecutive failed authentication attempts. This account status notification handler will use a different alert type for each type of account status notification (for example, it will use an alert type of password-reset-account-status-notification for the alert generated if a user’s password is reset by an administrator).
  • We have added a new privilege-assigned alert type that will be raised whenever any privileges are assigned. This includes creating a new entry with one or more explicitly defined privileges, updating an existing entry to add one or more explicitly defined privileges, creating a new root user or topology administrator that inherits a default set of root privileges, updating the default set of root privileges, or creating a new virtual attribute that assigns values for the ds-privilege-name attribute.
  • We added a new insecure-request-rejected alert type that will be raised if the server rejects a request received over an insecure connection as a result of the reject-insecure-requests global configuration property.

Monitor Improvements

We have updated the system information monitor entry to restrict the set of environment variables that it may contain. Previously, this monitor entry would include the names and values of all environment variables that are available to the server process, but this has the potential to leak sensitive information because some deployments use environment variables to hold sensitive information (for example, credentials or secret keys used by automated processes). To avoid leaking this information to clients with permission to access server monitor data, we now only include information about a predefined set of environment variables that are expected to be useful for troubleshooting purposes and are not expected to contain sensitive information.

We have also added a new isDocker attribute to the server information monitor entry to indicate whether the server is running in a Docker container.

The JVM memory usage monitor entry has been updated to fix an issue that could prevent it from reporting the total amount of memory held by all memory consumers (that is, components of the server that may be expected to consume a significant amount of memory), and to fix an issue that could cause it to use an incomplete message for consumers without a defined maximum size. We have also added a memory-consumer-json attribute whose values are JSON objects with data that can more easily be consumed by automated processes.

We have updated the status tool to optimize some of the searches that it uses to retrieve monitor data.

Identity Mapper Improvements

Identity mappers can be used to map a username or other identifier string to the entry for the user to which it refers. We have made the following identity mapper-related improvements in the server:

  • We have updated the exact match and regular expression identity mappers to make it possible to include only entries that match a given filter.
  • We have added an aggregate identity mapper that can be used to merge the results of other identity mappers with Boolean AND or OR combinations.
  • We have added new identity mapper instances in the out-of-the-box configuration for matching only root users, only topology administrators, and either root users or topology administrators.

Uniqueness Control Improvements

The uniqueness request control may be included in an add, modify, or modify DN request to request that the server should not allow the operation if it would result in multiple entries with the same value for a specified attribute (or a set of values for a set of attributes). It operates in two main phases:

  • In the pre-commit phase, the server can search for entries that already exist and conflict with the requested change. If a conflict is found in the pre-commit phase, then the operation will be rejected so that it is not allowed to create the conflict.
  • In the post-commit phase, the server can once again search for conflicts. This can help detect any conflicts that may have arisen as a result of multiple changes being processed concurrently on the same or different servers.

In the 8.2.0.0 release, we have added two enhancements to our support for the uniqueness request control:

  • It is now possible to indicate that the server should create a temporary conflict prevention details entry before beginning pre-commit processing. This is a hidden entry that won’t be visible to most clients, but that can make the server more likely to detect conflicts arising from concurrent operations so that they can be rejected before the conflict arises. While this dramatically increases the chance that concurrent conflicts will be prevented, it does incur an additional performance penalty, and it is disabled by default.
  • It is now possible to indicate that the server should raise an administrative alert if a uniqueness conflict is detected during post-commit processing. If a conflict is identified in the post-commit phase, then it’s too late to prevent it, but raising an alert at least brings the conflict to the attention of server administrators so they can take any corrective action that may be necessary. This is enabled by default. A client-side example follows this list.
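
Here is a rough client-side sketch using the UnboundID LDAP SDK for Java. The two setter names for the new options are my assumption based on the descriptions above, so verify them against the LDAP SDK javadoc; the connection details and entry contents are placeholders:

import com.unboundid.ldap.sdk.AddRequest;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPResult;
import com.unboundid.ldap.sdk.unboundidds.controls.UniquenessRequestControl;
import com.unboundid.ldap.sdk.unboundidds.controls.UniquenessRequestControlProperties;
import com.unboundid.ldap.sdk.unboundidds.controls.UniquenessValidationLevel;

public final class UniqueUIDAddExample
{
  public static void main(final String[] args)
         throws Exception
  {
    try (LDAPConnection conn = new LDAPConnection("localhost", 389))
    {
      conn.bind("cn=Directory Manager", "password");

      // Require uid values to be unique across all subtree views, both
      // before and after the change is applied.
      final UniquenessRequestControlProperties props =
           new UniquenessRequestControlProperties("uid");
      props.setPreCommitValidationLevel(
           UniquenessValidationLevel.ALL_SUBTREE_VIEWS);
      props.setPostCommitValidationLevel(
           UniquenessValidationLevel.ALL_SUBTREE_VIEWS);

      // Assumed setter names for the two new options described above.
      props.setCreateConflictPreventionDetailsEntry(true);
      props.setAlertOnPostCommitConflictDetection(true);

      final AddRequest addRequest = new AddRequest(
           "dn: uid=test.user,ou=People,dc=example,dc=com",
           "objectClass: top",
           "objectClass: person",
           "objectClass: organizationalPerson",
           "objectClass: inetOrgPerson",
           "uid: test.user",
           "givenName: Test",
           "sn: User",
           "cn: Test User");
      addRequest.addControl(
           new UniquenessRequestControl(true, "uid-uniqueness", props));

      final LDAPResult result = conn.add(addRequest);
      System.out.println("Add result: " + result.getResultCode());
    }
  }
}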

Improved Directory Proxy Server SASL Support

We have updated the Directory Proxy Server to provide better support for the PLAIN, UNBOUNDID-DELIVERED-OTP, UNBOUNDID-TOTP, and UNBOUNDID-YUBIKEY-OTP SASL mechanisms.

Previously, the Directory Proxy Server attempted to process these operations in the same way as the backend Directory Servers. It would retrieve the target user’s entry from the backend server, including the encoded password and any additional data needed in the course of processing the bind (for example, the user’s TOTP secret when processing an UNBOUNDID-TOTP bind). However, this would fail if the backend server was configured to block external access to this information. Now, the Directory Proxy Server simply forwards the bind request to the appropriate backend server for processing, which eliminates the potential for problems in environments in which the Directory Proxy Server cannot access all of the necessary data in the target user’s entry.

Ping Identity Directory Server 8.1.0.0

We have just released the Ping Identity Directory Server version 8.1.0.0. It’s a pretty big release with several new features, enhancements, and bug fixes. You can keep reading (or see the release notes) for a pretty comprehensive overview of what’s included, but here are the highlights:

  • Composed attributes, which are like constructed virtual attributes, except that their values are computed when the entry is created or updated, and they can be indexed for much faster searches.
  • A new JSON-formatted virtual attribute that exposes all kinds of password policy state information.
  • Alternative failure lockout actions, like delaying the bind response after too many failed authentication attempts instead of actually locking the account.
  • Support for SASL integrity and confidentiality when using GSSAPI.
  • Authentication support for the file servlet, and a default instance that allows administrators to access files below the server instance root over HTTPS.
  • It’s easier to run collect-support-data and manage-profile generate-profile remotely without needing command-line access to the server.
  • Improvements to the character set and attribute-value password validators, and significant updates to the banned password lists used by the dictionary password validator.
  • Several command-line tool improvements.
  • Performance improvements.

New Features

We added a new composed attribute plugin, which allows the server to generate values for an attribute from a combination of static text and the values of other attributes in the same entry. Composed attributes behave much like constructed virtual attributes, but the composed values are computed when the entry is created or updated, and they are stored in the database. This means that the values can be indexed, and it is possible to compose values for attributes that are required by one or more of the entry’s object classes. A new “populate composed attribute values” task allows you to generate composed values for entries that already existed at the time the composed attribute was enabled.
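
For example, a plugin that composes a displayName value from givenName and sn might look something like the following (the plugin type and property names are my assumptions, modeled on constructed virtual attributes; consult dsconfig for the real names in your version):

dsconfig create-plugin \
     --plugin-name "Composed displayName" \
     --type composed-attribute \
     --set enabled:true \
     --set attribute-type:displayName \
     --set "value-pattern:{givenName} {sn}"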

We added support for a new ds-pwp-state-json virtual attribute whose value is a JSON object with a comprehensive overview of the associated user’s password policy state and related configuration from their password policy. The UnboundID LDAP SDK provides enhanced support for extracting data from this virtual attribute, but any JSON API should be able to parse the value.
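
As a quick illustration, a client could retrieve and parse the virtual attribute along these lines with the UnboundID LDAP SDK for Java (a minimal sketch; the connection details, credentials, and entry DN are placeholders):

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.SearchResultEntry;
import com.unboundid.util.json.JSONObject;

public final class PasswordStateJSONExample
{
  public static void main(final String[] args)
         throws Exception
  {
    try (LDAPConnection conn = new LDAPConnection("localhost", 389))
    {
      conn.bind("cn=Directory Manager", "password");

      // Request the virtual attribute by name; it is not returned
      // unless explicitly requested.
      final SearchResultEntry entry = conn.getEntry(
           "uid=test.user,ou=People,dc=example,dc=com", "ds-pwp-state-json");
      if ((entry != null) && entry.hasAttribute("ds-pwp-state-json"))
      {
        // The LDAP SDK's PasswordPolicyStateJSON class offers typed
        // accessors; here we just parse and print the raw JSON object.
        final JSONObject state =
             new JSONObject(entry.getAttributeValue("ds-pwp-state-json"));
        System.out.println(state.toString());
      }
    }
  }
}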

We updated the password policy to add support for alternative failure lockout actions. As an alternative to completely locking a user’s account after too many failed authentication attempts, it is now possible to delay the responses to subsequent bind operations as a way of imposing a rate limit on authentication attempts. It is also possible to use a “no-op” action that does not have any client-observable effect but can be used to make administrators aware of the issue through the server’s support for account status notifications.

We updated our support for the GSSAPI SASL mechanism to include support for the auth-int and auth-conf quality of protection (QoP) values. We previously supported GSSAPI for authentication only, but it is now possible to sign or encrypt all subsequent communication on the connection with a key negotiated during the authentication process.

We updated the file servlet to support HTTP basic authentication. If authentication support is enabled, you can optionally restrict access by group membership or with the new file-servlet-access privilege (which is included in the set of default privileges that can be automatically inherited by root users or topology administrators). The server is now configured with an instance of this file servlet at /instance-root that provides access to files within the server instance root to users with the file-servlet-access privilege. This servlet provides more convenient access to server files for instances running in containers or other environments where filesystem access is not readily available.

Enhancements

We made several improvements to the collect-support-data tool. They include:

  • The server now supports an extended operation that a remote client can use to invoke the collect-support-data tool on the server and stream the resulting archive back to the client. This can be especially useful if the server is running in a container or another kind of environment without convenient command-line access. This extended operation can be requested by adding the --useRemoteServer argument to the command line (an example invocation follows this list), or it can be used programmatically through the UnboundID LDAP SDK for Java.
  • The server now allows running collect-support-data as either a one-time task or a recurring task. The recurring task allows the server to automatically generate support data archives at regular intervals. The UnboundID LDAP SDK for Java has been updated to allow you to create an instance of the task programmatically.
  • The tool now offers an --outputPath argument that allows you to specify the path for the resulting support data archive. The path can target either a file or a directory, and if you specify a directory, then the file will be created in that directory with a name that the tool generates.
  • If the Delegated Admin application is configured in the server, collect-support-data will now capture its config.js configuration file, its version file, and any files used to create custom UI fields.
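
For example, a remote invocation might look like the following (the host name, credentials, and output path are placeholders):

collect-support-data --useRemoteServer \
     --hostname ds.example.com --port 636 --useSSL \
     --bindDN "cn=Directory Manager" --promptForBindPassword \
     --outputPath /tmp/support-data.zip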

We made several improvements to the manage-profile tool, including:

  • The server now allows running manage-profile generate-profile as either a one-time task or a recurring task. The recurring task allows the server to generate an up-to-date profile at regular intervals. The UnboundID LDAP SDK for Java has been updated to allow you to create an instance of the task programmatically.
  • The manage-profile generate-profile subcommand now offers a --zip argument that will cause it to package the generated server profile in a zip file.
  • The manage-profile generate-profile subcommand now excludes the contents of the server’s bak and ldif directories by default, which can make the resulting server profiles much smaller.
  • We fixed an issue that could prevent the manage-profile replace-profile subcommand from creating new local DB backends through dsconfig batch files.
  • We fixed an issue that could prevent the manage-profile replace-profile subcommand from correctly exporting and re-importing data from a server with multiple backends.
  • We fixed an issue that could cause the server to warn about unexpected offline configuration changes the first time it was started after running manage-profile setup or manage-profile replace-profile.
  • We updated the manage-profile replace-profile subcommand so that it always requires the profile being applied to include a license file, which makes it easier to install a new license when updating the server to a new version.
  • We updated the manage-profile replace-profile subcommand so that it will check for any encryption-related arguments in the setup-arguments.txt file. If they are present, it will export the data to LDIF before applying the new profile, and it will re-import it after applying the profile.
  • We fixed an issue in which manage-profile replace-profile could fail to update recurring task chains.
  • We fixed an issue in which manage-profile replace-profile would not allow you to enable the LDAP changelog backend.

We made several updates to our SCIM support, including:

  • The server now allows you to join separate LDAP entries so that they are returned as a single SCIM 2 resource object.
  • We updated our support for SCIM 2 PATCH operations so that they now require the schemas request attribute to be “urn:ietf:params:scim:api:messages:2.0:PatchOp” as per RFC 7644.
  • We updated SCIM 1.1 to use JSON-formatted responses by default when the request does not specify the expected content type.

The server now uses the 5.1.0 release of the UnboundID LDAP SDK for Java. In addition to several behind-the-scenes improvements, this also includes several improvements that are reflected in command-line tools, like:

  • Better default certificate trust settings for many of the tools that are provided with the server. In most cases, if you want to use a secure connection but don’t specify any trust-related arguments, the tool will automatically trust the certificates for any server in the topology.
  • Many of the tools that offer an interactive mode now use a more streamlined flow for obtaining the information needed to connect and authenticate to the server. The flow prefers secure communication over insecure communication, and it can read the server configuration to determine the default port to suggest for the connection.
  • We updated ldapsearch to improve its usefulness in shell scripts. It is easier to extract attribute values to assign to script variables, and you can optionally have the tool exit with an error if the search completes successfully but does not return any entries.
  • We made several updates to the summarize-access-log tool so that it can now report on a lot of additional things like the use of TLS protocols and cipher suites, the most common authentication and authorization DNs, and information about the most expensive or biggest searches.

We improved performance for index-related processing when importing data from LDIF.

We improved the server’s performance when updating a composite index key that matches a very large number of entries.

We added support for caching password policies defined in user data rather than in the configuration, which can improve performance for password policy-related processing for entries making use of those policies.

We updated the character set password validator to make it possible to indicate that passwords must include characters from at least a specified number of character sets. For example, if you define sets that include lowercase letters, uppercase letters, digits, and symbols, you can require passwords to contain characters from at least three of those sets.
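
For example, a validator requiring characters from at least three of four defined sets might be configured along these lines (the property names, particularly minimum-required-character-sets, are my assumptions; check dsconfig for the exact names):

dsconfig create-password-validator \
     --validator-name "Three of Four Character Sets" \
     --type character-set \
     --set enabled:true \
     --set allow-unclassified-characters:true \
     --set character-set:0:abcdefghijklmnopqrstuvwxyz \
     --set character-set:0:ABCDEFGHIJKLMNOPQRSTUVWXYZ \
     --set character-set:0:0123456789 \
     --set "character-set:0:!@#$%^&*()-_=+" \
     --set minimum-required-character-sets:3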

We updated the attribute-value password validator to make it possible to specify a minimum substring length when checking to see if the password contains the value for any other attribute in the user’s entry. This can help avoid problems with entries that have attributes with short values (especially values that are just one or two characters long).

We added a new “Replication Purge Delay” gauge that can help prevent administrators from setting a replication purge delay that is too low. If a server instance is offline for longer than the purge delay, it can be unable to re-join the replication topology when it is started because it missed changes that are no longer available in any of the other replicas.

We improved the logic that the server uses to automatically select an appropriate set of TLS cipher suites. It now does a better job of prioritizing the order for the cipher suites it selects, including preferring TLSv1.3-specific suites if they are available. This logic will also be used when creating LDAP client connections in command-line tools or server components that establish secure connections to other servers. This automatic selection can still be overridden by explicitly specifying the set of TLS cipher suites that should be used.

We updated the create-systemd-script tool so that it generates a forking service file, which is a better fit for the server because the process ultimately used to run the server is different from the start-server script used to launch it.

We updated the installer to require a minimum Java heap size of 768 megabytes when setting up the server.

We improved the logic used to automatically select the cache size for the replication database. The previously selected cache size could be too small under certain circumstances, which could cause the replication database to have an unnecessarily large on-disk footprint.

We updated the status tool to use more efficient search requests when retrieving replication state information.

We changed the behavior of the bypass-pw-policy privilege. It could previously allow a user to exempt themselves from certain password policy restrictions, but it now applies only to operations against other users. A user with this privilege will be permitted to do the following:

  • Set a pre-encoded password for another user, even if that user’s password policy does not allow pre-encoded passwords.
  • Set a password for another user that does not satisfy all of the password validators in that user’s password policy.
  • Set a password for another user that is already in that user’s password history.

We updated the server to add an option to enter lockdown mode and report itself as unavailable if an error occurs while attempting to write to a log file.

We added a consent REST API that allows users to create and store consent records, and to search for consents that have been granted to them.

We updated the commonly-used-passwords.txt file to include lots of additional values, especially from studies released at the end of 2019. We also updated wordlist.txt to add many additional English words, as well as words from several other languages. Both of these files can be used by the dictionary password validator to reject passwords that are likely to be guessed by attackers.

We added a global ACI that allows clients to use the pre-read and post-read controls by default. The server will only process these controls if the requester has permission to perform the associated write operation, and it will only include attributes in the pre-read or post-read entry that the client has permission to access.

We added an “--addBaseEntry” argument to dsreplication enable. If provided, the server will create the base entry in the target backend if it does not already exist. The base entry must be present in the backend when enabling replication for that backend.

We updated the general monitor entry to include locationName and locationDN attributes that can be used to determine the server’s location.

We updated the server so that it will log a warning if it is running on a Linux system with a memory control group that may allow portions of the process memory to be swapped out to disk.

We updated the HTTP connection handler to make it possible to require clients to present their own certificates to the server.

We added a new HTTP Processing (Percent) gauge that can be used to help monitor the server’s capacity for processing additional HTTP requests.

We updated the server to make information in the general monitor entry (that is, the “cn=monitor” entry itself) available over JMX. Previously, the server exposed information about monitor entries that exist below the general monitor entry, but not the general monitor entry itself.

We updated the LDAP external server configuration so that the use-administrative-operation-control property is only offered for specific types of server instances that support that control.

We updated the StatsD monitoring endpoint so that it no longer uses spaces, commas, or colons in metric names; those characters are now replaced with underscores. Single and double quotes are removed from metric names entirely.

We updated the Server SDK to provide methods for retrieving the name or DN of the client connection policy from a ClientContext or OperationContext.

We updated the server to provide support for additional debug logging when invoking Server SDK extensions.

We updated the server so that it will not allow changes to data below “cn=Cluster,cn=config” if the cluster contains servers with different versions.

We updated the server so that it will now warn if there are multiple different versions of the same library in the server classpath.

Bug Fixes

We fixed an issue in which retrieving the “cn=version,cn=monitor” entry could cause the underlying JVM to leak a small amount of memory.

We fixed a potential memory leak that could arise when processing SCIM requests.

We fixed an issue that could prevent changing the password for a topology administrator unless their password policy was configured to allow pre-encoded passwords.

We fixed an issue that could cause mirrored subtree polling to create unnecessary files in the server’s configuration archive.

We fixed an issue that could prevent the server from generating certificates for systems with hostnames containing non-ASCII characters.

We fixed an issue that could interfere with the ability to install custom extensions that require additional libraries.

We fixed a rare issue that could cause a delay during TLS handshake processing.

We fixed an issue that could cause the server to raise an administrative alert about an uncaught exception when the server tried to send data over a TLS-encrypted connection that is no longer valid.

We fixed an issue that could prevent certain tools (including collect-support-data, dsreplication, and rebuild-index) from being able to use a tools.properties file if that file was encrypted.

We fixed an issue that could delay the shutdown process if the server was configured to communicate with an unresponsive StatsD endpoint over TCP.

We fixed an issue that caused exec recurring tasks to ignore the configured working directory.

We fixed an issue that could prevent the server from generating an “account updated” account status notification for a modify operation that matched the associated criteria but did not include a password update.

We fixed an issue that could cause the value of the load-balancing-algorithm-name property to be lost when using the manage-topology add-server command to add an instance to an existing topology.

We fixed a replication issue that could arise when re-initializing a replica with data that is older than the data the replica previously held.

We fixed a replication issue that could cause initialization to stall if an error occurred while trying to send an internal replication message.

We fixed an issue that could cause the server to consider obsolete replicas when attempting to determine the total replication backlog.

We fixed an issue that could cause the server to incorrectly report how much memory it is using after performing an explicitly requested garbage collection.

We fixed a rare replication issue that could arise when upgrading from a pre-7.3 release in an environment where servers had been removed from the topology. The server could incorrectly detect a backlog that would never be reported as resolved.

We fixed an issue with the dsjavaproperties tool that allowed you to request both aggressive and semi-aggressive tuning options at the same time.

We fixed an issue that could cause the server to report a spurious error message when disabling the PingOne pass-through authentication plugin.

We fixed an issue that could cause the server to return an attribute with the name formatted in all lowercase characters if the attribute was present in an entry but not defined in the server schema, and if the client explicitly requested that attribute to be returned. The server will now format the attribute name using the same capitalization the client used when requesting that attribute.

We fixed an issue that could prevent certain command-line tools from reporting the correct error if a problem occurred and the server was configured with a custom result code map.

UnboundID LDAP SDK for Java 5.0.0, now available under the Apache License

The UnboundID LDAP SDK for Java is a fast, powerful, user-friendly, and completely free Java library for communicating with LDAP directory servers and performing other LDAP-related processing. We have just released version 5.0.0 of the LDAP SDK, and it is available for download from GitHub and SourceForge, as well as from the Maven Central Repository. The release notes are available online at https://docs.ldap.com/ldap-sdk/docs/release-notes.html.

The most significant change in this new release is that the LDAP SDK is now available under the terms of the Apache License, Version 2.0, which is a very permissive OSI-approved open source license. Although it was already open source under the terms of the GNU GPLv2 and LGPLv2.1, the Apache License imposes fewer restrictions on how you can use the LDAP SDK. You are no longer required to offer to redistribute the source code (even if you want to use a modified version), and there’s no longer any concern about whether you need to keep the LDAP SDK jar file as a separate component. The Apache License is well respected and is often seen as more compatible and easier to use in non-open-source software than the GNU license, so we hope that this will make it easier to use in your applications, whether open source or proprietary. The LDAP SDK is still available for use under the terms of the GPLv2 and LGPLv2.1 (as well as the non-open-source UnboundID LDAP SDK Free Use License), but we recommend that new users consider using it under the Apache License.

Aside from adding the new license, we made several code changes in this release as well. They include:

  • The LDAP SDK offers an LDAPConnectionDetailsJSONSpecification class that allows you to define a JSON file with all of the settings needed to create and authenticate individual LDAP connections or connection pools. We’ve updated this class so that it’s now possible to indicate that when establishing a connection that is secured with SSL or StartTLS, the LDAP SDK should automatically trust any certificates signed by an authority in the JVM’s default set of trusted issuers. This was already the default behavior if you didn’t provide your own trust store (or choose to blindly trust all certificates, which isn’t recommended for production use), but it’s now possible to use this option in conjunction with a provided trust store so that it’s possible to trust a certificate either through that trust store or through the JVM’s default set of trusted issuers.
  • The KeyStoreKeyManager can be used to obtain a certificate from a key store file if one is needed during TLS negotiation. We have updated this class to provide an option to better validate that the key store can actually be used for this purpose with the settings that you provide. If you use this option and supply the alias of the certificate you wish to use, then the key manager will now verify that the alias exists in the key store, that it’s associated with a private key entry (as opposed to a trusted certificate entry, which only contains the public portion of a certificate and isn’t suitable for use if you need to present that certificate to the peer), and that all of the certificates in the chain are currently within their validity window. If you don’t specify a certificate alias, then the validation will make sure that the key store contains at least one private key entry in which all of the certificates in the chain are within their validity window.
  • The TrustStoreTrustManager can be used in the course of determining whether to trust a certificate presented by a peer during TLS negotiation. We have improved performance and concurrency for this trust manager by eliminating unnecessary synchronization that forced interaction with the trust store to be single-threaded.
  • We fixed an issue that could interfere with GSSAPI authentication if a JAAS login module configuration was loaded and cached by the JVM before the login attempt. In such cases, the cached configuration could be used instead of the one that was intended.
  • The LDAPDebuggerRequestHandler can be used to log detailed information about LDAP requests and responses that pass through an application using the LDAP SDK’s LDAPListener framework (including the in-memory directory server and the ldap-debugger command-line tool). We fixed an issue that could cause messages to be held up in an internal buffer rather than immediately written out as soon as they’re logged. In some cases, this could significantly delay the appearance of these messages or could prevent them from being written out at all if the amount of data to be logged was never enough to fill that internal buffer.
  • We added a new JSONAccessLogRequestHandler to the LDAPListener framework. This can log information about requests and responses as JSON objects, which are both human-readable and machine-parseable. While the existing AccessLogRequestHandler produces output that can be parsed programmatically to some extent, it is more optimized for human readability.
  • The LDAP SDK offers debug logging support that can be helpful in diagnosing problems whose cause may not otherwise be readily apparent. Previously, debug messages were logged in a form that was primarily intended to be human-readable rather than machine-parseable. They are now written in a JSON format that is both human-readable and machine-parseable.
  • The manage-certificates command-line tool provides a utility for interacting with certificate key and trust stores in the Java JKS format or the standard PKCS#12 format. When displaying detailed information about certificates in a key or trust store, the tool may not have been able to properly decode public key information for certificates with 384-bit elliptic curve public keys, and it also may not have been able to properly decode a subject alternative names extension that included one or more directoryName values. While it was still possible to display most of the information about the affected certificates, the updated version can now provide the full details about those elements.
  • The Ping Identity Directory Server includes a collect-support-data utility that can be used to gather a variety of information from a server installation that can be very useful for troubleshooting problems, tuning performance and scalability, and better understanding the environment in which the server is running. Previously, this utility could only be invoked by logging into the system on which the server instance is running and running the command-line tool. We have now added a couple of additional mechanisms for running the utility. It can now be invoked via an administrative task (either as an individual event that is requested by a remote client or as a recurring task that runs on a regular basis) that will create the resulting support data archive in a specified location on the system (which may be a shared filesystem for easier retrieval). It can also be invoked via an extended operation that will run the tool and stream its output and the resulting support data archive back to the client in the form of intermediate response messages. Further, although the logic for actually collecting all of this support information remains in the server, we have added the collect-support-data command-line tool to the LDAP SDK so that it is easier to invoke the tool against a remote server without needing to install the server software on the client system.
  • The Ping Identity Directory Server provides a monitor backend that authorized clients can use to obtain a wealth of useful information about the state of the server, and the LDAP SDK includes support for retrieving and parsing the information in these monitor entries. We have updated the LDAP SDK’s support for the general monitor (that is, the top-level “cn=monitor” entry) to make it easier to obtain information about the cluster with which the server is associated, the location of the server instance, and a unique identifier that was generated for the server when the instance was initially configured.
  • The LDAP SDK offers a Version class that provides version information for the LDAP SDK, including the version number and information about the repository (e.g., the repository URL and revision ID) from which the LDAP SDK source code was obtained. This information was previously only offered as public static final constants, but referencing these constants from third-party applications could lead to unexpected behavior thanks to a “feature” of the Java compiler that will directly embed the values of those constants (even if they come from a separate library) in the Java bytecode that it generates. This means that if your application references these LDAP SDK version constants and you compile it against one version of the LDAP SDK, then those version constants will be placed directly into the compiled bytecode. If you upgrade the LDAP SDK version that you use without recompiling your application (e.g., by just replacing the LDAP SDK jar file with a newer version), the code referencing the LDAP SDK version would still have the old values. To address this, we have updated the Version class to provide methods for obtaining the values of all the version constants. If you use these methods rather than referencing the constants directly, then you will always get the correct LDAP SDK version information even if you update the LDAP SDK without recompiling your application, as in the sketch after this list.
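
For instance (a minimal sketch; the two accessor names used here are assumptions, so confirm them against the Version class javadoc for your release):

import com.unboundid.ldap.sdk.Version;

public final class PrintLDAPSDKVersion
{
  public static void main(final String[] args)
  {
    // Method calls are resolved against whichever LDAP SDK jar is on the
    // classpath at runtime, rather than being baked into this class's
    // bytecode at compile time the way constant references would be.
    System.out.println(Version.getFullVersionString());
    System.out.println(Version.getNumericVersionString());
  }
}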

Ping Identity Directory Server 8.0.0.0

We have just released version 8.0.0.0 of the Ping Identity Directory Server, along with new releases of the related Directory Proxy Server, Data Synchronization Server, Metrics Engine, and Delegated User Admin products. The release notes include a comprehensive list of features, enhancements, and fixes, but here are some of the most notable changes included in the release:

  • We have expanded support for the manage-profile tool to include the Directory Proxy Server, Data Synchronization Server, and Data Governance Server products. This allows you to set up, update, or reconfigure a server using the information in a provided profile. The profile defines the configuration, schema, extensions, certificates, encryption settings, and all the other components needed to configure a server instance exactly the way you want it.
  • We have updated the Directory Proxy Server so that it can use the topology registry to automatically discover and start using Ping Identity Directory Server instances without needing to change the Directory Proxy Server configuration.
  • We have improved our support for integrating with third-party monitoring services like Splunk by updating the stats collector plugin to support sending data in StatsD format to a specified endpoint. We have also updated the periodic stats logger so that it supports generating JSON-formatted output. The former CSV output format is also still supported. And we have added a new “Status Health Summary” monitor entry that provides a summary of the server’s current assessment of its health, which especially simplifies monitoring with third-party tools over JMX.
  • We have updated the Directory Server so that it now supports SCIMv2 in addition to the existing SCIMv1 and Directory REST API options for REST-based access to directory data. Formerly, SCIMv2 was only available through the Data Governance Server.
  • We have added a new replace-certificate tool that makes it easier to replace a server’s listener or inter-server certificate. The tool offers a non-interactive mode that is suitable for scripting support, but it also has a full-featured interactive mode that can walk you through the process of obtaining and installing a new certificate. The interactive mode will also provide you with the necessary commands to achieve the same result in non-interactive mode.
  • We have dramatically improved our support for account status notifications. We have defined a couple of new notification types that can be raised whenever an entry is created or modified by a request that matches a given set of criteria. We have also defined many new properties that can be used in the notifications. And we have added a new multi-part email account status notification handler that can be used to send plain-text and/or HTML-formatted email messages whenever an appropriate event occurs within the server.
  • We have added a new password validator that leverages the Pwned Passwords service to make it easier to reject passwords that are known to have been compromised in data breaches.
  • We have added a new password storage scheme that uses the Argon2i password hashing algorithm, which was selected as the winner of a 2015 password hashing competition.
  • We have updated our support for the PBKDF2 password storage scheme so that it offers additional variants that leverage the 256-bit, 384-bit, and 512-bit SHA-2 digest algorithms. We have also updated the default salt length and iteration count values in accordance with NIST SP 800-63B recommendations.
  • We have improved the server’s support for generating passwords. We have added a new request control that can be included in add requests to have the server generate a password for the new entry and return it to the client in a corresponding response control. We have also added a new extended operation that can be used to request that the server generate one or more passwords that can be provided to the end user as new password suggestions when creating an account or changing a password.
  • We updated the Data Synchronization Server’s password sync agent for Active Directory so that it encodes passwords using a salted 256-bit SHA-2 digest rather than the previous salted SHA-1 digest. The SHA-1 digest can still be used if necessary for purposes of backward compatibility.
  • We updated the Data Synchronization Server’s create-sync-pipe-config tool to add support for using the PingOne for Customers service as a sync source or destination.
  • We updated Delegated Admin’s support for constructed attributes. Constructed attributes can now be made read-only, and they can also reference other constructed attributes. Constructed attribute values can now also be updated when any of their dependent attributes change.
  • We updated the HTTP external server configuration to make it possible to specify the alias of the certificate chain to be presented during mutual TLS negotiation.
  • We added a new JVM-default trust manager provider that can be used to automatically trust any certificate signed by one of the trusted issuers in the JVM’s default trust store.
  • We have added a new Server SDK API for sending email messages.
  • We updated the exec task to make it possible to specify the current working directory for the command that is being executed. The server previously always used the server root as the current working directory, and that is still the default if no alternate path is specified.
  • We updated the collect-support-data tool to add a --duration argument that will cause it to capture log content for the specified duration up to the current time.
  • We fixed an issue that prevented assured replication from being honored for requests received via SCIM or the Directory REST API.
  • We fixed an issue in which the restore tool might not have automatically restored all of the dependencies of an incremental backup.
  • We fixed an issue in which the Directory Proxy Server could incorrectly report a success result for an entry-balanced search operation in which all attempts in a backend set failed with a timeout.
  • We updated log file rotation listeners, including the summarize access log and copy log file listeners, so that they perform their processing in a background thread. This can help ensure that their processing does not temporarily block logging attempts on very busy servers.
  • We fixed an issue in which the verify-index tool could report spurious error messages when examining index keys containing multi-byte UTF-8 characters.
  • We fixed an issue in which escaped special characters in schema extensions may not be handled properly. This could cause unexpected or incorrect behavior in cases where those values are interpreted by the server (for example, in the X-VALUE-REGEX constraint in attribute type definitions).
  • We fixed an issue that could cause access log messages for bind and StartTLS operations to report the client connection policy that was previously in use for the connection rather than the new policy that was assigned as a result of the associated operation.

Password Policy Recommendations for the Ping Identity Directory Server

When using an LDAP directory server to authenticate users, the vast majority of those authentications will make use of a password. Even though the Ping Identity Directory Server supports multiple options for two-factor authentication, you’re still likely to use a password as one of those factors. As such, ensuring that you have a good password policy in place is an essential element of your server’s security configuration.

Obviously, the biggest risk when using password-based authentication is that someone will provide the correct credentials for an account that isn’t theirs, and will be able to do whatever the account owner can do. Some of the most common ways that this can happen are:

  • The account has a really simple, easily guessable password. Maybe they use the word “password” or some other common value. Maybe their password is the same as their username or email address. Maybe their password is the name of your application or service.
  • The account owner used the same username and password across multiple sites, and one of those other sites got breached. When this happens, attackers often try to use the breached credentials to log into other sites.
  • An attacker managed to get access to the encoded representation of a user’s password and was able to crack it through some means.

Some of these are things that are out of your control as a website owner. For example, no matter how careful you are with your own site, you can’t prevent some other site from getting breached and its credentials exposed. However, you can take steps to ensure that your site is better protected against these kinds of things. In this post, I’ll discuss options for creating a password policy that will help keep your site as safe as possible.

While some of the recommendations I provide here are specific to the Ping Identity Directory Server, there is a lot of generic advice in here as well, and that may be applicable to other types of directory servers.

Require Secure Communication

Most LDAP authentication schemes that involve a password send that password to the server in the clear, without any kind of encoding or transformation. There are some that do try to obscure the password (e.g., salted challenge-response authentication mechanisms), but they are typically garbage that actually dramatically weakens the security of your environment, and they should be avoided at all costs. All of the best password-based authentication mechanisms send the password in the clear, which means that you must ensure that all of the communication happens over a secure, encrypted connection.

The server offers a few options to help ensure that this is done. They are as follows, in order of strongest protection to weakest protection:

  1. Disable any connection handlers that allow insecure communication. If you only allow LDAPS and not unencrypted LDAP, then there is no chance that someone with the ability to observe the communication will be able to decipher it.
  2. Set the “reject-insecure-requests” property to “true” in the global configuration. While this will allow clients to establish insecure connections to the server, it will reject most requests issued over those connections until the client has used the StartTLS extended operation to convert that connection from an insecure one to one that is secure. However, this option isn’t as good as simply disabling all insecure communication because even though it ensures that clients won’t be permitted to authenticate with an unencrypted bind request, it can’t prevent the client from sending that request in the first place.
  3. Set the “require-secure-authentication” and “require-secure-password-changes” properties to “true” in the password policy configuration. These properties ensure that the server will reject any requests that attempt to authenticate or change passwords over an unencrypted connection. Like option #2 above, this won’t prevent the requests from being sent, but will merely reject them if they are. It also won’t protect other sensitive information that may be transferred over the connection. (Example dsconfig commands for options 2 and 3 follow this list.)
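
For reference, options 2 and 3 map to configuration properties that can be set with dsconfig, roughly as follows (the policy name shown is the out-of-the-box default; adjust it to match your deployment):

dsconfig set-global-configuration-prop \
     --set reject-insecure-requests:true

dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set require-secure-authentication:true \
     --set require-secure-password-changes:true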

Use a Strong Password Storage Scheme

When the Directory Server receives a password to store in a user entry, it encodes that password with a password storage scheme. This helps protect that password so that anyone who gains access to the entry (whether over LDAP, in a backup of the database, in an LDIF export, or in some other form) won’t be able to determine what the clear-text password really is. When the server receives a bind request that contains a clear-text password, it will use the password storage scheme and information contained in the encoded password to determine whether the provided clear-text password matches the one used to create the encoded password.

The server offers a number of different password storage scheme options that fall into several categories:

  • Some of them, like those that use the UNIX crypt algorithm or versions of the MD5 or SHA-1 digest algorithms, are only intended for legacy purposes like migrating already-encoded passwords from another data store. You definitely shouldn’t use these for new passwords unless you absolutely have to maintain backward compatibility with a legacy system for some period of time.
  • Some of them, like those that use AES or triple-DES, use reversible encryption to encode the passwords in a way that allows the server to obtain their original clear-text value. These should not be used unless you absolutely have to support a legacy authentication scheme (like CRAM-MD5 or DIGEST-MD5) that requires that the server be able to determine the clear-text representation of the password.
  • Some of them offer salted variants of more modern digest algorithms, like 256-bit, 384-bit, and 512-bit SHA-2 variants. These schemes are acceptable, especially if you have configured an appropriate set of password validators that require users to have strong passwords, but they are very fast, which means that an attacker can make a lot of guesses in a short period of time.
  • Some of them use algorithms that are intentionally designed to require a lot of CPU processing or memory access so that they take longer to compute, or so that it’s harder to compute a lot of them at the same time. We currently offer support for the PBKDF2, bcrypt, and scrypt schemes, and plan to add others in the future. These are the strongest options available, and they will definitely have a dramatic impact on how long it will take an attacker to brute-force a password, but they can also dramatically impact the performance of legitimate authentication attempts, and also the performance of an LDIF import that includes clear-text passwords that need to be encoded.

Ideally, you should encode passwords with the strongest scheme that you can tolerate based on your load. If you don’t need more than a few hundred authentications or password changes per second, then you should probably consider the “expensive” schemes to make it as difficult as possible for attackers that may get access to encoded passwords. But if your performance demands are such that you can’t afford enough servers to handle the authentication load with these expensive schemes, then at least use one of the strong modern digests.

Note that if you choose a particular scheme now, or if you have existing passwords encoded with legacy schemes, you aren’t stuck with them, and you don’t have to require all of your users to change their passwords to transition to stronger encodings. For that, we offer the deprecated-password-storage-scheme property in the password policy configuration. If a user has a password encoded with a deprecated scheme, the server will automatically re-encode it using the default scheme the next time they use that password to authenticate.
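
For example, to make PBKDF2 the default while transparently re-encoding passwords currently stored with salted SHA-1 on each user’s next successful authentication (the scheme names here are illustrative; match them to the storage schemes defined in your configuration):

dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set "default-password-storage-scheme:PBKDF2" \
     --set "deprecated-password-storage-scheme:Salted SHA-1"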

Require Strong Passwords

It’s important to ensure that passwords are encoded in a secure manner, but it’s even more important that the passwords are strong to start with. If a user has a password that can be easily guessed, then it won’t take long to crack it using even the strongest encoding.

The best practices for choosing secure passwords have changed over the last several years. Things that people used to think were good ideas have turned out to be not so hot. Some of the things that you shouldn’t do are:

  • Don’t require users to change their passwords on a regular basis for no reason. That’s just annoying to your users and doesn’t improve security in any meaningful way. In fact, it’s more likely to cause people to make bad decisions, like just keeping the same basic password but adding a counter to the end. You should only require users to change their passwords if you suspect that they may have been compromised, or if they’ve been reset by an administrator (e.g., because the user forgot what their previous password was).
  • Don’t impose a ridiculous upper limit on the length of a password. There’s just no good reason for it. All of the good password storage schemes that we provide generate the same encoded password length regardless of the size of the password being encoded, so you’re not saving any space by preventing long passwords. With all other things being equal, longer passwords are stronger than shorter ones, so you want to encourage people to choose long passwords. It is true that some algorithms may take longer to encode a really long password, so maybe don’t let people choose passwords longer than a few hundred characters, but there’s never a good reason to impose an upper limit of something like ten or twenty characters.
  • Don’t require passwords to have different classes of characters. It’s entirely possible to have a very strong password that is comprised entirely of lowercase letters, just as it’s entirely possible for a password containing a mix of lowercase and uppercase letters, digits, and symbols to be very weak. You certainly shouldn’t prevent people from using a mix of character types, but you also shouldn’t require it for no good reason.
  • Don’t impose ridiculously low limits on the number of times the same character may appear consecutively in a password. Long, randomly generated passwords are very strong, and yet it’s entirely possible that such passwords may have the same character appear two or three or four times in a row. It’s very frustrating to choose what is clearly a very strong password only to have it rejected for some stupid reason like this. If you want to prevent passwords comprised of repeated characters, like “aaaaaaaaaa”, then it would probably be better to require a minimum number of unique characters than to limit the number of times the same character may appear in a row.

So what should you do? Despite being a government document, NIST Special Publication 800-63B has some really good advice. And here are some good guidelines that include these and other recommendations (a sample validator configuration follows the list):

  • Don’t allow users to choose obvious or weak passwords. This includes commonly used passwords, dictionary words, and words related to the service that you’re operating. To help with this, the Ping Identity Directory Server offers a dictionary password validator that can reject attempts to use passwords in a given dictionary file, and we offer a couple of instances of that validator preconfigured with dictionary files to use with it: one with over 500,000 of the most commonly used passwords, and one with over 400,000 English words. We recommend that you also configure a validator with your own custom dictionary that includes at least words related to your organization and the products or services you provide, and you may also want to use dictionaries of non-English words. Also, we’ve got plans to provide additional options in this area in the near future.
  • Impose a minimum length for passwords (but not a maximum). If a password is too short, then it’s too easy to brute force. There are a couple of ways that you can do this in the Ping Identity Directory Server:

    • You can use the length-based validator to simply impose a minimum length. NIST 800-63B suggests at least eight characters, but to be honest, even a complex eight-character password can be cheaply brute-forced in a handful of hours.
    • Another option is to use the haystack validator, which uses the concept of password haystacks to evaluate a password’s complexity as a factor of both its length and the types of characters it contains. This is perhaps more complicated to explain to users, but it can do a better job of ensuring resistance to brute force attacks.
  • Prevent users from choosing passwords that are related to other information that you have about them. For example, you should not allow them to choose a password that matches their username, email address, telephone number, date of birth, etc. The Ping Identity Directory Server offers an attribute value password validator that can be used to ensure that passwords aren’t allowed to match (or optionally contain) the values of other attributes in the entry.
  • When users are changing their passwords, they should not be permitted to choose a new password that is too similar to their current password. For example, if their current password is “ThisIsMyStrongPassword1”, it’s probably a good idea to prevent them from choosing “ThisIsMyStrongPassword2” as their new password. To achieve this, the Ping Identity Directory Server offers a similarity validator that uses the Levenshtein distance algorithm to compare a proposed new password with the user’s current password (which would have to be provided as part of the password change, since we don’t recommend configuring the server to store passwords in a reversible form) and determine the minimum number of changes (characters added, removed, or replaced) needed to turn one into the other, rejecting the new password if that number is too small.
  • As noted above, it’s not a good idea to reject passwords just because they have the same character repeated a few times in a row, but it might not be a bad idea to require that they contain a minimum number of different characters. The Ping Identity Directory Server offers a unique characters password validator that can accomplish this.
  • Let the user know what the requirements are. The Ping Identity Directory Server offers a get password quality requirements extended operation that you can use to programmatically retrieve information about the validation that the server will perform, and it offers a password validation details control that you can use to determine which requirements were satisfied and which were not when trying to set a password. I wrote an earlier blog post about using these features.
  • Recommend the use of a password manager, and recommend using it to generate long random passwords. Certainly don’t do anything that would discourage password manager use, like preventing users from pasting text into fields on login forms, or interfering with things that might legitimately happen in long random passwords (like having the same character repeated a few times).
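
To make a few of these concrete, here is a sketch of some of the validators mentioned above (the validator names and values are illustrative, and the property names are my assumptions; the preconfigured dictionary validator instances described earlier already exist in a fresh install):

dsconfig create-password-validator \
     --validator-name "At Least 12 Characters" \
     --type length-based \
     --set enabled:true \
     --set min-password-length:12

dsconfig create-password-validator \
     --validator-name "At Least 5 Unique Characters" \
     --type unique-characters \
     --set enabled:true \
     --set min-unique-characters:5

dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set "password-validator:At Least 12 Characters" \
     --set "password-validator:At Least 5 Unique Characters"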

If your service imposes additional requirements on password quality that can’t be enforced with the above validators, then I would first recommend taking a hard look at whether they are actually good requirements. Lots of organizations have really dumb rules that either weaken security or at least don’t do anything to improve it while also annoying the end users. But if it turns out that you do have a legitimate need to impose additional types of requirements, then our Server SDK provides an API that allows you to create your own custom password validator implementations.

Require the Current Password for Password Changes

When a user wants to change their password, it’s a good idea to verify that it’s actually them and not someone who has gotten access to their account (e.g., someone using a shared computer that was left logged in). Further, if the user provides their current password when choosing a new password, it allows the server to perform additional types of validation for the new password that may not otherwise be available (e.g., making sure that the new password is not too similar to the current password).

The Ping Identity Directory Server offers a password policy setting to enforce this. If the password-change-requires-current-password property is set to true, then any user changing their own password will be required to provide their current password. This applies to both password changes that use a standard LDAP modify operation and those that use the password modify extended operation. The password modify extended operation already includes a field for the current password, so meeting this requirement for that type of operation is obvious. It’s less obvious for a standard LDAP modify operation, but the way to achieve that is to provide the clear-text representation of the current password in a delete modification, and the desired new password in an add modification, like:

dn: uid=test.user,ou=People,dc=example,dc=com
changetype: modify
delete: userPassword
userPassword: oldPassword
-
add: userPassword
userPassword: newPassword
-

Note that this does not apply to administrative password resets in which one user changes the password for another user.
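
On the extended-operation side, a change that supplies the current password might look like the following sketch with the UnboundID LDAP SDK (assuming an already-established, authenticated LDAPConnection named connection; the DN and passwords are illustrative):

// Classes from com.unboundid.ldap.sdk.extensions.
// Change a user's password with the password modify extended operation,
// supplying the current password so that the server can verify it and
// run validators that need the clear-text current password.
PasswordModifyExtendedRequest request = new PasswordModifyExtendedRequest(
     "uid=test.user,ou=People,dc=example,dc=com",  // user identity
     "oldPassword",                                // current password
     "newPassword");                               // desired new password
PasswordModifyExtendedResult result = (PasswordModifyExtendedResult)
     connection.processExtendedOperation(request);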

Enable a Password History

Password reuse is one of the most common ways for accounts to get breached. The biggest risk is when someone uses the same password across multiple sites: if one of those sites is breached and the passwords are obtained, then attackers can try the same credentials on other sites. However, it’s also a bad idea for a user to repeatedly use the same password on the same site.

Repeatedly using the same password on the same site is most commonly an issue if that site requires people to change their password, which, as already stated, is an ill-advised strategy. If you don’t enable password expiration, then people will be less likely to arbitrarily change their passwords, and therefore will be less likely to want to reuse one that they’ve chosen before.

Another common reason for someone wanting to reuse the same password is if they have accidentally locked their account by mis-typing their current password too many times. If this happens, then it’s almost certainly the result of a bad account lockout configuration, which is something that I’ll cover below.

Even if the most common reasons that a user would want to reuse a password are the result of ill-advised configurations, it’s still a good idea to prevent someone from repeating a password that they’ve already had in the past. The server already prevents a user from choosing a new password that is the same as their current password, but if you want to prevent users from repeating passwords they’ve had in the past, then the best way to do that is by enabling a password history. The password policy configuration offers a couple of properties to do this:

  • password-history-count — Specifies the maximum number of passwords to retain in the history, regardless of how long it’s been since the passwords were in use.
  • password-history-duration — Specifies the length of time to retain previous passwords in the history, regardless of how many of them have been used during that time.

In either case, the server will maintain an operational attribute with the encoded representations of the user’s former passwords, along with a timestamp indicating when the password was added to the history.

Note that if you do enable a password history, then you may also want to prevent users from changing their passwords too frequently, which you can do with the min-password-age property. This is especially true if you have configured a maximum history count because it’s a common trick for a user who really wants to reuse a previous password to change their password multiple times in quick succession until the password they want to reuse has been flushed from the history. But even though this trick is less effective if you configure a maximum duration rather than a maximum count, it’s still a good idea to prevent frequent password changes in that case too, just to avoid unnecessarily building up a large history.

A relatively short minimum password age, like a day or perhaps even an hour, should be enough of a deterrent to someone who wants to just blow out the password history so that they can reuse a password. Note, however, that the minimum password age only applies to self-changes and not administrative resets. If a user chooses a new password and then immediately forgets it so that they cannot authenticate, then an administrator will be able to reset the password even within the minimum age. And if force-change-on-reset is true, indicating that users are required to choose a new password after an administrative reset, then the minimum age will not interfere with that, either.
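
For example, a sketch of a configuration that retains the last ten passwords and enforces a one-day minimum password age might look like this (the duration syntax may vary slightly by server version):

dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set password-history-count:10 \
     --set "min-password-age:1 d"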

Don’t Allow Pre-Encoded Passwords

As previously noted, the server does not store passwords in the clear, but instead encodes all clear-text passwords that it is asked to store (ideally in a strong, non-reversible form). While it is technically possible to send the password to the server in a form that is already encoded, this is generally a very bad idea, for many reasons. Some of them include:

  • If a password is pre-encoded, the server can’t determine its clear-text representation, and therefore can’t verify that it meets the configured password quality requirements.
  • Because there are multiple ways of encoding a single password (e.g., using a different salt), there is no way to ensure that the new password isn’t just an alternate encoding for the user’s current password, or for another password in the user’s password history.
  • The Directory Server generally supports a wide range of applications. If applications pre-encode passwords before sending them to the server, then it makes it more difficult to migrate to a stronger encoding if that becomes necessary or desirable.

The Ping Identity Directory Server offers an allow-pre-encoded-passwords configuration property that controls whether it will accept pre-encoded passwords, and it has a value of false by default. We strongly recommend leaving this as the default.

We do understand that there may be legitimate cases in which it may be necessary to send pre-encoded passwords to the server. For example, if you’re synchronizing data between two sources, and if the data being synchronized includes pre-encoded passwords, then you might not have access to the clear-text password. In that case, rather than reconfiguring the password policy to allow any client to supply passwords in a pre-encoded form, a better alternative is to use the password update behavior request control in any such applications. One of the features that this control offers is a way to indicate that a pre-encoded password should be accepted on a per-change basis. The Ping Identity Synchronization Server (which is included at no additional charge with the Ping Identity Directory Server) makes use of this control for the synchronization that it performs.
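
With the UnboundID LDAP SDK, attaching that control to a modify operation might look something like the following sketch (these classes live in the SDK’s unboundidds packages, which are specific to the Ping Identity servers; the DN and the encoded value are illustrative placeholders):

// Allow a pre-encoded password for this one change without relaxing
// the server-wide allow-pre-encoded-passwords setting.
PasswordUpdateBehaviorRequestControlProperties properties =
     new PasswordUpdateBehaviorRequestControlProperties();
properties.setAllowPreEncodedPassword(true);

ModifyRequest modifyRequest = new ModifyRequest(
     "uid=test.user,ou=People,dc=example,dc=com",
     new Modification(ModificationType.REPLACE, "userPassword",
          "{SSHA256}pre-encoded-value"));  // placeholder, not a real encoding
modifyRequest.addControl(
     new PasswordUpdateBehaviorRequestControl(properties, true));
connection.modify(modifyRequest);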

Note that the value of the allow-pre-encoded-passwords property does not have any effect on LDIF import. The server will always permit pre-encoded passwords contained in an LDIF file that is loaded into the server with the import-ldif command or with the LDIF import task.

Don’t Allow Multiple Passwords

The two most common attribute types for storing passwords, userPassword (defined in RFC 4519) and authPassword (defined in RFC 3112), are both permitted to have multiple values. In theory, this means that a user could have multiple passwords. However, this is not a good idea. If a user has multiple passwords, then there is no way to control which one is used for any given authentication attempt, so all of them will be considered valid, and each of them represents a potential opportunity for the account to be breached.

Further, password policy state is not tied specifically to any password, but rather is maintained on a per-account basis. This means that if a user is allowed to have multiple passwords, then they may be able to exploit that to circumvent certain password policy functionality. For example, if password expiration is enabled (which I’ve already mentioned is a bad idea), then a user with two passwords could just keep changing one of them and continue using the other indefinitely. The same would apply to similar restrictions, like being forced to choose a new password after an administrative reset.

The Ping Identity Directory Server provides an allow-multiple-password-values configuration property to control this behavior, and it has a value of false by default. We do not recommend changing its value to true.

Configure Failure Lockout

If an attacker can get access to the encoded representation of a password, they can throw a lot of hardware at trying to crack it. But if they can’t get access to the encoded representation (and hopefully, that’s a very difficult thing to do), they can still try to crack a password by simply repeatedly trying to log in to an application that authenticates against the server. This is much slower, but it can still be quite effective for users with weak passwords.

You can make this a lot harder by limiting the number of login attempts that someone can make before the account is locked. If an account is locked, then it doesn’t matter what password is tried because any attempt will be rejected. The Ping Identity Directory Server offers support for two types of failure lockout:

  • Permanent lockout, in which you don’t specify a maximum lockout duration. In this case, the account lockout will remain in effect until an administrator resets the user’s password.
  • Temporary lockout, in which you do specify a maximum lockout duration (via the lockout-duration configuration property). In this case, the lockout will automatically be lifted after a period of time, but it can still be unlocked earlier by a password reset.

I would strongly recommend using the temporary lockout with a relatively short lockout duration (e.g., one minute). This is sufficient to severely limit the rate at which an attacker can make guesses, but it also ensures that the lockout is short enough that an administrator shouldn’t need to get involved in the case that a user accidentally locks their account but still remembers the correct password. However, I would also strongly recommend setting the lockout failure count (via the lockout-failure-count property) to be high enough that it’s very unlikely to be reached accidentally by merely fat-fingering the password. Ten failed attempts seems like a good value for this purpose.
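
That recommended configuration might look something like the following sketch:

dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set lockout-failure-count:10 \
     --set "lockout-duration:1 m"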

Note that the Ping Identity Directory Server offers an additional useful feature for helping to prevent accidental failure lockout. If the ignore-duplicate-password-failures property is set to its default value of true, the server will consider repeated attempts to use the same wrong password as just a single failed attempt. This can be helpful in cases where the password has been configured in an application, and that application hasn’t been updated after the user changes their password. If you’re concerned about this scenario, then you may also want to consider enabling password retirement, as described in a later section.

Configure Password Reset Constraints

The account’s owner should be the only one to know its password. If they forget their password and they need to have it reset, there are two basic options:

  • Have the password reset by a human administrator. In this case, the administrator can choose the password themselves, or they can allow the server to generate it for them (more on this later). Either way, the administrator will end up knowing the password, which means that it should only be usable for the purpose of allowing the user to set a password of their own choosing. The force-change-on-reset configuration property can be used to enable this, and you may also want to set a value for the max-password-reset-age property to limit how long that temporary password is valid.
  • Create an automated process that allows the user to reset their own password after providing reasonable proof of their identity. To help with this, the Ping Identity Directory Server offers a feature that we call password reset tokens. A password reset token is a single-use password that can be delivered to the user through some out-of-band mechanism (e.g., email message, text message, voice call, mobile push notification, etc.) and can be provided to the password modify extended operation to allow the user to choose a new password. See the DeliverPasswordResetTokenExtendedRequest class in the LDAP SDK for more information about this feature.
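
As a rough sketch with the UnboundID LDAP SDK (again using classes from the unboundidds packages; the DN is illustrative, and readTokenFromUser is a hypothetical helper standing in for however your application collects the token):

// Ask the server to generate a password reset token and deliver it to
// the user through a configured out-of-band mechanism.
DeliverPasswordResetTokenExtendedResult deliverResult =
     (DeliverPasswordResetTokenExtendedResult)
     connection.processExtendedOperation(
          new DeliverPasswordResetTokenExtendedRequest(
               "uid=test.user,ou=People,dc=example,dc=com"));

// Once the user supplies the token they received, it goes in the "old
// password" slot of a password modify extended request.
String tokenProvidedByUser = readTokenFromUser();  // hypothetical helper
PasswordModifyExtendedRequest redeemRequest = new PasswordModifyExtendedRequest(
     "uid=test.user,ou=People,dc=example,dc=com",
     tokenProvidedByUser,        // the delivered single-use token
     "newUserChosenPassword");   // the user's new password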

Use a Strong Password Generator

The server has the ability to generate passwords under certain circumstances. Some of them include:

  • When using the password modify extended operation. If the new password field of the request is left empty, the server can generate the new password and return it in the extended response. The password generator used for this purpose is specified using the password-generator property in the password policy configuration.
  • When using the deliver password reset token extended request, as described in the previous section. The password generator used for this purpose is specified using the password-generator property in the extended operation handler configuration.

We may also introduce additional methods for generating passwords in the future.
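
For example, here’s a sketch of using the password modify extended operation (via the UnboundID LDAP SDK, assuming an existing connection authenticated as an administrator) to have the server generate the new password during a reset:

// A null new password asks the server to generate one and return it
// in the extended response.
PasswordModifyExtendedRequest generateRequest = new PasswordModifyExtendedRequest(
     "uid=test.user,ou=People,dc=example,dc=com",
     null,   // no current password needed for an administrative reset
     null);  // null new password requests server-side generation
PasswordModifyExtendedResult generateResult = (PasswordModifyExtendedResult)
     connection.processExtendedOperation(generateRequest);
String generatedPassword = generateResult.getGeneratedPassword();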

When generating a one-time password, password reset token, or single-use token, the time frame for the generated password is inherently very limited (typically a matter of minutes). In such cases, it may be acceptable to have the password be relatively simple. For example, something like eight alphanumeric characters should be sufficient, and you may even feel comfortable with something simpler.

When using the password modify extended operation to perform an administrative reset, the time that a generated password may be valid can be limited by the max-password-reset-age property in the password policy configuration, but there is no such limit for a generated password used for a self-change. In these cases, you may want to ensure that the generated password is substantially stronger. By default, the Ping Identity Directory Server will generate a passphrase that is at least 20 characters long and consists of at least four randomly chosen words. This generated password will hopefully be something that is both memorable and easy to type, and will also serve as an example of the kind of strong passwords that you hope users choose for themselves (although it’s still a better idea to use a password manager in most cases, and to let it generate a strong random password).

Note that the password generator is not guaranteed to generate passwords that will satisfy the configured set of password validators, but the server will accept and use the generated password even if it would have been rejected if the user had chosen it for themselves. If you have eccentric password quality requirements, you may wish to customize the password generator to be more likely to create something that the server would accept.

Consider Using Password Retirement

The Ping Identity Directory Server offers a useful feature called password retirement that allows a user to change their password, but continue using their former password as an alternative to their new password for a limited period of time. This is helpful for cases in which the password might be used in multiple places (especially if it’s configured in an application, and even more especially if there might be multiple instances of that application), because it gives the user the opportunity to update all of the things that use the password without them failing immediately as soon as it is changed. I wrote an earlier blog post about password retirement, so I won’t repeat it here, but it’s definitely something to consider, at least for application accounts.

Consider Using Account Status Notifications

Another potentially useful password policy feature that the Ping Identity Directory Server offers is account status notifications. This allows the server to perform custom processing whenever certain events affect user accounts: for example, an account is locked as a result of too many failed attempts, an authentication attempt fails because the account is expired or has been unused for too long, or a password is changed or reset by an administrator.

The server includes support for a couple of different types of account status notification handlers out of the box: one that is capable of sending an email message to the end user and/or to administrators when such an event occurs, and one that simply logs a message about the event. Our Server SDK also provides support for creating custom account status notification handlers, so you can integrate with other systems that you might use in your environment. We’re also going to be substantially improving our account status notification handler subsystem in the near future.

One purpose of account status notification handlers is to provide a means of letting end users know about substantial events that affect their accounts. This is useful from a security perspective because if a user sees something that indicates something may be amiss, then they can inquire about it. But account status notification handlers can also serve as a kind of auditing mechanism for these kinds of events so that administrators are made aware of things that might suggest a problem or an attack. For example, if a large number of accounts are being locked out, or if the same account is repeatedly being locked out, then that suggests someone might be trying a password guessing attack.

Alternatives to Arbitrary Password Expiration

I’ve already pointed out on multiple occasions that arbitrary password expiration is not recommended. However, there may be legitimate reasons to want users to change their passwords that have nothing to do with simply having had the same password for too long.

For example, if you believe that one or more user accounts might have had their passwords exposed (whether in the clear or in encoded form), then it’s a very good idea to require them to choose a new password. The Ping Identity Directory Server offers a couple of features to help with this:

  • The password policy offers a require-change-by-time configuration property that allows you to require all users associated with that policy to change their passwords by a specified time. Failure to comply will cause their account to be locked until the password is reset by an administrator.
  • The password policy state extended operation provides a mechanism for manipulating the account state for a user or set of users. Using this extended operation, or the manage-account tool that provides a simple command-line interface to the operation, you could put a specified account or set of accounts in a “must change password” state that will require them to change their passwords the next time they authenticate. See my previous blog post about managing password policy state for more information.
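
As a sketch, forcing a password change for a single account with manage-account might look like the following (connection arguments omitted; the subcommand and argument names shown here are from memory and may vary by server version):

manage-account set-must-change-password \
     --operationValue true \
     --targetDN "uid=test.user,ou=People,dc=example,dc=com"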

If you’re considering password expiration as a means of ensuring that users will be required to authenticate every so often or their accounts will be locked, then we offer a better alternative to that: idle account lockout. If you update the password policy to enable last login time tracking (via the last-login-time-attribute and last-login-time-format configuration properties), then the server will maintain a record of the last time that each user successfully authenticated. If you also set a value for the idle-lockout-interval property, then a user who hasn’t authenticated in at least that length of time will be locked out of their account until the password is reset by an administrator.
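
A sketch of that configuration might look like this (the attribute, format, and interval values are illustrative; choose ones appropriate for your deployment):

dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set last-login-time-attribute:ds-pwp-last-login-time \
     --set last-login-time-format:yyyyMMdd \
     --set "idle-lockout-interval:90 d"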

If the reason for forcing a password change is because you want to upgrade the password storage scheme that you’re using, you can do that without actually requiring a password change. As noted above in the section on password storage schemes, you can simply make the desired scheme the new default and mark the old scheme as deprecated. The next time a user authenticates with a password encoded with the deprecated scheme, it will be automatically re-encoded using the new default scheme.
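
For example, here’s a sketch of migrating from salted SHA-1 to PBKDF2 (the scheme names should match the password storage scheme definitions in your configuration):

dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set default-password-storage-scheme:PBKDF2 \
     --add "deprecated-password-storage-scheme:Salted SHA-1"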

If you have other reasons that you were considering password expiration that aren’t covered by these options, then let us know what they are, and we’ll let you know whether there might be a suitable alternative.

Other Related Settings Outside the Password Policy

There are a few other settings that you might want to consider, even though they’re not strictly part of the password policy configuration. They include:

  • You should enable data encryption in the server so that data is not stored in the clear in the database. The best way to do this is to enable encryption when setting up the server, but if you need to do it after the fact, you can use the encryption-settings tool to manage the set of encryption settings definitions that are available in the server, then update the values of the encrypt-data, encryption-settings-cipher-stream-provider, encrypt-backups-by-default, and encrypt-ldif-exports-by-default global configuration properties. Enabling encryption can be done on the fly, and any subsequent write will cause the target entry to be encrypted, but you may still want to export the data to LDIF and re-import it to ensure that all existing data gets encrypted.
  • You may wish to configure the server to delay the response to a failed bind operation by a specified length of time (e.g., maybe a second). This can help slow down online password guessing attacks, and it can be used either in conjunction with or as an alternative to locking an account after too many failed attempts. This setting can be configured through the failed-bind-response-delay property in the LDAP connection handler configuration (see the sketch after this list).
  • You may want to configure the userPassword and authPassword attributes as sensitive attributes within the server, and ensure that they can never be retrieved over LDAP, even in encoded form, and even by root users. Check out my earlier blog post about sensitive attributes for more information on this subject.
  • Consider enabling one or more two-factor authentication mechanisms. While it’s still important to ensure that users have strong passwords, two-factor authentication adds yet another layer of protection that an attacker must overcome even if they do happen to learn a user’s password. I also wrote an earlier blog post about this topic.
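
For the failed-bind-response-delay setting mentioned above, a configuration sketch might look like this (the handler name should match the one in your configuration):

dsconfig set-connection-handler-prop \
     --handler-name "LDAP Connection Handler" \
     --set "failed-bind-response-delay:1 s"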

Sensitive Attributes in the Ping Identity Directory Server

The Ping Identity Directory Server offers a rich access control framework that can be used to help ensure that clients are only given the level of access that they need in the server. It’s a deny-by-default mechanism, meaning that a user doesn’t get access to something unless there’s a rule that grants it, and our default access control configuration is very restrictive. However, there may be cases in which access control might not be good enough.

For example, let’s say that you want to absolutely ensure that clients cannot retrieve encoded passwords from user entries. To help with this, the server does offer the following rule defined in the out-of-the-box set of global ACIs:

(targetattr="userPassword || authPassword")
(version 3.0;
acl "Prevent clients from retrieving passwords from the server";
deny (read,search,compare)
userdn="ldap:///anyone";)

ACIs that deny access take precedence over those that allow it, so the above rule will ensure that no client that is subject to access control evaluation will be able to retrieve either the userPassword or authPassword attribute, even if another rule explicitly or implicitly tries to grant that access.

However, there is one significant flaw with the above access control rule: it only applies to clients that are subject to access control enforcement. If an authenticated user has the bypass-acl or bypass-read-acl privilege, then the searches they request won’t be subject to access control evaluation, and therefore the entries returned in response to those searches won’t be pared down by the access control handler.

To address this limitation, the server does offer additional forms of protection that can apply even to clients that aren’t subject to access control restrictions. For example, client connection policies can be used to indicate which types of operations are allowed. And sensitive attributes allow you to impose restrictions around access to specified attributes even for privileged users.

A sensitive attribute configuration definition includes the following properties:

  • attribute-type — The names or OIDs of the attribute types to which the sensitive attribute definition applies. At least one attribute type must be specified in each sensitive attribute definition.
  • include-default-sensitive-operational-attributes — Indicates whether the sensitive attribute definition should also apply to certain operational attributes that might include values for the target attribute. At present, this includes the ds-sync-hist attribute, which may hold current or former values for attributes in the entry for the purposes of replication conflict resolution.
  • allow-in-returned-entries — Indicates whether the specified attribute types are allowed to be included in entries that are returned to the client. The value may be one of “allow” (in which case the attribute is allowed to be returned, as long as nothing else prevents it), “suppress” (in which case the attribute will never be returned, even if something else permits it), or “secure-only” (which behaves like “allow” over secure connections, but like “suppress” over insecure ones). If sensitive attribute values are suppressed, then they will be stripped out of entries before they are returned to the client.
  • allow-in-filter — Indicates whether the specified attribute types are allowed to be used in search filters. The value may be one of “allow” (in which case the server will permit searches that include a sensitive attribute type in the filter), “reject” (in which case the server will reject any search request with a filter that includes a sensitive attribute type), or “secure-only” (which behaves like “allow” over secure connections, but like “reject” over insecure ones).
  • allow-in-add — Indicates whether the specified attribute types may be included in add requests. The value may be one of “allow”, “reject”, or “secure-only”.
  • allow-in-compare — Indicates whether the specified attribute types may be included in compare requests. The value may be one of “allow”, “reject”, or “secure-only”.
  • allow-in-modify — Indicates whether the specified attribute types may be included in modify requests. The value may be one of “allow”, “reject”, or “secure-only”.

The server includes a few sensitive attribute definitions that are defined but not enforced by default. One of these is the “Sensitive Password Attributes” definition, which can be used to ensure that the userPassword, authPassword, and ds-pwp-retired-password attributes are never returned to clients, cannot be included in search filters or compare requests, and can only be added or modified over secure connections. There are similar definitions for TOTP shared secrets, and also for one-time passwords and other single-use tokens.

To create a new sensitive attribute definition, you can use the dsconfig command-line tool or the web-based administration console. For example, let’s say that you have an attribute named employeeSSN that holds the social security numbers for your employees, and that you wanted to ensure that those values could never be retrieved from the server but could only be targeted by LDAP compare operations, and only over secure connections. You also want to ensure that writes to that attribute are only allowed over secure connections. The following dsconfig command could be used to accomplish that:

dsconfig create-sensitive-attribute \
     --attribute-name "Employee Social Security Numbers" \
     --set attribute-type:employeeSSN \
     --set include-default-sensitive-operational-attributes:true \
     --set allow-in-compare:secure-only \
     --set allow-in-add:secure-only \
     --set allow-in-modify:secure-only \
     --set allow-in-returned-entries:suppress \
     --set allow-in-filter:reject

However, merely creating a sensitive attribute definition isn’t sufficient to ensure that it will be enforced. You also need to associate it with one or more client connection policies so that it will be enforced for clients assigned to those policies. The best way to do this is to configure the sensitive attribute in the global configuration so that it will apply across all client connection policies by default. You can do this with a dsconfig command like the following:

dsconfig set-global-configuration-prop \
     --set "sensitive-attribute:Employee Social Security Numbers"

There may be cases in which you want certain clients to be exempt from these restrictions (e.g., if you want to use the Ping Identity Synchronization Server to synchronize the Directory Server with some other data store). If such a need arises, you can create a client connection policy for that application and configure that policy to exclude the sensitive attribute restriction. You can do that with a command like:

dsconfig set-client-connection-policy-prop \
     --policy-name sync-server-policy \
     --set "exclude-global-sensitive-attribute:Employee Social Security Numbers"

Alternatively, you can associate a sensitive attribute definition with only a specific set of client connection policies so that it will not be enforced for other policies (using the sensitive-attribute property in the client connection policy configuration), but in most cases, it’s probably better to use the global configuration to enable it across all policies by default and exclude it only for a specific set of policies.