UnboundID LDAP SDK for Java 7.0.4

We have just released version 7.0.4 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes for this release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We added a “discard results” search result listener that can be used in cases where a search should be performed, but the actual matching entries and references aren’t needed (for example, if you only need to know the number of matching entries). A sketch illustrating this pattern appears after this list.
  • We added client-side support for a W3C trace context request control that can be included in requests sent to the latest versions of the Ping Identity Directory Server or ForgeRock Directory Services. This can be used to convey information for use in distributed tracing (e.g., via OpenTelemetry).
  • We improved debug logging when adding or removing servers from the blacklist used to temporarily avoid creating connections to a server when using the round-robin and fewest-connections server sets.
  • We updated the PropertyManager class to make it possible to cache property values for faster access with less contention. Caching is disabled by default, although you can enable it by specifying a maximum cache duration. You can also programmatically clear the cache and pre-populate the cache based on currently defined system properties and environment variables.
  • We improved performance and reduced contention when retrieving the values of environment variables from the JVM process.
  • We updated the documentation to include the latest revisions of draft-bouchez-scram-mcf and draft-codere-ldapsyntax in the set of LDAP-related specifications.
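
Here’s a quick sketch of the counting use case. The new built-in listener may have a different name than what I show here, so this example simply rolls its own no-op listener on top of the SDK’s long-standing SearchResultListener interface; the host, base DN, and filter are placeholders:

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.SearchRequest;
    import com.unboundid.ldap.sdk.SearchResult;
    import com.unboundid.ldap.sdk.SearchResultEntry;
    import com.unboundid.ldap.sdk.SearchResultListener;
    import com.unboundid.ldap.sdk.SearchResultReference;
    import com.unboundid.ldap.sdk.SearchScope;

    public final class CountMatchingEntries
    {
      public static void main(final String... args) throws Exception
      {
        // A listener that ignores entries and references as they arrive, so
        // that matching entries are never accumulated in memory.
        final SearchResultListener discardListener = new SearchResultListener()
        {
          @Override
          public void searchEntryReturned(final SearchResultEntry entry) {}

          @Override
          public void searchReferenceReturned(final SearchResultReference ref) {}
        };

        try (LDAPConnection conn = new LDAPConnection("ds.example.com", 389))
        {
          final SearchRequest request = new SearchRequest(discardListener,
               "ou=People,dc=example,dc=com", SearchScope.SUB,
               "(employeeType=contractor)");
          final SearchResult result = conn.search(request);

          // Only the count is retained; the entries themselves were discarded.
          System.out.println("Matching entries: " + result.getEntryCount());
        }
      }
    }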

Ping Identity Directory Server 10.3.0.0

We have just released version 10.3.0.0 of the Ping Identity Directory Server. See the release notes for a complete overview of changes, but here’s my summary:

Summary of Deprecated Functionality and Potential Compatibility Impacts

  • Removed support for Java 11. The server must now be run using Java 17 or Java 21.

Summary of New Features and Enhancements

  • Updated the Directory Proxy Server to add a new “forward-authorization-entry-control” authorization method for LDAP external servers. This can be particularly useful in entry-balanced topologies in which clients may need to process operations that target entries in backend sets that do not contain the requester’s entry, and when the existing authorization mapping mechanism is insufficient.
  • Added support for LDAP client connections passing through a software load balancer that adds an HAProxy PROXY protocol header (v1 or v2) to the communication. The source address and port contained in the PROXY protocol header will then be used for all subsequent processing for that client connection.
  • Updated the Directory REST API to add support for the soft delete, hard delete, undelete, and soft-deleted entry access controls.
  • Updated manage-profile replace-profile to make it possible to change the value of the --fips-provider argument between BCFIPS and BCFIPS2. This makes it possible to update an existing instance running in FIPS 140-2-compliant mode to use FIPS 140-3-compliant mode. It is still not possible to enable or disable FIPS compliance for an existing instance.
  • Updated the server’s default configuration so that it will not attempt to update the user’s recent login history in cases where the user’s account is unavailable (e.g., because it is locked or disabled, or because the user’s password is expired). This can help reduce the performance impact of login history tracking in cases where an authentication attempt cannot possibly succeed.
  • Improved the encoded password cache (which can dramatically improve performance of repeated authentication when using expensive password encoding schemes) to use an LRU algorithm instead of a FIFO algorithm when the cache becomes full and it is necessary to purge existing records.
  • Added an option to allow LDAP external servers to create their connection pools without any initial connections so that all of the pool’s connections will be created on demand. This can help make it faster to initialize components that interact with LDAP external servers, and it can also reduce contention that may result from concurrent attempts to authenticate multiple connections as the same user.
  • Updated the processing time histogram plugin to add an option to expose the histogram information in an alternate manner, using a separate attribute for each histogram bucket, and making it easier to parse the histogram bucket boundaries and the associated value.
  • Updated the file-based audit logger to make it possible to exclude virtual attribute values from delete records. By default, they will continue to be included in the regular audit log file, but they will now be excluded from the data recovery log. Excluding virtual attribute values can improve performance and reduce the resulting audit log size.
  • Added support for synchronizing the bypassMFA attribute to and from PingOne endpoints.
  • Added the ability to skip LDIF import processing when running manage-profile setup.
  • Updated the version and SSL context monitor entries so that they always include the fips-compliant-mode, fips-140-2-compliant-mode, and fips-140-3-compliant-mode attributes. They were previously only present when the server was actually running in a FIPS-compliant mode.
  • Exposed the values of the fips-compliant-mode, fips-140-2-compliant-mode, and fips-140-3-compliant-mode attributes in the “Version” monitor entry in the administration console’s Status section.

Summary of Bug Fixes

  • Fixed an issue in which the remove-defunct-server tool could leave a Synchronization Server topology in a state where it was not possible to add new servers.
  • Fixed the default behavior for the resync tool to avoid inadvertently removing unicodePwd values when synchronizing from Active Directory sources, and to avoid removing password values when synchronizing from PingOne sources.
  • Fixed an issue in which a SCIMv2 PUT operation could incorrectly remove some of the values of a multivalued complex attribute.
  • Fixed an issue that could prevent using the server in FIPS-compliant mode when running on Oracle Java 17 or later. This issue did not affect other OpenJDK variants.
  • Fixed an issue that could prevent the server from operating on RHEL on Java 17 or later if the operating system was configured to use FIPS-compliant mode.
  • Fixed issues in which a failed dsreplication initialize attempt could interfere with subsequent initialization attempts.
  • Fixed an issue that prevented the Synchronization Server from properly logging changes that only affected password policy state attributes.
  • Fixed an issue that could cause password synchronization to fail when attempting to synchronize changes from multiple Active Directory subdomains through multiple sync pipes.
  • Fixed an issue in which the monitor history plugin could incorrectly remove files earlier than expected when using the retain-files-sparsely-by-age configuration option.
  • Fixed an issue that could prevent a password reset performed by a user with the bypass-pw-policy privilege from honoring the password policy’s force-change-on-reset property.
  • Fixed an issue that prevented password resets using the password modify extended operation from being subject to replication assurance processing.
  • Fixed an issue that could cause changes to the modifiable password policy state plugin’s filter property to be ignored until the plugin was disabled and re-enabled, or until the server was restarted.
  • Fixed an issue in which the Directory REST API would reject attempts to use attribute values that violated the associated syntax even if the server was configured to permit such violations.
  • Fixed an issue in which the replication server did not properly honor the listen-on-all-addresses configuration property.
  • Fixed an issue in which the Delegated Admin application could cause a memory leak in the server.
  • Fixed an issue in which replace-certificate replace-listener-certificate didn’t honor the --trust-store-update-type argument.
  • Fixed an issue that prevented using the JVM-default trust store when replacing a listener certificate using the replace-certificate tool’s interactive mode.
  • Fixed an issue that prevented SCIM clients from altering the ds-pwp-modifiable-state attribute.
  • Fixed an issue with inconsistent id-attribute values in SCIM responses.
  • Fixed an issue that could cause a 404 response to a SCIM GET request.
  • Fixed an issue in which the Directory REST API could return a 500 response when attempting to replace values of a virtual attribute.
  • Fixed an issue in which the entry counter plugin may not have properly evaluated criteria based on a virtual attribute with require-explicit-request-by-name set to true.
  • Fixed an issue that could prevent altering the configuration for an enabled entry counter plugin.
  • Fixed an issue that prevented capturing connection pool debug messages in the LDAP SDK debug log.
  • Fixed an issue in which dsreplication enable did not properly honor the --noPropertiesFile argument.
  • Fixed an internal error that could arise upon restarting a server instance that had been configured with support for Delegated Admin.
  • Fixed an issue that prevented the Synchronization Server from properly persisting state information for third-party change detectors.
  • Fixed an issue in the Server SDK’s example LDAP sync destination plugin in which it did not properly handle processing for modify DN operations that didn’t affect the destination entry’s RDN.
  • Suppressed spurious warning messages that could be logged during server startup.

UnboundID LDAP SDK for Java 7.0.3

We have just released version 7.0.3 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes for this release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We fixed an issue in which the LDAP SDK did not properly handle certificates with a notBefore or notAfter timestamp that fell in the year 2049 if that timestamp was encoded with the antiquated UTCTime syntax (which only uses two digits to encode the year). It incorrectly used a year of 1949 instead of 2049.
  • We updated the ldifmodify tool so that it will report an error if any of the sourceLDIF, changesLDIF, or targetLDIF arguments refer to the same file. Previously, the tool would run, but could yield incomplete results if an input file was also used as an output file.
  • We updated the IP address argument value validator to improve performance and to catch additional types of malformed IPv4 addresses that were previously accepted due to leniency in Java’s InetAddress.getByName implementation.
  • We simplified and improved the toLowerCase, toUpperCase, and getBytes methods in the StaticUtils class. The former implementations were more efficient than the versions provided in the Java String class in older Java versions when primarily dealing with ASCII strings, but this is no longer the case in newer versions of Java where strings are backed by byte arrays rather than character arrays.
  • We updated client-side support for the Ping-proprietary transaction settings request control to make it possible to request that the server acquire a lock using a client-specified scope under a specified set of conditions. This allows more control in the event of lock conflicts in cases where the client is able to determine which operations are most likely to conflict with each other. For example, in a multi-tenant server, it may be useful to specify a scope that includes a tenant-specific identifier so that only operations associated with that tenant will be affected by the scoped lock.
  • We also updated the transaction settings request control to make it possible to override the conditions under which the server may attempt to acquire a single-writer lock. This was previously only controlled through the server configuration.
  • We improved error reporting in the dump-dns tool for use with the Ping Identity Directory Server.
  • We updated client-side support for the Ping Identity Directory Server’s version monitor entry to handle attributes used to indicate whether the server is running in FIPS 140-2-compliant or FIPS 140-3-compliant mode.
  • We updated the documentation to include the newest versions of the draft-bucksch-sasl-passkey, draft-bucksch-sasl-rememberme, draft-codere-ldapsyntax, draft-ietf-kitten-sasl-ht, draft-ietf-kitten-sasl-rememberme, and draft-schmaus-kitten-sasl-ht specifications.

Ping Identity Directory Server 10.2.0.0

We have just released version 10.2.0.0 of the Ping Identity Directory Server. See the release notes for a complete overview of changes, but here’s my summary:

Summary of Deprecated Functionality and Potential Compatibility Impacts

  • Java 11 support has been deprecated and will be removed in a future release. If you’re currently running on Java 11, we recommend upgrading to Java 17 or 21.
  • Java EE provides a framework for HTTP servlets and web applications, along with certain other functionality not directly included in Java SE. Oracle has decided that they will end their role in maintaining Java EE, and its support has been transferred to the Eclipse Foundation. Unfortunately, as part of doing so, Oracle has also revoked the right to use “javax” for base package names for all of those classes, so any code using those classes will need to be updated to replace “javax” package references with “jakarta”. Most of these changes are contained entirely within the server and won’t have any direct customer-facing impact, but some third-party extensions developed using the Server SDK may need to be updated. This primarily includes extensions that specifically deal with functionality formerly provided by Java EE (e.g., HTTP servlet extensions, web application extensions, OAuth token validators, etc.), but it may also apply to additional types of extensions that rely on Java EE functionality.
  • The SCIM version 2 API is also affected by the transition of Java EE functionality to the Eclipse Foundation and the need to use “jakarta” instead of “javax” as the base name for affected packages. As such, applications that rely on the SCIMv2 SDK may need to be updated to use the latest release of the SDK.
  • Support for SCIM version 1.1 has been deprecated and will be removed in a future release. We recommend upgrading to SCIM version 2.
  • Support for SNMP has been deprecated, both for accessing limited monitor data and for generating traps from administrative alerts. We recommend using one of the many alternatives we support, including JMX, Prometheus, or StatsD for access to monitor data, and JMX, SMS, or email messages for alert notifications.

Summary of New Features and Enhancements

  • Added the ability to run the server on a Java 21 JVM. Java version 17 is also fully supported, and Java 11 is currently supported but has been deprecated and will be removed in a future release.
  • Added support for running in a FIPS 140-3-compliant manner. [more information]
  • Added a cache for improving authentication performance when using expensive password storage schemes. [more information]
  • Added a new entry counter plugin. [more information]
  • Updated the Directory REST API to support making access control decisions based on OAuth scopes. [more information]
  • Dramatically improved bind performance in environments with a very large number of dynamic groups.
  • Updated the Synchronization Server to support synchronizing changes to a Ping Identity Directory Server for updating both a user’s password and their password policy state at the same time.
  • Added the ability to specify a proxy server when defining HTTP external servers in the configuration.
  • Added support for pausing database cleaning activity when creating a backup. In some cases, this can increase the speed and reduce the size of the backup.
  • Added a new db-on-disk-to-db-cache-size-ratio attribute to database environment monitor entries. Also, added a gauge to raise an alarm if the on-disk database size becomes many times larger than the in-memory cache size, which could lead to performance degradation.
  • Added support for caching the contents of key and trust stores for improved performance during TLS negotiation.
  • Updated the check-replication-domains tool to better distinguish between deleted and obsolete replicas.
  • Updated the dsjavaproperties tool to allow using the new --gcType argument to change the type of garbage collector used for the server.
  • Added the ability to use generational ZGC garbage collection when running on Java 21.
  • Updated collect-support-data to use the most recent monitor history file if monitor information is not obtained from LDAP.
  • Updated the Synchronization Server to use the get changelog batch extended operation as the default mechanism for discovering changes in the Ping Identity Directory Server.

Summary of Bug Fixes

  • Fixed an issue in which a Directory REST API error response could potentially allow an unauthorized user to determine whether a specified entry existed in the server.
  • Fixed an issue that could cause replication changes to be lost between locations when a remote gateway was in the process of starting or shutting down.
  • Fixed an issue that could cause the default topology admin user to be subject to the default password policy when upgrading the server via manage-profile replace-profile.
  • Fixed a potential memory leak that could occur in the Synchronization Server in certain failover states when using a Kafka destination.
  • Fixed an issue that could result in inconsistency in the metadata for a composite index record. This could cause the server to send errors in response to certain requests, and has the potential to prevent bringing the affected backend online.
  • Fixed an issue that could cause upgrade attempts to fail in servers in which the default userRoot backend had been removed.
  • Fixed an issue that prevented the server from starting when configured to use a third-party key manager provider created using the Server SDK.
  • Fixed an issue in which the Synchronization Server did not always properly encode spaces in HTTP URLs used when communicating with PingOne.
  • Fixed an issue in which an untrusted VLV index could interfere with the server’s ability to process certain kinds of searches.
  • Fixed an issue in which the server did not always properly use the configured substring-index-entry-limit value when maintaining substring attribute indexes.
  • Fixed an issue in which dsjavaproperties did not always properly handle changes to the path to the desired Java runtime.
  • Fixed an issue in which the Directory REST API may not use a configured alternative authorization identity when attempting to access data outside the requester’s backend set in an entry-balanced topology.
  • Updated the server’s default configuration to prevent going into lockdown mode as a result of missed replication changes from obsolete replicas or as a result of null CSNs.
  • Fixed an issue in which the HTTP connection handler’s response-header property was not properly used for certain kinds of error responses.
  • Fixed an issue in which the Directory REST API could incorrectly use less-than-or-equal-to matching when comparing JSON fields in cases where strict less-than matching had been requested.
  • Fixed an issue in which config-diff could report an error when attempting to compare configuration objects with the same name but different types.
  • Fixed an issue in which the Synchronization Server may not properly exclude entries in cases where a configured include-filter targeted a virtual attribute in a NOT component.
  • Fixed a potential null pointer exception that could be raised in the Synchronization Server in certain cases in which an operation failed with no additional information about the cause of that failure.
  • Fixed an issue that could prevent dsreplication enable from reporting a useful error message when it was unable to establish a connection to one of the server instances.
  • Fixed an issue in which isMemberOf values were not automatically updated for groups contained in a subtree that was moved or renamed by a modify DN operation. The server would have continued to use the former group DNs until it was restarted.
  • Fixed an issue that allowed the Directory Proxy Server to be configured with attribute mapping proxy transformations for attribute types that were not defined in the schema.
  • Fixed an issue in which the server could report an incorrect value for the ds-backend-entry-count attribute in the replicationChanges backend monitor entry if a sequence number counter rolled over after reaching its maximum value.
  • Fixed an issue that caused the server to incorrectly indicate that a restart was needed for a change to the LDAP connection handler’s ssl-certificate-nickname property to take effect.
  • Fixed an issue that would cause dsconfig or the admin console to suggest a malformed default value when creating a new dictionary-based password validator.
  • Reduced the number of error messages generated if the server lost connection to a Prometheus server.
  • Updated the server to begin suppressing repeated error log messages of the same type after 200 such messages had been logged. Previously, suppression didn’t kick in until 2000 messages had been logged.
  • Fixed an issue in which the server could log information about suppressing duplicate alert messages for alert types that had been disabled.
  • Fixed an issue in which the Synchronization Server could incorrectly report errors for all sync pipes when they were only relevant to a single pipe.
  • Fixed an issue in which the server could log an irrelevant error message if it was in the process of receiving mirrored topology data when the server began shutting down.
  • Fixed an issue with an error message that was generated if an HTTP connection handler could not use a configured key manager provider.

FIPS 140-3 Compliance Support

We have updated the Directory Server, Directory Proxy Server, and Synchronization Server to support operating in a FIPS 140-3-compliant manner using the recently released 2.x version of the Bouncy Castle FIPS-compliant library. The server already provided support for operating in a FIPS 140-2-compliant manner using 1.x versions of the Bouncy Castle library, and that is still an option, but you can now choose to use the newer version of the library when setting up an instance of the server.

If you wish to enable FIPS-compliant mode, use the --fips-provider argument when setting up the server (either when running the setup command in non-interactive mode or when using manage-profile setup), specifying a value of either “BCFIPS1” if you want to use the 1.x version of the Bouncy Castle library for FIPS 140-2-compliant behavior, or “BCFIPS2” if you want to use the 2.x version of the library for FIPS 140-3-compliant behavior. Previously, you could use a value of “BCFIPS” to request FIPS 140-2 compliance, and that is still supported, although that alias could potentially change to default to FIPS 140-3 compliance at some point in the future.
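
For example, a non-interactive setup in FIPS 140-3-compliant mode might look something like this. Only the --fips-provider argument is the point here; the remaining arguments are typical setup options whose exact names may vary by version, so treat this as a sketch:

    ./setup --cli --no-prompt --acceptLicense \
         --fips-provider BCFIPS2 \
         --ldapPort 389 \
         --rootUserDN "cn=Directory Manager" \
         --rootUserPasswordFile /path/to/rootpw.txt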

In general, the server’s behavior in FIPS 140-3-compliant mode should be essentially the same as it is in FIPS 140-2-compliant mode.

Encoded Password Caching for Expensive Schemes

Some of the Directory Server’s password storage schemes, including Argon2, PBKDF2, bcrypt, and scrypt, are intentionally designed to be expensive. In the event that an attacker manages to gain access to encoded user passwords, passwords encoded with one of those expensive schemes will require significantly greater resources to attack by brute force than passwords encoded with much faster schemes. However, the same holds true for legitimate authentication processing. The server has to do a lot more work to determine whether a bind request should succeed for users with expensively encoded passwords. This increases the length of time for individual authentication attempts, and also dramatically reduces the number of concurrent authentication attempts that the server can process.

To help address this problem, we have introduced a new type of cache that can be used to dramatically speed up authentication performance for users with expensively encoded passwords. Whenever a user attempts to authenticate with a password encoded with one of the aforementioned expensive password storage schemes, the server will check the cache to see if it contains a record for that encoded password. If not, then the server will perform the expensive processing needed to determine whether the provided password is correct. If it is correct, the server will encode the clear-text password provided in the bind request with a salted SHA-256 digest and put that in the cache along with the expensive encoding. The next time that user tries to authenticate, the server can retrieve that record from the cache and use the much faster SHA-256 processing to determine whether the provided password is correct.

Note that by default, the server uses PBKDF2 to encode the passwords for root users and topology administrators. As such, even if you’re not using these schemes for regular user accounts, this change can dramatically improve password-based authentication performance for those users. This is especially true for accounts used by applications, including those that the Directory Proxy Server or the Synchronization Server use to authenticate to the Directory Server, since they establish connection pools that can require authenticating multiple connections as that user at the same time.

A few notes about the security of the cache implementation:

  • The clear-text password is never stored in the cache. Instead, we store a cryptographically secure SHA-256 digest version of the password that has been combined with a securely generated 128-bit salt.
  • The contents of the cache are only stored in memory and are never stored in the database or otherwise persisted. There is also no mechanism for dumping the contents of the cache.
  • The encoded password cache only contains a mapping between the original expensive encoding of a password and the faster salted SHA-256 encoding of the same password. It does not include any information that links the password to a particular user entry.
  • Because the cache doesn’t do anything to associate an encoded password with the corresponding user, there is no risk that the cache could be used to allow the server to make authentication decisions using stale information. For example, if a user changes their password, the cache can’t be used to allow them to authenticate with their former password. Similarly, if a user’s account is locked or disabled, or if their password is expired, then the cache can’t be used to allow them to successfully authenticate.

By default, the server will cache up to 10,000 passwords using each of the expensive password storage schemes mentioned above. This can be customized using the encoded-password-cache-size property in the storage scheme configuration. If you have a lot of users with passwords encoded using one or more of these schemes, then you may wish to increase the cache size accordingly. Alternatively, if you want to completely disable this caching for some reason, then you can set the value to zero.
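
For example, to increase the cache size for the PBKDF2 scheme, you could use something like the following dsconfig command. The encoded-password-cache-size property is as described above, but treat the subcommand and scheme name as a sketch that you should verify against your own configuration:

    dsconfig set-password-storage-scheme-prop \
         --scheme-name PBKDF2 \
         --set encoded-password-cache-size:50000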

The Entry Counter Plugin

In many directory environments, there are a number of potentially expensive queries that are run on a regular basis for reporting purposes. For example, there may be queries to determine the total number of user accounts, and to identify how many of those are active versus inactive. The Directory Server already includes support for a “data security auditor” capability that can be used to identify accounts matching certain criteria, but that produces LDIF files on the server filesystem that then need to be processed in some way, and these kinds of queries are often only needed to determine the number of matching entries rather than to obtain the contents of those entries. To help meet this need, we have introduced a new entry counter plugin in the server.

The entry counter plugin will periodically iterate across entries in the server and count the number of entries matching specified sets of criteria. Each set of criteria may include things like the following (a hypothetical configuration sketch follows the list):

  • The base DN(s) of the subtrees in which to find matching entries.
  • A filter to use to identify matching entries.
  • An optional additional set of named filters that may be used to further break down matching entries into sub-categories. For example, you may have a set of criteria for identifying all users in the server, with sub-categories for accounts that are considered active (based on last authentication time), those that are considered inactive, and those that have never authenticated.
  • Optional warning and/or error thresholds that can be used to cause the server to raise an alarm if the number of matching entries reaches those values. For example, this can be used to notify administrators if the number of users in the server approaches the number allowed for in the product license.
  • A flag that indicates whether to track metrics about the size of matching entries, including the average, maximum, and minimum amount of space that encoded representations of the matching entries consume in the underlying database.
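
As a purely hypothetical illustration, configuring a counter for user accounts might look something like this. The plugin type and property names shown here are guesses rather than anything taken from the configuration reference, so check the documentation for the actual names:

    # Hypothetical sketch; the plugin type and property names are assumptions.
    dsconfig create-plugin \
         --plugin-name "Active Users Counter" \
         --type entry-counter \
         --set enabled:true \
         --set base-dn:ou=People,dc=example,dc=com \
         --set "filter:(objectClass=person)"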

Scope-Based Access Control in the Directory REST API

The Directory REST API provides a mechanism for HTTP-based clients to request operations in the server, and it provides substantially greater functionality than is available through SCIM. Clients using the Directory REST API can authenticate using HTTP basic authentication (with a DN and password), but are generally encouraged to use OAuth bearer tokens instead.

Previously, when clients authorized requests with an OAuth bearer token, that token was mapped to a user in the server, and the ACIs applicable to that user were used to determine whether to allow the associated request. However, we have now updated the REST API to make it possible to further consider the set of OAuth scopes in the provided access token when making access control decisions. In particular, the “oauthscope” ACI bind rule can be used to indicate that the ACI applies to requests authorized with a token containing a matching scope. This functionality had previously been available for SCIM requests, as well as for LDAP requests from clients authenticated with the OAUTHBEARER SASL mechanism, but it was not supported for the Directory REST API until the 10.2 release.
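
For example, an ACI like the following could grant read access only to requests authorized with a token that includes a particular scope (the “directory-read” scope name is just a placeholder):

    aci: (targetattr="*")
         (version 3.0; acl "Read access for the directory-read scope";
          allow (read,search,compare)
          oauthscope="directory-read";)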

Note that for Directory REST API requests that pass through the Directory Proxy Server, we can only support scope-based authorization in cases in which both the Directory Server and the Directory Proxy Server share a common encryption settings definition. In particular, the Directory Proxy Server must be configured with a preferred encryption settings definition, and that definition must also be available in the Directory Server.

UnboundID LDAP SDK for Java 7.0.2

We have just released version 7.0.2 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes for this release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We added support for using the 2.x version of the Bouncy Castle FIPS-compliant security provider, which provides support for FIPS 140-3 compliance. The 1.x version of the library, offering FIPS 140-2 compliance, is still supported. To use the LDAP SDK in this mode, you should ensure that the necessary jar files are in the classpath, and then you should call CryptoHelper.setUseFIPSMode("BCFIPS2") as early as possible in the life of the application (see the sketch after this list).
  • We added a new PropertyManager class that can be used to retrieve the value of specified properties using either system properties or environment variables. Values can be optionally parsed as Booleans, numbers, or comma-delimited lists. Most uses of system properties within the LDAP SDK have been updated to support the new PropertyManager mechanism so that it’s possible to set values as environment variables as an alternative to system properties.
  • We fixed a bug in the SSLUtil.certificateToString method that prevented it from including the notBefore and notAfter timestamps in the string representation.
  • We added client-side support for the Ping Identity Directory Server’s new to-be-deleted accessibility state for use with the get subtree accessibility and set subtree accessibility extended operations.
  • We updated the MoveSubtree utility class to provide the ability to use the new to-be-deleted accessibility state (as an alternative to the hidden state) for the target subtree before starting to remove entries from the source server.
  • We added a new SubtreeAccessibilityState.isMoreRestrictiveThan method that can be used to determine whether one accessibility state is considered more restrictive than another.
  • We updated the documentation to include the latest versions of the following LDAP-related specifications:

    • draft-coretta-ldap-subnf-01
    • draft-coretta-oiddir-radit
    • draft-coretta-oiddir-radsa
    • draft-coretta-oiddir-radua
    • draft-coretta-oiddir-roadmap
    • draft-coretta-oiddir-schema
    • draft-ietf-kitten-scram-2fa
    • draft-melnikov-sasl2
    • draft-melnikov-scram-bis
    • draft-melnikov-scram-sha-512
    • draft-melnikov-scram-sha3-512
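
Here’s a minimal sketch of enabling FIPS 140-3-compliant mode in an application, assuming the 2.x Bouncy Castle FIPS jars are already in the classpath:

    import com.unboundid.util.CryptoHelper;

    public final class EnableFIPS1403Mode
    {
      public static void main(final String... args) throws Exception
      {
        // Call this before the LDAP SDK performs any other cryptographic
        // processing, and as early as possible in the life of the application.
        CryptoHelper.setUseFIPSMode("BCFIPS2");

        // ... establish connections, pools, etc. as usual ...
      }
    }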

Ping Identity Directory Server 10.1.0.0

We have just released version 10.1.0.0 of the Ping Identity Directory Server. See the release notes for a complete overview of changes, but here’s my summary:

Summary of New Features and Enhancements

  • Added the ability to include presence components in composite index filter patterns [more information]
  • Added the ability to include approximate-match components in composite index filter patterns [more information]
  • Added the ability to include static equality components in composite index filter patterns [more information]
  • Added the ability to stream search results directly from a composite index [more information]
  • Added support for caching the candidate set for searches using the simple paged results control [more information]
  • Improved Directory Proxy Server’s handling of requests with the simple paged results control [more information]
  • Updated the access control handler to provide enhanced support for controlling which attributes may be included in add requests [more information]
  • Added support for a verify password extended operation [more information]
  • Added support for collation matching rules for improved extensible matching support for non-English values [more information]
  • Added a new compare-ldap-schemas tool [more information]
  • Reduced the performance impact of exploded index cleanup [more information]
  • Improved warnings about high index entry limits for attribute indexes [more information]
  • Improved overall write performance and reduced the number of outliers for write operations with higher response times
  • Improved performance when applying changes via replication
  • Improved performance when retrieving the database environment monitor entry
  • Improved the efficiency of replicating server schema information between servers
  • Reduced the default size of messages used in the course of monitoring replication
  • Reduced the amount of memory that the server needs to cache information about dynamic groups
  • Enabled the expensive operations logger by default so that information about any operations taking longer than 1 second to complete will be written to logs/expensive-ops
  • Added the ability to include extended information about the associated connection in access log messages about requested operations
  • Added the ability to exclude certain kinds of messages from the server error log, based on message category, severity, message ID, and message content
  • Added the ability to define Prometheus metrics for Boolean monitor attributes by using a value of 1 for true and 0 for false
  • Improved the logic used to determine whether a given replica should be considered obsolete
  • Added an --ignoreDuplicateAttributeValues argument to the import-ldif command, which will allow it to successfully import entries that have duplicate values for the same attribute (with only one copy of each attribute value)
  • Updated the interactive setup process so that when asking about whether to prime the contents of the backend into the cache during server startup, the default response has been changed from enabling priming to disabling priming
  • Updated the server so that it will now only retain the last 100 copies of former configurations by default
  • Added a new repair-topology-listener-certificates tool that can be used to recover from issues related to improperly updating certificates that the server uses for TLS communication
  • Improved the efficiency of the Directory Proxy Server’s replication backlog health check
  • Updated the export-reversible-passwords tool to make it possible to include only entries below a specified set of base DNs, or to exclude entries from a specified set of base DNs
  • Added a subtree-modify-dn-size-limit property to the backend configuration that can be used to limit the size of subtree move and rename operations, and these operations are now limited by default to subtrees with no more than 100 entries
  • Added the ability to specify the key wrapping transformation that the PKCS #11 cipher stream provider should use to protect the contents of the encryption settings database
  • Updated the Synchronization Server to support synchronizing USER.LOCKED and USER.UNLOCKED events from the PingOne service
  • Added the ability to obscure sensitive producer property values when using the Kafka sync destination

Summary of Bug Fixes

  • Fixed an issue that could cause inconsistency in entryUUID values across replicas in servers configured with a custom password validator created with the Server SDK
  • Fixed an issue that could allow insufficiently authorized clients to use the get password policy state issues request control through the Directory Proxy Server
  • Fixed an issue in which manage-profile replace-profile could apply configuration changes in an incorrect order
  • Fixed an issue that could cause dsreplication status to fail after disabling replication
  • Fixed an issue that could cause dsreplication enable to report an error when run in interactive mode
  • Fixed an issue that could cause the server to store multiple duplicate copies of the values of some attributes in which the associated attribute type has one or more subordinate types
  • Fixed an issue that could prevent the server from adding real attribute values to a replicated entry that already had virtual values for the same attribute
  • Fixed an issue that could prevent the server from adding or modifying entries that matched the criteria for an untrusted composite index if debug logging was enabled
  • Fixed an issue that prevented the server from properly using a virtual list view index to process an applicable search using an extensible matching filter
  • Fixed an issue in which the server could have incorrectly reported that the underlying JVM did not provide support for strong encryption (e.g., 256-bit AES)
  • Fixed an issue that could result in increased memory pressure, and potential out-of-memory errors, when running in FIPS-compliant mode as a result of a quirk in the Bouncy Castle implementation for the AES cipher
  • Fixed an issue that could cause the server to add duplicate entries to the configuration when setting up the server in FIPS 140-2-compliant mode
  • Fixed a rare issue in which the server could report an error on startup when one or more replicas were not online
  • Fixed an issue in which the Synchronization Server would not properly encode certain UTF-8 characters when constructing a URI for interacting with a source or destination server
  • Fixed an issue in which the Synchronization Server could incorrectly omit certain attributes when synchronizing from the PingOne service with the modified-attributes-only mode enabled
  • Fixed an issue in which the Synchronization Server could incorrectly omit certain escape characters in search filters sent to the PingOne service
  • Fixed an issue in which the Active Directory Password Synchronization Agent did not properly handle the case in which multiple users in a forest had the same sAMAccountName
  • Cleaned up an error message that may be used when attempting to generate a Delegated Admin report with an invalid SCIM filter

Composite Index Improvements

We have made a number of improvements in our support for composite indexes. These improvements basically fall into two categories: new types of components that you can use in filter patterns, and the ability to stream results as the server is reading an index.

New Composite Index Filter Component Types

We have added support for new types of components in composite index filter patterns to make it possible to replace more types of attribute indexes with composite indexes. This is especially useful for cases in which you have attribute indexes that match a large number of entries and have a high index entry limit, which causes them to be maintained as exploded indexes. Composite indexes are better than exploded indexes in pretty much every way, with better read performance, basically equivalent write performance, and more compact storage.

The new component types that we support in composite index filter patterns are:

  • Presence components, like “(attributeName=*)”. These components will match any entry that has the specified attribute, regardless of what value(s) it has.
  • Approximate-match components, like “(attributeName~=?)”. These components can be used to match entries that have a value that is approximately equal to a given value. In most cases, the server treats “approximately equal to” as meaning “sounds like”.
  • Equality components with a static value rather than a wildcard, for example, “(attributeName=specificValue)”. This can be useful in cases where you want to limit an index to only matching entries with a specific value for a given attribute and you don’t care about indexing other values for that attribute.

These are in addition to the other filter pattern component types that we already support, including:

  • Equality components with a wildcard, like “(attributeName=?)”. These can be used to match entries that have a given value for a particular attribute. In addition, if the filter pattern consists only of an equality wildcard component, or if that component is the last component of an AND filter pattern, then it can also be used for ordering matching in greater-or-equal and less-or-equal filters, or in substring filters with at least a subInitial (starts with) component.
  • Substring components with a wildcard, like “(attributeName=*?*)”. You can only have one of these components in a filter pattern, and if it’s an AND filter pattern, then it must be the last component in that pattern. These can be used to perform substring matching against attribute values, although they are primarily intended for use with substring filters that don’t include a subInitial component, since equality components are better for those.
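
As an illustration, a composite index that combines a wildcard equality component with one of the new static equality components might be defined something like this. Treat it as a sketch: the attribute names are placeholders, and the exact subcommand and property names may differ, so consult the configuration reference:

    dsconfig create-composite-index \
         --backend-name userRoot \
         --index-name people-by-type-and-status \
         --set "index-filter-pattern:(&(employeeType=?)(accountStatus=active))" \
         --set "index-base-dn-pattern:ou=People,dc=example,dc=com"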

Streaming Results Directly From a Composite Index

Normally, when the server is processing an indexed search request, it will first use the applicable indexes to come up with a candidate set containing the IDs of all of the entries that have the potential to match the search criteria, and then it will iterate through that candidate set, retrieving the entries, performing any additional processing needed to make sure they actually match the search criteria, and then returning them to the client. This works really well in the vast majority of cases, but it’s not necessarily the ideal approach to use in cases where the search criteria matches a huge number of entries. In that case, the resulting candidate set could consume a substantial amount of memory, and the server can’t start returning matching entries until it has identified all of the potential candidates.

In the 10.1 release, we’ve updated the server so that it can skip the process of building the candidate set in certain cases. When it does this, it will simply iterate through the pages of the composite index one-by-one, retrieving each of the entries referenced on that page and returning them to the client. This means that it doesn’t have to hold the entire candidate set in memory, and it can start returning matching entries right away.

The server can stream results from a composite index under the following conditions:

  • The search request must have a wholeSubtree scope. We won’t attempt to stream results for searches with a baseObject, singleLevel, or subordinateSubtree scope.
  • The search request can only include a limited set of controls. Most notably, it can’t be used with controls that attempt to alter the order or set of entries in the result set, like the server-side sort, simple paged results, or virtual list view controls. The request controls that are compatible with streaming include:

    • Access log field
    • Account usable
    • Administrative operation
    • Assertion
    • Get server ID
    • Intermediate client
    • Join
    • LDAP subentries
    • Permit unindexed search
    • Proxied authorization (either v1 or v2)
    • Reject unindexed search
  • The filter used in the search request must be one of the following:

    • A simple presence filter
    • A simple equality filter
    • A simple approximate-match filter
    • An AND filter that only contains some combination of presence, equality, and/or approximate-match components, and where each of the components targets a different attribute type
  • The filter used in the search request must directly correspond to the filter pattern used in the composite index. There can’t be any extra components in the search filter that aren’t covered by the composite index filter pattern.
  • The scope of the search request must directly correspond to the scope of the composite index. If the composite index is defined with a base DN pattern, then the base DN of the search request must match that base DN pattern (and must not be subordinate to an entry that matches the pattern). If the composite index is not defined with a base DN pattern, then the base DN of the search request must be the base DN for the backend.
  • The server must not believe that the target index key has exceeded the index entry limit.

Basically, it means that it must be possible to perfectly satisfy the search using exactly one composite index record. It can’t require iterating across multiple records (which means that it can’t be used for searches with greater-or-equal, less-or-equal, or substring components), and that index record can’t include any entries that don’t match the search criteria (which means that it can’t be used for searches whose filter is more specific than the filter pattern).

The server will automatically use streaming for search requests that meet all of the necessary conditions, so you don’t need to do anything to enable it. For search requests that don’t meet these requirements, the server will fall back to the traditional approach of building a candidate set and then working through it to return matching entries.

Improvements in Simple Paged Results Control Support

By default, when the server processes an indexed search operation that includes the simple paged results control, it will recompute the ID list for the entire set of entries that have the potential to match the search criteria for each of the requests used to obtain pages of the result set. To avoid this, we have added a new simple-paged-results-id-set-cache-duration property in the backend configuration that can be used to enable caching for this candidate set. When enabled, the server only needs to compute the candidate set once at the start of the search, and it can then use that cached ID set for subsequent pages of the search, as long as the client doesn’t wait too long between requests to retrieve subsequent pages.
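
For example, enabling a thirty-second cache on the userRoot backend might look like the following; the property name is as described above, though the exact duration syntax may vary:

    dsconfig set-backend-prop \
         --backend-name userRoot \
         --set "simple-paged-results-id-set-cache-duration:30 s"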

Note that the caching mechanism should only be enabled in environments in which all servers support this ability, so don’t turn it on until all servers in the topology have been updated to version 10.1 or later. Also note that it works best if clients consistently send requests to retrieve pages from the result set from the same Directory Server (or Directory Proxy Server) instance.

In addition, we have improved the logic that the Directory Proxy Server uses when forwarding requests that include the simple paged results control to backend servers. Previously, the presence or absence of this control did not affect the Directory Proxy Server’s choice of which backend server to use when handling the request, but it will now try to consistently route requests to retrieve all pages of the search from the same server. This makes it better able to take advantage of the Directory Server’s new ability to cache the candidate ID set, and it can also help avoid issues in which entries may have been returned in different orders for requests sent to different backend servers.

Improved Access Control Support for Add Operations

Historically, granting someone the “add” access control right has always given them the ability to add an entry with any set of attributes, without regard for the targetattr keyword, which can yield unexpected behavior in cases where administrators expect the ability to restrict the attributes that someone can include in an add request. To address this, we have added a new evaluate-target-attribute-rights-for-add-operations property to the access control handler configuration, which will cause the server to consider the targetattr element for any ACIs that grant the right to add entries. Note that this property is set to false by default to preserve backward compatibility.
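
Enabling the new behavior should just be a one-line configuration change, along these lines (the property name is as described above; the subcommand reflects the server’s usual dsconfig conventions, so verify it against your version):

    dsconfig set-access-control-handler-prop \
         --set evaluate-target-attribute-rights-for-add-operations:true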

The Verify Password Extended Operation

When processing a bind request, the server is careful not to expose any information that could help the client (which may be a malicious user or application) identify the reason for the authentication failure, and especially not whether the provided credentials are correct for the target user. If the server knows that the target user’s account isn’t in a usable state (for example, if the account is administratively disabled, or if it’s been locked because of too many failed attempts or because it’s been unused for too long), then the server won’t even attempt to verify the password.

However, some applications do attempt to determine the reason for the authentication failure, whether by using the get password policy state issues request control, or by retrieving the entry and using the ds-pwp-state-json attribute to identify any issues that may make the account unusable. This is usually done under the guise of customizing the page returned in response to the failure with options that can help the user succeed, like allowing them to unlock their account or reset their password if they’ve been locked out as a result of too many failed attempts, even though it leaks information about the reason for the authentication failure. And some customers have even expressed an interest in being able to do this, but only if the provided password was actually correct for the user. This is an absolutely terrible idea, because it directly circumvents the entire point of account lockout, giving an attacker unlimited attempts to guess a user’s password. This is something that we will never implement in the Ping Identity Directory Server.

Nevertheless, because some organizations seem absolutely dead-set on backdooring their security configuration, we have introduced a new extended operation in the server that can be used to determine whether a proposed password is correct for a user without performing any other password policy processing. It doesn’t care if the account is locked or disabled, if the password is expired, or if there’s any other reason that the user wouldn’t be able to actually authenticate. Similarly, it doesn’t cause any updates to the account as a result of the validation attempt. For example, if the password provided is not correct, it does not count as a failed authentication attempt toward a lockout.

Because this is an obviously dangerous feature that should definitely not be exposed to regular clients, there are a number of safeguards in place to prevent it from being made available to malicious clients. These include:

  • The extended operation handler is not defined in the server configuration by default. An administrator must explicitly configure and enable it for the verify password operation to be available.
  • The requester must have access control permission to use the extended operation. The out-of-the-box configuration does not include any ACIs that grant this access (although this restriction does not apply to clients with the bypass-acl privilege, since they aren’t subject to access control restrictions).
  • The requester must have the permit-verify-password-request privilege. No one has this privilege by default, even root users and topology administrators.
  • The request must be issued over a secure connection.

If all of these conditions are satisfied, then the client can send a verify password extended request to determine whether a provided password is correct for a given user. The server will return a compareTrue result if the password is correct, compareFalse if it’s not, or it will use some other result code if it couldn’t make the determination for some reason.
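
For example, granting the necessary privilege to a dedicated application account might look like the following LDIF change (the DN is a placeholder):

    dn: uid=password-checker,ou=Applications,dc=example,dc=com
    changetype: modify
    add: ds-privilege-name
    ds-privilege-name: permit-verify-password-request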

Collation Matching Rules

LDAP schema tends to be designed with a fairly English-centric mindset. Most attributes meant to hold textual values use matching rules that work well for ASCII values, but not necessarily as well for values that contain non-ASCII characters. For example, a search request with a filter of “(givenName=Francois)” won’t match an entry with a givenName value of “François”, and a search request with a filter of “(givenName=François)” won’t match an entry with a givenName value of “Francois”.

And on top of that, there can be multiple ways of encoding some non-ASCII characters. For example, the “ç” character can be encoded in UTF-8 using either the bytes 0xC3A7 or the bytes 0x63CCA7, and values encoded in one form are not automatically considered equivalent to values encoded in the other form.

To properly handle this scenario, you need to use alternative matching rules that are specifically designed to work with values in the language that you’re trying to match. These are called collation matching rules, and we’ve just updated the server to support a whole bunch of them. See the documentation for details about the locales and languages that we support, and for how to use extensible matching filters to perform better language-aware matching.
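
For example, with the UnboundID LDAP SDK, you can build an extensible match filter that references a collation matching rule. The “fr.eq” matching rule name below is only a placeholder for whatever name or OID the server actually assigns to the French collation equality rule, so check the documentation before using it:

    import com.unboundid.ldap.sdk.Filter;

    public final class CollationFilterExample
    {
      public static void main(final String... args)
      {
        // Builds the filter string (givenName:fr.eq:=francois). The matching
        // rule ID is a placeholder; see the server documentation for the
        // actual rule names and OIDs.
        final Filter filter = Filter.createExtensibleMatchFilter(
             "givenName", "fr.eq", false, "francois");
        System.out.println(filter);
      }
    }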

The compare-ldap-schemas Tool

We have added a new compare-ldap-schemas command-line tool that can be used to examine the schema definitions in two LDAP servers and identify differences between them, like definitions that only exist in one server, or definitions that exist in both servers but differ between them. You can choose to only look at elements of certain types, include or exclude elements with a given name prefix, or include or exclude elements with given extension values. You can also optionally ignore differences that only affect element descriptions.

Improvements Around Exploded Attribute Indexes

Traditionally, attribute indexes are stored so that each key has a single database record. For example, in an equality attribute index, every unique value for that attribute will have its own database record, with the key being the normalized representation of that value, and the data being a list of the entry IDs for all entries that contain that specific value. This is great when using the index to process search requests because it only takes a single database read to identify all entries that contain the associated attribute value. However, as the number of entries with that attribute value increases, it becomes more expensive to update that index record because it gets bigger and bigger, and the server needs to rewrite the entire ID list any time it changes. It can also increase the overall size of the database and the amount of work that the cleaner needs to do.

To help avoid these write performance issues, once an attribute index key matches at least 50,000 entries, the server converts it into an exploded form. Instead of a single database record whose value is a list of all associated entry IDs, it is converted into multiple records, with a separate record for each of the entry IDs. This dramatically improves performance for write operations that involve updating that index key, but that makes it more expensive to retrieve the entire ID set when it’s needed for a search operation.

Further, in the event that the number of entries matching that key exceeds the index entry limit, the server needs to remove all of the associated database records. It does this in a background thread so that it doesn’t tie up the worker thread processing the operation that caused the limit to be exceeded. Nevertheless, we have observed that this background cleanup processing can have a notable performance impact for other writes attempted while it’s in progress. To help alleviate that, we have introduced rate limiting for that cleanup processing so that it is less likely to affect the performance of other write operations.

Even so, if you have index keys that are expected to match a large number of entries, we strongly recommend that you use a composite index rather than an attribute index. Composite indexes can provide much better overall read and write performance in cases like this, and they don’t require the same expensive background cleanup processing for keys that have exceeded the index entry limit. To help reinforce this recommendation, we have updated the server so that it will write warning messages to the server’s error log for any attribute indexes that are configured with an index entry limit of 100,000 or more, and so that it will generate administrative alerts on startup for any attribute indexes that are configured with an index entry limit of 1,000,000 or more. In addition, both dsconfig and the Administration Console will now display a notice recommending composite indexes over attribute indexes when altering the index entry limit for an attribute index.
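As a rough illustration, creating a composite index with dsconfig might look something like the sketch below. The create-composite-index subcommand and index-filter-pattern property reflect my understanding of the server’s configuration model, but the index name and filter pattern here are placeholders; consult the server documentation for the exact properties that apply to your data:

    dsconfig create-composite-index \
         --backend-name userRoot \
         --index-name account-status-index \
         --set "index-filter-pattern:(accountStatus=?)"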

UnboundID LDAP SDK for Java 7.0.1

We have just released version 7.0.1 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes for this release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We added a new MaximumIdleDurationLDAPConnectionPoolHealthCheck class that can be used to replace connections that have remained idle for longer than a specified length of time. We generally recommend setting a maximum connection age for the pool so that connections are automatically replaced after a given amount of time regardless of their activity, but the new health check can be used as an alternative if you want to keep active connections around as long as possible while also ensuring that idle connections are closed by the LDAP SDK before they might be closed by the LDAP server or by intermediate network equipment. (There’s a brief example of configuring this health check after this list.)
  • We updated the in-memory directory server to improve its concurrency when processing operations that don’t need to make changes to the data, including binds, searches, and compares.
  • We added new Filter.createSubstringAssertion methods that can be used to create properly encoded string representations of substring assertions. This can be particularly helpful when you want to create an extensible matching filter using a substring matching rule.
  • We updated the KeyStoreKeyManager and TrustStoreTrustManager classes to make it possible to use an alternative security provider when accessing the associated key or trust store. We’ve also made it possible to indicate whether the LDAP SDK should be allowed to access non-FIPS-compliant key stores when operating in FIPS 140-2-compliant mode.
  • We fixed an issue in which the parallel-update tool would use an in-memory buffer to hold the information to be written to the reject file, but would not automatically flush that buffer when changes were rejected. In some cases, this could introduce a significant delay between the time that a change was rejected and the time that a record of it was written to the reject file.
  • We fixed an issue with the manage-certificates tool that could prevent it from accessing the JVM’s default trust store in cases where the LDAP SDK is operating in FIPS 140-2-compliant mode and the tool is invoked programmatically (as opposed to running it from the command line).
  • We updated the command-line tool framework to make it possible for tools to expose arguments for generating a debug log file. All of the tools included with the LDAP SDK have been updated to provide this option, and you can use the --help-debug argument to see the applicable arguments.
  • We updated the debug logging framework to make it possible to write debug messages, which are formatted as JSON objects, using a multi-line representation rather than the default single-line representation. People looking at the log messages may find the multi-line format easier to read.
  • We added a new StaticUtils.setSystemPropertyIfNotAlreadyDefined method that can be used to set the value of a specified system property in the JVM, but only if it’s not already set (in which case its current value will be preserved).
  • We added client-side support for a new “verify password” extended request in the Ping Identity Directory Server that properly authorized clients (under a restricted set of circumstances) can use to determine whether a given password is valid for a specified user without performing any other password policy processing.
  • We updated the OID registry to include records for a number of collation matching rules supported in the Ping Identity Directory Server, ForgeRock OpenDJ, Oracle OUD, and other servers.
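As promised above, here’s a brief sketch of installing the new idle-duration health check on a connection pool. The value-plus-TimeUnit constructor shown here is an assumption; check the class-level Javadoc for the exact signatures:

    import java.util.concurrent.TimeUnit;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPConnectionPool;
    import com.unboundid.ldap.sdk.MaximumIdleDurationLDAPConnectionPoolHealthCheck;

    public final class IdleDurationHealthCheckExample
    {
      public static void main(final String[] args) throws Exception
      {
        final LDAPConnection connection = new LDAPConnection(
             "ds.example.com", 389,
             "uid=app,ou=Applications,dc=example,dc=com", "appPassword");
        final LDAPConnectionPool pool = new LDAPConnectionPool(connection, 10);

        // Replace pooled connections that have been idle for more than
        // five minutes.
        pool.setHealthCheck(
             new MaximumIdleDurationLDAPConnectionPoolHealthCheck(
                  5L, TimeUnit.MINUTES));

        // The generally recommended alternative: retire connections by age,
        // regardless of activity.
        // pool.setMaxConnectionAgeMillis(TimeUnit.MINUTES.toMillis(30));
      }
    }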

UnboundID LDAP SDK for Java 7.0.0

We have just released version 7.0.0 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes for this release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • The LDAP SDK now requires Java 8 or later. Java 7 is no longer supported.
  • We improved the behavior of LDAP connection pools when they are configured to invoke a health check when checking out a connection from the pool. Previously, if a connection was found to be invalid during checkout, the LDAP SDK would create a new connection to replace it, but would continue iterating through other connections in the pool trying to find an existing valid connection. It will now return the newly created connection immediately without checking other existing connections, which can substantially reduce the time to check out a connection in a scenario where many connections have been invalidated (e.g., by a server shutdown).
  • We added a new compare-ldap-schemas command-line tool that can be used to identify differences between the schemas of two LDAP servers.
  • We improved the behavior that the LDAP SDK uses when authenticating with the GSSAPI SASL mechanism. Previously, if you didn’t explicitly provide a JAAS configuration file to use for the attempt, the LDAP SDK would create a new one for each bind attempt. This would create a lot of temporary files that would need to be cleaned up when the JVM exited, and they might not get cleaned up properly if the JVM exits abnormally (e.g., if it’s killed or crashes). It would also require a small amount of additional memory for each bind attempt, since it has to remember another file to be deleted. Now, the LDAP SDK can reuse the same generated configuration file for all GSSAPI bind requests that use the same JAAS settings, which will slightly improve performance, reduce memory usage, and reduce disk space consumption.
  • We added experimental client-side support for the relax rules request control as defined in draft-zeilenga-ldap-relax-03. This draft doesn’t specify an OID for the control, but at least a couple of servers (OpenLDAP and ForgeRock OpenDJ) have implemented support for the control with an OID of 1.3.6.1.4.1.4203.666.5.12, so the LDAP SDK uses that OID for the control.
  • We added client-side support for a number of proprietary controls used by the ForgeRock OpenDJ directory server. These include:

    • A transaction ID request control, which can be included in an operation request to provide a transaction ID that will appear in the access log message for that operation.
    • A replication repair request control, which can be included in a write request to indicate that the associated change should not be replicated.
    • Change sequence number request and response controls, which can be used with a write operation to obtain the replication CSN that the server assigned to that operation.
    • An affinity request control, which can be included in related requests sent through an LDAP proxy server to consistently route them to the same LDAP server instance.
  • We added connection pool health checks for use in conjunction with the Ping Identity Directory Server, including:

    • One that will attempt to determine whether there are any active alerts in the server that cause it to consider itself to be either degraded or unavailable.
    • One that will assess the replication backlog and can consider a server unavailable if it has too many outstanding changes, or if the oldest outstanding change was originally processed too long ago.
    • One that will attempt to determine whether the server is in lockdown mode.
  • We updated the CryptoHelper class to add convenience methods for generating SHA-256, SHA-384, and SHA-512 digests from byte arrays, strings, and files. There are also generic versions of these methods that can be used with user-specified digest algorithms. (There’s a short example after this list.)
  • We added methods for normalizing JSON values and JSON object filters. This can help make it possible to compare two JSON object filters to determine whether they are equivalent.
  • We updated the BouncyCastleFIPSHelper class to add a constant with the name of a system property that can be used to enable support for the MD5 digest algorithm, which may be needed if you’re using the 1.0.2.4 or later version of the bc-fips jar file and need to use the MD5 message digest for some reason.
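As promised above, here’s a quick sketch of the new CryptoHelper digest convenience methods. The sha256 method name and its String overload are assumptions based on the description in the release notes; check the CryptoHelper Javadoc for the exact signatures:

    import com.unboundid.util.CryptoHelper;
    import com.unboundid.util.StaticUtils;

    public final class DigestExample
    {
      public static void main(final String[] args) throws Exception
      {
        // Compute a SHA-256 digest of a string and print it as hex.
        final byte[] digest = CryptoHelper.sha256("Hello, LDAP!");
        System.out.println(StaticUtils.toHex(digest));
      }
    }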

Ping Identity Directory Server 10.0.0.0

We have just released version 10.0.0.0 of the Ping Identity Directory Server. See the release notes for a complete overview of changes, but here’s my summary:

Important Notices

  • As of the 10.0 release, the Directory Server only supports Java versions 11 and 17. Support for Java 8 has been removed, as a critical component (the embedded web container we use to support HTTP requests, including the Directory REST API, SCIM, and the Administration Console) no longer supports Java 8.
  • As of the 10.0 release, we are no longer offering the Metrics Engine product as part of the Directory Server suite (the Directory Proxy Server and Synchronization Server are still included, as is the Server SDK for developing custom extensions). You should instead rely on the server’s ability to integrate with other monitoring software, through mechanisms like our support for OpenMetrics (used by Prometheus and other software), StatsD, and the Java Management Extensions (JMX).

Summary of New Features and Enhancements

  • Added support for inverted static groups [more information]
  • Added support for post-LDIF-export task processors, which can be used to perform custom processing after successfully exporting an LDIF file, including the option to upload the resulting file to an Amazon S3 bucket [more information]
  • Added a new log file rotation listener that can be used to upload newly rotated log files to a specified Amazon S3 bucket [more information]
  • Added a new amazon-s3-client command-line tool [more information]
  • Added authentication support to the Directory REST API [more information]
  • Added support for a generate access token request control [more information]
  • Added support for configuring the server with a single database cache that may be shared across all local DB backends [more information]
  • Added an option to automatically re-encode passwords after changing the configuration of the associated password storage scheme [more information]
  • Exposed a request-handler-per-connection configuration property in the LDAP connection handler configuration [more information]
  • Updated the encrypt-file tool to add a --re-encrypt argument [more information]
  • Updated the encrypt-file tool to add a --find-encrypted-files argument [more information]
  • Updated the replication server and replication domain configuration to add a new missing-changes-policy property that can be used to customize the way the server behaves in the event that missing changes are detected in the environment, and the server will now remain available by default under a wider range of circumstances that may not represent actual problems
  • Significantly improved performance for creating a backup, restoring a backup, or performing online replica initialization
  • Significantly improved static group update performance
  • Improved performance for the process of validating the server state immediately after completing an update
  • Added a split-ldif tool that can be used to split a single LDIF file into multiple sets for use in setting up an entry-balanced deployment with the Directory Proxy Server
  • Updated the bcrypt password storage scheme to include support for the 2b variant (in addition to the existing 2y, 2a, and 2x variants)
  • Updated the HTTP connection handler to add an option for performing SNI hostname validation during TLS negotiation
  • Updated the backup tool to display a warning when creating a compressed backup of an encrypted backend, since encrypted backends cannot be effectively compressed, but attempting to do so will make the backup process take longer
  • Updated the dsreplication command so that it uses a separate log file per subcommand, and so that log files representing failed runs of the tool are archived rather than overwritten by subsequent runs
  • Removed the dsreplication remove-defunct-server subcommand, which is better provided through the dedicated remove-defunct-server tool
  • Removed the dsreplication cleanup-local-server subcommand, which is better provided through the remove-defunct-server --performLocalCleanup command
  • Updated dsreplication initialize-with-static-topology to add an --allowServerInstanceDelete argument that can be used to remove servers from the topology if they are not included in the provided JSON file
  • Updated dsreplication initialize-with-static-topology to add an --allowDomainIDReuse argument that can be used to allow domain IDs to be used with different base DNs
  • Updated the check-replication-domains tool so that it no longer requires the --serverRoot argument
  • Updated the replication server configuration to add an option that can be used to control whether monitor messages include information about all remote servers, which can be useful in large topologies where that information can constitute a large amount of data
  • Added support for an access log field request control that can be used to include arbitrary fields in the log message for the associated operation
  • Updated the configuration API to treat patch operations with empty arrays as a means of resetting the associated configuration property
  • Added the ability to configure connect and response timeouts when connecting to certain external services over HTTP, including CyberArk Conjur instances, HashiCorp Vault instances, the Pwned Passwords service, and YubiKey OTP validation servers
  • Updated the Synchronization Server to improve performance when setting the startpoint to the end of the changelog for an Active Directory server
  • Reduced the default amount of memory allocated for the export-ldif and backup tools

Summary of Bug Fixes

  • Fixed an issue in which the Directory REST API could fail to strip out certain kinds of encoded passwords in responses to clients (although only to clients that were authorized to access those attributes)
  • Improved the way that the replication generation ID is computed, which can help ensure the same ID is generated across replicas when they are populated by LDIF import instead of online replica initialization
  • Fixed an issue that could cause an error while trying to initialize aggregate pass-through authentication handlers
  • Fixed an issue that could cause “invalid block type” errors when interacting with compressed files
  • Fixed an issue that could prevent the server from properly including an encrypted representation of the new password in the changelog entry for a password modify extended operation when the server was configured with the changelog password encryption plugin
  • Fixed an issue in which the server could fail to update a user’s password history on a password change that included a password update behavior request control indicating that the server should ignore password history violations
  • Fixed an issue that could cause the server to add two copies of the current password in the password history when changing a password with the password modify extended operation
  • Fixed an issue in which the server could incorrectly allow a user to set an empty password. Even though that password could not be used to authenticate, the server should not have allowed it to be set
  • Fixed an issue that could cause the dictionary password validator to incorrectly accept certain passwords that contained a dictionary word as a substring that was larger than the maximum allowed percentage of the password
  • Fixed an issue in which the server could be unable to properly interpret the value of the allow-pre-encoded-passwords configuration property in password policies defined in user data that were created prior to the 9.3 release of the server
  • Fixed an issue in which the server may not have properly applied replace modifications for attributes with options
  • Fixed an issue in which the first unsuccessful bind attempt after a temporary failure lockout had expired may not be counted as a failed attempt toward a new failure lockout
  • Fixed an issue in which running manage-profile generate-profile against an updated server instance could result in a profile that may not be usable for setting up new instances
  • Fixed an issue in which dsreplication initialize could suggest using the --force argument in cases where that wouldn’t help, like when attempting to authenticate with invalid credentials
  • Fixed an issue with dsreplication enable-with-static-topology in which the server could report an error when trying to connect to a remote instance
  • Fixed an issue with dsreplication enable-with-static-topology in which case sensitivity in base DNs was not handled properly
  • Fixed an issue in which the remove-defunct-server command could fail in servers configured with the AES256 password storage scheme
  • Fixed an issue that could cause a replication error if missing changes were found for an obsolete replica that is not configured in all servers
  • Fixed an issue in which the server did not check the search time limit often enough during very expensive index processing, which could allow the server to process a search request for substantially longer than the maximum time limit for that operation
  • Fixed an issue that caused the server to incorrectly include client certificate messages in the expensive operations access log
  • Fixed an internal error that could be encountered if an administrative alert or alarm is raised at a specific point in the shutdown process
  • Fixed an issue with synchronizing Boolean attributes (e.g., “enabled”) to PingOne
  • Fixed an issue in which the Synchronization Server could fail to properly synchronize changes involving the unicodePwd attribute to Active Directory if the sync class was not configured with a DN map
  • Fixed an issue that could cause the create-sync-pipe-config command to improperly generate correlated attribute definitions for generic JDBC sync destinations
  • Fixed an error that could prevent manage-topology add-server from adding a Synchronization Server instance to a topology that already had at least two Synchronization Server instances
  • Fixed an issue in which the server did not properly log an alternative authorization DN for multi-update operations that used proxied authorization
  • Fixed an issue in which dsjavaproperties --initialize could result in duplicate arguments in the java.properties file
  • Fixed an issue that could cause a spurious message to be logged to the server’s error log when accessing the status page in the Administration Console

Inverted Static Groups

In the 10.0 release, we’re introducing support for inverted static groups, which try to combine the primary benefits of traditional static groups and dynamic groups without their most significant disadvantages.

Traditional static groups contain an attribute (either member or uniqueMember, depending on the group’s object class) explicitly listing the DNs of the members of that group. They are pretty straightforward to use and are widely supported by LDAP-enabled applications, but as the number of members in the group increases, so does the size of the entry and the cost of reading and writing that entry and updating group membership.

Traditional static groups also support nesting, but it’s not necessarily easy to distinguish between members that are users and those that are nested groups. The server has to maintain an internal cache so that it can handle nested memberships efficiently, which requires both extra memory and additional processing overhead whenever the group is updated.

Dynamic groups, on the other hand, don’t have an explicit list of members, but instead are defined with one or more LDAP URLs whose criteria will be used for membership determinations. Because there is no member list to maintain, dynamic groups don’t have the same scalability issues as traditional static groups, and the number of members in the group isn’t a factor when attempting to determine whether a specific user is a member. However, dynamic groups aren’t as widely supported as traditional static groups among LDAP-enabled applications, there’s no way to directly add or remove members in a dynamic group (at least, not without altering the entries in a way that causes them to match or stop matching the membership criteria, which varies on a group-by-group basis), and they don’t support nesting.

Inverted static groups provide a way to explicitly manage group membership like with traditional static groups, but with the scalability of dynamic groups. Rather than storing the membership as a list of DNs in the group entry itself, each user entry has a list of the DNs of the inverted static groups in which they’re a member (in the ds-member-of-inverted-static-group-dn operational attribute). This means that the number of members doesn’t affect the performance of many group-related operations, like adding a new member to the group, removing an existing member from the group, or determining whether a user is a member of the group.

The only way in which the size of the group does impact performance is if you want to retrieve the entire list of members for the group (which you can do by performing a subtree search to find entries whose isMemberOf attribute has a value that matches the DN of the target group). While this is slower than simply retrieving a traditional static group entry and retrieving the list of member DNs, this is actually not an analogous comparison for a couple of key reasons:

  • Retrieving the list of member DNs from a traditional static group only gives you the DNs of the member entries. That isn’t enough if you need the values of any other attributes from the member entries.
  • Retrieving the list of member DNs from a traditional static group doesn’t work well if that group includes one or more nested groups. There’s no good way to tell which of the member DNs reference users and which represent nested groups, and the member DN list won’t include members of the nested groups.

As such, the best way to retrieve a list of all members of a traditional static group is also to perform a subtree search that targets the isMemberOf attribute, and it should be at least as fast to do that for an inverted static group as it is for a traditional static group.
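For example, the following search retrieves the uid and mail attributes for every member of a given group, and it works the same way whether the target is a traditional static group, a dynamic group, or an inverted static group (the host, credentials, and DNs are illustrative):

    ldapsearch --hostname ds.example.com --port 636 --useSSL \
         --bindDN "uid=admin,dc=example,dc=com" --promptForBindPassword \
         --baseDN ou=People,dc=example,dc=com \
         "(isMemberOf=cn=Example Group,ou=Groups,dc=example,dc=com)" \
         uid mail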

The other key difference between inverted static groups and traditional static groups lies in the way that we handle nested membership. As previously noted, traditional static groups can include both user DNs and group DNs in their membership attribute, and there’s not a good way to distinguish between them. Inverted static groups distinguish between user members and nested groups. Rather than adding a nested group to the inverted static group as a regular member, you need to add the DN of the nested group to the ds-nested-group-dn attribute of the inverted static group entry. This makes it possible to distinguish between user entries and nested groups, and it allows the server to handle nesting for these types of groups without a separate cache or expensive processing.

The main disadvantage that inverted static groups have in comparison to traditional static groups is that because they are a new feature, existing applications don’t directly support them. If an application only cares about making group membership determinations, makes those determinations using the isMemberOf attribute, and doesn’t need to alter group membership, then it should work just as well with inverted static groups as it does with traditional static groups. However, if it does need to alter group membership, or if it doesn’t support using the isMemberOf attribute, then that’s a bigger hurdle to overcome. To help with that, we’re including a “Traditional Static Group Support for Inverted Static Groups” plugin that can be used to allow clients to interact with inverted static groups in some of the same ways they might try to interact with traditional static groups. This includes:

  • The plugin will intercept attempts to modify the group entry to add or remove member or uniqueMember values, and instead make the corresponding updates to the ds-member-of-inverted-static-group-dn attribute in the target user entries.
  • The plugin can generate a virtual member or uniqueMember attribute for the group entry. It can do this in a few different ways, which may have different performance characteristics:

    • It can do it in a way that works for compare operations or equality search operations that target the membership attribute but don’t actually attempt to retrieve the membership list. This is the most efficient way to determine if a traditional static group has a specific DN in its list of members, and it should be about as fast to make this determination for an inverted static group as it is for a traditional static group.
    • It can do it in a way that attempts to populate the attribute with a list of all of the direct members of the group (excluding nested members). The performance of this does depend on the number of direct members in the group.
    • It can do it in a way that attempts to populate the attribute with a list of all of the direct and nested members of the group. The performance of this depends both on the number of direct members in the group as well as the types and sizes of the nested groups.

Support for the Amazon S3 Service

The data stored in the Directory Server is often absolutely critical to the organizations that use it, so it’s vital to have a good backup strategy, and that must include some kind of off-site mechanism. Amazon’s S3 (Simple Storage Service) is a popular cloud-based mechanism that is often used for this kind of purpose, and in the 10.0 release, we’re introducing a couple of ways to have the Directory Server take advantage of it. In particular, you can now easily use it as off-site storage for LDIF exports and log files. We also include a new amazon-s3-client tool that can be used to interact with the S3 service from the command line.

Post-LDIF-Export Task Processors

We’ve introduced a new API in the server that can be used to have the server perform additional processing after successfully exporting data to an LDIF file as part of an administrative task (including those created by a recurring task). You can use the Server SDK to create custom post-LDIF-export task processors that do whatever you want, but we’re including an “Upload to S3” implementation that can copy the resulting export file (which will ideally have already been compressed and encrypted during the export process) to a specified S3 bucket. This processor includes retention support, so you can have it automatically remove previous export files created more than a specified length of time ago, or you can have it keep a specified number of the newest files in the bucket.

The export-ldif command-line tool now has a new --postExportProcessor argument that you can use to indicate which processors should be invoked after the export file has been successfully written. You can also specify which processors to use when creating the tasks programmatically (for example, using the task support in the UnboundID LDAP SDK for Java) or by simply adding an appropriately formatted entry to the server’s tasks backend. We’ve also updated the configuration for the LDIF export recurring task to include a new post-ldif-export-task-processor property to specify which processor(s) should be invoked for LDIF exports created by that recurring task.

Note that the post-LDIF-export task processor functionality is only available for LDIF exports invoked as administrative tasks (including recurring tasks), and not those created using the export-ldif tool in its offline, standalone mode. This is because post-LDIF-export task processors may need access to a variety of server components, and it’s difficult to ensure that all necessary components would be available outside of the running server process.
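For example, invoking an export as an administrative task with a post-export processor might look something like this (connection arguments are omitted, and the “Upload to S3” name is illustrative; it must match a post-LDIF-export task processor defined in your configuration):

    export-ldif --task \
         --backendID userRoot \
         --ldifFile /path/to/export.ldif \
         --postExportProcessor "Upload to S3"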

The Upload to S3 Log File Rotation Listener

The Directory Server already had an API for performing custom processing whenever a log file is rotated out of service, including copying the file to an alternative location on the server filesystem or invoking the summarize-access-log tool on it. In the 10.0 release, we’re including a new “Upload to S3” log file rotation listener that can be used to upload the rotated log file to a specified S3 bucket. This is available for all of the following types of log files:

  • Access
  • Error
  • Audit
  • Data recovery
  • HTTP operations
  • Sync
  • Debug
  • Periodic stats

Although there are obvious benefits to copying all of these types of log files to an external service, I want to specifically call out the importance of having off-site backups for the data recovery log. The data recovery log is a specialized type of audit log that keeps track of all write operations processed by the server in a form that allows them to be easily replayed or reverted should the need arise. The data recovery log can be used as a kind of incremental backup mechanism for keeping track of changes made since the most recent backup or LDIF export, and in a worst-case scenario in which all server instances are lost and you need to start over from scratch, you can restore the most recent backup or import the most recent LDIF export, and then replay any additional changes from the data recovery log that were made after the backup or export was created.

As with the Upload to S3 post-LDIF-export task processor, the new log file rotation listener also includes retention support so that you can choose to keep either a specified number of previous log files, or those uploaded less than a specified length of time in the past.

The amazon-s3-client Command-Line Tool

The new amazon-s3-client tool allows you to interact with the S3 service from the command line, including in shell scripts or batch files. It supports the following types of operations:

  • List the existing buckets in the S3 environment
  • Create a new bucket
  • Remove an existing bucket (optionally removing the files that it contains)
  • List the files in a specified bucket
  • Upload a file to a specified bucket
  • Download a specified file from a bucket
  • Download one or more of the newest files from a specified bucket, based on the number of files to download, the age of files to download, or files created after a specified time
  • Remove a file from a bucket

This allows you to perform a number of functions, including manually uploading additional files that the server doesn’t support uploading automatically, or downloading files for use in bootstrapping new instances. It can generate output as either human-readable text or machine-parsable JSON.

Authentication Support in the Directory REST API

The Directory REST API allows you to submit requests and retrieve data from the server using a JSON-formatted HTTP-based API. It’s always had support for all of the typical operations needed for interacting with the data, like:

  • Creating new entries
  • Updating existing entries
  • Removing existing entries
  • Retrieving individual entries
  • Searching for all entries matching a given set of criteria

Within the last few releases, we’ve also introduced support for a wide variety of request controls, as well as certain extended operations. But one of the big gaps between what the server offered over the Directory REST API and what you could get via LDAP was in its support for authentication. The Directory REST API has always supported authorizing individual requests using either HTTP basic authorization or OAuth 2 bearer tokens, but it didn’t really provide any good way to authenticate clients and verify their credentials. And if you wanted to authorize requests with stronger authentication than just a DN and password, you had to have an external service configured to issue OAuth tokens.

This is being addressed in the 10.0 release with a new /authenticate endpoint, which currently supports the following authentication methods:

  • password — Username or bind DN and a static password
  • passwordPlusTOTP — Username or bind DN, a static password, and a time-based one-time password (TOTP)
  • passwordPlusDeliveredOTP — Username or bind DN, a static password, and a one-time password delivered through some out-of-band mechanism like email or SMS
  • passwordPlusYubiKeyOTP — Username or bind DN, a static password, and a one-time password generated by a YubiKey device

We’re also adding other new endpoints in support of these mechanisms, including:

  • One for generating a TOTP secret, storing it in the user’s entry, and returning it to the client so that it can be imported into an app (like Authy or Google Authenticator) for generating time-based one-time passwords for use with the passwordPlusTOTP authentication method.
  • One for revoking a TOTP secret so that it can no longer be used to generate time-based one-time passwords that will be accepted by the passwordPlusTOTP authentication method.
  • One for generating a one-time password and delivering it to the user through some out-of-band mechanism so that it can be used to authenticate with the passwordPlusDeliveredOTP authentication method.
  • One for registering a YubiKey device with a user’s entry so that it can be used to authenticate with the passwordPlusYubiKeyOTP method.
  • One for deregistering a YubiKey device from a user’s entry so that it can no longer be used to authenticate with the passwordPlusYubiKeyOTP method.

If the authentication is successful, the response may include the following content:

  • An access token that can be used to authorize subsequent requests as the user via the Bearer authorization method.
  • An optional set of attributes from the authenticated user’s entry.
  • If applicable, the length of time until the user’s password expires.
  • A flag that indicates whether the user is required to choose a new password before they will be allowed to do anything else.
  • An optional array of JSON-formatted response controls.

Generated Access Tokens in the Directory Server

We’ve added support for a new “generate access token” request control that can be included in a bind request to indicate that if the bind succeeds, the server should return a corresponding response control with an access token that can be used to authenticate subsequent connections via the OAUTHBEARER SASL mechanism. While this control is used behind the scenes in the course of implementing the new authentication support in the Directory REST API, it can also be very useful in certain LDAP-only contexts.

For example, this ability may be especially useful in cases where you want to authenticate a client with a mechanism that relies on single-use credentials, like the UNBOUNDID-TOTP, UNBOUNDID-DELIVERED-OTP, or UNBOUNDID-YUBIKEY-OTP SASL mechanisms. In such cases, the credentials can only be used once, which means you can’t use them to authenticate multiple connections (for example, as part of a connection pool) or to re-establish a connection if the initial one becomes invalidated.
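Here’s a rough sketch of that flow with the UnboundID LDAP SDK. The request and response control class names follow the SDK’s usual conventions for the unboundidds.controls package, and the getAccessToken accessor is an assumption; TLS setup is omitted for brevity, although in practice these binds should be performed over secure connections:

    import com.unboundid.ldap.sdk.BindResult;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.SimpleBindRequest;
    import com.unboundid.ldap.sdk.unboundidds.OAUTHBEARERBindRequest;
    import com.unboundid.ldap.sdk.unboundidds.controls.GenerateAccessTokenRequestControl;
    import com.unboundid.ldap.sdk.unboundidds.controls.GenerateAccessTokenResponseControl;

    public final class GenerateAccessTokenExample
    {
      public static void main(final String[] args) throws Exception
      {
        final String accessToken;
        try (LDAPConnection conn = new LDAPConnection("ds.example.com", 389))
        {
          // Bind (e.g., with single-use credentials) and ask the server to
          // include an access token in the bind response.
          final BindResult bindResult = conn.bind(new SimpleBindRequest(
               "uid=jdoe,ou=People,dc=example,dc=com", "password",
               new GenerateAccessTokenRequestControl()));

          accessToken = GenerateAccessTokenResponseControl.get(bindResult)
               .getAccessToken();
        }

        // Use the token to authenticate additional connections without
        // needing another one-time password.
        try (LDAPConnection conn = new LDAPConnection("ds.example.com", 389))
        {
          conn.bind(new OAUTHBEARERBindRequest(accessToken));
        }
      }
    }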

Shared Database Cache for Local DB Backends

You can configure the Directory Server with multiple local DB backends. You should do this if you want to have multiple naming contexts for user data, and you can also do it for different portions of the same hierarchy if you want to maintain them separately for some reason (e.g., to have them replicated differently, as in an entry-balanced configuration where some of the DIT needs to be replicated everywhere while the entry-balanced portion needs to be replicated only to a subset of servers).

Previously, each local DB backend had its own separate database cache, which had to be sized independently. That approach gives you the greatest degree of control over caching for each of the backends, which may be particularly important if you don’t have enough memory to cache everything, but it can also be a hassle in some deployments. And if you don’t keep track of how much you’ve allocated to each backend, you could potentially oversubscribe the available memory.

In the 10.0 release, we’re adding the ability to share the same cache across all local DB backends. To do this, set the use-shared-database-cache-across-all-local-db-backends global configuration property to true, and set the shared-local-db-backend-database-cache-percent property to the percentage of JVM memory to allocate to the cache. Note that this doesn’t apply to either the LDAP changelog or the replication database, both of which intentionally use very small caches because their sequential access patterns don’t really require caching for good performance.
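For example, enabling the shared cache with dsconfig might look like the following, where the property names come straight from above and the 50 percent figure is just an illustration (choose a value appropriate for your JVM heap):

    dsconfig set-global-configuration-prop \
         --set use-shared-database-cache-across-all-local-db-backends:true \
         --set shared-local-db-backend-database-cache-percent:50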

Re-Encoding Passwords on Scheme Configuration Changes

The Directory Server has always had support for declaring certain password storage schemes to be deprecated as a way of transparently migrating passwords from one scheme to another. If a user successfully authenticates in a manner that provides the server with access to the clear-text password (which includes a number of SASL mechanisms in addition to regular LDAP simple binds), and their password is currently encoded with a scheme that is configured as deprecated, then the server will automatically re-encode the password with the currently configured default scheme.

Deprecated password storage schemes can only be used to migrate users from one scheme to another, but there may be legitimate reasons to want to re-encode a user’s password without changing the scheme. For example, several schemes use multiple rounds of encoding to make it more expensive for attackers to attempt to crack passwords, and you may want to have passwords re-encoded if you change the number of rounds that the server uses.

In the 10.0 release, we’re adding a new re-encode-passwords-on-scheme-config-change property to the password policy configuration. If this property is set to true, a client authenticates in a manner that provides the server with access to the clear-text password, and that password is currently encoded with settings that differ from the associated scheme’s current configuration, then the server will automatically re-encode the password using the scheme’s current settings. This functionality is available for the following schemes:

  • AES256 — If there is a change in the definition used to encrypt passwords.
  • ARGON2, ARGON2D, ARGON2I, ARGON2ID — If there is a change in the iteration count, parallelism factor, memory usage, salt length, or derived key length.
  • BCRYPT — If there is a change in the cost factor.
  • PBKDF2 — If there is a change in the digest algorithm, iteration count, salt length, or derived key length.
  • SCRYPT — If there is a change in the CPU/memory cost factor exponent, the block size, or the parallelization parameter.
  • SSHA, SSHA256, SSHA384, SSHA512 — If there is a change in the salt length.

It is also possible to enable this functionality for custom password storage schemes created using the Server SDK by overriding some new methods added to the API.
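For example, enabling this behavior for the default password policy with dsconfig might look something like the following (the policy name reflects the server’s out-of-the-box configuration):

    dsconfig set-password-policy-prop \
         --policy-name "Default Password Policy" \
         --set re-encode-passwords-on-scheme-config-change:true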

Separate Request Handlers for Each LDAP Client Connection

When the Directory Server accepts a new connection from an LDAP client, it hands that connection off to a request handler, which will be responsible for reading requests from that client and adding them to the work queue so that they can be picked up for processing by worker threads. By default, the server automatically determines the number of request handlers to use (although you can explicitly configure the number if you want), and a single request handler may be responsible for reading requests from several connections.

The vast majority of the time, having request handlers responsible for multiple connections isn’t an issue. Just about the only thing that a request handler has to do is wait for requests to arrive, read them, decode them, and add them to the work queue so that they will be processed. While it’s doing this for a request from one client, any other clients that are sending requests at the same time will need to wait, but because the entire process of reading, decoding, and enqueueing a request is very fast, it’s rare that processing for one client will have any noticeable impact on the request handler’s ability to process other clients. However, there are a couple of instances in which this might not be the case:

  • If a client is submitting a very large number of asynchronous requests at the same time.
  • If the server needs to perform TLS negotiation on the connection to set up a secure communication channel, and some part of that negotiation is taking a long time to complete.

In practice, neither of these is typically an issue. Even if there are a ton of asynchronous requests submitted all at once, it’s still pretty unlikely that it will cause any noticeable starvation in the server’s ability to read requests from other clients. And the individual steps of performing TLS negotiation also tend to be processed very quickly. However, there have been exceptional cases in which these kinds of processing may have had a noticeable impact. In such instances, the best way to deal with that possibility is to have the server use a separate request handler for each connection that is established so that the process of reading, decoding, and enqueueing requests from one client cannot impact the server’s ability to do the same for other clients that may be sending requests at exactly the same time. In the unlikely event that the need arises in your environment, you can now use the request-handler-per-connection property in the connection handler configuration to cause the server to allocate a new request handler for every client connection that is established.
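For example, enabling this behavior for the LDAP connection handler with dsconfig might look something like the following (the handler name reflects the server’s default configuration):

    dsconfig set-connection-handler-prop \
         --handler-name "LDAP Connection Handler" \
         --set request-handler-per-connection:true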

Updates to the encrypt-file Tool

As its name implies, the encrypt-file tool can be used to encrypt (or decrypt) the contents of files, either using a definition from the server’s encryption settings database or with an explicitly provided passphrase. In many cases, if a file used by the server (or a command-line tool) is encrypted with an encryption settings definition, the server can detect that and automatically decrypt it as it reads the contents of that file.

If an administrator wishes to retire an encryption settings definition for some reason, and especially if they want to remove it from the encryption settings database, they need to ensure that it is no longer needed to decrypt any existing encrypted data. In the past, some customers have overlooked encrypted files when ensuring that a definition is no longer needed. To help avoid that, we’ve added two new arguments to the encrypt-file tool:

  • --find-encrypted-files {path} — This argument can be used to search for encrypted files below the specified path on the server filesystem. By default, it will find files that have been encrypted with any encryption settings definition or with a passphrase, but you can also provide the --encryption-settings-id argument to indicate that you only want it to find files encrypted with the specified definition.
  • --re-encrypt {path} — This argument can be used to re-encrypt an existing encrypted file, using either a different encryption settings definition or a new passphrase.

If you do plan to retire an existing encryption settings definition, then you should use the encrypt-file --find-encrypted-files command to identify any files that have been encrypted with that definition, and then use encrypt-file --re-encrypt to re-encrypt them with a different definition so that the server can still access them even if you remove the retired definition from the encryption settings database.
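A sketch of that retirement workflow appears below. The paths and the definition ID are placeholders, and the arguments for specifying the new definition to use during re-encryption are omitted; see the tool’s --help output for the full set:

    # Identify any files still encrypted with the definition being retired.
    encrypt-file --find-encrypted-files /path/to/server/root \
         --encryption-settings-id {definition-id}

    # Re-encrypt each reported file with a different definition.
    encrypt-file --re-encrypt /path/to/encrypted-file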

UnboundID LDAP SDK for Java 6.0.11

We have just released version 6.0.11 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository.

Note that this is the last release of the LDAP SDK that will offer support for Java 7. As of the next release (which is expected to have a version of 7.0.0), the LDAP SDK will only support Java 8 and later.

You can find the release notes for the 6.0.11 release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We updated the ldapsearch and ldapmodify command-line tools to provide better validation for the --proxyAs argument, which includes the proxied authorization v2 request control in the requests that they issue. Previously, they would accept any string as the authorization ID value, but they will now verify that it is a valid authorization ID using the form “dn:” followed by a valid DN or “u:” followed by a username.
  • We updated the Filter class so that the methods used to create substring filters are more user-friendly when the filter doesn’t contain all types of components. Previously, it expected a substring component to be null if that component wasn’t to be included in the request, and it would create an invalid filter if the component was provided as an empty string. It will now treat components provided as empty strings as if they had been null.
  • We updated the logic that the LDAP SDK uses to pare entries down to a specified set of attributes (including in the in-memory directory server and the ldifsearch command-line tool) to improve its behavior if it encounters an entry with a malformed attribute description (for example, one that contains characters that aren’t allowed). Previously, this would result in an internal error, but it will now make a best-effort attempt to handle the invalid name.
  • We updated the TimestampArgument class to allow it to accept timestamps in the ISO 8601 format described in RFC 3339 (e.g., 2023-11-30T01:02:03.456Z). Previously, it only accepted timestamps in the generalized time format (or a generalized time representation that didn’t include any time zone information, which was treated as the system’s local time zone).
  • We updated the JSONBuffer class to add an appendField method that can be used to append a generic field without knowing the value type. Previously, it only allowed you to append fields if you knew the type of the value.
  • We added new BinarySizeUnit and DecimalSizeUnit enums that can be used when dealing with a quantity of data, like the size of a file or the amount of information transferred over a network. Each of the enums supports a variety of units (bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, zettabytes, and yottabytes), but the BinarySizeUnit variant assumes that each subsequent unit is 1024 times greater than the previous (e.g., one kilobyte is treated as 1024 bytes), while DecimalSizeUnit assumes that each subsequent unit is 1000 times greater than the previous (e.g., one kilobyte is treated as 1000 bytes).
  • We updated the client-side support for invoking the LDIF export administrative task in the Ping Identity Directory Server to include support for activating one or more post-LDIF-export task processors, which can be used to perform additional processing after the data is successfully exported.