Soft Deletes in the Ping Identity Directory Server

As its name implies, the LDAP delete operation removes an entry from the directory server. Typically, this completely removes the entry from the server, but there may be times when you would prefer for the entry to be hidden from LDAP clients, while still available in the server for at least a period of time.

The Ping Identity Directory Server offers this capability in the form of soft deletes. A soft-deleted entry still exists in the server, but it is renamed so that the DN includes the entry’s entryUUID value, and a special ds-soft-delete-entry object class is added to hide the entry from most clients and to provide additional metadata about the soft delete operation (including the entry’s original DN, the time the entry was soft-deleted, and the authorization DN and IP address of the client that requested it).

Using soft deletes can offer a number of benefits. Some of them may include:

  • It makes it easier to resurrect an entry if it is removed in error. We also provide a simple way to undelete a soft-deleted entry to restore it to “regular entry” status.
  • It provides LDAP-accessible auditing information about the delete operation. Even though soft-deleted entries aren’t visible to most clients, we do provide ways for authorized clients to see them if they’re specifically looking for them.
  • You can use this to prevent reuse of values, even after an entry has been deleted. For example, say that you’re an email provider and you don’t ever want to allow an email address to be reused, even if the former owner has removed their account. The unique attribute plugin has support for either permitting or rejecting conflicts with soft-deleted entries.

There are two ways that you can perform soft deletes in the Ping Identity Directory Server: you can configure the server to automatically turn regular deletes matching a given set of criteria into soft deletes, or you can explicitly request them with the soft delete request control. But before you can do either one, you need to set a soft delete policy.

Configuring the Server’s Soft Delete Policy

A soft delete policy can be used to specify the conditions under which the server should automatically turn regular delete operations into soft deletes, and can also be used to indicate the conditions under which the server should automatically clean up soft-deleted entries.

There are two properties that can be used to specify the conditions under which the server should automatically turn regular deletes into soft deletes:

  • auto-soft-delete-connection-criteria — A reference to a connection criteria object that specifies the clients whose delete requests should automatically be turned into soft deletes. This criteria can include anything the server knows about the requester, including their identity (where their entry is in the DIT, the contents of their entry, their group memberships, etc.), the address of the client, whether the communication is secure, and the protocol they are using to communicate with the server.
  • auto-soft-delete-request-criteria — A reference to a request criteria object that specifies which delete requests should automatically be turned into soft deletes. This criteria can include anything the server knows about the delete request, including the location of the target entry in the DIT, the content of that entry, the groups in which that entry is a member, the controls included in the request, and the origin of the request (e.g., directly requested by a client, replicated from another server, initiated by a component within the server, etc.).

If neither of these properties has a value, then only delete requests that include the soft delete request control will be treated as soft deletes. If only one of them has a value, then all delete requests that match that criteria object (or that include the soft delete request control) will be treated as soft deletes. If both of them have values, then only delete requests that match both sets of criteria (or that include the soft delete request control) will be treated as soft deletes.

By default, soft-deleted entries will remain in the server forever (or until someone explicitly deletes them), but you can also configure the server to automatically delete them under certain conditions. The soft delete policy offers two properties that can be used to control this:

  • soft-delete-retention-time — The maximum length of time that soft-deleted entries should be retained in the server before they are eligible to be automatically removed.
  • soft-delete-retain-number-of-entries — The maximum number of soft-deleted entries that should be retained in the server.

If either or both of these properties is configured, then any soft-deleted entry that exceeds either limit becomes eligible for removal. If neither is configured, then soft-deleted entries won’t be automatically removed by the server.

To enable soft delete functionality in the server, you need to create a soft delete policy, and you also need to update the global configuration to make it the active policy. With no active soft delete policy, the server will not automatically turn any deletes into soft deletes, nor will it allow clients to use the soft-delete request control.

Example 1: Only Explicit Soft Deletes Without Automatic Cleanup

If you don’t want the server automatically turning regular deletes into soft deletes, but you do want to allow clients to use the soft delete request control, and if you don’t want the server to automatically clean up any soft-deleted entries, then you can just create a soft delete policy with the default settings and make that the active policy. You can do that with the following configuration changes:

dsconfig create-soft-delete-policy \
     --policy-name "Explicit Soft Delete Requests Without Cleanup"

dsconfig set-global-configuration-prop \
     --set "soft-delete-policy:Explicit Soft Delete Requests Without Cleanup"

Example 2: Automatic Soft Deletes With Automatic Cleanup

If you want the server to automatically turn all delete operations into soft deletes, and to keep soft-deleted entries around for 30 days, you can do that with the following changes:

dsconfig create-request-criteria \
     --criteria-name "All Delete Requests" \
     --type simple \
     --set operation-type:delete

dsconfig create-soft-delete-policy \
     --policy-name "Automatic Soft Deletes" \
     --set "auto-soft-delete-request-criteria:All Delete Requests" \
     --set "soft-delete-retention-time:30 d"

dsconfig set-global-configuration-prop \
     --set "soft-delete-policy:Automatic Soft Deletes"

The Soft and Hard Delete Controls

The Soft Delete Request Control

If you want to explicitly control which delete requests get turned into soft deletes, then you can include the soft delete request control in the delete request. This request control has an OID of “1.3.6.1.4.1.30221.2.5.20”, and it can optionally have a value. If there is a value, then it should have the following ASN.1 encoding:

SoftDeleteRequestValue ::= SEQUENCE {
     returnSoftDeleteResponse     [0] BOOLEAN DEFAULT TRUE,
     ... }

The Soft Delete Response Control

If the request control doesn’t have a value, or if it has a value with the returnSoftDeleteResponse flag set to true, then the delete result may include a soft delete response control with an OID of “1.3.6.1.4.1.30221.2.5.21” and a value that is simply the string representation of the DN for the soft-deleted entry. The soft-deleted entry DN will be the same as the original DN, but with the RDN updated to include the entry’s entryUUID attribute value. For example, if the entry “uid=jdoe,ou=People,dc=example,dc=com” has an entryUUID value of “53e84e32-4be9-4ed6-b489-88d8bea4bdcd”, then the resulting DN for the soft-deleted entry would be “entryUUID=53e84e32-4be9-4ed6-b489-88d8bea4bdcd+uid=jdoe,ou=People,dc=example,dc=com”. The soft-deleted entry will also include the ds-soft-delete-entry object class, and it will include a ds-soft-delete-from-dn attribute whose value is the DN of the original entry and a ds-soft-delete-timestamp attribute whose value reflects the time that the soft delete operation was performed.

The Hard Delete Request Control

We also offer a hard delete request control, which can be used to explicitly indicate that an entry should be completely removed, even if the server would have otherwise automatically turned the delete operation into a soft delete. The hard delete request control has an OID of “1.3.6.1.4.1.30221.2.5.22” and no value. There is no corresponding hard delete response control.

Using the Soft and Hard Delete Controls With ldapmodify

The ldapmodify command-line tool offers support for the soft delete request control via the “--softDelete” argument, and for the hard delete request control via the “--hardDelete” argument.

For example, the following can be used to remove the “uid=jdoe,ou=People,dc=example,dc=com” entry using a soft delete operation:

$ bin/ldapmodify --hostname ldap.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --bindDN "uid=admin,dc=example,dc=com" \
     --softDelete
Enter the bind password:

# Successfully connected to ldap.example.com:636.

dn: uid=jdoe,ou=People,dc=example,dc=com
changetype: delete

# Deleting entry uid=jdoe,ou=People,dc=example,dc=com ...
# Result Code:  0 (success)
# Soft Delete Response Control:
#      OID:  1.3.6.1.4.1.30221.2.5.21
#      Soft-Deleted Entry DN:  entryUUID=53e84e32-4be9-4ed6-b489-88d8bea4bdcd+uid=jdoe,ou=People,dc=example,dc=com

Using the Soft and Hard Delete Controls With the UnboundID LDAP SDK for Java

The UnboundID LDAP SDK for Java supports the soft delete controls via the SoftDeleteRequestControl and the SoftDeleteResponseControl classes. It supports the hard delete request control via the HardDeleteRequestControl class. The class-level Javadoc documentation for the SoftDeleteRequestControl includes an example that demonstrates the use of these controls, and the related undelete and soft-deleted entry access request controls.
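If you just want a quick feel for what that looks like in code, here’s a minimal sketch (not the Javadoc example) that soft-deletes the entry used throughout this post and prints the soft-deleted entry DN from the response control, then hard-deletes a second, purely illustrative entry. It assumes an already-established, authenticated LDAPConnection.

import com.unboundid.ldap.sdk.DeleteRequest;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.LDAPResult;
import com.unboundid.ldap.sdk.unboundidds.controls.HardDeleteRequestControl;
import com.unboundid.ldap.sdk.unboundidds.controls.SoftDeleteRequestControl;
import com.unboundid.ldap.sdk.unboundidds.controls.SoftDeleteResponseControl;

public final class SoftDeleteExample
{
  // Soft-delete one entry and hard-delete another, printing the DN of the
  // soft-deleted entry returned in the soft delete response control.
  public static void deleteEntries(final LDAPConnection connection)
         throws LDAPException
  {
    final DeleteRequest softDeleteRequest =
         new DeleteRequest("uid=jdoe,ou=People,dc=example,dc=com");
    softDeleteRequest.addControl(new SoftDeleteRequestControl());

    final LDAPResult softDeleteResult = connection.delete(softDeleteRequest);
    final SoftDeleteResponseControl responseControl =
         SoftDeleteResponseControl.get(softDeleteResult);
    if (responseControl != null)
    {
      System.out.println("Soft-deleted entry DN: " +
           responseControl.getSoftDeletedEntryDN());
    }

    // The hard delete request control forces a regular delete even if the
    // server would otherwise have turned it into a soft delete.  The DN here
    // is just an illustrative placeholder.
    final DeleteRequest hardDeleteRequest =
         new DeleteRequest("uid=another.user,ou=People,dc=example,dc=com");
    hardDeleteRequest.addControl(new HardDeleteRequestControl());
    connection.delete(hardDeleteRequest);
  }
}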

The Soft-Deleted Entry Access Request Control

There wouldn’t be much benefit to having soft-deleted entries if we didn’t provide a way to get access to them. By default, soft-deleted entries are hidden from clients, so they won’t be included in search results. However, there are two ways that you can access soft-deleted entries:

  • If you know the soft-deleted entry’s DN, then you can retrieve that entry with a search using the baseObject scope.
  • If you issue a search with the soft-deleted entry access request control, then soft-deleted entries can be included in the search results.

The latter option is much more useful than the former, because it’s hard to know a soft-deleted entry’s DN unless you knew the entry’s entryUUID value before it was deleted or you obtained the soft-deleted entry DN from the soft delete response control.

The soft-deleted entry access request control has an OID of “1.3.6.1.4.1.30221.2.5.24”. It may optionally have a value, and if it does, then that value must have the following ASN.1 encoding:

SoftDeleteAccessRequestValue ::= SEQUENCE {
     includeNonSoftDeletedEntries     [0] BOOLEAN DEFAULT TRUE,
     returnEntriesInUndeletedForm     [1] BOOLEAN DEFAULT FALSE,
     ... }

The includeNonSoftDeletedEntries element of the request control indicates whether the server should include non-soft-deleted entries in the search results. If this is true (which is the default), then the set of entries returned may include both soft-deleted and non-soft-deleted entries. If this is false, then only soft-deleted entries will be returned.

The returnEntriesInUndeletedForm element of the request control indicates whether matching soft-deleted entries should be returned in their undeleted form (if true) rather than their soft-deleted form (if false). The main difference between these forms is that the undeleted form will have the entry’s original DN rather than the soft-deleted DN that includes the entryUUID attribute, and will not include the ds-soft-delete-entry object class or the ds-soft-delete-from-dn or ds-soft-delete-timestamp attributes.

If the request control doesn’t have a value, then it will behave as if you provided a value with includeNonSoftDeletedEntries set to true and returnEntriesInUndeletedForm set to false.

Using the Soft-Deleted Entry Access Request Control With ldapsearch

The ldapsearch command-line tool offers support for the soft-deleted entry access request control through the --includeSoftDeletedEntries argument. This argument must take a value, and that value should be one of the following:

  • with-non-deleted-entries — Indicates that both soft-deleted and non-soft-deleted entries should be included in the search results. Soft-deleted entries will be returned in their soft-deleted form.
  • without-non-deleted-entries — Indicates that only soft-deleted entries should be returned, in their soft-deleted form. Non-soft-deleted entries will not be returned.
  • deleted-entries-in-undeleted-form — Indicates that only soft-deleted entries should be returned, but they should be returned in their undeleted form.

For example, if you wanted to search for the soft-deleted entry with a uid value of jdoe, you could use a command like:

$ bin/ldapsearch --hostname ldap.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --bindDN "uid=admin,dc=example,dc=com" \
     --baseDN "dc=example,dc=com" \
     --scope sub \
     --requestedAttribute "*" \
     --requestedAttribute "+" \
     --includeSoftDeletedEntries without-non-deleted-entries \
     "(uid=jdoe)"
Enter the bind password:

# Soft Delete Response Control:
#      OID:  1.3.6.1.4.1.30221.2.5.21
#      Soft-Deleted Entry DN:  entryUUID=53e84e32-4be9-4ed6-b489-88d8bea4bdcd+uid=jdoe,ou=People,dc=example,dc=com
dn: entryUUID=53e84e32-4be9-4ed6-b489-88d8bea4bdcd+uid=jdoe,ou=People,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: ds-soft-delete-entry
sn: Doe
cn: John Doe
givenName: John
uid: jdoe
createTimestamp: 20190227170715.814Z
creatorsName: cn=Directory Manager,cn=Root DNs,cn=config
modifyTimestamp: 20190227170715.814Z
modifiersName: cn=Directory Manager,cn=Root DNs,cn=config
entryUUID: 53e84e32-4be9-4ed6-b489-88d8bea4bdcd
ds-soft-delete-from-dn: uid=jdoe,ou=People,dc=example,dc=com
ds-soft-delete-timestamp: 20190227170734.870Z
ds-entry-checksum: 2440630426
subschemaSubentry: cn=schema

# Result Code:  0 (success)
# Number of Entries Returned:  1

Using the Soft-Deleted Entry Access Request Control With the UnboundID LDAP SDK for Java

The UnboundID LDAP SDK for Java provides support for the soft-deleted entry access request control through the SoftDeletedEntryAccessRequestControl class. The class-level Javadoc documentation for the SoftDeleteRequestControl class provides an example that demonstrates how to use this control (along with the soft delete, hard delete, and undelete request controls).
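If you’d rather see it inline, here’s a minimal sketch (not the Javadoc example) that performs the programmatic equivalent of the ldapsearch command above. It assumes an already-authenticated LDAPConnection, and the three-argument SoftDeletedEntryAccessRequestControl constructor order noted in the comments should be confirmed against the Javadoc.

import com.unboundid.ldap.sdk.Filter;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchResultEntry;
import com.unboundid.ldap.sdk.SearchScope;
import com.unboundid.ldap.sdk.unboundidds.controls.SoftDeletedEntryAccessRequestControl;

public final class SoftDeletedEntryAccessExample
{
  // Search for soft-deleted entries matching (uid=jdoe), excluding
  // non-soft-deleted entries and returning matches in soft-deleted form.
  public static void findSoftDeletedEntries(final LDAPConnection connection)
         throws LDAPException
  {
    final SearchRequest searchRequest = new SearchRequest(
         "dc=example,dc=com", SearchScope.SUB,
         Filter.createEqualityFilter("uid", "jdoe"), "*", "+");
    searchRequest.addControl(new SoftDeletedEntryAccessRequestControl(
         true,     // isCritical
         false,    // includeNonSoftDeletedEntries
         false));  // returnEntriesInUndeletedForm

    final SearchResult searchResult = connection.search(searchRequest);
    for (final SearchResultEntry entry : searchResult.getSearchEntries())
    {
      System.out.println("Soft-deleted entry: " + entry.getDN());
    }
  }
}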

The Undelete Request Control

Support for soft deletes would also not be very useful if we didn’t provide a way to restore a soft-deleted entry back to being a regular, non-soft-deleted entry. And we do offer that ability through the undelete request control. If you include this control in a specially crafted add request, then the server will restore the target entry back to its former glory. The undelete request control has an OID of “1.3.6.1.4.1.30221.2.5.23”, and it does not need a value. There is no corresponding response control.

Note that I mentioned a “specially crafted add request” in that last paragraph. The server handles the undelete operation as an add operation, but if the undelete request control is present, then the contents of that add request will be a little different from when you’re adding an entry from scratch. Here’s what you need to include:

  • The DN included in the add request should be the DN that you want the undeleted entry to have. If you want this to be the entry’s original DN, then you could use the value of the soft-deleted entry’s ds-soft-delete-from-dn attribute, but you can choose something else if you want the restored entry to have a different DN.
  • The add request must include a ds-undelete-from-dn attribute whose value is the DN of the soft-deleted entry that you want to undelete.

Using the Undelete Request Control With ldapmodify

The ldapmodify tool supports the use of the undelete request control through the “--allowUndelete” argument. If you add this argument, then the undelete request control will automatically be included in any add requests that it sends. For example:

$ bin/ldapmodify --hostname ldap.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --bindDN "uid=admin,dc=example,dc=com" \
     --allowUndelete
Enter the bind password:

# Successfully connected to ldap.example.com:636.

dn: uid=jdoe,ou=People,dc=example,dc=com
changetype: add
ds-undelete-from-dn: entryUUID=53e84e32-4be9-4ed6-b489-88d8bea4bdcd+uid=jdoe,ou=People,dc=example,dc=com

# Adding entry uid=jdoe,ou=People,dc=example,dc=com ...
# Result Code:  0 (success)

Using the Undelete Request Control with the UnboundID LDAP SDK for Java

The UnboundID LDAP SDK for Java supports the undelete request control via the UndeleteRequestControl class. This class even provides a helpful createUndeleteRequest convenience method that allows you to construct an appropriate add request when provided with the DN that you want the undeleted entry to have and the DN of the soft-deleted entry that you want to undelete. As noted above, the class-level Javadoc documentation for the SoftDeleteRequestControl class provides an example that demonstrates how to use all of the controls related to soft-delete processing.
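For example, here’s a minimal sketch (again assuming an already-authenticated LDAPConnection) that uses createUndeleteRequest to restore the soft-deleted entry from the earlier examples to its original DN:

import com.unboundid.ldap.sdk.AddRequest;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.LDAPResult;
import com.unboundid.ldap.sdk.unboundidds.controls.UndeleteRequestControl;

public final class UndeleteExample
{
  // Build an add request that includes the undelete request control and the
  // ds-undelete-from-dn attribute, then send it to restore the entry.
  public static void undelete(final LDAPConnection connection)
         throws LDAPException
  {
    final AddRequest undeleteRequest =
         UndeleteRequestControl.createUndeleteRequest(
              "uid=jdoe,ou=People,dc=example,dc=com",
              "entryUUID=53e84e32-4be9-4ed6-b489-88d8bea4bdcd+uid=jdoe," +
                   "ou=People,dc=example,dc=com");

    final LDAPResult undeleteResult = connection.add(undeleteRequest);
    System.out.println("Undelete result: " + undeleteResult.getResultCode());
  }
}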

The Multi-Update Extended Operation in the Ping Identity Directory Server

In an earlier post, I mentioned that while you can process multiple searches in parallel to speed up an application that needs multiple pieces of information to do its work, that generally only works if those searches are independent. If the searches are related (meaning that you need the results of one to construct another request), then this approach won’t work. Fortunately, the Ping Identity Directory Server offers an LDAP join control that allows you to perform a search that not only retrieves the entries matching the search criteria, but that also joins those entries with related entries as specified by the join rule.

Wouldn’t it be nice if there were something similar for write operations? While LDAP allows you to send multiple write requests concurrently on the same or different connections, you can’t really do that if there are dependencies between those write operations. For example, let’s say that you want to add an entry along with some subordinate entries. You can’t add a child before creating its parent, and if you try to send them in parallel, then maybe it’ll work and maybe it won’t. But here again, the Ping Identity Directory Server has you covered, this time in the form of the multi-update extended operation.

As its name implies, the multi-update operation allows you to send multiple updates (any combination of add, delete, modify, modify DN, and password modify extended operations) in a single request that will be processed in the order that you provide them. At the very least, this allows you to reduce the amount of time required to process those operations because there’s only one round trip between the client and the server. But it also allows you to decide what happens if an error occurs while processing any of those updates, and here you have three options:

  • You can have the processing occur atomically so that no changes will be applied unless all of them are processed successfully, and so that no client will be able to see the data in an intermediate state with only some of the changes completed. You can get this same benefit from LDAP transactions as described in RFC 5805 (which the Ping Identity Directory Server also supports), but the multi-update operation is more efficient because there’s only a single request and response, whereas LDAP transactions require a separate network round trip for each of the changes, plus additional round trips when starting and ending the transaction.
  • You can have processing stop after the first error. Any writes that succeeded before the error will be preserved, but any changes in the multi-update request after the one that caused the error will be ignored. Clients may be able to see the data in an intermediate state while these operations are being processed.
  • You can have processing continue until all of the operations have been attempted. Any of the changes that are successful will remain in place, and again, clients may be able to see the data in an intermediate state while they are being processed.

The Multi-Update Extended Request

The multi-update extended request has an OID of 1.3.6.1.4.1.30221.2.6.17 and a value with the following ASN.1 encoding:

MultiUpdateRequestValue ::= SEQUENCE {
     errorBehavior     ENUMERATED {
          atomic              (0),
          quitOnError         (1),
          continueOnError     (2),
          ... },
     requests          SEQUENCE OF SEQUENCE {
          updateOp     CHOICE {
               modifyRequest     ModifyRequest,
               addRequest        AddRequest,
               delRequest        DelRequest,
               modDNRequest      ModifyDNRequest,
               extendedReq       ExtendedRequest,
               ... },
          controls     [0] Controls OPTIONAL,
          ... },
     ... }

As you might expect, the request just specifies the behavior to use in case an error is encountered during processing and the set of requests to be processed. Note that while the ASN.1 definition above does allow for any kind of extended request to be included, the only one that the Ping Identity Directory Server currently allows in a multi-update request is the password modify extended request.

The UnboundID LDAP SDK for Java offers support for the multi-update extended request through the MultiUpdateExtendedRequest class, with an assist from the MultiUpdateErrorBehavior enum. If you want to use the multi-update extended operation through some other API, you’ll need to encode the request for yourself.

The Multi-Update Extended Result

The multi-update extended result has an OID of 1.3.6.1.4.1.30221.2.6.18 and a value with the following encoding:

MultiUpdateResultValue ::= SEQUENCE {
     changesApplied     ENUMERATED {
          none        (0),
          all         (1),
          partial     (2),
     ... },
     responses     SEQUENCE OF SEQUENCE {
          responseOp     CHOICE {
               modifyResponse     ModifyResponse,
               addResponse        AddResponse,
               delResponse        DelResponse,
               modDNResponse      ModifyDNResponse,
               extendedResp       ExtendedResponse,
               ... },
          controls       [0] Controls OPTIONAL,
          ... },
     ... }

There are two components to the extended result value:

  • An indicator as to whether none, all, or some of the changes were applied.
  • The results for all of the operations that were attempted. The results will be listed in the same order as in the request, and each operation result may optionally include the response controls for that operation.

If only a portion of the operations were attempted (for example, because the server stopped processing the multi-update operation after an error was encountered while processing one of the changes and did not attempt any of the others after that), then there may be fewer results than there were requests.

The UnboundID LDAP SDK for Java offers support for the multi-update extended result through the MultiUpdateExtendedResult class and the MultiUpdateChangesApplied enum. To use this extended operation in another API, you’ll need to decode the result value on your own.
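To make that a bit more concrete, here’s a minimal sketch that atomically adds a parent entry and a child entry beneath it in a single multi-update operation. It assumes an already-authenticated LDAPConnection, the entry content is purely illustrative, and the exact constructor signatures should be confirmed against the MultiUpdateExtendedRequest and MultiUpdateExtendedResult Javadoc.

import com.unboundid.ldap.sdk.AddRequest;
import com.unboundid.ldap.sdk.ExtendedResult;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.LDAPRequest;
import com.unboundid.ldap.sdk.unboundidds.extensions.MultiUpdateChangesApplied;
import com.unboundid.ldap.sdk.unboundidds.extensions.MultiUpdateErrorBehavior;
import com.unboundid.ldap.sdk.unboundidds.extensions.MultiUpdateExtendedRequest;
import com.unboundid.ldap.sdk.unboundidds.extensions.MultiUpdateExtendedResult;
import com.unboundid.ldif.LDIFException;

public final class MultiUpdateExample
{
  // Add a parent entry and a child entry beneath it as a single atomic
  // multi-update operation.  Neither entry is added unless both succeed.
  public static void addParentAndChild(final LDAPConnection connection)
         throws LDAPException, LDIFException
  {
    final AddRequest addParent = new AddRequest(
         "dn: ou=New Org,dc=example,dc=com",
         "objectClass: top",
         "objectClass: organizationalUnit",
         "ou: New Org");
    final AddRequest addChild = new AddRequest(
         "dn: uid=new.user,ou=New Org,dc=example,dc=com",
         "objectClass: top",
         "objectClass: person",
         "objectClass: organizationalPerson",
         "objectClass: inetOrgPerson",
         "uid: new.user",
         "givenName: New",
         "sn: User",
         "cn: New User");

    final MultiUpdateExtendedRequest multiUpdateRequest =
         new MultiUpdateExtendedRequest(MultiUpdateErrorBehavior.ATOMIC,
              new LDAPRequest[] { addParent, addChild });

    // Send the extended request and decode the multi-update result value.
    final ExtendedResult extendedResult =
         connection.processExtendedOperation(multiUpdateRequest);
    final MultiUpdateExtendedResult multiUpdateResult =
         new MultiUpdateExtendedResult(extendedResult);

    if (multiUpdateResult.getChangesApplied() == MultiUpdateChangesApplied.ALL)
    {
      System.out.println("Both entries were added successfully.");
    }
  }
}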

Supported Controls

There are two categories of controls that may be used in conjunction with the multi-update extended request: those that can be attached to the multi-update extended operation itself, and those that can be attached to the individual operation requests inside the multi-update request value.

The controls that may be attached to the multi-update extended operation itself are:

  • Get Backend Set ID (only for atomic requests)
  • Intermediate Client
  • Proxied Authorization v1
  • Proxied Authorization v2
  • Route To Backend Server (only for atomic requests)
  • Transaction Settings (only for atomic requests)

The controls that may be attached to operation requests inside the multi-update request value are:

  • Account Usable
  • Assertion
  • Intermediate Client
  • Get Backend Set ID (only for non-atomic requests)
  • Hard Delete
  • Manage DSA IT
  • Password Policy
  • Post-Read
  • Pre-Read
  • Replication Repair
  • Route To Backend Server (only for non-atomic requests)
  • Soft Delete
  • Subtree Delete
  • Undelete

An Example Using the UnboundID LDAP SDK for Java

The ldapmodify tool provided as part of the UnboundID LDAP SDK for Java already includes support for the multi-update extended operation (via the --multiUpdateErrorBehavior argument), and you can find the code for that tool at https://github.com/pingidentity/ldapsdk/blob/master/src/com/unboundid/ldap/sdk/unboundidds/tools/LDAPModify.java.

However, that version of ldapmodify has a lot of features, and the multi-update support is only a tiny portion of it. For the sake of illustration, I wrote a much simpler command-line tool that provides a clearer demonstration of the multi-update operation. It operates much like ldapmodify, but it only reads the changes from an LDIF file, and it sends them to the server all at once through a multi-update operation. You can find that example at https://github.com/dirmgr/blog-example-source-code/tree/master/multi-update.

The LDAP Join Control in the Ping Identity Directory Server

LDAP is an asynchronous protocol, and most directory servers are very good when it comes to handling multiple requests simultaneously, whether on the same connection or multiple connections. If an application needs to issue multiple independent searches in the course of performing some function, and if it wants to get the results in as little time as possible, then it can issue those searches concurrently rather than sequentially. This means that the application should have all the results it needs in the time required to process the longest operation, as opposed to the sum of the times required to process those operations (not to mention the network round-trip time, which gets magnified when issuing requests in series rather than in parallel).

However, you can’t use this approach if there are dependencies between the searches that you want to perform. For example, let’s say that when an employee logs in, you want to show them a portion of their organizational chart that includes their manager and their peers (that is, the other employees who share the same manager). Normally, you’d need to perform three searches:

  • One search to retrieve the entry for the user who is authenticating and get the manager attribute
  • One to retrieve the entry for the user referenced by the employee’s manager attribute
  • One to retrieve the entries of all users with that same manager value

While you could potentially issue the second and third searches concurrently, you have to wait for the results of the first search to have the information you need for the other two.

In the Ping Identity Directory Server, we provide support for a feature that can allow you to do all of this with a single request: the LDAP join control. As its name implies, it offers an LDAP take on the SQL join you can perform in a relational database. If you issue a search request that includes the join request control, then each entry that matches the search criteria will be joined with “related” entries (in accordance with the criteria in the join request control).

The Join Request Control

The join request control describes the relationship that you want to use to identify other entries that are in some way related to the entries matching your search request. The request control has an OID of 1.3.6.1.4.1.30221.2.5.9 and a value with the following ASN.1 encoding:

LDAPJoin ::= SEQUENCE {
     joinRule         JoinRule,
     baseObject       CHOICE {
          useSearchBaseDN      [0] NULL,
          useSourceEntryDN     [1] NULL,
          useCustomBaseDN      [2] LDAPDN,
          ... },
     scope            [0] ENUMERATED {
          baseObject             (0),
          singleLevel            (1),
          wholeSubtree           (2),
          subordinateSubtree     (3),
          ... } OPTIONAL,
     derefAliases     [1] ENUMERATED {
          neverDerefAliases       (0),
          derefInSearching        (1),
          derefFindingBaseObj     (2),
          derefAlways             (3),
          ... } OPTIONAL,
     sizeLimit        [2] INTEGER (0 .. maxInt) OPTIONAL,
     filter           [3] Filter OPTIONAL,
     attributes       [4] AttributeSelection OPTIONAL,
     requireMatch     [5] BOOLEAN DEFAULT FALSE,
     nestedJoin       [6] LDAPJoin OPTIONAL,
     ... }

JoinRule ::= CHOICE {
     and               [0] SET (1 .. MAX) of JoinRule,
     or                [1] SET (1 .. MAX) of JoinRule,
     dnJoin            [2] AttributeDescription,
     equalityJoin      [3] JoinRuleAssertion,
     containsJoin      [4] JoinRuleAssertion,
     reverseDNJoin     [5] AttributeDescription,
     ... }

JoinRuleAssertion ::= SEQUENCE {
     sourceAttribute     AttributeDescription,
     targetAttribute     AttributeDescription,
     matchAll            BOOLEAN DEFAULT FALSE }

Most of the fields of the join request should be familiar because they’re similar to the fields of an ordinary search request. But I’ll go ahead and call them all out anyway. Also note that the Javadoc for the JoinRequestControl, JoinRequestValue, JoinRule, and JoinBaseDN classes in the UnboundID LDAP SDK for Java may provide additional information.

The Join Rule

The first, and probably most important, field of a join request control is the join rule. This is used to specify the types of entries that should be joined with the corresponding search result entry. We currently offer six types of join rules (and may add support for more in the future); each is described below, and a brief SDK sketch following this list shows how they can be constructed:

  • AND — An AND join rule encapsulates a set of one or more other join rules and will only join a search result entry with entries that match the criteria for all of the encapsulated join rules.
  • OR — An OR join rule encapsulates a set of one or more other join rules and will only join a search result entry with entries that match the criteria for at least one of the encapsulated join rules.
  • DN Join — A DN join rule will join a search result entry with other entries whose DNs are contained in a specified attribute in the search result entry. For example, you could use a DN join rule to join a groupOfNames entry with the entries whose DNs are contained in the member attribute of the group. Or you could join an employee’s entry with the entry of their boss via the manager attribute in the employee’s entry.
  • Equality Join — An equality join rule will join a search result entry with other entries that share a common attribute value. For example, say that a mobile phone service provider has an entry for each account, and an entry for each device linked to an account. If the account entry has an accountNumber attribute, and the devices associated with that account also contain that same accountNumber value, you could use an equality join to associate the device entries with the account. Also note that the value that the joined entries have in common doesn’t necessarily have to be in the same attribute type (for example, it could be in the accountNumber attribute of an account entry and the deviceAccountNumber attribute of a device entry).
  • Contains Join — A contains join rule is much like an equality join rule, except that the server uses a substring match (or, more precisely, a subAny match) instead of an equality match when identifying the entries to join with the search result entry. That is, the value of a specified attribute in the search result entry must be equal to or a substring of a value in a specified attribute in the joined entries.
  • Reverse DN Join — A reverse DN join rule will join a search result entry with entries that contain the DN of that search result entry in the value of a specified attribute. For example, you could use a reverse DN join to retrieve the entries for a user’s direct reports via the manager attribute.
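In the UnboundID LDAP SDK for Java, join rules are constructed through static factory methods on the JoinRule class. The following sketch shows how the rule types described above might be built (the accountNumber and deviceAccountNumber attributes are just the hypothetical attributes from the equality join example):

import com.unboundid.ldap.sdk.unboundidds.controls.JoinRule;

public final class JoinRuleExamples
{
  public static void buildJoinRules()
  {
    // DN join: follow the manager attribute of the search result entry to
    // find the manager's entry.
    final JoinRule managerJoin = JoinRule.createDNJoin("manager");

    // Reverse DN join: find entries whose manager attribute points back at
    // the search result entry (i.e., the user's direct reports).
    final JoinRule directReportsJoin = JoinRule.createReverseDNJoin("manager");

    // Equality join: match the account entry's accountNumber value against
    // the deviceAccountNumber attribute of device entries.
    final JoinRule deviceJoin = JoinRule.createEqualityJoin(
         "accountNumber", "deviceAccountNumber", false);

    // AND and OR join rules combine other join rules.
    final JoinRule managerOrReports =
         JoinRule.createORRule(managerJoin, directReportsJoin);
  }
}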

The Join Base DN

The base DN field of a join request is expressed a little bit differently than the base DN from a search request. In a search request, you must always specify the topmost entry in the subtree containing the entries you’re interested in retrieving. The base DN field of a join request has the same purpose, but there are three different ways that you can indicate what that base DN should be:

  • You can indicate that the base DN from the search request should also be the base DN used when finding entries to be joined with each search result entry.
  • You can indicate that the DN of each search result entry should be used as the base DN used when finding entries to be joined with that search result entry.
  • You can explicitly specify the base DN that you want to use for the join request.

The Join Scope

The scope field of a join request has basically the same meaning as the scope field of a search request, except that it’s relative to the join base DN rather than the search base DN. Those scopes are:

  • baseObject — This indicates that only the entry specified by the join base DN may be joined with the search result entry. None of its subordinates will be included.
  • singleLevel — This indicates that only the entries that are the immediate subordinates of the entry specified by the join base DN may be joined with the search result entry. The join base entry itself will not be included, nor will entries more than one level below the join base entry.
  • wholeSubtree — This indicates that the join base entry and all of its subordinates, to any depth, may be joined with the search result entry.
  • subordinateSubtree — This indicates that all entries below the join base entry, to any depth, may be joined with the search result entry. The join base entry itself will not be included.

Note that the scope element of a join request control is optional, and if it is not provided, then the scope from the search request will be used.

The Join Alias Dereferencing Policy

The alias dereferencing policy in the join request control has the same meaning as in the search request itself. It tells the server how it should treat any aliases that are encountered during join processing. The allowed values include:

  • neverDerefAliases — Indicates that the server should not attempt to dereference any aliases encountered during join processing.
  • derefInSearching — Indicates that the server should attempt to dereference any aliases encountered below the join base DN, but not the join base DN itself if it happens to be an alias.
  • derefFindingBaseObj — Indicates that the server should attempt to dereference the join base DN if it happens to be an alias, but not any aliases encountered below that entry.
  • derefAlways — Indicates that the server should attempt to dereference any aliases it encounters during join processing.

As with the join scope, this is an optional element in a join request control. If you leave it out, the server will use the same policy as specified in the search request.

The Join Size Limit

This specifies the maximum number of entries that should be joined with each search result entry. If a search result entry would have been joined with more entries than this limit allows, then the join result control associated with that entry will include a “size limit exceeded” join result code and will not include any of the joined entries.

Note that the server may impose a size limit that is lower than the one requested by the client. In particular, the effective size limit cannot be larger than the smallest of the requester’s maximum search size limit, the maximum size limit imposed by the requester’s client connection policy, and the maximum join request size limit defined in the global configuration.

This is an optional element, and if it is not specified, then the size limit from the search request will be used as the size limit for the join processing.

On a related note, the join request control does not include an element to specify a time limit. The time limit from the search request applies to the entire set of processing for the request.

The Join Filter

This is an optional additional search filter that will be required to match any entry for it to be joined with a search result entry. By default, all entries within the scope of the join request that match the criteria specified by the join rule will be joined with the search result entry. If an additional filter is specified, then entries will also have to match that filter to be included in the join.

The Join Attributes

This specifies the set of attributes that should be included in the entries that are joined with a search result entry. This works in the same way as the set of requested attributes for the search request itself, and it can include special tokens like “*” (to indicate that all user attributes should be included), “+” (to indicate that all operational attributes should be included), and “@” followed by an object class name (to indicate that all attributes associated with that object class should be included). If this element is missing or empty, then the default behavior will be to return all user attributes.

Note that the set of attributes that will actually be included in joined entries may be less than the set of requested attributes. For example, some attributes may be excluded because the requester does not have access control permission to retrieve them, or because returning them would violate a sensitive attribute constraint.

The Require Match Flag

This indicates whether a search result entry must be joined with at least one other entry for it to be included in the search results. If this is set to true and there are no entries that match the join criteria for a given search result entry, then that search result entry will not be returned to the client. If this is false, or if it is omitted from the join request control, then the search result entry will still be returned.

The Nested Join Element

The nested join element allows you to extend the join processing to more levels. Not only can you join each search result entry with a related set of entries, but you can also join each of those joined entries with even more related entries based on another set of join criteria. Note that if there is a nested join element, then each joined entry effectively becomes the search result entry for the next level of the join. And if you want to have more than two levels of joins, nested joins can include their own nested joins, but you probably don’t want to go too deep because it has the potential to make the join result really big.

For example, in the scenario we described above where you want to join an employee to their manager and peers, the search request could be used to find the employee’s entry, and you could use a DN join to join it with their manager’s entry (based on the manager attribute in the employee’s entry), and then you could use a reverse DN join to join the manager with their direct reports (also by the manager attribute in their entries).

The Join Result Control

If the search request includes a join request control, then each search result entry will include a join result control that provides information about the join processing for that entry. The join result control has an OID of 1.3.6.1.4.1.30221.2.5.9 and a value with the following encoding:

JoinResult ::= SEQUENCE {
     COMPONENTS OF LDAPResult,
     entries     [4] SEQUENCE OF JoinedEntry }

JoinedEntry ::= SEQUENCE {
     objectName            LDAPDN,
     attributes            PartialAttributeList,
     nestedJoinResults     SEQUENCE OF JoinedEntry OPTIONAL }

So basically, the join result control contains a result code, an optional diagnostic message, an optional matched DN, an optional set of referral URLs, and a list of the entries that were joined with the search result entry. And if the join request included a nested join, then each joined entry can have its own set of joined entries.

See the JoinResultControl and JoinedEntry classes in the UnboundID LDAP SDK for Java for more information.
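As a quick illustration of how these classes fit together, here’s a rough sketch of a search that uses a DN join on the manager attribute and then reads the joined entries from each entry’s join result control. It assumes an already-authenticated LDAPConnection, and the JoinRequestValue constructor argument order should be confirmed against its Javadoc.

import com.unboundid.ldap.sdk.Filter;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchResultEntry;
import com.unboundid.ldap.sdk.SearchScope;
import com.unboundid.ldap.sdk.unboundidds.controls.JoinBaseDN;
import com.unboundid.ldap.sdk.unboundidds.controls.JoinRequestControl;
import com.unboundid.ldap.sdk.unboundidds.controls.JoinRequestValue;
import com.unboundid.ldap.sdk.unboundidds.controls.JoinResultControl;
import com.unboundid.ldap.sdk.unboundidds.controls.JoinRule;
import com.unboundid.ldap.sdk.unboundidds.controls.JoinedEntry;

public final class JoinSearchExample
{
  // Find an employee's entry and join it with their manager's entry via a
  // DN join on the manager attribute.
  public static void findEmployeeAndManager(final LDAPConnection connection)
         throws LDAPException
  {
    final JoinRequestValue joinValue = new JoinRequestValue(
         JoinRule.createDNJoin("manager"),
         JoinBaseDN.createUseSearchBaseDN(),
         SearchScope.SUB,                           // join scope
         null,                                      // deref policy from search
         null,                                      // size limit from search
         null,                                      // no additional join filter
         new String[] { "givenName", "sn", "mail" },
         false,                                     // requireMatch
         null);                                     // no nested join

    final SearchRequest searchRequest = new SearchRequest(
         "dc=example,dc=com", SearchScope.SUB,
         Filter.createEqualityFilter("uid", "ernest.employee"),
         "givenName", "sn", "mail");
    searchRequest.addControl(new JoinRequestControl(joinValue));

    final SearchResult searchResult = connection.search(searchRequest);
    for (final SearchResultEntry entry : searchResult.getSearchEntries())
    {
      final JoinResultControl joinResult = JoinResultControl.get(entry);
      if (joinResult != null)
      {
        for (final JoinedEntry joinedEntry : joinResult.getJoinResults())
        {
          System.out.println(entry.getDN() + " was joined with " +
               joinedEntry.getDN());
        }
      }
    }
  }
}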

An Example Using the ldapsearch Tool

The ldapsearch command-line tool shipped with the Ping Identity Directory Server (and also with the UnboundID LDAP SDK for Java) includes support for the LDAP join control through the following arguments:

  • --joinRule — The join rule to use. This is the only argument that is required to include the join request control in the search request. The value must be in one of the following formats:

    • dn:{sourceAttribute} — Indicates that each search result entry should be joined with entries whose DNs are contained in the specified source attribute of the search result entry.
    • reverse-dn:{targetAttribute} — Indicates that each search result entry should be joined with entries that contain the DN of the search result entry in the specified target attribute.
    • equals:{sourceAttribute}:{targetAttribute} — Indicates that each search result entry should be joined with entries that have the value of the search result entry’s source attribute in the joined entry’s target attribute.
    • contains:{sourceAttribute}:{targetAttribute} — Indicates that each search result entry should be joined with entries that contain the value of the search result entry’s source attribute as a substring in the joined entry’s target attribute.
  • --joinBaseDN — The join base DN to use. If this is omitted, then the base DN from the search request will be used. If it is provided, the value can be one of the following:

    • The string “search-base”, which indicates that the base DN of the search request should also be used as the join base DN.
    • The string “source-entry-dn”, which indicates that the DN of the search result entry should be used as the join base DN.
    • Any valid LDAP DN, which will be used as the join base DN.
  • --joinScope — The scope for the join processing. If this is omitted, then the scope from the search request will be used. If it is provided, then the value may be one of the following:

    • base — The baseObject scope.
    • one — The singleLevel scope.
    • sub — The wholeSubtree scope.
    • subordinates — The subordinateSubtree scope.
  • --joinSizeLimit — The maximum number of entries that should be joined with each search result entry. If this is omitted, then the search request size limit will be used.
  • --joinFilter — An additional filter that joined entries will be required to match. If this is omitted, then no additional filter will be used.
  • --joinRequestedAttribute — The name or OID of an attribute that should be included in joined entries. This can be specified multiple times to indicate that multiple attributes should be included. If this is omitted, then all user attributes will be requested.
  • --joinRequireMatch — If present, this indicates that search result entries that aren’t joined with any entries should be omitted from the results.

As you might have noticed, the ldapsearch tool doesn’t quite provide full support for all of the LDAP join features. For example, it doesn’t offer the AND or OR join rule types, and it doesn’t allow you to perform a nested join. But it’s still good enough to let you try out the join control in a number of common cases.

The following is an example that demonstrates using ldapsearch to perform a DN join that links an employee’s entry to the entry of their boss via the manager attribute:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --bindDN 'cn=LDAP Join Example,ou=Applications,dc=example,dc=com' \
     --joinRule dn:manager \
     --joinBaseDN search-base \
     --joinScope sub \
     --joinRequestedAttribute givenName \
     --joinRequestedAttribute sn \
     --joinRequestedAttribute mail \
     --joinRequestedAttribute telephoneNumber \
     --baseDN dc=example,dc=com \
     --scope sub "(uid=ernest.employee)" \
     givenName \
     sn \
     mail \
     telephoneNumber
Enter the bind password:

The server presented the following certificate chain:

     Subject: CN=ds.example.com,O=Ping Identity Self-Signed Certificate
     Valid From: Friday, February 8, 2019 at 12:23:34 AM CST
     Valid Until: Friday, February 4, 2039 at 12:23:34 AM CST
     SHA-1 Fingerprint: 78:f1:49:a0:06:0b:bd:1e:2c:88:cb:76:60:cb:87:cb:c4:c3:76:97
     256-bit SHA-2 Fingerprint: 55:9c:9a:54:97:48:8c:51:fa:10:da:a0:08:f0:15:dc:f0:92:75:3e:e9:be:56:c5:5c:5c:ec:d5:d4:85:15:a2

WARNING:  The certificate is self-signed.

Do you wish to trust this certificate?  Enter 'y' or 'n': y
# Join Result Control:
#      OID:  1.3.6.1.4.1.30221.2.5.9
#      Join Result Code:  0 (success)
#      Joined With Entry:
#           dn: uid=betty.boss,ou=People,dc=example,dc=com
#           mail: betty.boss@example.com
#           sn: Boss
#           givenName: Betty
#           telephoneNumber: +1 123 456 7891
dn: uid=ernest.employee,ou=People,dc=example,dc=com
mail: ernest.employee@example.com
sn: Employee
givenName: Ernest
telephoneNumber: +1 123 456 7890

# Result Code:  0 (success)
# Number of Entries Returned:  1

An Example Using the UnboundID LDAP SDK for Java

I’ve also written an example that demonstrates the use of the LDAP join control in the UnboundID LDAP SDK for Java. The LDAP SDK does support using nested joins, so this example uses the scenario outlined above, in which we retrieve a user, their manager, and their peers. You can find it at https://github.com/dirmgr/blog-example-source-code/tree/master/ldap-join.

The Get Password Policy State Issues Control in the Ping Identity Directory Server

In the Ping Identity Directory Server, we’re very serious when it comes to security. We make it easy to encrypt all your data, including the database contents (and the in-memory database cache), network communication, backups, LDIF exports, and even log files. We’ve got lots of password policy features, like strong password encoding, many password validation options, and ways to help thwart password guessing attempts. We offer several two-factor authentication options. We have a powerful access control subsystem that is augmented with additional features like sensitive attributes and privileges. We have lots of monitoring and alerting features so that you can be notified of any problems as soon as (or, in many cases, before) they arise so that your service remains available. Security was a key focus back when I started writing OpenDS (which is the ancestor of the Ping Identity Directory Server), and it’s still a key focus today.

One small aspect of this focus on security is that, by default, we don’t divulge any information about the reason for a failed authentication attempt. Maybe the account doesn’t exist, or maybe it’s locked or administratively disabled. Maybe the password was wrong, or maybe it’s expired. Maybe the user isn’t allowed to authenticate from that client system. In all of these cases, and for other types of authentication failures, the server will just return a bind result with a result code of invalidCredentials and no diagnostic message. The server will include the exact reason for the authentication failure in the audit log so that it’s available for administrators, but we won’t return it to the client so that a malicious user can’t use that to better craft their attack.

Now, if you don’t care about this and want the server to just go ahead and provide the message to the client, then you can do that with the following configuration change:

dsconfig set-global-configuration-prop --set return-bind-error-messages:true

However, that may not be the best option, because it applies equally to all authentication requests for all clients, and because the resulting message is human-readable but not easily machine-parseable, making it hard for a client to programmatically determine the reason for a failure. For that, your best option is the get password policy state issues control.

The get password policy state issues control indicates that you want the server to return information about the nature of the authentication failure, and details of the user’s password policy state that might interfere with authentication either now or in the future. This information is easy to consume programmatically, but it also contains user-friendly representations of those conditions as well. We intend for this control to be used by applications that authenticate users, and that can decide what information they want to make available to the end user.

Restrictions Around the Control’s Use

As previously mentioned, we might not always want to divulge the reason for a failed authentication attempt to the end user. If we allowed just anyone to use this control, that protection would go out the window, since a malicious client could simply include the control in every bind request and get helpful information in the response. So we don’t do that. Instead, this control will only be permitted if all of the following conditions are met:

  • The server’s access control handler must allow the get password policy state issues request control to be included in bind requests. This control is allowed in bind requests by default, but you can disable it if you want to.
  • A bind request that includes the get password policy state issues request control must be received on a connection that is already authenticated as a user who has the permit-get-password-policy-state-issues privilege.

Since we intend this feature to be used by applications that authenticate users, we expect that any application that is to be authorized to use it will have an account with the necessary privilege. And since the get password policy state issues control is a proprietary feature, we expect that any application that knows how to use it can also easily include the retain identity request control in those same bind requests (so that the bind attempt doesn’t change the authentication identity of the application’s connection).

The Get Password Policy State Issues Request Control

The get password policy state issues request control is very simple: it’s got a request OID of 1.3.6.1.4.1.30221.2.5.46 and no value. This control is only intended to be included in bind requests, and it’s really just asking the server to include the corresponding response control in the bind result message.

It’s easy enough to use this request control with any LDAP API, but if you’re using the UnboundID LDAP SDK for Java, then we provide the GetPasswordPolicyStateIssuesRequestControl class to make it even easier.

The Get Password Policy State Issues Response Control

The get password policy state issues response control is more complicated than the request control. It has an OID of 1.3.6.1.4.1.30221.2.5.47 and a value with the following ASN.1 encoding:

GetPasswordPolicyStateIssuesResponse ::= SEQUENCE {
     notices               [0] SEQUENCE OF SEQUENCE {
          type        INTEGER,
          name        OCTET STRING,
          message     OCTET STRING OPTIONAL } OPTIONAL,
     warnings              [1] SEQUENCE OF SEQUENCE {
          type        INTEGER,
          name        OCTET STRING,
          message     OCTET STRING OPTIONAL } OPTIONAL,
     errors                [2] SEQUENCE OF SEQUENCE {
          type        INTEGER,
          name        OCTET STRING,
          message     OCTET STRING OPTIONAL } OPTIONAL,
     authFailureReason     [3] SEQUENCE {
          type        INTEGER,
          name        OCTET STRING,
          message     OCTET STRING OPTIONAL } OPTIONAL,
     ... }

If you’re using the UnboundID LDAP SDK for Java, then you can use the GetPasswordPolicyStateIssuesResponseControl class to do all the heavy lifting for you. If you’re using some other API, then you’ll probably have to decode the value for yourself.

There are four basic components to the get password policy state issues response control:

  • A set of error conditions in the user’s password policy state that will either prevent that user from authenticating, or that will prevent them from using their account until they take some action. In the UnboundID LDAP SDK for Java, we offer the PasswordPolicyStateAccountUsabilityError class to make it easier to interpret these errors. Possible password policy state error conditions include:

    • The account is administratively disabled.
    • The account has expired.
    • The account is not yet active.
    • The account is permanently locked (or at least until an administrator unlocks it) after too many failed authentication attempts.
    • The account is temporarily locked after too many failed authentication attempts.
    • The account is locked because it’s been idle for too long.
    • The account is locked because the password was administratively reset, but the user didn’t choose a new password quickly enough.
    • The password is expired.
    • The password is expired, but there are one or more grace logins remaining. Authenticating with a grace login will only permit them to bind for the purpose of changing the password.
    • The password has been administratively reset and must be changed before the user will be allowed to do anything else.
    • The password policy was configured so that all users governed by that policy must change their passwords by a specified time, but the user attempting to authenticate failed to do so.
  • A set of warning conditions in the user’s password policy state that won’t immediately impact their ability to use their account, but that may impact their ability to use the account in the near future unless they take some action. In the UnboundID LDAP SDK for Java, we offer the PasswordPolicyStateAccountUsabilityWarning class to make it easier to interpret these warnings. Possible password policy state warning conditions include:

    • The account will expire in the near future.
    • The password will expire in the near future.
    • The account has been idle for too long and will be locked unless they successfully authenticate in the near future.
    • The account has outstanding authentication failures and may be locked if there are too many more failed attempts.
    • The password policy was configured so that all users governed by that policy must change their password by a specified time, but the user attempting to authenticate has not yet done so.
  • A set of notice conditions that provide additional information about the user’s password policy state and that may be helpful for applications or end users to know. The UnboundID LDAP SDK for Java provides the PasswordPolicyStateAccountUsabilityNotice class to make it easier to interpret these notices. Possible password policy state notices include:

    • A minimum password age has been configured in the password policy governing the user, and it has been less than that length of time since the user last changed their password. The user will not be permitted to change their password again until the minimum age period has elapsed.
    • The account does not have a static password, so it will not be allowed to authenticate using any password-based authentication mechanism.
    • The account has an outstanding delivered one-time password that has not yet been consumed and is not yet expired.
    • The account has an outstanding password reset token that has not yet been consumed and is not yet expired.
    • The account has an outstanding retired password that has not yet expired and may still be used to authenticate.
  • An authentication failure reason, which provides information about the reason that the bind attempt failed. The UnboundID LDAP SDK for Java offers the AuthenticationFailureReason class to help make it easier to use this information. Possible authentication failure reasons include:

    • The server could not find the account for the user that is trying to authenticate (e.g., the user doesn’t exist, or the authentication ID does not uniquely identify the user).
    • The password or other provided credentials were not correct.
    • There was something wrong with the SASL credentials provided by the client (e.g., they were malformed or out of sequence).
    • The account isn’t configured to support the requested authentication type (e.g., the client attempted a password-based bind, but the user doesn’t have a password).
    • The account is in an unusable state. The password policy error conditions should encapsulate the reasons that the account is not usable.
    • The server is configured to require the client to authenticate securely, but the authentication attempt was not secure.
    • The account is not permitted to authenticate in the requested manner (e.g., from the client address or using the attempted authentication type).
    • The bind request was rejected by the server’s access control handler.
    • The authentication attempt failed because a problem was encountered while processing one of the controls included in the bind request.
    • The server is currently in lockdown mode and will only permit a limited set of users to authenticate.
    • The server could not assign a client connection policy to the account.
    • The authentication attempt used a SASL mechanism that was implemented in a third-party extension, and that extension encountered an error while processing the bind request.
    • The server encountered an internal error while processing the bind request.

Each password policy state error, warning, and notice, as well as the authentication failure reason, is identified by a name and a numeric type, and also includes a human-readable message suitable for displaying to the user if you decide that it is appropriate.
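
Tying this together, here’s a minimal sketch (using the UnboundID LDAP SDK for Java) of a bind that includes the get password policy state issues request control and prints whatever comes back. I’m assuming the no-argument constructor for the GetPasswordPolicyStateIssuesRequestControl class; because the LDAP SDK automatically decodes response controls that it recognizes, the sketch just prints their string representations rather than walking the individual accessors, and the hostname, DN, and password are placeholders.

import com.unboundid.ldap.sdk.BindResult;
import com.unboundid.ldap.sdk.Control;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SimpleBindRequest;
import com.unboundid.ldap.sdk.unboundidds.controls.GetPasswordPolicyStateIssuesRequestControl;

public final class PasswordPolicyStateIssuesSketch
{
  public static void main(final String... args)
         throws Exception
  {
    // Placeholder connection details; adjust for your environment.
    final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
    try
    {
      // Attach the get password policy state issues request control to a
      // simple bind for the target user.
      final SimpleBindRequest bindRequest = new SimpleBindRequest(
           "uid=jdoe,ou=People,dc=example,dc=com", "userPassword",
           new GetPasswordPolicyStateIssuesRequestControl());

      Control[] responseControls;
      try
      {
        final BindResult bindResult = conn.bind(bindRequest);
        responseControls = bindResult.getResponseControls();
      }
      catch (final LDAPException le)
      {
        // A failed bind can still carry the response control, including the
        // authentication failure reason.
        responseControls = le.getResponseControls();
      }

      // The LDAP SDK decodes response controls that it recognizes, so the
      // string representation includes any errors, warnings, notices, and
      // authentication failure reason in a readable form.
      for (final Control c : responseControls)
      {
        System.out.println(c);
      }
    }
    finally
    {
      conn.close();
    }
  }
}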

The Password Policy State Extended Operation

Although it’s not the focus of this blog post (maybe I’ll write another one about it in the future), I should also point out that you can use the password policy state extended operation to obtain the list of usability errors, warnings, and notices for a user, along with a heck of a lot more information about the state of the account. You can also use it to alter that state if desired. Since it’s an extended operation, you can’t use it in the course of attempting a bind to get the authentication failure reason. However, you could use it in conjunction with the get password policy state issues control if you need additional information about the user’s account state after parsing the get password policy state issues response control.

An Example Using the UnboundID LDAP SDK for Java

I’ve written a simple program that demonstrates the use of the get password policy state issues control to obtain the authentication failure reason and password policy state issues for a specified user. You can find that example at https://github.com/dirmgr/blog-example-source-code/tree/master/password-policy-state-issues.

The Retain Identity Request Control in the Ping Identity Directory Server

Many LDAP-enabled applications use a directory server to authenticate users, which often consists of a search to find the user’s entry based on the provided login ID, followed by a bind to verify the provided credentials. Some of these applications may purely use the directory server for authentication, while others may then go ahead and perform additional operations on behalf of the logged-in users.

If the application is well designed, then it will probably maintain a pool of connections that it can repeatedly reuse rather than establishing a new connection each time it needs to perform an operation in the server. Usually, the search to find a user is performed on a connection bound as an account created for that application. And if the application performs operations on behalf of the authenticated users, then it often does so while authenticated under that same application account, using something like the proxied authorization request control to request that the server process those operations under the appropriate user’s authority.

The problem, though, is that performing a bind operation changes the authentication identity of the connection on which it is processed. If the bind is successful, then subsequent operations on that connection will be processed under the authority of the user identified by that bind request. If the bind fails, then the connection becomes unauthenticated, so subsequent requests are processed anonymously. There are a couple of common ways to work around this problem:

  • The application can maintain two different connection pools: one to use just for bind operations, and the other for all other types of operations.
  • After attempting a bind to verify a user’s credentials (whether successful or not), it can re-authenticate as the application account.

In the Ping Identity Directory Server, we offer a third option: the bind request can include the retain identity request control. This control tells the server that it should perform all of the normal processing associated with the bind (verify the user’s credentials, update any password policy state information for that user, etc.), but not change the authentication identity of the underlying connection. Regardless of whether the bind succeeds or fails, the connection will end up with the same authentication/authorization identity that it had before the bind was attempted. This allows you to use just a single connection pool that stays authenticated as the application’s account, while still being able to verify credentials without fear of interfering with access control evaluation for operations following those binds.

The retain identity request control is very easy to use. If you’re using the UnboundID LDAP SDK for Java, you can just use the RetainIdentityRequestControl class, and the Javadoc includes an example demonstrating its use. If you’re using some other API, then you just need to specify an OID of “1.3.6.1.4.1.30221.2.5.3”, and you don’t need to provide a value. We recommend making the control critical so that the bind attempt will fail if the server doesn’t support it (although we added support for this control back in 2008 when it was still the UnboundID Directory Server, so it’s been around for more than a decade).
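
For example, here’s a minimal sketch of how an application might verify a user’s credentials over its existing connection using the UnboundID LDAP SDK for Java. The connection remains authenticated as the application account whether the bind succeeds or fails. The classes and constructors shown here are the ones mentioned above; everything else (the method name, the way failures are handled) is just illustrative, and a real application would probably want to distinguish invalid credentials from connectivity problems.

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SimpleBindRequest;
import com.unboundid.ldap.sdk.unboundidds.controls.RetainIdentityRequestControl;

public final class CredentialVerifier
{
  /**
   * Verifies the provided credentials on a connection that is already
   * authenticated as the application account.  Because the bind request
   * includes the retain identity request control, the connection's
   * authentication and authorization identities are unchanged regardless of
   * the outcome.
   */
  public static boolean credentialsAreValid(final LDAPConnection appConnection,
                                            final String userDN,
                                            final String userPassword)
  {
    final SimpleBindRequest bindRequest = new SimpleBindRequest(userDN,
         userPassword, new RetainIdentityRequestControl());
    try
    {
      // A successful bind means the credentials were accepted.
      appConnection.bind(bindRequest);
      return true;
    }
    catch (final LDAPException le)
    {
      // The credentials were rejected (or some other bind failure occurred).
      return false;
    }
  }
}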

I’m not aware of any other directory server that supports the retain identity request control (aside from the LDAP SDK’s in-memory directory server), but it’s very simple and very useful, so if you’re using some other type of server you might inquire about whether they’d implement it or something similar. Of course, you could also switch to the Ping Identity Directory Server and get support for this and lots of other helpful features that other servers don’t provide.

Programmatically Retrieving Password Quality Requirements in the Ping Identity Directory Server

When changing a user’s password, most LDAP directory servers provide some way to determine whether the new password is acceptable. For example, when allowing a user to choose a new password, you might want to ensure that the new password has at least some minimum number of characters, that it’s not found in a dictionary of commonly used passwords, and that it’s not too similar to the user’s current password.

It’s important to be able to tell the user what the requirements are so that they don’t keep trying things that the server will reject. And you might also want to provide some kind of password strength meter or indicator of acceptability to let them visually see how good their password is. But you don’t want to do this with hard-coded logic in the client because different sets of users might have different password quality requirements, and because the server configuration can change, so even the requirements for a given user may change over time. What you really want is a way to programmatically determine what requirements the server will impose.

Fortunately, the Ping Identity Directory Server provides a “get password quality requirements” extended operation that can provide this information. We also have a “password validation details” control that you can use when changing a password to request information about how well the proposed password satisfies those requirements. These features were added in the 5.2.0.0 release back in 2015, so they’ve been around for several years. The UnboundID LDAP SDK for Java makes it easy to use them in Java clients, but you can make use of them in other languages if you’re willing to do your own encoding and decoding.

The Get Password Quality Requirements Extended Request

The get password quality requirements extended request allows a client to ask the server what requirements it will impose when setting a user’s password. It’s best to use this operation before prompting the user for a new password so that you can display the requirements up front and potentially provide client-side feedback as to whether a proposed password is acceptable.

Since the server can enforce different requirements under different conditions, you need to tell it the context for the new password. Those contexts include:

  • Adding a new entry that includes a password. You can either indicate that the new entry will use the server’s default password policy, or that it will use a specified policy.
  • A user changing their own password. It doesn’t matter whether the password change is done by a standard LDAP modify operation that targets the password attribute or with the password modify extended operation; the requirements for a self change will be the same in either case.
  • An administrator resetting another user’s password. Again, it doesn’t matter whether it’s a regular LDAP modify or a password modify extended operation. You just need to indicate which user’s password is being reset so the server can determine which requirements will be enforced.

The UnboundID LDAP SDK for Java provides support for this request through the GetPasswordQualityRequirementsExtendedRequest class, but if you need to implement support for it in some other API, it has an OID of 1.3.6.1.4.1.30221.2.6.43 and a value with the following ASN.1 encoding:

GetPasswordQualityRequirementsRequestValue ::= SEQUENCE {
     target     CHOICE {
          addWithDefaultPasswordPolicy           [0] NULL,
          addWithSpecifiedPasswordPolicy         [1] LDAPDN,
          selfChangeForAuthorizationIdentity     [2] NULL,
          selfChangeForSpecifiedUser             [3] LDAPDN,
          administrativeResetForUser             [4] LDAPDN,
          ... },
     ... }

The Get Password Quality Requirements Extended Response

The server uses the get password quality requirements extended response to tell the client what the requirements are for the target user in the indicated context. Each validator configured in a password policy can return its own password quality requirement structure, which includes the following components:

  • A human-readable description that describes the purpose of the validator in a user-friendly form. For example, “The password must contain at least 8 characters”.
  • A validation type that identifies the type of validator for client-side evaluation. For example, “length”.
  • A set of name-value pairs that provide information about the configuration of that password validator. For example, a name of “min-password-length” and a value of “8”.

A list of the validation types and corresponding properties for all the password validators included with the server is provided later in this post.

Along with those requirements, the response may also include the following information about the password change:

  • Whether the user will be required to provide their current password when choosing a new password. This is only applicable for a self change.
  • Whether the user will be required to choose a new password the first time they authenticate after the new password is set. This is only applicable for an add or an administrative reset, and it’s based on the password policy’s force-change-on-add or force-change-on-reset configuration.
  • The length of time that the newly set password should be considered valid. If the user will be required to change their password on the next authentication, then this will be the length of time they have before that temporary password becomes invalid. Otherwise, it specifies the length of time until the password expires.

The UnboundID LDAP SDK for Java provides support for the extended result through the GetPasswordQualityRequirementsExtendedResult class and the related PasswordQualityRequirement class. In case you need to implement support for this extended response in some other API, it has an OID of 1.3.6.1.4.1.30221.2.6.44 and a value with the following ASN.1 encoding:

GetPasswordQualityRequirementsResultValue ::= SEQUENCE {
     requirements                SEQUENCE OF PasswordQualityRequirement,
     currentPasswordRequired     [0] BOOLEAN OPTIONAL,
     mustChangePassword          [1] BOOLEAN OPTIONAL,
     secondsUntilExpiration      [2] INTEGER OPTIONAL,
     ... }

PasswordQualityRequirement ::= SEQUENCE {
     description                  OCTET STRING,
     clientSideValidationInfo     [0] SEQUENCE {
          validationType     OCTET STRING,
          properties         [0] SET OF SEQUENCE {
               name      OCTET STRING,
               value     OCTET STRING } OPTIONAL } OPTIONAL }
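
To put the request and response together, here’s a rough sketch of fetching and displaying the requirements for a self change with the UnboundID LDAP SDK for Java. The class names are the ones mentioned above, but the factory method and accessor names (createSelfChangeForSpecifiedUserRequest, getPasswordRequirements, getClientSideValidationType, getClientSideValidationProperties) are from memory, so treat them as assumptions and confirm them against the Javadoc; the hostname, DNs, and credentials are placeholders.

import java.util.Map;

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.unboundidds.extensions.GetPasswordQualityRequirementsExtendedRequest;
import com.unboundid.ldap.sdk.unboundidds.extensions.GetPasswordQualityRequirementsExtendedResult;
import com.unboundid.ldap.sdk.unboundidds.extensions.PasswordQualityRequirement;

public final class ShowPasswordRequirementsSketch
{
  public static void main(final String... args)
         throws Exception
  {
    final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
    try
    {
      conn.bind("uid=app.account,ou=Applications,dc=example,dc=com",
           "appPassword");

      // Ask what the server will require for a self change by the given user.
      // NOTE: this factory method name is my recollection of the LDAP SDK API;
      // check the GetPasswordQualityRequirementsExtendedRequest Javadoc.
      final GetPasswordQualityRequirementsExtendedRequest request =
           GetPasswordQualityRequirementsExtendedRequest.
                createSelfChangeForSpecifiedUserRequest(
                     "uid=jdoe,ou=People,dc=example,dc=com");

      // For typed extended requests, processExtendedOperation returns the
      // decoded result, so the cast below should be safe.
      final GetPasswordQualityRequirementsExtendedResult result =
           (GetPasswordQualityRequirementsExtendedResult)
           conn.processExtendedOperation(request);

      // Each requirement has a human-readable description plus the validation
      // type and properties that enable client-side checking.  The accessor
      // names here are also from memory.
      for (final PasswordQualityRequirement r : result.getPasswordRequirements())
      {
        System.out.println(r.getDescription());
        System.out.println("  validation type: " +
             r.getClientSideValidationType());
        for (final Map.Entry<String,String> e :
             r.getClientSideValidationProperties().entrySet())
        {
          System.out.println("  " + e.getKey() + "=" + e.getValue());
        }
      }
    }
    finally
    {
      conn.close();
    }
  }
}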

The Password Validation Details Request Control

As noted above, you should use the get password quality requirements extended operation before prompting a user for a new password so that they know what the requirements are in advance. But if the server rejects the proposed password, it’s useful for the client to be able to tell exactly why it was rejected. The Ping Identity Directory Server will include helpful information in the diagnostic message, but that’s just a blob of text. You might want something more parseable so that you can present the pertinent information to the user with better formatting. And for that, we provide the password validation details request control.

This control can be included in an add request that includes a password, a modify request that attempts to alter a password, or a password modify extended request. It tells the server that the client would like a response control (outlined below) that includes information about each of the requirements for the new password and whether that requirement was satisfied.

The UnboundID LDAP SDK for Java provides support for this request control in the PasswordValidationDetailsRequestControl class, but if you want to use it in another API, then all you need to do is to create a request control with an OID of 1.3.6.1.4.1.30221.2.5.40. The criticality can be either true or false (but it’s probably better to be false so that the server won’t reject the request if that control is not available for some reason), and it does not take a value.

The Password Validation Details Response Control

When the server processes an add, modify, or password modify request that included the password validation details request control, the response that the server returns may include a corresponding password validation details response control with information about how well the proposed password satisfies each of the requirements. If present, the response control will include the following components:

  • One of the following:

    • Information about each of the requirements for the proposed password and whether that requirement was satisfied.
    • A flag that indicates that the request didn’t try to alter a password.
    • A flag that indicates that the request tried to set multiple passwords.
    • A flag that indicates that the request didn’t get to the point of trying to validate the password because some other problem was encountered first.
  • An optional flag that indicates that the server requires the user to provide their current password when choosing a new password, but that the current password was not included in the request. This is only applicable for self changes, and not for adds or administrative resets.
  • An optional flag that indicates whether the user will be required to change their password the next time they authenticate. This is applicable for adds and administrative resets, but not for self changes.
  • An optional value that specifies the length of time that the new password will be considered valid. If it was an add or an administrative reset and the user will be required to choose a new password the next time they authenticate, then this is the length of time that they have to do that. Otherwise, it will be the length of time until the new password expires.

The UnboundID LDAP SDK for Java provides support for this response control through the PasswordValidationDetailsResponseControl class, with the PasswordQualityRequirementValidationResult class providing information about whether each of the requirements was satisfied. If you need to implement support for this control in some other API, then it has a response OID of 1.3.6.1.4.1.30221.2.5.41 and a value with the following ASN.1 encoding:

PasswordValidationDetailsResponse ::= SEQUENCE {
     validationResult           CHOICE {
          validationDetails             [0] SEQUENCE OF
               PasswordQualityRequirementValidationResult,
          noPasswordProvided            [1] NULL,
          multiplePasswordsProvided     [2] NULL,
          noValidationAttempted         [3] NULL,
          ... },
     missingCurrentPassword     [3] BOOLEAN DEFAULT FALSE,
     mustChangePassword         [4] BOOLEAN DEFAULT FALSE,
     secondsUntilExpiration     [5] INTEGER OPTIONAL,
     ... }

PasswordQualityRequirementValidationResult ::= SEQUENCE {
     passwordRequirement      PasswordQualityRequirement,
     requirementSatisfied     BOOLEAN,
     additionalInfo           [0] OCTET STRING OPTIONAL }
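
Here’s a rough sketch of what a self change with the password validation details request control might look like in the UnboundID LDAP SDK for Java. To avoid guessing at accessor names, the sketch just locates the response control by its OID (1.3.6.1.4.1.30221.2.5.41) and prints its decoded string representation; the PasswordValidationDetailsResponseControl and PasswordQualityRequirementValidationResult classes give you structured access to the same information. The hostname, DN, and passwords are placeholders.

import com.unboundid.ldap.sdk.Control;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.LDAPResult;
import com.unboundid.ldap.sdk.Modification;
import com.unboundid.ldap.sdk.ModificationType;
import com.unboundid.ldap.sdk.ModifyRequest;
import com.unboundid.ldap.sdk.unboundidds.controls.PasswordValidationDetailsRequestControl;

public final class PasswordValidationDetailsSketch
{
  public static void main(final String... args)
         throws Exception
  {
    final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
    try
    {
      // Placeholder credentials; a self change binds as the target user.
      conn.bind("uid=jdoe,ou=People,dc=example,dc=com", "oldPassword");

      // Express the self change as a delete of the current password followed
      // by an add of the new one, and request per-requirement details.
      final ModifyRequest modifyRequest = new ModifyRequest(
           "uid=jdoe,ou=People,dc=example,dc=com",
           new Modification(ModificationType.DELETE, "userPassword",
                "oldPassword"),
           new Modification(ModificationType.ADD, "userPassword",
                "newPassword"));
      modifyRequest.addControl(new PasswordValidationDetailsRequestControl());

      LDAPResult result;
      try
      {
        result = conn.modify(modifyRequest);
      }
      catch (final LDAPException le)
      {
        // If the proposed password was rejected, the response control should
        // still be attached to the failure result.
        result = le.toLDAPResult();
      }

      // Locate the password validation details response control by OID and
      // print its decoded representation.
      for (final Control c : result.getResponseControls())
      {
        if (c.getOID().equals("1.3.6.1.4.1.30221.2.5.41"))
        {
          System.out.println(c);
        }
      }
    }
    finally
    {
      conn.close();
    }
  }
}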

Validation Types and Properties for Available Password Validators

The information included in the get password quality requirements extended response is enough for the client to display a user-friendly list of the requirements that will be enforced for a new password. However, it also includes information that can be used for some client-side evaluation of how well a proposed password satisfies those requirements. This can help the client tell the user when the password isn’t good enough without having to send the request to the server, or possibly provide feedback about the strength or acceptability of the new password while they’re still typing it. This is possible because of the validation type and properties components of each password quality requirement.

Of course, you can really only take advantage of this feature if you know what the possible validation types and properties are for each of the password validators. This section provides that information for each of the types of validators included with the Ping Identity Directory Server (or at least the ones available at the time of this writing; we may add more in the future).

Also note that for some types of password validators, you may not be able to perform client-side validation. For example, if the server is configured to reject any proposed password that it finds in a dictionary of commonly used passwords, the client can’t make that determination because it doesn’t have access to that dictionary. In such cases, it’s still possible to display the requirement to the user so that they’re aware of it in advance, and it may still be possible to perform client-side validation for other types of requirements, so there’s still benefit to using this information.

The Attribute Value Password Validator

The attribute value password validator can be used to prevent the proposed password from matching the value of another attribute in a user’s entry. You can specify which attributes to check, or it can check all user attributes in the entry (which is the default). It can be configured to reject the case in which the proposed password exactly matches a value for another attribute, but it can also be configured to reject based on substring matches (for example, if an attribute value is a substring of the proposed password, or if the proposed password is a substring of an attribute value). You can also optionally test the proposed password in reversed order.

You can perform client-side checking for this password validator if you have a copy of the target user’s entry. The validation type is “attribute-value”, and it offers the following validation properties:

  • match-attribute-{counter} — The name of an attribute whose values will be checked against the proposed password. The counter value starts at 1 and will increase sequentially for each additional attribute to be checked. For example, if the validator is configured to check the proposed password against the givenName, sn, mail, and telephoneNumber attributes, you would have a match-attribute-1 property with a value of givenName, a match-attribute-2 property with a value of sn, a match-attribute-3 property with a value of mail, and a match-attribute-4 property with a value of telephoneNumber.
  • test-password-substring-of-attribute-value — Indicates whether to check to see if the proposed password matches a substring of any of the target attributes. If this property has a value of true, then this substring check will be performed. If the property has a value of false, or if it is absent, then the substring check will not be performed.
  • test-attribute-value-substring-of-password — Indicates whether to check to see if any of the target attributes matches a substring of the proposed password. If this property has a value of true, then this substring check will be performed. If the property has a value of false, or if it is absent, then the substring check will not be performed.
  • test-reversed-password — Indicates whether to check the proposed password with the order of the characters reversed in addition to the order in which they were provided. If this property has a value of true, then both the forward and reversed password will be checked. If the property has a value of false, or if it is absent, then the password will only be checked in forward order.
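
To give a sense of what client-side evaluation might look like for this validator, here’s a rough sketch in Java. It assumes you’ve already retrieved the relevant attribute values from the user’s entry and parsed the properties above into the parameters shown (the method and parameter names are just for illustration). I’ve also normalized case for simplicity, even though the server’s own comparisons depend on the attributes’ matching rules, so treat this as an approximation rather than an exact reimplementation.

import java.util.List;
import java.util.Locale;
import java.util.Map;

public final class AttributeValueCheckSketch
{
  /**
   * Approximates the attribute value validator on the client side, given the
   * proposed password, the values of the attributes named by the
   * match-attribute-{counter} properties, and the boolean properties from the
   * password quality requirement.  Returns true if the password looks
   * acceptable.
   */
  public static boolean passesAttributeValueCheck(final String password,
       final Map<String,List<String>> matchAttributeValues,
       final boolean testPasswordSubstringOfAttributeValue,
       final boolean testAttributeValueSubstringOfPassword,
       final boolean testReversedPassword)
  {
    final String lowerPassword = password.toLowerCase(Locale.ROOT);
    final String reversedPassword =
         new StringBuilder(lowerPassword).reverse().toString();

    for (final List<String> values : matchAttributeValues.values())
    {
      for (final String value : values)
      {
        final String lowerValue = value.toLowerCase(Locale.ROOT);
        if (conflicts(lowerPassword, lowerValue,
                 testPasswordSubstringOfAttributeValue,
                 testAttributeValueSubstringOfPassword))
        {
          return false;
        }
        if (testReversedPassword && conflicts(reversedPassword, lowerValue,
                 testPasswordSubstringOfAttributeValue,
                 testAttributeValueSubstringOfPassword))
        {
          return false;
        }
      }
    }
    return true;
  }

  private static boolean conflicts(final String password, final String value,
       final boolean testPasswordSubstringOfValue,
       final boolean testValueSubstringOfPassword)
  {
    // An exact match is treated as a conflict; substring checks are optional.
    return password.equals(value) ||
         (testPasswordSubstringOfValue && value.contains(password)) ||
         (testValueSubstringOfPassword && password.contains(value));
  }
}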

The Character Set Password Validator

The character set password validator can be used to ensure that passwords have a minimum number of characters from each of a specified collection of character sets. For example, you could define one set with all of the lowercase letters, one with all the uppercase letters, one with all the numeric digits, and one with a set of symbols, and require that a password have at least one character from each of those sets.

You can perform client-side checking for this password validator just using the proposed password itself. The validation type is “character-set”, and it has the following validation properties:

  • set-{counter}-characters — A set of characters for which a minimum count will be enforced. The counter value starts at 1 and will increase sequentially for each additional set of characters that is defined. For example, if you had sets of lowercase letters, uppercase letters, and numbers, then you could have a set-1-characters property with a value of abcdefghijklmnopqrstuvwxyz, a set-2-characters property with a value of ABCDEFGHIJKLMNOPQRSTUVWXYZ, and a set-3-characters property with a value of 0123456789.
  • set-{counter}-min-count — The minimum number of characters that must be present from the character set identified with the corresponding counter value (so the property with a name of set-1-min-count specifies the minimum number of characters from the set-1-characters set). This will be an integer greater than or equal to zero (with a value of zero indicating that characters from that set are allowed, but not required; this is really only applicable if allow-unclassified-characters is false).
  • allow-unclassified-characters — Indicates whether passwords should be allowed to have any characters that are not defined in any of the character sets. If this property has a value of true, then passwords will be allowed to have unclassified characters as long as they meet the minimum number of required characters from all of the specified character sets. If this property has a value of false, then passwords will only be permitted to include characters from the given character sets.
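
Here’s a rough sketch of a client-side check for this validator, given the set-{counter}-characters and set-{counter}-min-count values parsed into a map (the method and parameter names are just for illustration). Again, this is an approximation of the server-side behavior rather than an exact reimplementation.

import java.util.Map;

public final class CharacterSetCheckSketch
{
  /**
   * Checks a proposed password against character set requirements like those
   * advertised by the character-set validation type.  Each entry of
   * minCountBySet maps a set-{counter}-characters value to the corresponding
   * set-{counter}-min-count value.
   */
  public static boolean passesCharacterSetCheck(final String password,
       final Map<String,Integer> minCountBySet,
       final boolean allowUnclassifiedCharacters)
  {
    // Make sure each defined set is represented at least the minimum number
    // of times.
    for (final Map.Entry<String,Integer> e : minCountBySet.entrySet())
    {
      int count = 0;
      for (final char c : password.toCharArray())
      {
        if (e.getKey().indexOf(c) >= 0)
        {
          count++;
        }
      }
      if (count < e.getValue())
      {
        return false;
      }
    }

    if (! allowUnclassifiedCharacters)
    {
      // Every character must belong to at least one of the defined sets.
      for (final char c : password.toCharArray())
      {
        boolean classified = false;
        for (final String set : minCountBySet.keySet())
        {
          if (set.indexOf(c) >= 0)
          {
            classified = true;
            break;
          }
        }
        if (! classified)
        {
          return false;
        }
      }
    }

    return true;
  }
}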

The Dictionary Password Validator

The dictionary password validator is used to ensure that users can’t choose passwords that are found in a specified dictionary file. The Ping Identity Directory Server comes with two such files: one that contains a list of over 400,000 English words, and one that contains over 500,000 of the most commonly used passwords. You can also provide your own dictionary files.

Unless the client has a copy of the same dictionary file that the server is using, it’s not really possible for it to perform client-side validation for this validator. Nevertheless, the validator does have a validation type of “dictionary” and the following validation properties:

  • dictionary-file — The name (without path information) of the dictionary file that the password validator is using.
  • case-sensitive-validation — Indicates whether the validation should be case sensitive or insensitive. If the value is true, then a proposed password will be rejected only if it is found with exactly the same capitalization in the dictionary file. If it is false, then differences in capitalization will be ignored.
  • test-reversed-password — Indicates whether the validation should check the proposed password with the characters in reversed order as well as in the order the client provided them. If the value is true, then both the forward and reversed password will be checked. If the value is false, then the password will be checked only as it was provided by the client.

The Haystack Password Validator

The haystack password validator is based on the concept of password haystacks as described at https://www.grc.com/haystack.htm. This algorithm judges the strength of a password based on a combination of its length and the different classes of characters that it contains. For example, a password comprised of a mix of lowercase letters, uppercase letters, numeric digits, and symbols is, in general, more resistant to brute force attacks than a password of the same length made up of only lowercase letters, but a password made up of only lowercase letters can be very secure if it is long enough (and passphrases—passwords comprised of multiple words strung together—are a great example of this). The haystack validator lets users have a simpler password if it’s long enough, or a shorter password if it’s complex enough.

As long as you have a client-side implementation of the haystack logic (which is pretty simple), you can perform client-side checking for this password validator. The validator name is “haystack”, and it has the following validation properties:

  • assumed-password-guesses-per-second — The number of guesses that an attacker is assumed to be able to make per second. This value will be an integer, although it could be a very large integer, so it’s recommended to use at least a 64-bit variable to represent it.
  • minimum-acceptable-time-to-exhaust-search-space — The minimum length of time, in seconds, that is considered acceptable for an attacker to have to keep guessing (at the rate specified by the assumed-password-guesses-per-second property) before exhausting the complete search space of all possible passwords. This will also be an integer, and it’s also recommended that you use at least a 64-bit variable to hold its value.
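
Here’s a rough sketch of the kind of client-side check you could do for this validator. The search-space math is my own approximation of the haystack approach (the size of the alphabet implied by the character classes present in the password, raised to the password’s length), and the 33-character symbol class is an assumption on my part, so don’t expect it to match the server’s computation exactly. Since the property values can be very large, the sketch uses BigInteger throughout.

import java.math.BigInteger;

public final class HaystackCheckSketch
{
  /**
   * Approximates the haystack computation: the search space is the size of
   * the alphabet implied by the character classes present in the password,
   * raised to the password's length.  The password is considered acceptable
   * if exhausting that space at the assumed guess rate would take at least
   * the configured minimum number of seconds.
   */
  public static boolean passesHaystackCheck(final String password,
       final BigInteger assumedGuessesPerSecond,
       final BigInteger minimumSecondsToExhaustSearchSpace)
  {
    int alphabetSize = 0;
    if (password.chars().anyMatch(Character::isLowerCase))
    {
      alphabetSize += 26;
    }
    if (password.chars().anyMatch(Character::isUpperCase))
    {
      alphabetSize += 26;
    }
    if (password.chars().anyMatch(Character::isDigit))
    {
      alphabetSize += 10;
    }

    // Treat anything else as a symbol.  The size of the symbol class used
    // here (33 printable ASCII symbols) is an assumption.
    if (password.chars().anyMatch(c -> ! Character.isLetterOrDigit(c)))
    {
      alphabetSize += 33;
    }

    final BigInteger searchSpace =
         BigInteger.valueOf(alphabetSize).pow(password.length());
    final BigInteger secondsToExhaust =
         searchSpace.divide(assumedGuessesPerSecond);

    return secondsToExhaust.compareTo(minimumSecondsToExhaustSearchSpace) >= 0;
  }
}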

The Length Password Validator

The length-based password validator judges the quality of a proposed password based purely on the number of characters that it contains. Note that it counts characters rather than bytes, so a password containing multi-byte UTF-8 characters will have fewer characters than bytes. You can configure a minimum required length, a maximum allowed length, or both.

Client-side checking is very straightforward for this validator. It uses a validator name of “length” and the following validation properties:

  • min-password-length — The minimum number of characters that a password will be required to have. If present, the value will be an integer. If it is absent, then no minimum length will be enforced.
  • max-password-length — The maximum number of characters that a password will be permitted to have. If present, the value will be an integer. If it is absent, then no maximum length will be enforced.
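
Client-side checking for this validator is about as simple as it gets; the only subtlety is counting characters (code points) rather than bytes or UTF-16 code units. A minimal sketch (with a missing minimum represented as zero and a missing maximum as Integer.MAX_VALUE):

public final class LengthCheckSketch
{
  // Counts characters (code points) rather than bytes, then applies the
  // min-password-length and max-password-length bounds.
  public static boolean passesLengthCheck(final String password,
       final int minLength, final int maxLength)
  {
    final int numCharacters = password.codePointCount(0, password.length());
    return (numCharacters >= minLength) && (numCharacters <= maxLength);
  }
}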

The Regular Expression Password Validator

The regular expression password validator can be used to require that passwords match a given pattern or to reject passwords that match a given pattern. As long as the client can perform regular expression matching, client-side validation should be pretty simple. It uses a validator name of “regular-expression” and the following validation properties:

  • match-pattern — The regular expression that will be evaluated against a proposed password.
  • match-behavior — A string that indicates the behavior that the validator should observe. A value of require-match means that the validator will reject any proposed password that does not satisfy the associated match-pattern. A value of reject-match means that the validator will reject any proposed password that does match the specified match-pattern.
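
A client-side sketch for this validator is just a pattern match plus a check of the match-behavior value. One thing I haven’t verified is whether the server treats the pattern as a full match or a substring match, so the find() call below is an assumption:

import java.util.regex.Pattern;

public final class RegularExpressionCheckSketch
{
  /**
   * Applies a regular expression requirement.  Whether the server performs a
   * full match or a substring match isn't something I've verified; this
   * sketch assumes a substring-style find().
   */
  public static boolean passesRegexCheck(final String password,
       final String matchPattern, final String matchBehavior)
  {
    final boolean matches =
         Pattern.compile(matchPattern).matcher(password).find();
    return matchBehavior.equals("require-match") ? matches : (! matches);
  }
}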

The Repeated Characters Password Validator

The repeated characters password validator can be used to reject a proposed password if it contains the same character, or characters in the same set, more than a specified number of times in a row without a different type of character in between. By default, it treats each type of character separately, but you can define sets of characters that will be considered equivalent. In the former case, the validator will reject a password if it contains the same character too many times in a row, whereas in the latter case, it can reject a password if it contains too many characters of the same type in a row. For example, you could define sets of lowercase letters, uppercase letters, digits, and symbols, and prevent too many characters of each type in a row.

It should be pretty straightforward to perform client-side checking for this password validator. It uses a validator name of “repeated-characters” and the following validation properties:

  • character-set-{counter} — A set of characters that should be considered equivalent. The counter will start at 1 and increment sequentially for each additional character set. This property may be absent if each character is to be treated independently.
  • max-consecutive-length — The maximum number of times that each character (or characters from the same set) may appear in a row before a proposed password will be rejected. The value will be an integer.
  • case-sensitive-validation — Indicates whether to treat characters from the password in a case-sensitive manner. A value of true indicates that values should be case-sensitive, while a value of false indicates that values should be case-insensitive.
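
Here’s a rough sketch of a client-side check for this validator, given the character-set-{counter} values (pass an empty array if each character is to be treated independently) and the max-consecutive-length value. I’ve left case handling out for brevity:

public final class RepeatedCharactersCheckSketch
{
  /**
   * Rejects a password containing more than maxConsecutiveLength characters
   * in a row that are either identical or drawn from the same configured set.
   */
  public static boolean passesRepeatedCharactersCheck(final String password,
       final String[] characterSets, final int maxConsecutiveLength)
  {
    int runLength = 1;
    for (int i = 1; i < password.length(); i++)
    {
      if (sameClass(password.charAt(i - 1), password.charAt(i), characterSets))
      {
        runLength++;
        if (runLength > maxConsecutiveLength)
        {
          return false;
        }
      }
      else
      {
        // A different type of character breaks the run.
        runLength = 1;
      }
    }
    return true;
  }

  private static boolean sameClass(final char previous, final char current,
                                   final String[] characterSets)
  {
    if (previous == current)
    {
      return true;
    }
    for (final String set : characterSets)
    {
      if ((set.indexOf(previous) >= 0) && (set.indexOf(current) >= 0))
      {
        return true;
      }
    }
    return false;
  }
}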

The Similarity Password Validator

The similarity password validator can be used to reject a proposed password if it is too similar to the user’s current password. Similarity is determined by the Levenshtein distance algorithm, which is a measure of the minimum number of character insertions, deletions, or replacements needed to transform one string into another. For example, it can prevent a user from changing the password from something like “password1” to “password2”. This validator is only active for password self changes. It does not apply to add operations or administrative resets.

Because the Ping Identity Directory Server generally stores passwords in a non-reversible form, this can only be used if the request used to change the user’s password includes both the current password and the proposed new password. You can use the password-change-requires-current-password property in the password policy configuration to require this, and if that is configured, then the get password quality requirements extended response will indicate that the current password is required when a user is performing a self change. The password modify extended request provides a field for specifying the current password when requesting a new password, but to satisfy this requirement in an LDAP modify operation, the change should be processed as a delete of the current password (with the value provided in the clear) followed by an add of the new password (also in the clear), in the same modify request, like:

dn: uid=john.doe,ou=People,dc=example,dc=com
changetype: modify
delete: userPassword
userPassword: oldPassword
-
add: userPassword
userPassword: newPassword
-

If a client has the user’s current password, the proposed new password, and an implementation of the Levenshtein distance algorithm, then it can perform client-side checking for this validator. The validation type is “similarity” and the validation properties are:

  • min-password-difference — The minimum acceptable distance, as determined by the Levenshtein distance algorithm, between the user’s current password and the proposed new password. It will be an integer.
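
If you want to perform the client-side check, the classic dynamic-programming implementation of the Levenshtein distance is short enough to sketch here:

public final class SimilarityCheckSketch
{
  /**
   * Requires the Levenshtein distance between the current and proposed
   * passwords to be at least the min-password-difference value.
   */
  public static boolean passesSimilarityCheck(final String currentPassword,
       final String proposedPassword, final int minPasswordDifference)
  {
    return levenshteinDistance(currentPassword, proposedPassword) >=
         minPasswordDifference;
  }

  static int levenshteinDistance(final String s, final String t)
  {
    final int[][] d = new int[s.length() + 1][t.length() + 1];
    for (int i = 0; i <= s.length(); i++)
    {
      d[i][0] = i;
    }
    for (int j = 0; j <= t.length(); j++)
    {
      d[0][j] = j;
    }
    for (int i = 1; i <= s.length(); i++)
    {
      for (int j = 1; j <= t.length(); j++)
      {
        final int substitutionCost =
             (s.charAt(i - 1) == t.charAt(j - 1)) ? 0 : 1;
        d[i][j] = Math.min(
             Math.min(d[i - 1][j] + 1,                // deletion
                      d[i][j - 1] + 1),               // insertion
             d[i - 1][j - 1] + substitutionCost);     // substitution
      }
    }
    return d[s.length()][t.length()];
  }
}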

The Unique Characters Password Validator

The unique characters password validator can be used to reject a proposed password that has too few unique characters. This can prevent users from choosing simple passwords like “aaaaaaaa” or “abcabcabcabc”.

It’s easy to perform client-side checking for this validator. It has a validator name of “unique-characters” and the following properties:

  • min-unique-characters — The minimum number of unique characters that the password must contain for it to be acceptable. This is an integer value, with zero indicating no limit (although the server will require passwords to contain at least one character).
  • case-sensitive-validation — Indicates whether the validator will treat uppercase and lowercase versions of the same letter as different characters or the same. A value of true indicates that the server will perform case-sensitive validation, while a value of false indicates case-insensitive validation.
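
And a client-side sketch for this validator, counting distinct characters with optional case folding:

import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public final class UniqueCharactersCheckSketch
{
  /**
   * Requires the password to contain at least minUniqueCharacters distinct
   * characters, optionally folding case before counting.
   */
  public static boolean passesUniqueCharactersCheck(final String password,
       final int minUniqueCharacters, final boolean caseSensitive)
  {
    final String normalized =
         caseSensitive ? password : password.toLowerCase(Locale.ROOT);
    final Set<Character> uniqueCharacters = new HashSet<>();
    for (final char c : normalized.toCharArray())
    {
      uniqueCharacters.add(c);
    }
    return uniqueCharacters.size() >= minUniqueCharacters;
  }
}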

Access Control Considerations

The get password quality requirements extended operation probably isn’t something that you’ll want to open up to the world, since it could give an attacker information that they shouldn’t have, like hints that could help them better craft their attack, or information about whether a user exists or not. Since you’ll probably want some restriction on who can use this operation, and since we at Ping Identity have no idea who that might be, the server’s access control configuration does not permit anyone (or at least anyone without the bypass-acl or the bypass-read-acl privilege) to use it. If you want to make it available, then you’ll need to add a global ACI to grant access to an appropriate set of users. The same goes for the password validation details request control.

As an example, let’s say that you want to allow members of the “cn=Password Administrators,ou=Groups,dc=example,dc=com” group to use these features. To do that, you can add the following global ACIs:

(extop="1.3.6.1.4.1.30221.2.6.43")(version 3.0; acl "Allow password administrators to use the get password quality requirements extended operation"; allow (read) groupdn="ldap:///cn=Password Administrators,ou=Groups,dc=example,dc=com";)

(targetcontrol="1.3.6.1.4.1.30221.2.5.40")(version 3.0; acl "Allow password administrators to use the password validation details request control"; allow (read) groupdn="ldap:///cn=Password Administrators,ou=Groups,dc=example,dc=com";)

Also note that while the server does make the password modify extended operation available to anyone, there are additional requirements that must be satisfied before it can be used. In order to change your own password, you need to at least have write access to the password attribute in your own entry. And in order to change someone else’s password, not only do you need write access to the password attribute in that user’s entry, but you also need the password-reset privilege. You can grant that privilege by adding the ds-privilege-name attribute to the password resetter’s entry using a change like:

dn: uid=password.admin,ou=People,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: password-reset
-

Example Usage With the UnboundID LDAP SDK for Java

I’ve created a simple Java program that demonstrates the use of the get password quality requirements extended operation and the password validation details control. It’s not very flashy, and it doesn’t currently attempt to perform any client-side validation of the proposed new password before sending it to the directory server, but it’s at least a good jumping-off point for someone who wants to build this functionality into their own application.

You can find this example at https://github.com/dirmgr/blog-example-source-code/tree/master/password-quality-requirements.

Configuring Two-Factor Authentication in the Ping Identity Directory Server

Passwords aren’t going anywhere anytime soon, but they’re just not good enough on their own. It is entirely possible to choose a password that is extremely resistant to dictionary and brute force attacks, but the fact is that most people pick really bad passwords. They also tend to reuse the same passwords across multiple sites, which further increases the risk that their account could be compromised. And even those people who do choose really strong passwords might still be tricked into giving that password to a lookalike site via phishing or DNS hijacking, or they may fall victim to a keylogger. For these reasons and more, it’s always a good idea to combine a password with some additional piece of information when authenticating to a site.

The Ping Identity Directory Server has included support for two-factor authentication since 2012 (back when it was still the UnboundID Directory Server). Out of the box, we currently offer four types of two-factor authentication:

  • You can combine a static password with a time-based one-time password using the standard TOTP mechanism described in RFC 6238. These are the same kind of one-time passwords that are generated by apps like Google Authenticator or Authy.
  • You can combine a static password with a one-time password generated by a YubiKey device.
  • You can combine a static password with a one-time password that gets delivered to the user through some out-of-band mechanism like a text or email message, voice call, or app notification.
  • You can combine a static password with an X.509 certificate presented to the server during TLS negotiation.

In this post, I’ll describe the process for configuring the server to enable support for each of these types of authentication. We also provide the ability to create custom extensions to implement support for other types of authentication if desired, but that’s not going to be covered here.

Time-Based One-Time Passwords

The Ping Identity Directory Server supports time-based one-time passwords through the UNBOUNDID-TOTP SASL mechanism, which is enabled by default in the server. For a user to authenticate with this mechanism, their account must contain the ds-auth-totp-shared-secret attribute whose value is the base32-encoded representation of the shared secret that should be used to generate the one-time passwords. This shared secret must be known to both the client (or at least to an app in the client’s possession, like Google Authenticator) as well as to the server.

To facilitate generating and encoding the TOTP shared secret, the Directory Server provides a “generate TOTP shared secret” extended operation. The UnboundID LDAP SDK for Java provides support for this extended operation, and the class-level Javadoc describes the encoding for this operation in case you need to implement support for it in some other API. We also offer a generate-totp-shared-secret command-line tool that can be used for testing (or I suppose you could invoke it programmatically if you’d rather do that than use the UnboundID LDAP SDK or implement support for the extended operation yourself). For the sake of convenience, I’ll use this tool for the demonstration.

There are actually a couple of ways that you can use the generate TOTP shared secret operation: for a user to generate a shared secret for their own account (in which case the user’s static password must be provided), or for an administrator (who must have the password-reset privilege) to generate a shared secret for another user. I’d expect the most common use case to be a user generating a shared secret for their own account, so that’s the approach we’ll take for this example.

Note that while the generate TOTP shared secret extended operation is enabled out of the box, the shared secrets that it generates by default are not encrypted, which could make them easier for an attacker to steal if they got access to a copy of the user data. To prevent this, if the server is configured with data encryption enabled, then you should also enable the “Encrypt TOTP Secrets and Delivered Tokens” plugin. That can be done with the following configuration change:

dsconfig set-plugin-prop \
     --plugin-name "Encrypt TOTP Secrets and Delivered Tokens" \
     --set enabled:true

If we assume that our account has a username of “jdoe”, then the command to generate a shared secret for that user would be something like:

$ bin/generate-totp-shared-secret --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --authID u:jdoe \
     --promptForUserPassword
Enter the static password for user 'u:jdoe':
Successfully generated TOTP shared secret 'KATLTK5WMUSZIACLOMDP43KPSG2LUUOB'.

If we were using a nice web application to invoke the generate TOTP shared secret operation, we’d probably want to have it generate a QR code with that shared secret embedded in it so that it could be easily scanned and imported into an app like Google Authenticator (and you’d want to embed it in a URL like “otpauth://totp/jdoe%20in%20ds.example.com?secret=KATLTK5WMUSZIACLOMDP43KPSG2LUUOB”). For the sake of testing, we can either manually generate an appropriate QR code (for example, using an online utility like https://www.qr-code-generator.com), or you can just type the shared secret into the authenticator app.

Now that the account is configured for TOTP authentication, we can use the UNBOUNDID-TOTP SASL mechanism to authenticate to the server. As with the generate TOTP shared secret operation, this SASL mechanism is supported by the UnboundID LDAP SDK for Java, and most of our command-line tools support it as well, so we can test it with a utility like ldapsearch. You’ll need to use the “--saslOption” command-line argument to specify a number of parameters, including “mech” (the name of the SASL mechanism to use, which should be “UNBOUNDID-TOTP”), “authID” (the authentication ID for the user that’s trying to authenticate), and “totpPassword” (for the one-time password generated by the authenticator app). For example:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-TOTP \
     --saslOption authID=u:jdoe \
     --saslOption totpPassword={one-time-password} \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

YubiKey One-Time Passwords

If you’ve got a YubiKey device capable of generating one-time passwords in the Yubico OTP format (which should be most YubiKey devices except the ones that only support FIDO authentication), then you can use that device to generate one-time passwords to use in conjunction with your static password.

The Directory Server supports this type of authentication through the UNBOUNDID-YUBIKEY-OTP SASL mechanism. To enable support for this SASL mechanism, you first need an API key from Yubico, which you can get for free from https://upgrade.yubico.com/getapikey/. When you do this, you’ll get a client ID and a secret key, which give you access to Yubico’s authentication servers. Note that you only need this for the server (and you can share the same key for all server instances); end users don’t need to worry about this. Alternately, you could stand up your own authentication server if you’d rather not rely on the Yubico servers, but we won’t go into that here.

Once you’ve got the client ID and secret key, you can enable support for the SASL mechanism with the following configuration change:

dsconfig set-sasl-mechanism-handler-prop \
     --handler-name UNBOUNDID-YUBIKEY-OTP \
     --set enabled:true \
     --set yubikey-client-id:{client-id} \
     --set yubikey-api-key:{secret-key}

To be able to authenticate a user with this mechanism, you’ll need to update their account to include a ds-auth-yubikey-public-id attribute with one or more values that represent the public IDs of the YubiKey devices that you want to use (and it might not be a bad idea to have multiple devices registered for the same account, so that you have a backup key in case you lose or break the primary key).

To get the public ID for a YubiKey device, you can use it to generate a one-time password and strip off the last 32 characters. This isn’t considered secret information, so no encryption is necessary when storing it in an entry, and you can use a simple LDAP modify operation to manage the public IDs for a user account. Alternately, you can use the “register YubiKey OTP device” extended operation (supported and documented in the UnboundID LDAP SDK for Java) or use the register-yubikey-otp-device command-line tool. In the case of the command-line tool, you can register a device like:

$ bin/register-yubikey-otp-device --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --authenticationID u:jdoe \
     --promptForUserPassword \
     --otp {one-time-password}
Enter the static password for user u:jdoe:
Successfully registered the specified YubiKey OTP device for user u:jdoe

Note that when using this tool (and the register YubiKey OTP device extended operation in general), you should provide a complete one-time password and not just the public ID.

That should be all that is necessary to allow the user to authenticate with a YubiKey one-time password. We can test it with ldapsearch like so:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-YUBIKEY-OTP \
     --saslOption authID=u:jdoe \
     --saslOption otp={one-time-password} \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

Delivered One-Time Passwords

Delivered one-time passwords are more convenient than either time-based or YubiKey-generated one-time passwords because there’s less burden on the user. There’s no need to install an app or have any special hardware to generate one-time passwords. Instead, the server generates a one-time password and then sends it to the user through some out-of-band mechanism (that is, the user gets the one-time password through some mechanism other than LDAP). The server provides direct support for delivering these generated one-time passwords over SMS (using the Twilio service) or via email, and the UnboundID Server SDK provides an API that allows you to create your own delivery mechanisms. Note, however, that while it may be more convenient to use, it’s also generally considered less secure (especially if you’re using SMS).

There’s also more effort involved in enabling support for delivered one-time passwords than either time-based or YubiKey-generated one-time passwords. The first thing you should do is configure the server to ensure that the generated one-time password values will be encrypted (unless you already did it above for encrypting TOTP shared secrets), which you can do as follows:

dsconfig set-plugin-prop \
     --plugin-name "Encrypt TOTP Secrets and Delivered Tokens" \
     --set enabled:true

Next, we need to configure one or more delivery mechanisms. These are configured in the “OTP Delivery Mechanism” section of dsconfig. For example, to configure a delivery mechanism for email, you could use something like:

dsconfig create-otp-delivery-mechanism \
     --mechanism-name Email \
     --type email \
     --set enabled:true \
     --set 'sender-address:otp@example.com' \
     --set "message-text-before-otp:Your one-time password is ‘" \
     --set "message-text-after-otp:’."

If you’re using email, you’ll also need to configure one or more SMTP external servers and set the smtp-server property in the global configuration.

Alternately, if you’re using SMS, then you’ll need to have a Twilio account and fill in the appropriate values for the SID, auth token, and phone number fields, like:

dsconfig create-otp-delivery-mechanism \
     --mechanism-name SMS \
     --type twilio \
     --set enabled:true \
     --set twilio-account-sid:{sid} \
     --set twilio-auth-token:{auth-token} \
     --set sender-phone-number:{phone-number} \
     --set "message-text-before-otp:Your one-time password is '" \
     --set "message-text-after-otp:'."

Once the delivery mechanism(s) are configured, you can enable the delivered one-time password SASL mechanism handler as follows:

dsconfig create-sasl-mechanism-handler \
     --handler-name UNBOUNDID-DELIVERED-OTP \
     --type unboundid-delivered-otp \
     --set enabled:true \
     --set "identity-mapper:Exact Match"

You’ll also need to enable support for the deliver one-time password extended operation, which is used to request that the server generate and deliver a one-time password for a user. You can do that like:

dsconfig create-extended-operation-handler \
     --handler-name "Deliver One-Time Passwords" \
     --type deliver-otp \
     --set enabled:true \
     --set "identity-mapper:Exact Match" \
     --set "password-generator:One-Time Password Generator" \
     --set default-otp-delivery-mechanism:Email \
     --set default-otp-delivery-mechanism:SMS

The process for authenticating with a delivered one-time password involves two steps. In the first step, you need to request that the server generate and deliver a one-time password, which can be accomplished with the “deliver one-time password” extended operation, which is supported and documented in the UnboundID LDAP SDK for Java and can be tested with the deliver-one-time-password command-line tool. Then, once you have that one-time password, you can use the UNBOUNDID-DELIVERED-OTP SASL mechanism to complete the authentication.

If you have multiple delivery mechanisms configured in the server, then there are several ways that the server can decide which one to use to send a one-time password to a user.

  • The server will only attempt to use a delivery mechanism that applies to the target user. For example, if a user entry has an email address but not a mobile phone number, then it won’t try to deliver a one-time password to that user via SMS.
  • The deliver one-time password extended request can be used to indicate which delivery mechanism(s) should be attempted, and in what order. If you’re using the deliver-one-time-password command-line tool, then you can use the --deliveryMechanism argument to specify this.
  • If the extended request doesn’t indicate which mechanisms to use, then the server will check the user’s entry to see if it has a ds-auth-preferred-otp-delivery-mechanism operational attribute. If so, then it will be used to specify the desired delivery mechanism.
  • If nothing else, then the server will use the order specified in the default-otp-delivery-mechanism property of the extended operation handler configuration.

As an example, let’s demonstrate the process of authenticating as user jdoe with a one-time password delivered via email. We can start the process using the deliver-one-time-password command-line tool as follows:

$ bin/deliver-one-time-password --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --userName jdoe \
     --promptForBindPassword \
     --deliveryMechanism Email
Enter the static password for the user:

Successfully delivered a one-time password via mechanism 'Email' to 'jdoe@example.com'

Now, we can check our email, and there should be a message with the one-time password. Once we have it, we can use a tool like ldapsearch to authenticate with that one-time password using the UNBOUNDID-DELIVERED-OTP SASL mechanism, like:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-DELIVERED-OTP \
     --saslOption authID=u:jdoe \
     --saslOption otp={one-time-password} \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

Combining Certificates and Passwords

When establishing a TLS-based connection, the server will always present its certificate to the client, and the client will decide whether it wants to trust that certificate and continue establishing the secure connection. Further, the server may optionally ask the client to provide its own certificate, and the client may then optionally provide one. If the server requests a client certificate and the client provides one, then the server will determine whether it wants to trust that client certificate and continue the negotiation process.

If a client has provided its own certificate to the directory server and the server has accepted it, then the client can use a SASL EXTERNAL bind to request that the server use the information in the certificate to identify and authenticate the client. Most LDAP servers support this, and it can be a very strong form of single-factor authentication. However, the Ping Identity Directory Server also offers an UNBOUNDID-CERTIFICATE-PLUS-PASSWORD SASL mechanism that takes this even further by combining the client certificate with a static password.

Certificate-based authentication (regardless of whether you also include a static password) isn’t something that has really caught on because of the hassle and complexity of dealing with certificates. It’s honestly probably not a great option for most end users, although it may be an attractive option for more advanced users like server administrators. But one big benefit that the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD mechanism has over the two-factor mechanisms that rely on one-time passwords is that it can be used in a completely non-interactive manner. That makes it suitable for use in authenticating one application to another.

As with the EXTERNAL mechanism, the Ping Identity Directory Server has support for the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD mechanism enabled out of the box. Just about the only thing you’re likely to want to configure is the certificate-mapper property in its configuration, which is used to uniquely identify the account for the user that is trying to authenticate based on the contents of the certificate. The certificate mapper that is configured by default will only work if the certificate’s subject DN matches the DN of the corresponding user entry. Other certificate mappers can be used to identify the user in other ways, including searching based on attributes in the certificate subject or searching for the owner of a certificate based on the fingerprint of that certificate.

Due to an unfortunate oversight, command-line tools currently shipped with the server do not include support for the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD SASL mechanism. That will be corrected in the next release, but if you want to test with it now, you can check out the UnboundID LDAP SDK for Java from its GitHub project and build it for yourself. That will allow you to test certificate+password authentication like so:

$ tools/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --keyStorePath client-keystore \
     --promptForKeyStorePassword \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-CERTIFICATE-PLUS-PASSWORD \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the key store password:

Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 42a82498-93c6-4e62-9c6c-8fe6b33e1550
startTime: 20190107072612Z

# Result Code:  0 (success)
# Number of Entries Returned:  1
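
If you’re building an application with the UnboundID LDAP SDK for Java (which, as noted above, is where support for this mechanism is already available), you can also perform the bind programmatically. Here’s a minimal sketch, assuming the SDK exposes an UnboundIDCertificatePlusPasswordBindRequest class in the com.unboundid.ldap.sdk.unboundidds package whose constructor takes the static password; the key store, trust store, host, and password values below are placeholders:

import com.unboundid.ldap.sdk.BindResult;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.unboundidds.UnboundIDCertificatePlusPasswordBindRequest;
import com.unboundid.util.ssl.KeyStoreKeyManager;
import com.unboundid.util.ssl.SSLUtil;
import com.unboundid.util.ssl.TrustStoreTrustManager;

public final class CertificatePlusPasswordBindExample
{
  public static void main(final String... args)
         throws Exception
  {
    // Present the client certificate from client-keystore during TLS
    // negotiation, and trust the server certificate from config/truststore.
    final SSLUtil sslUtil = new SSLUtil(
         new KeyStoreKeyManager("client-keystore",
              "keystore-pin".toCharArray()),
         new TrustStoreTrustManager("config/truststore"));

    final LDAPConnection conn = new LDAPConnection(
         sslUtil.createSSLSocketFactory(), "ds.example.com", 636);
    try
    {
      // The server identifies the user from the client certificate (via its
      // certificate mapper) and verifies the supplied static password.  The
      // single-argument constructor is an assumption.
      final BindResult bindResult = conn.bind(
           new UnboundIDCertificatePlusPasswordBindRequest("password"));
      System.out.println("Bind result: " + bindResult.getResultCode());
    }
    finally
    {
      conn.close();
    }
  }
}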

Ping Identity Directory Server 7.2.0.0

We have just released the Ping Identity Directory Server version 7.2.0.0, available for download at https://www.pingidentity.com/en/resources/downloads/pingdirectory-downloads.html. This new release offers a lot of new features, some substantial performance improvements, and a number of bug fixes. The release notes provide a pretty comprehensive overview of the changes, but here are some of the highlights:

  • Added a REST API (using JSON over HTTP) for interacting with the server data. Although we already supported the REST-based SCIM protocol, our new REST API is more feature-rich, requires less administrative overhead, and isn’t constrained by the limitations that SCIM compliance imposes. SCIM remains supported.

  • Dramatically improved the logic that the server uses for evaluating complex filters. It now uses a number of additional metrics to make more intelligent decisions about the order in which components should be evaluated to get the biggest bang for the buck.

  • Expanded our composite index support to handle ANDs of multiple components (for example, “(&(givenName=?)(sn=?))”). These filters can consist entirely of equality components, or they may combine one or more equality components with either a greater-or-equal filter, a less-or-equal filter, a bounded range filter, or a substring filter.

  • When performing a new install, the server is now configured to automatically export data to LDIF every day at 1:05 a.m. These exports will be compressed and encrypted (if encryption is enabled during the setup process), and they will be rate limited to minimize the impact on performance. We have also updated the LDIF export task to support exporting the contents of multiple backends in the same invocation.

  • Added support for a new data recovery log and a new extract-data-recovery-log-changes command-line tool. This can help administrators revert or replay a selected set of changes if the need arises (for example, if a malfunctioning application applies one or more incorrect changes to the server).

  • Added support for delaying the response to a failed bind operation, during which time no other operations will be permitted on the client connection. This can be used as an alternative to account lockout as a means of substantially inhibiting password guessing attacks without the risk of locking out the legitimate user who has the right credentials. It can also be used in conjunction with account lockout if desired.

  • Updated client connection policy support to make it possible to customize the behavior that the server exhibits if a client exceeds a configured maximum number of concurrent requests. Previously, it was only possible to reject requests with a “busy” result. It is now possible to use additional result codes when rejecting those requests, or to terminate the client connection and abandon all of its outstanding requests.

  • Added support for a new “exec” task (and recurring task) that can be used to invoke a specified command on the server system, either as a one-time event or at recurring intervals. There are several safeguards in place to prevent unauthorized use: the task must be enabled in the server (it is not by default), the command to be invoked must be contained in a whitelist file (no commands are whitelisted by default), and the user scheduling the task must have a special privilege that permits its use (no users, not even root users, have this privilege by default). We have also added a new schedule-exec-task tool that can make it easier to schedule an exec task.

  • Added support for a new file retention task (and recurring task) that can be used to remove files with names matching a given pattern that fall outside of a provided set of retention criteria. The server is configured with instances of this task that can be used to clean up expensive operation dumps, lock conflict details, and work queue backlog thread dumps (for each type, any files other than the 100 most recent that are over 30 days old will be automatically removed).

  • Added support for new tasks (and recurring tasks) that can be used to force the server to enter and leave lockdown mode. While in lockdown mode, the server reports itself as unavailable to the Directory Proxy Server (and other clients that look at its availability status) and only accepts requests from a restricted set of clients.

  • Added support for a new delay task (and recurring task) that can be used to inject a delay between other tasks. The delay can be for a fixed period of time, can wait until the server is idle (that is, there are no outstanding requests and all worker threads are idle), or until a given set of search criteria matches one or more entries.

  • Added support for a new constructed virtual attribute type that can be used to dynamically construct values for an attribute using a combination of static text and the values of other attributes from the entry.

  • Improved user and group management in the delegated administration web application. Delegated administrators can create users and control group membership for selected users.

  • Added support for encrypting TOTP shared secrets, delivered one-time passwords, password reset tokens, and single-use tokens.

  • Updated the work queue implementation to improve performance and reduce contention under extreme load.

  • Updated the LDAP-accessible changelog backend to add support for searches that include the simple paged results control. This control was previously only available for searches in local DB backends (see the sketch after this list).

  • Improved the server’s rebuild-index performance, especially in environments with encrypted data.

  • Added a new time limit log retention policy to support removing log files older than a specified age.

  • Updated the audit log to support including a number of additional fields, including the server product name, the server instance name, request control OIDs, details of any intermediate client or operation purpose controls in the request, the origin of the operation (whether it was replicated, an internal operation, requested via SCIM, etc.), whether an add operation was an undelete, whether a delete operation was a soft delete, and whether a delete operation was a subtree delete.

  • Improved trace logging for HTTP-based services (e.g., the REST API, SCIM, the consent API, etc.) to make it easier to correlate events across trace logs, HTTP access logs, and general access logs.

  • Updated the replication database so that it is possible to specify a minimum number of changes to retain. Previously, it was only possible to specify the minimum age for changes to retain.

  • Updated the purge expired data plugin to support deleting expired non-leaf entries. If enabled, the expired entry and all of its subordinate entries will be removed.

  • Added support for additional equality matching rules that may be used for attributes with a JSON object syntax. Previously, the server always used case-sensitive matching for field names and case-insensitive matching for string values. The new matching rules make it possible to configure any combination of case sensitivity for these components.

  • Added the ability to configure multiple instances of the SCIM servlet extension in the server, which allows multiple SCIM service configurations in the same server.

  • Updated the server to prevent the possibility of a persistent search client that is slow to consume results from interfering with other clients and operations in the server.

  • Fixed an issue in which global sensitive attribute restrictions could be imposed on replicated operations, which could cause some types of replicated changes to be rejected.

  • Fixed an issue that could make it difficult to use third-party tasks created with the Server SDK.

  • Fixed an issue in which the correct size and time limit constraints may not be imposed for search operations processed with an alternate authorization identity.

  • Fixed an issue with the get effective rights request control that could cause it to incorrectly report that an unauthenticated client could have read access to an entry if there were any ACIs making use of the “ldap:///all” bind rule. Note that this only affected the response to a get effective rights request, and the server did not actually expose any data to unauthorized clients.

  • Fixed an issue with the dictionary password validator that could cause case-insensitive validation to behave incorrectly if the provided dictionary file contained passwords with uppercase characters.

  • Fixed an issue in servers with an account status notification handler enabled. In some cases, an administrative password reset could cause a notification to be generated on each replica instead of just the server that originally processed the change.

  • Fixed a SCIM issue in which the totalResults value for a paged request could be incorrect if the SCIM resources XML file had multiple base DNs defined.

  • Added support for running on Java 11 (both the Oracle and OpenJDK distributions), and the garbage-first garbage collector (G1GC) will be configured by default when installing the server with a Java 11 JVM. Java 8 (Oracle and OpenJDK distributions) remains supported.

  • Added support for the RedHat 7.5, CentOS 7.5, and Ubuntu 18.04 LTS Linux distributions. We also support RedHat 6.6, RedHat 6.8, RedHat 6.9, RedHat 7.4, CentOS 6.9, CentOS 7.4, SUSE Enterprise 11 SP4, SUSE Enterprise 12 SP3, Ubuntu 16.04 LTS, Amazon Linux, Windows Server 2012 R2 and Windows Server 2016. Supported virtualization platforms include VMWare vSphere 6.0, VMWare ESX 6.0, KVM, Amazon EC2, and Microsoft Azure.
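
To illustrate the changelog paging support mentioned above, here’s a minimal sketch that uses the UnboundID LDAP SDK for Java’s SimplePagedResultsControl to walk the LDAP-accessible changelog one page at a time. The connection details, credentials, and page size are placeholders, and it assumes the changelog backend is enabled on the server:

import com.unboundid.asn1.ASN1OctetString;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchScope;
import com.unboundid.ldap.sdk.controls.SimplePagedResultsControl;

public final class PagedChangelogSearchExample
{
  public static void main(final String... args)
         throws Exception
  {
    final LDAPConnection conn =
         new LDAPConnection("ds.example.com", 389);
    try
    {
      conn.bind("cn=Directory Manager", "password");

      // Request 100 changelog entries at a time, passing the cookie from
      // each response back in the next request until no more remain.
      ASN1OctetString cookie = null;
      do
      {
        final SearchRequest searchRequest = new SearchRequest("cn=changelog",
             SearchScope.ONE, "(objectClass=changeLogEntry)");
        searchRequest.setControls(
             new SimplePagedResultsControl(100, cookie));

        final SearchResult searchResult = conn.search(searchRequest);
        System.out.println("Retrieved " + searchResult.getEntryCount() +
             " changelog entries in this page");

        final SimplePagedResultsControl responseControl =
             SimplePagedResultsControl.get(searchResult);
        cookie = ((responseControl != null) &&
             responseControl.moreResultsToReturn())
             ? responseControl.getCookie()
             : null;
      }
      while (cookie != null);
    }
    finally
    {
      conn.close();
    }
  }
}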

UnboundID LDAP SDK for Java 4.0.9

We have just released version 4.0.9 of the UnboundID LDAP SDK for Java. It is available for download from the releases page of our GitHub repository, from the Files page of our SourceForge repository, and from the Maven Central Repository.

The most significant changes included in this release are:

  • Updated the command-line tool framework to allow tools to have descriptions composed of multiple paragraphs.
  • Updated the support for passphrase-based encryption to work around an apparent JVM bug in the support for some MAC algorithms that could cause them to create an incorrect MAC.
  • Updated all existing ArgumentValueValidator instances to implement the Serializable interface. This can help avoid errors when trying to serialize an argument configured with one of those validators.
  • Updated code used to create HashSet, LinkedHashSet, HashMap, LinkedHashMap, and ConcurrentHashMap instances with a known set of elements to use better logic for computing the initial capacity, making it less likely that the collection will need to be dynamically resized.
  • Updated the LDIF change record API to make it possible to obtain a copy of a change record with a given set of controls.
  • Added additional methods for obtaining a normalized string representation of JSON objects and value components. The new methods provide more control over case sensitivity of field names and string values, and over array order.
  • Improved support for running in a JVM with a security manager that prevents setting system properties (which also prevents access to the System.getProperties method because the returned map is mutable).

UnboundID LDAP SDK for Java 4.0.8

We have just released version 4.0.8 of the UnboundID LDAP SDK for Java. It is available for download from the releases page of our GitHub repository (https://github.com/pingidentity/ldapsdk/releases), from the Files page of our SourceForge repository (https://sourceforge.net/projects/ldap-sdk/files/), and from the Maven Central Repository (https://search.maven.org/search?q=g:com.unboundid%20AND%20a:unboundid-ldapsdk&core=gav).

The most significant changes included in this release are:

  • Fixed a bug in the modrate tool that could cause it to use a fixed string instead of a randomly generated one as the value to use in modifications.
  • Fixed an address caching bug in the RoundRobinDNSServerSet class. An inverted comparison could cause it to use cached addresses after they had expired, and to ignore cached addresses that had not yet expired.
  • Updated the ldapmodify tool to remove the restriction that prevented using arbitrary controls with an LDAP transaction or the Ping-proprietary multi-update extended operation.
  • Updated a number of locations in the code that caught Throwable so that they re-throw the original Throwable instance (after performing any appropriate cleanup) if that instance was an Error or a RuntimeException.
  • Added a number of JSONObject convenience methods to make it easier to get the value of a specified field as a string, Boolean, number, object, array, or null value (see the sketch after this list).
  • Added a StaticUtils.toArray convenience method that can be useful for converting a collection to an array when the type of element in the collection isn’t known at compile time.
  • Added support for parsing audit log messages generated by the Ping Identity Directory Server for versions 7.1 and later, including generating LDIF change records that can be used to revert change records (if the audit log is configured to record changes in a reversible form).
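
To illustrate the JSONObject convenience getters and the StaticUtils.toArray method mentioned above, here’s a minimal sketch. The getFieldAsString and getFieldAsBoolean method names and the StaticUtils.toArray(Collection, Class) signature are assumptions based on the descriptions in this list rather than authoritative API documentation:

import java.util.Arrays;
import java.util.List;

import com.unboundid.util.StaticUtils;
import com.unboundid.util.json.JSONField;
import com.unboundid.util.json.JSONObject;

public final class ConvenienceMethodExample
{
  public static void main(final String... args)
  {
    final JSONObject o = new JSONObject(
         new JSONField("name", "jdoe"),
         new JSONField("active", true));

    // Assumed convenience getters; each is expected to return null if the
    // field is missing or has a different type.
    final String name = o.getFieldAsString("name");
    final Boolean active = o.getFieldAsBoolean("active");
    System.out.println(name + " active=" + active);

    // Assumed StaticUtils.toArray signature for converting a collection to
    // an array when the element type isn't known until runtime.
    final List<String> values = Arrays.asList("a", "b", "c");
    final String[] array = StaticUtils.toArray(values, String.class);
    System.out.println(Arrays.toString(array));
  }
}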