The Retain Identity Request Control in the Ping Identity Directory Server

Many LDAP-enabled applications use a directory server to authenticate users, which often consists of a search to find the user’s entry based on the provided login ID, followed by a bind to verify the provided credentials. Some of these applications may purely use the directory server for authentication, while others may then go ahead and perform additional operations on behalf of the logged-in users.

If the application is well designed, then it will probably maintain a pool of connections that it can repeatedly reuse rather than establishing a new connection each time it needs to perform an operation in the server. Usually, the search to find a user is performed on a connection bound as an account created for that application. And if the application performs operations on behalf of the authenticated users, then it often does so while authenticated under that same application account, using something like the proxied authorization request control to request that the server process those operations under the appropriate user’s authority.

The problem, though, is that performing a bind operation changes the authentication identity of the connection on which it is processed. If the bind is successful, then subsequent operations on that connection will be processed under the authority of the user identified by that bind request. If the bind fails, then the connection becomes unauthenticated, so subsequent requests are processed anonymously. There are a couple of common ways to work around this problem:

  • The application can maintain two different connection pools: one to use just for bind operations, and the other for all other types of operations.
  • After attempting a bind to verify a user’s credentials (whether successful or not), it can re-authenticate as the application account.

In the Ping Identity Directory Server, we offer a third option: the bind request can include the retain identity request control. This control tells the server that it should perform all of the normal processing associated with the bind (verify the user’s credentials, update any password policy state information for that user, etc.), but not change the authentication identity of the underlying connection. Regardless of whether the bind succeeds or fails, the connection will end up with the same authentication/authorization identity that it had before the bind was attempted. This allows you to use just a single connection pool that stays authenticated as the application’s account, while still being able to verify credentials without fear of interfering with access control evaluation for operations following those binds.

The retain identity request control is very easy to use. If you’re using the UnboundID LDAP SDK for Java, you can just use the RetainIdentityRequestControl class, and the Javadoc includes an example demonstrating its use. If you’re using some other API, then you just need to specify an OID of “1.3.6.1.4.1.30221.2.5.3”, and you don’t need to provide a value. We recommend making the control critical so that the bind attempt will fail if the server doesn’t support it (although we added support for this control back in 2008 when it was still the UnboundID Directory Server, so it’s been around for more than a decade).

I’m not aware of any other directory server that supports the retain identity request control (aside from the LDAP SDK’s in-memory directory server), but it’s very simple and very useful, so if you’re using some other type of server you might inquire about whether they’d implement it or something similar. Of course, you could also switch to the Ping Identity Directory Server and get support for this and lots of other helpful features that other servers don’t provide.

Programmatically Retrieving Password Quality Requirements in the Ping Identity Directory Server

When changing a user’s password, most LDAP directory servers provide some way to determine whether the new password is acceptable. For example, when allowing a user to choose a new password, you might want to ensure that the new password has at least some minimum number of characters, that it’s not found in a dictionary of commonly used passwords, and that it’s not too similar to the user’s current password.

It’s important to be able to tell the user what the requirements are so that they don’t keep trying things that the server will reject. And you might also want to provide some kind of password strength meter or indicator of acceptability to let them visually see how good their password is. But you don’t want to do this with hard-coded logic in the client because different sets of users might have different password quality requirements, and because the server configuration can change, so even the requirements for a given user may change over time. What you really want is a way to programmatically determine what requirements the server will impose.

Fortunately, the Ping Identity Directory Server provides a “get password quality requirements” extended operation that can provide this information. We also have a “password validation details” control that you can use when changing a password to request information about how well the proposed password satisfies those requirements. These features were added in the 5.2.0.0 release back in 2015, so they’ve been around for several years. The UnboundID LDAP SDK for Java makes it easy to use them in Java clients, but you can make use of them in other languages if you’re willing to do your own encoding and decoding.

The Get Password Quality Requirements Extended Request

The get password quality requirements extended request allows a client to ask the server what requirements it will impose when setting a user’s password. It’s best to use this operation before prompting the user for a new password so that you can display the requirements in advance and potentially provide client-side feedback as to whether the proposed password is acceptable.

Since the server can enforce different requirements under different conditions, you need to tell it the context for the new password. Those contexts include:

  • Adding a new entry that includes a password. You can either indicate that the new entry will use the server’s default password policy, or that it will use a specified policy.
  • A user changing their own password. It doesn’t matter whether the password change is done by a standard LDAP modify operation that targets the password attribute or with the password modify extended operation; the requirements for a self change will be the same in either case.
  • An administrator resetting another user’s password. Again, it doesn’t matter whether it’s a regular LDAP modify or a password modify extended operation. You just need to indicate which user’s password is being reset so the server can determine which requirements will be enforced.

The UnboundID LDAP SDK for Java provides support for this request through the GetPasswordQualityRequirementsExtendedRequest class, but if you need to implement support for it in some other API, it has an OID of 1.3.6.1.4.1.30221.2.6.43 and a value with the following ASN.1 encoding:

GetPasswordQualityRequirementsRequestValue ::= SEQUENCE {
     target     CHOICE {
          addWithDefaultPasswordPolicy           [0] NULL,
          addWithSpecifiedPasswordPolicy         [1] LDAPDN,
          selfChangeForAuthorizationIdentity     [2] NULL,
          selfChangeForSpecifiedUser             [3] LDAPDN,
          administrativeResetForUser             [4] LDAPDN,
          ... },
     ... }
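If you’re doing the encoding yourself, the value is small enough to construct without a full ASN.1 library. The following sketch builds the BER encoding directly from the definition above (the CHOICE alternatives become context-specific primitive tags with definite short-form lengths); the class and method names here are purely illustrative, and a real client would be better served by a proper BER library:

```java
// Minimal, illustrative BER encoding of GetPasswordQualityRequirementsRequestValue.
// A real client should use a proper ASN.1/BER library (or the LDAP SDK, which
// handles this for you); this sketch only supports short-form lengths.
public class GetPwQualityRequirementsValue {

  // CHOICE alternatives that carry no value ([0] and [2]):
  static byte[] encodeNullTarget(int choiceNumber) {
    // SEQUENCE { [choiceNumber] NULL }
    return new byte[] { 0x30, 0x02, (byte) (0x80 | choiceNumber), 0x00 };
  }

  // CHOICE alternatives that carry an LDAPDN ([1], [3], and [4]):
  static byte[] encodeDNTarget(int choiceNumber, String dn) {
    byte[] dnBytes = dn.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    if (dnBytes.length > 125) {
      throw new IllegalArgumentException(
          "Long-form BER lengths are not implemented in this sketch");
    }
    byte[] value = new byte[dnBytes.length + 4];
    value[0] = 0x30;                          // universal SEQUENCE tag
    value[1] = (byte) (dnBytes.length + 2);   // sequence length
    value[2] = (byte) (0x80 | choiceNumber);  // context-specific primitive tag
    value[3] = (byte) dnBytes.length;         // DN length
    System.arraycopy(dnBytes, 0, value, 4, dnBytes.length);
    return value;
  }
}
```

For example, the value for an add with the default password policy is just the four bytes 30 02 80 00.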

The Get Password Quality Requirements Extended Response

The server uses the get password quality requirements extended response to tell the client what the requirements are for the target user in the indicated context. Each validator configured in a password policy can return its own password quality requirement structure, which includes the following components:

  • A human-readable description of the validator’s purpose in a user-friendly form. For example, “The password must contain at least 8 characters”.
  • A validation type that identifies the type of validator for client-side evaluation. For example, “length”.
  • A set of name-value pairs that provide information about the configuration of that password validator. For example, a name of “min-password-length” and a value of “8”.

A list of the validation types and corresponding properties for all the password validators included with the server is provided later in this post.

In addition to those requirements, the response may also provide additional information about the password change, including:

  • Whether the user will be required to provide their current password when choosing a new password. This is only applicable for a self change.
  • Whether the user will be required to choose a new password the first time they authenticate after the new password is set. This is only applicable for an add or an administrative reset, and it’s based on the password policy’s force-change-on-add or force-change-on-reset configuration.
  • The length of time that the newly set password should be considered valid. If the user will be required to change their password on the next authentication, then this will be the length of time they have before that temporary password becomes invalid. Otherwise, it specifies the length of time until the password expires.

The UnboundID LDAP SDK for Java provides support for the extended result through the GetPasswordQualityRequirementsExtendedResult class and the related PasswordQualityRequirement class. In case you need to implement support for this extended response in some other API, it has an OID of 1.3.6.1.4.1.30221.2.6.44 and a value with the following ASN.1 encoding:

GetPasswordQualityRequirementsResultValue ::= SEQUENCE {
     requirements                SEQUENCE OF PasswordQualityRequirement,
     currentPasswordRequired     [0] BOOLEAN OPTIONAL,
     mustChangePassword          [1] BOOLEAN OPTIONAL,
     secondsUntilExpiration      [2] INTEGER OPTIONAL,
     ... }

PasswordQualityRequirement ::= SEQUENCE {
     description                  OCTET STRING,
     clientSideValidationInfo     [0] SEQUENCE {
          validationType     OCTET STRING,
          properties         [0] SET OF SEQUENCE {
               name      OCTET STRING,
               value     OCTET STRING } OPTIONAL } OPTIONAL }

The Password Validation Details Request Control

As noted above, you should use the get password quality requirements extended operation before prompting a user for a new password so that they know what the requirements are in advance. But, if the server rejects the proposed password, it’s useful for the client to be able to tell exactly why it was rejected. The Ping Identity Directory Server will include helpful information in the diagnostic message, but that’s just a blob of text. You might want something more parseable so that you can provide the user with the pertinent information with better formatting. And for that, we provide the password validation details request control.

This control can be included in an add request that includes a password, a modify request that attempts to alter a password, or a password modify extended request. It tells the server that the client would like a response control (outlined below) that includes information about each of the requirements for the new password and whether that requirement was satisfied.

The UnboundID LDAP SDK for Java provides support for this request control in the PasswordValidationDetailsRequestControl class, but if you want to use it in another API, then all you need to do is create a request control with an OID of 1.3.6.1.4.1.30221.2.5.40. The criticality can be either true or false (but it’s probably better to use false so that the server won’t reject the request if the control is unavailable for some reason), and it does not take a value.

The Password Validation Details Response Control

When the server processes an add, modify, or password modify request that included the password validation details request control, the response that the server returns may include a corresponding password validation details response control with information about how well the proposed password satisfies each of the requirements. If present, the response control will include the following components:

  • One of the following:

    • Information about each of the requirements for the proposed password and whether that requirement was satisfied.
    • A flag that indicates that the request didn’t try to alter a password.
    • A flag that indicates that the request tried to set multiple passwords.
    • A flag that indicates that the request didn’t get to the point of trying to validate the password because some other problem was encountered first.
  • An optional flag that indicates whether the server requires the user to provide their current password when choosing a new password, but that the current password was not given. This is only applicable for self changes, and not for adds or administrative resets.
  • An optional flag that indicates whether the user will be required to change their password the next time they authenticate. This is applicable for adds and administrative resets, but not for self changes.
  • An optional value that specifies the length of time that the new password will be considered valid. If it was an add or an administrative reset and the user will be required to choose a new password the next time they authenticate, then this is the length of time that they have to do that. Otherwise, it will be the length of time until the new password expires.

The UnboundID LDAP SDK for Java provides support for this response control through the PasswordValidationDetailsResponseControl class, with the PasswordQualityRequirementValidationResult class providing information about whether each of the requirements was satisfied. If you need to implement support for this control in some other API, then it has a response OID of 1.3.6.1.4.1.30221.2.5.41 and a value with the following ASN.1 encoding:

PasswordValidationDetailsResponse ::= SEQUENCE {
     validationResult           CHOICE {
          validationDetails             [0] SEQUENCE OF
               PasswordQualityRequirementValidationResult,
          noPasswordProvided            [1] NULL,
          multiplePasswordsProvided     [2] NULL,
          noValidationAttempted         [3] NULL,
          ... },
     missingCurrentPassword     [3] BOOLEAN DEFAULT FALSE,
     mustChangePassword         [4] BOOLEAN DEFAULT FALSE,
     secondsUntilExpiration     [5] INTEGER OPTIONAL,
     ... }

PasswordQualityRequirementValidationResult ::= SEQUENCE {
     passwordRequirement      PasswordQualityRequirement,
     requirementSatisfied     BOOLEAN,
     additionalInfo           [0] OCTET STRING OPTIONAL }

Validation Types and Properties for Available Password Validators

The information included in the get password quality requirements extended response is enough for the client to display a user-friendly list of the requirements that will be enforced for a new password. However, it also includes information that can be used for some client-side evaluation of how well a proposed password satisfies those requirements. This can help the client tell the user when the password isn’t good enough without having to send the request to the server, or possibly provide feedback about the strength or acceptability of the new password while they’re still typing it. This is possible because of the validation type and properties components of each password quality requirement.

Of course, you can really only take advantage of this feature if you know what the possible validation types and properties are for each of the password validators. This section provides that information for each of the types of validators included with the Ping Identity Directory Server (or, at least the ones available at the time of this writing; we may add more in the future).

Also note that for some types of password validators, you may not be able to perform client-side validation. For example, if the server is configured to reject any proposed password that it finds in a dictionary of commonly used passwords, the client can’t make that determination because it doesn’t have access to that dictionary. In such cases, it’s still possible to display the requirement to the user so that they’re aware of it in advance, and it may still be possible to perform client-side validation for other types of requirements, so there’s still benefit to using this information.

The Attribute Value Password Validator

The attribute value password validator can be used to prevent the proposed password from matching the value of another attribute in a user’s entry. You can specify which attributes to check, or it can check all user attributes in the entry (which is the default). It can be configured to reject the case in which the proposed password exactly matches a value for another attribute, but it can also be configured to reject based on substring matches (for example, if an attribute value is a substring of the proposed password, or if the proposed password is a substring of an attribute value). You can also optionally test the proposed password in reversed order.

You can perform client-side checking for this password validator if you have a copy of the target user’s entry. The validation type is “attribute-value”, and it offers the following validation properties:

  • match-attribute-{counter} — The name of an attribute whose values will be checked against the proposed password. The counter value starts at 1 and will increase sequentially for each additional attribute to be checked. For example, if the validator is configured to check the proposed password against the givenName, sn, mail, and telephoneNumber attributes, you would have a match-attribute-1 property with a value of givenName, a match-attribute-2 property with a value of sn, a match-attribute-3 property with a value of mail, and a match-attribute-4 property with a value of telephoneNumber.
  • test-password-substring-of-attribute-value — Indicates whether to check to see if the proposed password matches a substring of any of the target attributes. If this property has a value of true, then this substring check will be performed. If the property has a value of false, or if it is absent, then the substring check will not be performed.
  • test-attribute-value-substring-of-password — Indicates whether to check to see if any of the target attributes matches a substring of the proposed password. If this property has a value of true, then this substring check will be performed. If the property has a value of false, or if it is absent, then the substring check will not be performed.
  • test-reversed-password — Indicates whether to check the proposed password with the order of the characters reversed in addition to the order in which they were provided. If this property has a value of true, then both the forward and reversed password will be checked. If the property has a value of false, or if it is absent, then the password will only be checked in forward order.
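Putting those properties together, a client-side check for this validator might look something like the following sketch. It assumes you have already retrieved the target user’s attribute values, and the class and method names are just for illustration; the server’s own matching semantics may differ in details such as case handling:

```java
import java.util.List;

// Illustrative client-side sketch of the attribute value password validator's
// checks. Assumes the values of the configured match attributes have already
// been fetched from the target user's entry.
public class AttributeValueCheck {
  static boolean isAcceptable(String password, List<String> attributeValues,
      boolean testPasswordSubstringOfAttributeValue,
      boolean testAttributeValueSubstringOfPassword,
      boolean testReversedPassword) {
    String reversed = new StringBuilder(password).reverse().toString();
    String[] candidates = testReversedPassword
        ? new String[] { password, reversed }
        : new String[] { password };
    for (String value : attributeValues) {
      for (String pw : candidates) {
        if (pw.equals(value)) {
          return false; // exact matches are always rejected
        }
        if (testPasswordSubstringOfAttributeValue && value.contains(pw)) {
          return false; // password appears inside an attribute value
        }
        if (testAttributeValueSubstringOfPassword && pw.contains(value)) {
          return false; // attribute value appears inside the password
        }
      }
    }
    return true;
  }
}
```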

The Character Set Password Validator

The character set password validator can be used to ensure that passwords have a minimum number of characters from each of a specified collection of character sets. For example, you could define one set with all of the lowercase letters, one with all the uppercase letters, one with all the numeric digits, and one with a set of symbols, and require that a password have at least one character from each of those sets.

You can perform client-side checking for this password validator just using the proposed password itself. The validation type is “character-set”, and it has the following validation properties:

  • set-{counter}-characters — A set of characters for which a minimum count will be enforced. The counter value starts at 1 and will increase sequentially for each additional set of characters that is defined. For example, if you had sets of lowercase letters, uppercase letters, and numbers, then you could have a set-1-characters property with a value of abcdefghijklmnopqrstuvwxyz, a set-2-characters property with a value of ABCDEFGHIJKLMNOPQRSTUVWXYZ, and a set-3-characters property with a value of 0123456789.
  • set-{counter}-min-count — The minimum number of characters that must be present from the character set identified with the corresponding counter value (so the property with a name of set-1-min-count specifies the minimum number of characters from the set-1-characters set). This value will be an integer whose value is greater than or equal to zero (with a value of zero indicating that characters from that set are allowed, but not required; this is really only applicable if allow-unclassified-characters is false).
  • allow-unclassified-characters — Indicates whether passwords should be allowed to have any characters that are not defined in any of the character sets. If this property has a value of true, then passwords will be allowed to have unclassified characters as long as they meet the minimum number of required characters from all of the specified character sets. If this property has a value of false, then passwords will only be permitted to include characters from the given character sets.
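A client-side check for this validator only needs the proposed password and the properties above. In this sketch (class and method names are illustrative), setCharacters holds each set-{counter}-characters value and setMinCounts holds the corresponding set-{counter}-min-count values:

```java
import java.util.List;

// Illustrative client-side sketch of the character set password validator.
public class CharacterSetCheck {
  static boolean isAcceptable(String password, List<String> setCharacters,
      List<Integer> setMinCounts, boolean allowUnclassifiedCharacters) {
    int[] counts = new int[setCharacters.size()];
    for (char c : password.toCharArray()) {
      boolean classified = false;
      for (int i = 0; i < setCharacters.size(); i++) {
        if (setCharacters.get(i).indexOf(c) >= 0) {
          counts[i]++;
          classified = true;
        }
      }
      if (!classified && !allowUnclassifiedCharacters) {
        return false; // character isn't in any defined set
      }
    }
    for (int i = 0; i < counts.length; i++) {
      if (counts[i] < setMinCounts.get(i)) {
        return false; // too few characters from this set
      }
    }
    return true;
  }
}
```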

The Dictionary Password Validator

The dictionary password validator is used to ensure that users can’t choose passwords that are found in a specified dictionary file. The Ping Identity Directory Server comes with two such files: one that contains a list of over 400,000 English words, and one that contains over 500,000 of the most commonly used passwords. You can also provide your own dictionary files.

Unless the client has a copy of the same dictionary file that the server is using, it’s not really possible for it to perform client-side validation for this validator. Nevertheless, the validator does have a validation type of “dictionary” and the following validation properties:

  • dictionary-file — The name (without path information) of the dictionary file that the password validator is using.
  • case-sensitive-validation — Indicates whether the validation should be case sensitive or insensitive. If the value is true, then a proposed password will be rejected only if it is found with exactly the same capitalization in the dictionary file. If it is false, then differences in capitalization will be ignored.
  • test-reversed-password — Indicates whether the validation should check the proposed password with the characters in reversed order as well as in the order the client provided them. If the value is true, then both the forward and reversed password will be checked. If the value is false, then the password will be checked only as it was provided by the client.

The Haystack Password Validator

The haystack password validator is based on the concept of password haystacks as described at https://www.grc.com/haystack.htm. This algorithm judges the strength of a password based on a combination of its length and the different classes of characters that it contains. For example, a password comprised of a mix of lowercase letters, uppercase letters, numeric digits, and symbols is, in general, more resistant to brute force attacks than a password of the same length made up of only lowercase letters, but a password made up of only lowercase letters can be very secure if it is long enough (and passphrases—passwords comprised of multiple words strung together—are a great example of this). The haystack validator lets users have a simpler password if it’s long enough, or a shorter password if it’s complex enough.

As long as you have a client-side implementation of the haystack logic (which is pretty simple), you can perform client-side checking for this password validator. The validation type is “haystack”, and it has the following validation properties:

  • assumed-password-guesses-per-second — The number of guesses that an attacker is assumed to be able to make per second. This value will be an integer, although it could be a very large integer, so it’s recommended to use at least a 64-bit variable to represent it.
  • minimum-acceptable-time-to-exhaust-search-space — The minimum length of time, in seconds, that is considered acceptable for an attacker to have to keep guessing (at the rate specified by the assumed-password-guesses-per-second property) before exhausting the complete search space of all possible passwords. This will also be an integer, and it’s also recommended that you use at least a 64-bit variable to hold its value.
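The following sketch shows one way to implement the haystack logic on the client side. To be clear about the assumptions: the formula and the character-class sizes (26 lowercase letters, 26 uppercase letters, 10 digits, and 33 symbols) come from the GRC haystack page linked above, not from the server’s documentation, so the server’s exact computation may differ; the class and method names are illustrative:

```java
import java.math.BigInteger;

// Illustrative client-side sketch of the haystack computation, following the
// formula described at https://www.grc.com/haystack.htm. The class sizes
// (26/26/10/33) and the formula itself are assumptions; the server's own
// implementation may differ in details.
public class HaystackCheck {
  static boolean isAcceptable(String password, long guessesPerSecond,
      long minSecondsToExhaustSearchSpace) {
    boolean lower = false, upper = false, digit = false, symbol = false;
    for (char c : password.toCharArray()) {
      if (Character.isLowerCase(c)) lower = true;
      else if (Character.isUpperCase(c)) upper = true;
      else if (Character.isDigit(c)) digit = true;
      else symbol = true;
    }
    int alphabet = (lower ? 26 : 0) + (upper ? 26 : 0)
        + (digit ? 10 : 0) + (symbol ? 33 : 0);

    // The search space counts every possible string up to this length:
    // alphabet^1 + alphabet^2 + ... + alphabet^length.
    BigInteger searchSpace = BigInteger.ZERO;
    BigInteger base = BigInteger.valueOf(alphabet);
    for (int i = 1; i <= password.length(); i++) {
      searchSpace = searchSpace.add(base.pow(i));
    }

    BigInteger secondsToExhaust =
        searchSpace.divide(BigInteger.valueOf(guessesPerSecond));
    return secondsToExhaust.compareTo(
        BigInteger.valueOf(minSecondsToExhaustSearchSpace)) >= 0;
  }
}
```

Note the use of BigInteger: as mentioned above, these property values can be very large, and the search space itself overflows a long very quickly.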

The Length Password Validator

The length-based password validator judges the quality of a proposed password based purely on the number of characters that it contains. Note that it counts characters rather than bytes, so a password containing multi-byte UTF-8 characters will have fewer characters than bytes. You can configure either or both of a minimum required length and a maximum allowed length.

Client-side checking is very straightforward for this validator. It uses a validation type of “length” and the following validation properties:

  • min-password-length — The minimum number of characters that a password will be required to have. If present, the value will be an integer. If it is absent, then no minimum length will be enforced.
  • max-password-length — The maximum number of characters that a password will be permitted to have. If present, the value will be an integer. If it is absent, then no maximum length will be enforced.
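This one is simple enough that the whole client-side check fits in a few lines. In this sketch (names are illustrative), a null bound means the corresponding property was absent, and counting code points rather than chars keeps the behavior sensible for passwords containing characters outside the basic multilingual plane:

```java
// Illustrative client-side sketch of the length password validator. Counts
// characters (Unicode code points) rather than bytes, as described above.
// A null bound means that bound is not enforced.
public class LengthCheck {
  static boolean isAcceptable(String password, Integer minLength, Integer maxLength) {
    int length = password.codePointCount(0, password.length());
    if (minLength != null && length < minLength) {
      return false; // too short
    }
    if (maxLength != null && length > maxLength) {
      return false; // too long
    }
    return true;
  }
}
```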

The Regular Expression Password Validator

The regular expression password validator can be used to require that a password match a given pattern or to reject passwords that match a given pattern. As long as the client can perform regular expression matching, client-side validation should be pretty simple. It uses a validation type of “regular-expression” and the following validation properties:

  • match-pattern — The regular expression that will be evaluated against a proposed password.
  • match-behavior — A string that indicates the behavior that the validator should observe. A value of require-match means that the validator will reject any proposed password that does not satisfy the associated match-pattern. A value of reject-match means that the validator will reject any proposed password that does match the specified match-pattern.
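A client-side check might look like the following sketch. One caveat: this sketch assumes the pattern is applied as a substring match (so a pattern intended to match the whole password would need ^ and $ anchors); if the server’s semantics turn out to differ, adjust accordingly. The class and method names are illustrative:

```java
import java.util.regex.Pattern;

// Illustrative client-side sketch of the regular expression password
// validator. Assumes the match-pattern is applied as a substring match
// (Matcher.find); anchor the pattern if a full match is intended.
public class RegularExpressionCheck {
  static boolean isAcceptable(String password, String matchPattern,
      String matchBehavior) {
    boolean matches = Pattern.compile(matchPattern).matcher(password).find();
    if ("require-match".equals(matchBehavior)) {
      return matches;   // reject passwords that don't match the pattern
    } else {
      return !matches;  // reject-match: reject passwords that do match
    }
  }
}
```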

The Repeated Characters Password Validator

The repeated characters password validator can be used to reject a proposed password if it contains the same character, or characters in the same set, more than a specified number of times in a row without a different type of character in between. By default, it treats each type of character separately, but you can define sets of characters that will be considered equivalent. In the former case, the validator will reject a password if it contains the same character too many times in a row, whereas in the latter case, it can reject a password if it contains too many characters of the same type in a row. For example, you could define sets of lowercase letters, uppercase letters, digits, and symbols, and prevent too many characters of each type in a row.

It should be pretty straightforward to perform client-side checking for this password validator. It uses a validation type of “repeated-characters” and the following validation properties:

  • character-set-{counter} — A set of characters that should be considered equivalent. The counter will start at 1 and increment sequentially for each additional character set. This property may be absent if each character is to be treated independently.
  • max-consecutive-length — The maximum number of times that each character (or characters from the same set) may appear in a row before a proposed password will be rejected. The value will be an integer.
  • case-sensitive-validation — Indicates whether to treat characters from the password in a case-sensitive manner. A value of true indicates that values should be case-sensitive, while a value of false indicates that values should be case-insensitive.
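One way to sketch the client-side check: walk the password, classify each character as either a member of one of the configured sets or as itself (when no set contains it), and track the length of the current run of equivalent characters. The names here are illustrative:

```java
import java.util.List;
import java.util.Locale;

// Illustrative client-side sketch of the repeated characters password
// validator. If characterSets is empty, each character is treated
// independently; otherwise characters in the same set are equivalent.
public class RepeatedCharactersCheck {
  static boolean isAcceptable(String password, List<String> characterSets,
      int maxConsecutiveLength, boolean caseSensitive) {
    String pw = caseSensitive ? password : password.toLowerCase(Locale.ROOT);
    int runLength = 0;
    Object previousKey = null;
    for (char c : pw.toCharArray()) {
      // Default key: the character itself (each character is its own class).
      Object key = Character.valueOf(c);
      for (int i = 0; i < characterSets.size(); i++) {
        String set = caseSensitive ? characterSets.get(i)
            : characterSets.get(i).toLowerCase(Locale.ROOT);
        if (set.indexOf(c) >= 0) {
          key = Integer.valueOf(i); // classify by set index instead
          break;
        }
      }
      runLength = key.equals(previousKey) ? runLength + 1 : 1;
      if (runLength > maxConsecutiveLength) {
        return false; // run of equivalent characters is too long
      }
      previousKey = key;
    }
    return true;
  }
}
```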

The Similarity Password Validator

The similarity password validator can be used to reject a proposed password if it is too similar to the user’s current password. Similarity is determined by the Levenshtein distance algorithm, which is a measure of the minimum number of character insertions, deletions, or replacements needed to transform one string into another. For example, it can prevent a user from changing the password from something like “password1” to “password2”. This validator is only active for password self changes. It does not apply to add operations or administrative resets.

Because the Ping Identity Directory Server generally stores passwords in a non-reversible form, this can only be used if the request used to change the user’s password includes both the current password and the proposed new password. You can use the password-change-requires-current-password property in the password policy configuration to require this, and if that is configured, then the get password quality requirements extended response will indicate that the current password is required when a user is performing a self change. The password modify extended request provides a field for specifying the current password when requesting a new password, but to satisfy this requirement in an LDAP modify operation, the change should be processed as a delete of the current password (with the value provided in the clear) followed by an add of the new password (also in the clear), in the same modification, like:

dn: uid=john.doe,ou=People,dc=example,dc=com
changetype: modify
delete: userPassword
userPassword: oldPassword
-
add: userPassword
userPassword: newPassword
-

If a client has the user’s current password, the proposed new password, and an implementation of the Levenshtein distance algorithm, then it can perform client-side checking for this validator. The validation type is “similarity” and the validation properties are:

  • min-password-difference — The minimum acceptable distance, as determined by the Levenshtein distance algorithm, between the user’s current password and the proposed new password. It will be an integer.
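The Levenshtein distance itself is a standard dynamic-programming computation, so the client-side check can be sketched as follows (class and method names are illustrative):

```java
// Illustrative client-side sketch of the similarity check: a standard
// two-row dynamic-programming Levenshtein distance, compared against the
// min-password-difference property.
public class SimilarityCheck {
  static int levenshteinDistance(String s, String t) {
    int[] previous = new int[t.length() + 1];
    int[] current = new int[t.length() + 1];
    for (int j = 0; j <= t.length(); j++) {
      previous[j] = j; // distance from empty prefix of s
    }
    for (int i = 1; i <= s.length(); i++) {
      current[0] = i;
      for (int j = 1; j <= t.length(); j++) {
        int substitutionCost = (s.charAt(i - 1) == t.charAt(j - 1)) ? 0 : 1;
        current[j] = Math.min(
            Math.min(current[j - 1] + 1,      // insertion
                     previous[j] + 1),        // deletion
            previous[j - 1] + substitutionCost);
      }
      int[] swap = previous;
      previous = current;
      current = swap;
    }
    return previous[t.length()];
  }

  static boolean isAcceptable(String currentPassword, String proposedPassword,
      int minPasswordDifference) {
    return levenshteinDistance(currentPassword, proposedPassword)
        >= minPasswordDifference;
  }
}
```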

The Unique Characters Password Validator

The unique characters password validator can be used to reject a proposed password that has too few unique characters. This can prevent users from choosing simple passwords like “aaaaaaaa” or “abcabcabcabc”.

It’s easy to perform client-side checking for this validator. It has a validation type of “unique-characters” and the following properties:

  • min-unique-characters — The minimum number of unique characters that the password must contain for it to be acceptable. This is an integer value, with zero indicating no limit (although the server will require passwords to contain at least one character).
  • case-sensitive-validation — Indicates whether the validator will treat uppercase and lowercase versions of the same letter as different characters or the same. A value of true indicates that the server will perform case-sensitive validation, while a value of false indicates case-insensitive validation.
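A client-side sketch of this check (with illustrative names) just counts the distinct characters, folding case first when the validation is case-insensitive:

```java
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

// Illustrative client-side sketch of the unique characters password validator.
public class UniqueCharactersCheck {
  static boolean isAcceptable(String password, int minUniqueCharacters,
      boolean caseSensitive) {
    String pw = caseSensitive ? password : password.toLowerCase(Locale.ROOT);
    Set<Character> unique = new HashSet<>();
    for (char c : pw.toCharArray()) {
      unique.add(c);
    }
    return unique.size() >= minUniqueCharacters;
  }
}
```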

Access Control Considerations

The get password quality requirements extended operation probably isn’t something that you’ll want to open up to the world, since it could give an attacker information that they shouldn’t have, like hints that could help them better craft their attack, or information about whether a user exists or not. Since you’ll probably want some restriction on who can use this operation, and since we at Ping Identity have no idea who that might be, the server’s access control configuration does not permit anyone (or at least anyone without the bypass-acl or the bypass-read-acl privilege) to use it. If you want to make it available, then you’ll need to add a global ACI to grant access to an appropriate set of users. The same goes for the password validation details request control.

As an example, let’s say that you want to allow members of the “cn=Password Administrators,ou=Groups,dc=example,dc=com” group to use these features. To do that, you can add the following global ACIs:

(extop="1.3.6.1.4.1.30221.2.6.43")(version 3.0; acl "Allow password administrators to use the get password quality requirements extended operation"; allow (read) groupdn="ldap:///cn=Password Administrators,ou=Groups,dc=example,dc=com";)

(targetcontrol="1.3.6.1.4.1.30221.2.5.40")(version 3.0; acl "Allow password administrators to use the password validation details request control"; allow (read) groupdn="ldap:///cn=Password Administrators,ou=Groups,dc=example,dc=com";)

Also note that while the server does make the password modify extended operation available to anyone, there are additional requirements that must be satisfied before it can be used. In order to change your own password, you need to at least have write access to the password attribute in your own entry. And in order to change someone else’s password, not only do you need write access to the password attribute in that user’s entry, but you also need the password-reset privilege. You can grant that privilege by adding the ds-privilege-name attribute to the password resetter’s entry using a change like:

dn: uid=password.admin,ou=People,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: password-reset
-

Example Usage With the UnboundID LDAP SDK for Java

I’ve created a simple Java program that demonstrates the use of the get password quality requirements extended operation and the password validation details control. It’s not very flashy, and it doesn’t currently attempt to perform any client-side validation of the proposed new password before sending it to the directory server, but it’s at least a good jumping-off point for someone who wants to build this functionality into their own application.

You can find this example at https://github.com/dirmgr/blog-example-source-code/tree/master/password-quality-requirements.

Configuring Two-Factor Authentication in the Ping Identity Directory Server

Passwords aren’t going anywhere anytime soon, but they’re just not good enough on their own. It is entirely possible to choose a password that is extremely resistant to dictionary and brute force attacks, but the fact is that most people pick really bad passwords. They also tend to reuse the same passwords across multiple sites, which further increases the risk that their account could be compromised. And even those people who do choose really strong passwords might still be tricked into giving that password to a lookalike site via phishing or DNS hijacking, or they may fall victim to a keylogger. For these reasons and more, it’s always a good idea to combine a password with some additional piece of information when authenticating to a site.

The Ping Identity Directory Server has included support for two-factor authentication since 2012 (back when it was still the UnboundID Directory Server). Out of the box, we currently offer four types of two-factor authentication:

  • You can combine a static password with a time-based one-time password using the standard TOTP mechanism described in RFC 6238. These are the same kind of one-time passwords that are generated by apps like Google Authenticator or Authy.
  • You can combine a static password with a one-time password generated by a YubiKey device.
  • You can combine a static password with a one-time password that gets delivered to the user through some out-of-band mechanism like a text or email message, voice call, or app notification.
  • You can combine a static password with an X.509 certificate presented to the server during TLS negotiation.

In this post, I’ll describe the process for configuring the server to enable support for each of these types of authentication. We also provide the ability to create custom extensions to implement support for other types of authentication if desired, but that’s not going to be covered here.

Time-Based One-Time Passwords

The Ping Identity Directory Server supports time-based one-time passwords through the UNBOUNDID-TOTP SASL mechanism, which is enabled by default in the server. For a user to authenticate with this mechanism, their account must contain the ds-auth-totp-shared-secret attribute whose value is the base32-encoded representation of the shared secret that should be used to generate the one-time passwords. This shared secret must be known to both the client (or at least to an app in the client’s possession, like Google Authenticator) as well as to the server.
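
The one-time passwords themselves are computed with the standard RFC 6238 algorithm. As an illustration of what a client does with that base32-encoded shared secret, here’s a minimal sketch that assumes the common defaults (HMAC-SHA-1, a 30-second time step, and six digits), which is what apps like Google Authenticator use:

```python
import base64
import hashlib
import hmac
import struct
import time


def generate_totp(base32_shared_secret: str, timestamp=None,
                  time_step_seconds: int = 30, num_digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password from a
    base32-encoded shared secret, using HMAC-SHA-1."""
    key = base64.b32decode(base32_shared_secret, casefold=True)
    now = time.time() if timestamp is None else timestamp
    counter = int(now // time_step_seconds)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, as defined in RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** num_digits)).zfill(num_digits)
```

With the RFC 6238 test secret (the base32 encoding of the ASCII string “12345678901234567890”) and a timestamp of 59 seconds, this produces the documented eight-digit value 94287082.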

To facilitate generating and encoding the TOTP shared secret, the Directory Server provides a “generate TOTP shared secret” extended operation. The UnboundID LDAP SDK for Java provides support for this extended operation, and the class-level Javadoc describes the encoding for this operation in case you need to implement support for it in some other API. We also offer a generate-totp-shared-secret command-line tool that can be used for testing (or I suppose you could invoke it programmatically if you’d rather do that than use the UnboundID LDAP SDK or implement support for the extended operation yourself). For the sake of convenience, I’ll use this tool for the demonstration.

There are actually a couple of ways that you can use the generate TOTP shared secret operation: for a user to generate a shared secret for their own account (in which case the user’s static password must be provided), or for an administrator (who must have the password-reset privilege) to generate a shared secret for another user. I’d expect the most common use case to be a user generating a shared secret for their own account, so that’s the approach we’ll take for this example.

Note that while the generate TOTP shared secret extended operation is enabled out of the box, the shared secrets that it generates by default are not encrypted, which could make them easier for an attacker to steal if they got access to a copy of the user data. To prevent this, if the server is configured with data encryption enabled, then you should also enable the “Encrypt TOTP Secrets and Delivered Tokens” plugin. That can be done with the following configuration change:

dsconfig set-plugin-prop \
     --plugin-name "Encrypt TOTP Secrets and Delivered Tokens" \
     --set enabled:true

If we assume that our account has a username of “jdoe”, then the command to generate a shared secret for that user would be something like:

$ bin/generate-totp-shared-secret --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --authID u:jdoe \
     --promptForUserPassword
Enter the static password for user 'u:jdoe':
Successfully generated TOTP shared secret 'KATLTK5WMUSZIACLOMDP43KPSG2LUUOB'.

If we were using a nice web application to invoke the generate TOTP shared secret operation, we’d probably want to have it generate a QR code with that shared secret embedded in it so that it could be easily scanned and imported into an app like Google Authenticator (and you’d want to embed it in a URL like “otpauth://totp/jdoe%20in%20ds.example.com?secret=KATLTK5WMUSZIACLOMDP43KPSG2LUUOB”). For the sake of testing, we can either manually generate an appropriate QR code (for example, using an online utility like https://www.qr-code-generator.com) or just type the shared secret into the authenticator app.
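
Building that provisioning URI is straightforward; here’s a sketch (the helper name is made up, and the label format is just one common convention):

```python
from urllib.parse import quote


def build_totp_provisioning_uri(account_label: str,
                                base32_shared_secret: str) -> str:
    """Build an otpauth:// URI, suitable for rendering as a QR code, that
    authenticator apps can scan to import the TOTP shared secret."""
    return ("otpauth://totp/" + quote(account_label) +
            "?secret=" + quote(base32_shared_secret))
```

For example, calling build_totp_provisioning_uri("jdoe in ds.example.com", "KATLTK5WMUSZIACLOMDP43KPSG2LUUOB") yields exactly the URL shown above.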

Now that the account is configured for TOTP authentication, we can use the UNBOUNDID-TOTP SASL mechanism to authenticate to the server. As with the generate TOTP shared secret operation, this SASL mechanism is supported by the UnboundID LDAP SDK for Java, and most of our command-line tools support it as well, so we can test it with a utility like ldapsearch. You’ll need to use the “--saslOption” command-line argument to specify a number of parameters, including “mech” (the name of the SASL mechanism to use, which should be “UNBOUNDID-TOTP”), “authID” (the authentication ID for the user that’s trying to authenticate), and “totpPassword” (for the one-time password generated by the authenticator app). For example:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-TOTP \
     --saslOption authID=u:jdoe \
     --saslOption totpPassword={one-time-password} \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

YubiKey One-Time Passwords

If you’ve got a YubiKey device capable of generating one-time passwords in the Yubico OTP format (which should be most YubiKey devices except the ones that only support FIDO authentication), then you can use that device to generate one-time passwords to use in conjunction with your static password.

The Directory Server supports this type of authentication through the UNBOUNDID-YUBIKEY-OTP SASL mechanism. To enable support for this SASL mechanism, you first need an API key from Yubico, which you can get for free from https://upgrade.yubico.com/getapikey/. When you do this, you’ll get a client ID and a secret key, which grant the server access to the Yubico authentication servers. Note that only the server needs this key (and you can share the same key across all server instances); end users don’t need to worry about it. Alternately, you could stand up your own authentication server if you’d rather not rely on the Yubico servers, but we won’t go into that here.

Once you’ve got the client ID and secret key, you can enable support for the SASL mechanism with the following configuration change:

dsconfig set-sasl-mechanism-handler-prop \
     --handler-name UNBOUNDID-YUBIKEY-OTP \
     --set enabled:true \
     --set yubikey-client-id:{client-id} \
     --set yubikey-api-key:{secret-key}

To be able to authenticate a user with this mechanism, you’ll need to update their account to include a ds-auth-yubikey-public-id attribute with one or more values that represent the public IDs of the YubiKey devices that you want to use (and it might not be a bad idea to have multiple devices registered for the same account, so that you have a backup key in case you lose or break the primary key).

To get the public ID for a YubiKey device, you can use it to generate a one-time password and strip off the last 32 characters. This isn’t considered secret information, so no encryption is necessary when storing it in an entry, and you can use a simple LDAP modify operation to manage the public IDs for a user account. Alternately, you can use the “register YubiKey OTP device” extended operation (supported and documented in the UnboundID LDAP SDK for Java) or use the register-yubikey-otp-device command-line tool. In the case of the command-line tool, you can register a device like:

$ bin/register-yubikey-otp-device --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --authenticationID u:jdoe \
     --promptForUserPassword \
     --otp {one-time-password}
Enter the static password for user u:jdoe:
Successfully registered the specified YubiKey OTP device for user u:jdoe

Note that when using this tool (and the register YubiKey OTP device extended operation in general), you should provide a complete one-time password and not just the public ID.
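
If you do want to extract the public ID yourself (for example, to manage ds-auth-yubikey-public-id values directly with LDAP modify operations), it’s just string slicing. Here’s a sketch, with a made-up OTP value in the test:

```python
def yubikey_public_id(one_time_password: str) -> str:
    """Extract a YubiKey's public ID from a Yubico OTP by stripping off
    the last 32 characters (the encrypted, single-use portion)."""
    if len(one_time_password) <= 32:
        raise ValueError("OTP is too short to contain a public ID")
    return one_time_password[:-32]
```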

That should be all that is necessary to allow the user to authenticate with a YubiKey one-time password. We can test it with ldapsearch like so:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-YUBIKEY-OTP \
     --saslOption authID=u:jdoe \
     --saslOption otp={one-time-password} \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

Delivered One-Time Passwords

Delivered one-time passwords are more convenient than either time-based or YubiKey-generated one-time passwords because there’s less burden on the user. There’s no need to install an app or have any special hardware to generate one-time passwords. Instead, the server generates a one-time password and then sends it to the user through some out-of-band mechanism (that is, the user gets the one-time password through some mechanism other than LDAP). The server provides direct support for delivering these generated one-time passwords over SMS (using the Twilio service) or via email, and the UnboundID Server SDK provides an API that allows you to create your own delivery mechanisms. Note, however, that while it may be more convenient to use, it’s also generally considered less secure (especially if you’re using SMS).

There’s also more effort involved in enabling support for delivered one-time passwords than either time-based or YubiKey-generated one-time passwords. The first thing you should do is configure the server to ensure that the generated one-time password values will be encrypted (unless you already did it above for encrypting TOTP shared secrets), which you can do as follows:

dsconfig set-plugin-prop \
     --plugin-name "Encrypt TOTP Secrets and Delivered Tokens" \
     --set enabled:true

Next, we need to configure one or more delivery mechanisms. These are configured in the “OTP Delivery Mechanism” section of dsconfig. For example, to configure a delivery mechanism for email, you could use something like:

dsconfig create-otp-delivery-mechanism \
     --mechanism-name Email \
     --type email \
     --set enabled:true \
     --set 'sender-address:otp@example.com' \
     --set "message-text-before-otp:Your one-time password is '" \
     --set "message-text-after-otp:'."

If you’re using email, you’ll also need to configure one or more SMTP external servers and set the smtp-server property in the global configuration.

Alternately, if you’re using SMS, then you’ll need to have a Twilio account and fill in the appropriate values for the SID, auth token, and phone number fields, like:

dsconfig create-otp-delivery-mechanism \
     --mechanism-name SMS \
     --type twilio \
     --set enabled:true \
     --set twilio-account-sid:{sid} \
     --set twilio-auth-token:{auth-token} \
     --set sender-phone-number:{phone-number} \
     --set "message-text-before-otp:Your one-time password is '" \
     --set "message-text-after-otp:'."

Once the delivery mechanism(s) are configured, you can enable the delivered one-time password SASL mechanism handler as follows:

dsconfig create-sasl-mechanism-handler \
     --handler-name UNBOUNDID-DELIVERED-OTP \
     --type unboundid-delivered-otp \
     --set enabled:true \
     --set "identity-mapper:Exact Match"

You’ll also need to enable support for the deliver one-time password extended operation, which is used to request that the server generate and deliver a one-time password for a user. You can do that like:

dsconfig create-extended-operation-handler \
     --handler-name "Deliver One-Time Passwords" \
     --type deliver-otp \
     --set enabled:true \
     --set "identity-mapper:Exact Match" \
     --set "password-generator:One-Time Password Generator" \
     --set default-otp-delivery-mechanism:Email \
     --set default-otp-delivery-mechanism:SMS

The process for authenticating with a delivered one-time password involves two steps. In the first step, you need to request that the server generate and deliver a one-time password. This can be accomplished with the “deliver one-time password” extended operation, which is supported and documented in the UnboundID LDAP SDK for Java and can be tested with the deliver-one-time-password command-line tool. Then, once you have that one-time password, you can use the UNBOUNDID-DELIVERED-OTP SASL mechanism to complete the authentication.

If you have multiple delivery mechanisms configured in the server, then there are several ways that the server can decide which one to use to send a one-time password to a user.

  • The server will only attempt to use a delivery mechanism that applies to the target user. For example, if a user entry has an email address but not a mobile phone number, then it won’t try to deliver a one-time password to that user via SMS.
  • The deliver one-time password extended request can be used to indicate which delivery mechanism(s) should be attempted, and in which order they should be attempted. If you’re using the deliver-one-time-password command-line tool, then you can use the --deliveryMechanism argument to specify this.
  • If the extended request doesn’t indicate which mechanisms to use, then the server will check the user’s entry to see if it has a ds-auth-preferred-otp-delivery-mechanism operational attribute. If so, then it will be used to specify the desired delivery mechanism.
  • If nothing else, then the server will use the order specified in the default-otp-delivery-mechanism property of the extended operation handler configuration.
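
One plausible reading of that precedence, expressed as a sketch (this illustrates the documented selection order, not the server’s actual implementation):

```python
def choose_delivery_mechanisms(requested_mechanisms,
                               preferred_mechanism,
                               default_mechanisms,
                               applicable_mechanisms):
    """Order candidate OTP delivery mechanisms: an explicit request in the
    extended operation wins, then the user's
    ds-auth-preferred-otp-delivery-mechanism value, then the handler's
    configured defaults.  In all cases, only mechanisms applicable to the
    target user are considered."""
    if requested_mechanisms:
        ordered = list(requested_mechanisms)
    elif preferred_mechanism is not None:
        ordered = [preferred_mechanism]
    else:
        ordered = list(default_mechanisms)
    return [m for m in ordered if m in applicable_mechanisms]
```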

As an example, let’s demonstrate the process of authenticating as user jdoe with a one-time password delivered via email. We can start the process using the deliver-one-time-password command-line tool as follows:

$ bin/deliver-one-time-password --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --userName jdoe \
     --promptForBindPassword \
     --deliveryMechanism Email
Enter the static password for the user:

Successfully delivered a one-time password via mechanism 'Email' to 'jdoe@example.com'

Now, we can check our email, and there should be a message with the one-time password. Once we have it, we can use a tool like ldapsearch to authenticate with that one-time password using the UNBOUNDID-DELIVERED-OTP SASL mechanism, like:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-DELIVERED-OTP \
     --saslOption authID=u:jdoe \
     --saslOption otp={one-time-password} \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

Combining Certificates and Passwords

When establishing a TLS-based connection, the server will always present its certificate to the client, and the client will decide whether it wants to trust that certificate and continue establishing the secure connection. Further, the server may optionally ask the client to provide its own certificate, and the client may optionally provide one. If the server requests a client certificate and the client provides one, then the server will determine whether it wants to trust that client certificate and continue the negotiation process.

If a client has provided its own certificate to the directory server and the server has accepted it, then the client can use a SASL EXTERNAL bind to request that the server use the information in the certificate to identify and authenticate the client. Most LDAP servers support this, and it can be a very strong form of single-factor authentication. However, the Ping Identity Directory Server also offers an UNBOUNDID-CERTIFICATE-PLUS-PASSWORD SASL mechanism that takes this even further by combining the client certificate with a static password.

Certificate-based authentication (regardless of whether you also include a static password) isn’t something that has really caught on because of the hassle and complexity of dealing with certificates. It’s honestly probably not a great option for most end users, although it may be an attractive option for more advanced users like server administrators. But one big benefit that the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD mechanism has over the two-factor mechanisms that rely on one-time passwords is that it can be used in a completely non-interactive manner. That makes it suitable for use in authenticating one application to another.

As with the EXTERNAL mechanism, the Ping Identity Directory Server has support for the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD mechanism enabled out of the box. Just about the only thing you’re likely to want to configure is the certificate-mapper property in its configuration, which is used to uniquely identify the account for the user that is trying to authenticate based on the contents of the certificate. The certificate mapper that is configured by default will only work if the certificate’s subject DN matches the DN of the corresponding user entry. Other certificate mappers can be used to identify the user in other ways, including searching based on attributes in the certificate subject or searching for the owner of a certificate based on the fingerprint of that certificate.

Due to an unfortunate oversight, command-line tools currently shipped with the server do not include support for the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD SASL mechanism. That will be corrected in the next release, but if you want to test with it now, you can check out the UnboundID LDAP SDK for Java from its GitHub project and build it for yourself. That will allow you to test certificate+password authentication like so:

$ tools/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --keyStorePath client-keystore \
     --promptForKeyStorePassword \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-CERTIFICATE-PLUS-PASSWORD \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the key store password:

Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 42a82498-93c6-4e62-9c6c-8fe6b33e1550
startTime: 20190107072612Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

The Movies I Watched in 2018

I didn’t watch as many movies in 2018 as I had in previous years, but I still did pretty well by most standards. I ended up watching 413 movies in a theater and 378 outside of a theater, for a total of 791.

The only movies I saw in a theater were at an Alamo Drafthouse (335 movies) or at the Austin Film Society (78 movies). 267 of those movies were projected digitally, 137 on film (126 in 35mm, 6 in 16mm, and 5 in 70mm), and 9 were on VHS.

The best new releases I saw in the theater include:

  • Bad Times at the El Royale
  • Border
  • Green Book
  • The Guilty
  • Hold the Dark
  • I Don’t Feel at Home in This World Anymore
  • Incredibles 2
  • Juliet, Naked
  • Love, Simon
  • Lowlife
  • Paddington 2
  • Princess Cyd
  • Ready Player One
  • RBG
  • A Simple Favor
  • The Shape of Water
  • Sweet Country
  • Three Identical Strangers
  • Thunder Road
  • Won’t You Be My Neighbor?

I also saw some great new movies at the Fantastic Fest film festival, but I’m not sure that they have been officially released yet. The best of them include:

  • The Boat
  • Chained for Life
  • Goliath
  • Slut in a Good Way
  • The Standoff at Sparrow Creek
  • The Unthinkable
  • The World Is Yours

And even if I didn’t think they were the best in a conventional sense, I had a heck of a good time watching a few other new releases:

  • Between Worlds (though I don’t think it’s actually out yet)
  • The Meg
  • Twisted Pair

And these surprised me by being quite a bit better than I expected, even if they’re not among the best of the year:

  • Blockers
  • Game Night
  • Tag

On the other hand, these were the new releases that everyone else seemed to love, but I strongly disliked:

  • The Favourite
  • Ghost Stories
  • Hereditary
  • Mandy
  • Mission: Impossible — Fallout
  • A Quiet Place

And one of the great things about the film scene in Austin is that we get to watch a ton of repertory films on the big screen. The best older movies I saw for the first time in 2018 included:

  • The Awful Truth (1937)
  • The Big Heat (1953)
  • Bunny Lake Is Missing (1965)
  • Colossus: The Forbin Project (1970)
  • Coming Home (1978)
  • Deadly Games (aka Dial Code Santa Claus; 1989)
  • Girlfriends (1978)
  • I Walk Alone (1947)
  • Jawbreaker (1999)
  • Kitten With a Whip (1964)
  • Lady Snowblood (1973)
  • Lake of Dracula (1971)
  • Last Night at the Alamo (1983)
  • The Last Temptation of Christ (1988)
  • The Man Who Cheated Himself (1950)
  • A Matter of Life and Death (1946)
  • Midnight Express (1978)
  • Out of the Past (1947)
  • Quiet Please, Murder (1942)
  • The Raven (1935)
  • Roadgames (1981)
  • Starstruck (1982)
  • Stranger in Our House (aka Summer of Fear; 1978)
  • Summertime (1955)
  • To Have and Have Not (1944)
  • The Unsuspected (1947)

As far as watching movies outside of a theater, I kind of went down a rabbit hole of Lifetime and Hallmark movies (especially at Christmas), so I didn’t watch as many classic films as I probably should have. Even though I enjoy them, it’s hard to put them in the same class as most movies with higher production values and less predictable plots (but I do thoroughly enjoy the Murder She Baked, Aurora Teagarden, and Jane Doe mystery series, and I thought that Dangerous Child, Evil Nanny, and Killer Body aka The Wrong Patent were among the better ones). Nevertheless, the following were among my favorite first-time watches outside of a theater in 2018:

  • Bachelor Mother (1939)
  • The Cheyenne Social Club (1970)
  • Dark Passage (1947)
  • Father Goose (1964)
  • First Reformed (2017)
  • From Beyond the Grave (1974)
  • Hearts Beat Loud (2018)
  • Psychos in Love (1987)
  • Seven Chances (1925)
  • Too Many Husbands (1940)

Ping Identity Directory Server 7.2.0.0

We have just released the Ping Identity Directory Server version 7.2.0.0, available for download at https://www.pingidentity.com/en/resources/downloads/pingdirectory-downloads.html. This new release offers a lot of new features, some substantial performance improvements, and a number of bug fixes. The release notes provide a pretty comprehensive overview of the changes, but here are some of the highlights:

  • Added a REST API (using JSON over HTTP) for interacting with the server data. Although we already supported the REST-based SCIM protocol, our new REST API is more feature-rich, requires less administrative overhead, and isn’t constrained by the limitations that SCIM compliance imposes. SCIM remains supported.

  • Dramatically improved the logic that the server uses for evaluating complex filters. It now uses a number of additional metrics to make more intelligent decisions about the order in which components should be evaluated to get the biggest bang for the buck.

  • Expanded our support for composite indexes to provide support for ANDs of multiple components (for example, “(&(givenName=?)(sn=?))”). These filters can consist entirely of equality components, or they may combine one or more equality components with either a greater-or-equal filter, a less-or-equal filter, a bounded range filter, or a substring filter.

  • When performing a new install, the server is now configured to automatically export data to LDIF every day at 1:05 a.m. These exports will be compressed and encrypted (if encryption is enabled during the setup process), and they will be rate limited to minimize the impact on performance. We have also updated the LDIF export task to support exporting the contents of multiple backends in the same invocation.

  • Added support for a new data recovery log and a new extract-data-recovery-log-changes command-line tool. This can help administrators revert or replay a selected set of changes if the need arises (for example, if a malfunctioning application applies one or more incorrect changes to the server).

  • Added support for delaying the response to a failed bind operation, during which time no other operations will be permitted on the client connection. This can be used as an alternative to account lockout as a means of substantially inhibiting password guessing attacks without the risk of locking out the legitimate user who has the right credentials. It can also be used in conjunction with account lockout if desired.

  • Updated client connection policy support to make it possible to customize the behavior that the server exhibits if a client exceeds a configured maximum number of concurrent requests. Previously, it was only possible to reject requests with a “busy” result. It is now possible to use additional result codes when rejecting those requests, or to terminate the client connection and abandon all of its outstanding requests.

  • Added support for a new “exec” task (and recurring task) that can be used to invoke a specified command on the server system, either as a one-time event or at recurring intervals. There are several safeguards in place to prevent unauthorized use: the task must be enabled in the server (it is not by default), the command to be invoked must be contained in a whitelist file (no commands are whitelisted by default), and the user scheduling the task must have a special privilege that permits its use (no users, not even root users, have this privilege by default). We have also added a new schedule-exec-task tool that can make it easier to schedule an exec task.

  • Added support for a new file retention task (and recurring task) that can be used to remove files with names matching a given pattern that fall outside a provided set of retention criteria. The server is configured with instances of this task that can be used to clean up expensive operation dumps, lock conflict details files, and work queue backlog thread dumps (any files of each type other than the 100 most recent that are over 30 days old will be automatically removed).

  • Added support for new tasks (and recurring tasks) that can be used to force the server to enter and leave lockdown mode. While in lockdown mode, the server reports itself as unavailable to the Directory Proxy Server (and other clients that look at its availability status) and only accepts requests from a restricted set of clients.

  • Added support for a new delay task (and recurring task) that can be used to inject a delay between other tasks. The delay can be for a fixed period of time, can wait until the server is idle (that is, there are no outstanding requests and all worker threads are idle), or until a given set of search criteria matches one or more entries.

  • Added support for a new constructed virtual attribute type that can be used to dynamically construct values for an attribute using a combination of static text and the values of other attributes from the entry.

  • Improved user and group management in the delegated administration web application. Delegated administrators can create users and control group membership for selected users.

  • Added support for encrypting TOTP shared secrets, delivered one-time passwords, password reset tokens, and single-use tokens.

  • Updated the work queue implementation to improve performance and reduce contention under extreme load.

  • Updated the LDAP-accessible changelog backend to add support for searches that include the simple paged results control. This control was previously only available for searches in local DB backends.

  • Improved the server’s rebuild-index performance, especially in environments with encrypted data.

  • Added a new time limit log retention policy to support removing log files older than a specified age.

  • Updated the audit log to support including a number of additional fields, including the server product name, the server instance name, request control OIDs, details of any intermediate client or operation purpose controls in the request, the origin of the operation (whether it was replicated, an internal operation, requested via SCIM, etc.), whether an add operation was an undelete, whether a delete operation was a soft delete, and whether a delete operation was a subtree delete.

  • Improved trace logging for HTTP-based services (e.g., the REST API, SCIM, the consent API, etc.) to make it easier to correlate events across trace logs, HTTP access logs, and general access logs.

  • Updated the replication database so that it is possible to specify a minimum number of changes to retain. Previously, it was only possible to specify a minimum age for changes to retain.

  • Updated the purge expired data plugin to support deleting expired non-leaf entries. If enabled, the expired entry and all of its subordinate entries will be removed.

  • Added support for additional equality matching rules that may be used for attributes with a JSON object syntax. Previously, the server always used case-sensitive matching for field names and case-insensitive matching for string values. The new matching rules make it possible to configure any combination of case sensitivity for these components.

  • Added the ability to configure multiple instances of the SCIM servlet extension in the server, which allows multiple SCIM service configurations in the same server.

  • Updated the server to prevent the possibility of a persistent search client that is slow to consume results from interfering with other clients and operations in the server.

  • Fixed an issue in which global sensitive attribute restrictions could be imposed on replicated operations, which could cause some types of replicated changes to be rejected.

  • Fixed an issue that could make it difficult to use third-party tasks created with the Server SDK.

  • Fixed an issue in which the correct size and time limit constraints may not be imposed for search operations processed with an alternate authorization identity.

  • Fixed an issue with the get effective rights request control that could cause it to incorrectly report that an unauthenticated client could have read access to an entry if there were any ACIs making use of the “ldap:///all” bind rule. Note that this only affected the response to a get effective rights request, and the server did not actually expose any data to unauthorized clients.

  • Fixed an issue with the dictionary password validator that could cause case-insensitive validation to behave incorrectly if the provided dictionary file contained passwords with uppercase characters.

  • Fixed an issue in servers with an account status notification handler enabled. In some cases, an administrative password reset could cause a notification to be generated on each replica instead of just the server that originally processed the change.

  • Fixed a SCIM issue in which the totalResults value for a paged request could be incorrect if the SCIM resources XML file had multiple base DNs defined.

  • Added support for running on Java 11 (Oracle and OpenJDK distributions), and the garbage-first garbage collector (G1GC) algorithm will be configured by default when installing the server with a Java 11 JVM. Java 8 (Oracle and OpenJDK distributions) remains supported.

  • Added support for the RedHat 7.5, CentOS 7.5, and Ubuntu 18.04 LTS Linux distributions. We also support RedHat 6.6, RedHat 6.8, RedHat 6.9, RedHat 7.4, CentOS 6.9, CentOS 7.4, SUSE Enterprise 11 SP4, SUSE Enterprise 12 SP3, Ubuntu 16.04 LTS, Amazon Linux, Windows Server 2012 R2 and Windows Server 2016. Supported virtualization platforms include VMWare vSphere 6.0, VMWare ESX 6.0, KVM, Amazon EC2, and Microsoft Azure.
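
The configurable JSON matching behavior mentioned above can be pictured with a small sketch. This is purely illustrative (the class name and normalization scheme are invented here, not the server’s actual matching rule implementation): two JSON objects match when their normalized forms are equal, and the normalization applies a configurable combination of case sensitivity to field names and string values.

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch (not the server's matching rule implementation):
// normalize a flat set of JSON fields so that two objects compare equal
// under a configurable combination of field-name and string-value case
// sensitivity.
public final class JsonNormalizer {
    private final boolean caseSensitiveFieldNames;
    private final boolean caseSensitiveStringValues;

    public JsonNormalizer(boolean caseSensitiveFieldNames,
                          boolean caseSensitiveStringValues) {
        this.caseSensitiveFieldNames = caseSensitiveFieldNames;
        this.caseSensitiveStringValues = caseSensitiveStringValues;
    }

    public String normalize(Map<String, String> fields) {
        // TreeMap sorts by key so field order never affects the result.
        TreeMap<String, String> normalized = new TreeMap<>();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            String name = caseSensitiveFieldNames
                ? e.getKey() : e.getKey().toLowerCase();
            String value = caseSensitiveStringValues
                ? e.getValue() : e.getValue().toLowerCase();
            normalized.put(name, value);
        }
        return normalized.toString();
    }
}
```

With both flags set to false, “GivenName” and “givenname” (and “John” and “JOHN”) normalize identically; with both set to true, they do not.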

UnboundID LDAP SDK for Java 4.0.9

We have just released version 4.0.9 of the UnboundID LDAP SDK for Java. It is available for download from the releases page of our GitHub repository, from the Files page of our SourceForge repository, and from the Maven Central Repository.

The most significant changes included in this release are:

  • Updated the command-line tool framework to allow tools to have descriptions that are comprised of multiple paragraphs.
  • Updated the support for passphrase-based encryption to work around an apparent JVM bug in the support for some MAC algorithms that could cause them to create an incorrect MAC.
  • Updated all existing ArgumentValueValidator instances to implement the Serializable interface. This can help avoid errors when trying to serialize an argument configured with one of those validators.
  • Updated code used to create HashSet, LinkedHashSet, HashMap, LinkedHashMap, and ConcurrentHashMap instances with a known set of elements to use better algorithms for computing the initial capacity for the map to make it less likely to require the map to be dynamically resized.
  • Updated the LDIF change record API to make it possible to obtain a copy of a change record with a given set of controls.
  • Added additional methods for obtaining a normalized string representation of JSON objects and value components. The new methods provide more control over case sensitivity of field names and string values, and over array order.
  • Improved support for running in a JVM with a security manager that prevents setting system properties (which also prevents access to the System.getProperties method because the returned map is mutable).
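
The initial-capacity improvement mentioned above reflects a well-known HashMap property: with the default load factor of 0.75, a map sized for n elements needs a capacity of at least n / 0.75, or it will be resized while those known elements are being added. The helper below is a hypothetical sketch of that calculation, not an LDAP SDK API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper (not an LDAP SDK class) showing the usual
// calculation: with HashMap's default load factor of 0.75, a map created
// to hold expectedSize elements should have capacity of at least
// ceil(expectedSize / 0.75) to avoid being resized during population.
public final class MapCapacity {
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    public static int computeCapacity(int expectedSize) {
        return (int) Math.ceil(expectedSize / (double) DEFAULT_LOAD_FACTOR);
    }

    public static <K, V> Map<K, V> newHashMapWithExpectedSize(int expectedSize) {
        return new HashMap<>(computeCapacity(expectedSize), DEFAULT_LOAD_FACTOR);
    }
}
```

For example, a map expected to hold 100 elements would be created with an initial capacity of 134 rather than the default of 16.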

UnboundID LDAP SDK for Java 4.0.8

We have just released version 4.0.8 of the UnboundID LDAP SDK for Java. It is available for download from the releases page of our GitHub repository (https://github.com/pingidentity/ldapsdk/releases), from the Files page of our SourceForge repository (https://sourceforge.net/projects/ldap-sdk/files/), and from the Maven Central Repository (https://search.maven.org/search?q=g:com.unboundid%20AND%20a:unboundid-ldapsdk&core=gav).

The most significant changes included in this release are:

  • Fixed a bug in the modrate tool that could cause it to use a fixed string instead of a randomly generated one as the value to use in modifications.
  • Fixed an address caching bug in the RoundRobinDNSServerSet class. An inverted comparison could cause it to keep using cached addresses after they had expired, and to discard cached addresses that had not yet expired.
  • Updated the ldapmodify tool to remove the restriction that prevented using arbitrary controls with an LDAP transaction or the Ping-proprietary multi-update extended operation.
  • Updated a number of locations in the code that caught Throwable so that they re-throw the original Throwable instance (after performing appropriate cleanup) if that instance was an Error or a RuntimeException.
  • Added a number of JSONObject convenience methods to make it easier to get the value of a specified field as a string, Boolean, number, object, array, or null value.
  • Added a StaticUtils.toArray convenience method that can be useful for converting a collection to an array when the type of element in the collection isn’t known at compile time.
  • Added support for parsing audit log messages generated by the Ping Identity Directory Server for versions 7.1 and later, including generating LDIF change records that can be used to revert change records (if the audit log is configured to record changes in a reversible form).
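
The Throwable-handling change described above follows a common Java pattern, sketched below with invented names (this is not LDAP SDK source): catch Throwable so cleanup always runs, then re-throw the original instance when it is an Error or a RuntimeException rather than swallowing it.

```java
// Illustrative pattern (a sketch, not LDAP SDK source): catch Throwable
// for cleanup purposes, but re-throw the original instance if it is an
// Error or a RuntimeException so serious problems aren't swallowed.
public final class RethrowExample {
    public static String runWithCleanup(Runnable task) {
        try {
            task.run();
            return "ok";
        } catch (Throwable t) {
            // ... perform any necessary cleanup here ...
            if (t instanceof Error) {
                throw (Error) t;
            }
            if (t instanceof RuntimeException) {
                throw (RuntimeException) t;
            }
            // Checked throwables can be wrapped instead.
            throw new RuntimeException(t);
        }
    }
}
```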

Ping Identity Directory Server 7.0.1.0

The Ping Identity Directory Server version 7.0.1.0 has been released and is available for download from the Ping Identity website, along with the Directory Proxy Server, Data Synchronization Server, Data Metrics Server, Server SDK, and Delegated User Admin.

The release notes include a summary of the changes included in this release, but the major enhancements include:

  • Updates to the Delegated Admin application, including managing group memberships.
  • The mirror virtual attribute has been updated to make it possible to mirror the values of a specified attribute in another entry whose DN is computed in a manner that is relative to the target entry’s DN.
  • The Directory Proxy Server’s failover load-balancing algorithm has been updated to make it possible to consistently route requests targeting different branches to different sets of servers. This is useful to help distribute load more evenly across servers while still avoiding potential problems due to propagation delay.
  • Added a new replication state detail virtual attribute that provides more detailed information about an entry’s replication state.
  • Improved the server’s behavior when attempts to write to a client are blocked.
  • Added support for unbound GSSAPI connections that are not tied to any specific server instance and work better in some kinds of load-balanced environments.
  • Updated JMX MBean support so that keys and values better conform to best practices by default.

UnboundID LDAP SDK for Java 4.0.7

We have just released the UnboundID LDAP SDK for Java version 4.0.7, available for download from the releases page of our GitHub repository, from the Files page of our SourceForge project, and from the Maven Central Repository. The most significant changes in this release include:

  • Fixed an issue in the LDAPConnectionPool and LDAPThreadLocalConnectionPool classes when created with a connection that is already established and authenticated (as opposed to being created from a server set and bind request). Internally, the LDAP SDK created its own server set and bind request from the provided connection’s state information, but it incorrectly included bind credentials in the server set. Under most circumstances, this would merely cause the LDAP SDK to send two bind requests (the second a duplicate of the first) when establishing a new connection as part of the pool. However, it caused a bigger problem when using the new setBindRequest methods that were introduced in the 4.0.6 release. Because the server set was created with bind credentials, the pool would create a connection that tried to use those old credentials before sending a second bind request with the new credentials, and this would fail if the old credentials were no longer valid.
  • Fixed an issue with the behavior that the LDAP SDK exhibited when configured to automatically follow referrals. If the server returned a search result reference that the LDAP SDK could not follow (for example, because none of the URLs were valid, none of the servers could be reached, none of the searches succeeded in those servers, etc.), the LDAP SDK would assign a result code of “referral” to the search operation, which would cause it to throw an exception when the search completed (as is the case for most non-success result codes). The LDAP SDK will no longer override the result code for the search operation, but will instead use whatever result code the server returned in its search result done message. Any search result references that the LDAP SDK could not automatically follow will be made available to the caller through the same mechanism that would have been used if the SDK had not been configured to automatically follow referrals (that is, either hand them off to a search result listener or collect them in a list to include in the search result object). The LDAP SDK was already making the unfollowable search result references available in this manner, but the client probably wouldn’t have gotten to the point of looking for them because of the exception resulting from the overridden operation result code.
  • Added a new LDAPConnectionPoolHealthCheck.performPoolMaintenance method that can be used to perform processing on the pool itself (rather than on any individual connection) at regular intervals as specified by the connection pool’s health check interval. This method will be invoked by the health check thread after all other periodic health checking is performed.
  • Added a new PruneUnneededConnectionsLDAPConnectionPoolHealthCheck class that can be used to monitor the size of a connection pool over time, and if the number of available (that is, not currently in use) connections is consistently greater than a specified minimum for a given length of time, then the number of connections in the pool can be reduced to that minimum. This can be used to automatically shrink the size of the pool during periods of reduced activity.
  • Updated the Schema class to provide additional constructors and methods that can be used to attempt to retrieve the schema without silently ignoring errors about unparsable elements. Previously, if a schema entry contained one or more unparsable elements, they would be silently ignored. It is now possible to more easily obtain information about unparsable elements or to have the LDAP SDK throw an exception if it encounters any unparsable elements.
  • Added createSubInitialFilter, createSubAnyFilter, and createSubFinalFilter methods to the Filter class that are more convenient to use than the existing createSubstringFilter methods for substring filters that only have one type of component.
  • Updated the Entry.diff method when operating in reversible mode so that when altering the values of an existing attribute, the delete modifications will be ordered before the add modifications. Previously, the adds came before the deletes, but this could cause problems in some directory servers, especially when the modifications are intended to change the case of a value in a case-insensitive attribute (for example, the add could be ignored or rejected because the value already exists in the entry, or the delete could end up removing the value entirely). Ordering the deletes before the adds should provide much more reliable results.
  • Updated the modrate tool to add a new “--valuePattern” argument that can be used to specify the pattern to use to generate new values. This argument is an alternative to the “--valueLength” and “--characterSet” arguments and allows for more flexibility in the types of values that can be generated.
  • Updated the manage-account tool so that the arguments related to TOTP secrets are marked sensitive. This will ensure that the value is not displayed in the clear in certain cases like interactive mode output or tool invocation logging.
  • Added a new “streamfile” value pattern component that operates like the existing “sequentialfile” component except that it limits the amount of the file that is read into memory at any given time, so it is more suitable for reading values from very large files.
  • Added a new “timestamp” value pattern component that can be used to include either the current time or a randomly selected time from a given range in a variety of formats.
  • Added a new “uuid” value pattern component that can be used to include a randomly generated universally unique identifier (UUID).
  • Added a new “random” value pattern component that can be used to include a specified number of randomly selected characters from a given character set.
  • Added a StaticUtils.toUpperCase method to complement the existing StaticUtils.toLowerCase method.
  • Added Validator.ensureNotNullOrEmpty methods that work for collections, maps, arrays, and character sequences.
  • Added LDAPTestUtils methods that can be used to make assertions about the diagnostic message of an LDAP result or an LDAP exception.
  • Added client-side support for a new exec task that can be used to invoke a specified command in the Ping Identity Directory Server (subject to security restrictions imposed by the server).
  • Added client-side support for a new file retention task that can be used to examine files in a specified directory, identify files matching a given pattern, and delete any of those files that do not match count-based, age-based, or size-based criteria.
  • Added client-side support for a new delay task that can be used to sleep for a specified period of time, until the server work queue reports that all worker threads are idle and there are no pending operations, or until a given search or set of searches match at least one entry. The delay task is primarily intended to be used as a spacer between other tasks in a dependency chain.
  • Updated support for the ignore NO-USER-MODIFICATION request control to make it possible to set the criticality when creating an instance of the control. Previously, new instances were always critical.
  • Updated the ldapmodify tool to include the ignore NO-USER-MODIFICATION request control in both add and modify requests if the --ignoreNoUserModification argument was provided. Previously, that argument only caused the control to be included in add requests. Further, the control will now be marked non-critical instead of critical.
  • Updated the task API to add support for a number of new properties, including the email addresses of users to notify on task start and successful completion (in addition to the existing properties specifying users to email on error or on any type of completion), and flags indicating whether the server should alert on task start, successful completion, or failure.
  • Updated the argument parser’s properties file support so that it expects the file to use the ISO 8859-1 encoding, and to support Unicode escape sequences that are comprised of a backslash followed by the letter u and four hexadecimal digits.
  • Updated the tool invocation logger to add a failsafe mechanism for preventing passwords from being included in the log. Although it will already redact the values of any arguments that are declared sensitive, it will now also redact the values of any arguments whose name suggests that their value is a password.
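
The properties-file behavior described above mirrors what the standard java.util.Properties parser does: load(InputStream) assumes ISO 8859-1, and both load methods honor Unicode escape sequences consisting of a backslash, the letter u, and four hexadecimal digits. A minimal sketch (the class and method names here are invented for illustration):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Illustrative sketch using the standard java.util.Properties parser,
// which supports \uXXXX Unicode escape sequences (a backslash, the
// letter u, and four hex digits) in keys and values.
public final class PropertiesEscapeDemo {
    public static String readHostname(String propertiesText) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(propertiesText));
        } catch (IOException e) {
            throw new RuntimeException(e);  // cannot happen for a StringReader
        }
        return props.getProperty("hostname");
    }
}
```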

Ping Identity Directory Server 7.0.0.0

We have just released the Ping Identity Directory Server version 7.0.0.0, along with supporting products including the Directory Proxy Server, Data Synchronization Server, and Data Metrics Server. They’re available to download at https://www.pingidentity.com/en/resources/downloads/pingdirectory-downloads.html.

Full release notes are available at https://documentation.pingidentity.com/pingdirectory/7.0/relnotes/, and there are a lot of enhancements, fixes, and performance improvements, but some of the most significant new features are described below.

 

Improved Encryption for Data at Rest

We have always supported TLS to protect data in transit, and we carefully select from the set of available cipher suites to ensure that we only use strong encryption, preferring forward secrecy when it’s available. We also already offered protection for data at rest in the form of whole-entry encryption, encrypted backups and LDIF exports, and encrypted changelog and replication databases. In the 7.0 release, we’re improving upon this encryption for data at rest with several enhancements, including:

  • Previously, if you wanted to enable data encryption, you had to first set up the server without encryption, create an encryption settings definition, copy that definition to all servers in the topology, and export the data to LDIF and re-import it to ensure that any existing data got encrypted. With the 7.0 release, you can easily enable data encryption during the setup process, and you can provide a passphrase to use to generate the encryption key. If you supply the same passphrase when installing all of the instances, then they’ll all use the same encryption key.
  • Previously, if you enabled data encryption, the server would encrypt entries, but indexes and certain other database metadata (for example, information needed to store data compactly) remained unencrypted. In the 7.0 release, if you enable data encryption, we now encrypt index keys and that other metadata so that no potentially sensitive data is stored in the clear.
  • It was already possible to encrypt backups and LDIF exports, but you had to explicitly indicate that they should be encrypted, and the encryption was performed using a key that was shared among servers in the topology but that wasn’t available outside of the topology. In the 7.0 release, we have the option to automatically encrypt backups and LDIF exports, and that’s enabled by default if you configure encryption at setup. You also have more control over the encryption key so that encrypted backups and LDIF exports can be used outside of the topology.
  • We now support encrypted logging. Log-related tools like search-logs, sanitize-log, and summarize-access-log have been updated to support working with encrypted logs, and the UnboundID LDAP SDK for Java has been updated to support programmatically reading and parsing encrypted log files.
  • Several other tools that support reading from and writing to files have also been updated so that they can handle encrypted files. For example, tools that support reading from or writing to LDIF files (ldapsearch, ldapmodify, ldifsearch, ldifmodify, ldif-diff, transform-ldif, validate-ldif) now support encrypted LDIF.

 

Parameterized ACIs

Our server offers a rich access control mechanism that gives you fine-grained control over who has access to what data. You can define access control rules in the configuration, but it’s also possible to store rules in the data, which ensures that they are close to the data they govern and are replicated across all servers in the topology.

In many cases, it’s possible to define a small number of access control rules at the top of the DIT that govern access to all data. But there are other types of deployments (especially multi-tenant directories) where the data is highly branched, and users in one branch should have a certain amount of access to data in their own branch but no access to data in other branches. In the past, the only way to accomplish this was to define access control rules in each of the branches. This was fine from a performance and scalability perspective, but it was a management hassle, especially when creating new branches or if it became necessary to alter the rules for all of those branches.

In the 7.0 release, parameterized ACIs address many of these concerns. Parameterized ACIs make it possible to define a pattern that is automatically interpreted across a set of entries that match the parameterized content.

For example, say your directory has an “ou=Customers,dc=example,dc=com” entry, and each customer organization has its own branch below that entry. Each of those branches might have a common structure (for example, users might be below an “ou=People” subordinate entry, and groups might be below “ou=Groups”). The structure for an Acme organization might look something like:

  • dc=example,dc=com
    • ou=Customers
      • ou=Acme
        • ou=People
          • uid=amanda.adams
          • uid=bradley.baker
          • uid=carol.collins
          • uid=darren.dennings
        • ou=Groups
          • cn=Administrators
          • cn=Password Managers

If you want to create a parameterized ACI so that members of the “ou=Password Managers,ou=Groups,ou={customerName},ou=Customers,dc=example,dc=com” group have write access to the userPassword attribute in entries below “ou=People,ou={customerName},ou=Customers,dc=example,dc=com”, you might create a parameterized ACI that looks something like the following:

(target="ldap:///ou=People,ou=($1),ou=Customers,dc=example,dc=com")(targetattr="userPassword")(version 3.0; acl "Password Managers can manage passwords"; allow (write) groupdn="ldap:///cn=Password Managers,ou=Groups,ou=($1),ou=Customers,dc=example,dc=com";)

 

Recurring Tasks

The Directory Server supports a number of different types of administrative tasks, including:

  • Backing up one or more server backends
  • Restoring a backup
  • Exporting the contents of a backend to LDIF
  • Importing data from LDIF
  • Rebuilding the contents of one or more indexes
  • Forcing a log file rotation

Administrative tasks can be scheduled to start immediately or at a specified time in the future, and you can define dependencies between tasks so that one task won’t be eligible to start until another one completes.

In previous versions, when you scheduled an administrative task, it would only run once. If you wanted to run it again, you needed to schedule it again. In the 7.0 release, we have added support for recurring tasks, which allow you to define a schedule that causes them to be processed on a regular basis. We have some pretty flexible scheduling logic that allows you to specify when they get run, and it’s able to handle things like daylight saving time and months with different numbers of days.

Although you can schedule just about any kind of task as a recurring task, we have enhanced support for backup and LDIF export tasks, since they’re among the most common types of tasks that we expect administrators will want to run on a recurring basis. For example, we have built-in retention support so that you can keep only the most recent backups or LDIF exports (based on either the number of older copies to retain or the age of those copies) so that you don’t have to manually free up disk space.

 

Equality Composite Indexes

The server offers a number of types of indexes that can help you ensure that various types of search operations can be processed as quickly as possible. For example, an equality attribute index maps each of the values for a specified attribute type to a list of the entries that contain that attribute value.
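
The equality attribute index described here can be pictured as a map from each normalized value to a set of entry IDs. The sketch below is purely conceptual (invented names and a simplistic lowercase normalization), not the server’s actual backend code:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Conceptual sketch (not the server's backend code) of an equality
// attribute index: each normalized attribute value maps to the set of
// IDs for the entries containing that value.
public final class EqualityIndex {
    private final Map<String, Set<Long>> keyToEntryIDs = new HashMap<>();

    public void index(long entryID, String attributeValue) {
        String key = attributeValue.toLowerCase();  // simplistic normalization
        keyToEntryIDs.computeIfAbsent(key, k -> new HashSet<>()).add(entryID);
    }

    // Candidate entry IDs for an equality search like (givenName=value).
    public Set<Long> lookup(String assertionValue) {
        return keyToEntryIDs.getOrDefault(assertionValue.toLowerCase(),
            Set.of());
    }
}
```

A composite index adds a filter pattern, and optionally a base DN pattern, on top of this basic idea so the ID sets can be scoped more tightly.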

In the 7.0 release, we have introduced a new type of index called a composite index. When you configure a composite index, you need to define at least a filter pattern that describes the kinds of searches that will be indexed, and you can also define a base DN pattern that restricts the index to a specified portion of the DIT.

At present, we only support equality composite indexes, which allow you to index values for a single attribute, much like an equality attribute index. However, there are two key benefits of an equality composite index over an equality attribute index:

  • As previously stated, you can combine the filter pattern with a base DN pattern. This is very useful in directories that have a lot of branches (for example, a multi-tenant deployment) where searches are often constrained to one of those branches. By combining a filter pattern with a base DN pattern, the server can maintain smaller ID sets that are more efficient to process and more tightly scoped to the search being issued.
  • The way in which the server maintains the ID sets in a composite index is much more efficient for keys that match a very large number of entries than the way it maintains the ID set for an attribute index. In an attribute index, you can optimize for either read performance or write performance of a very large ID set, but not both. A composite index is very efficient for both reads and writes of very large ID sets.

In the future, we intend to offer support for additional types of composite indexes that can improve the performance for other types of searches. For example, we’re already working on AND composite indexes that allow you to index combinations of attributes.

 

Delegated Administration

We have added a new delegated administration web application that integrates with the Ping Identity Directory Server and Ping Federate products to allow a selected set of administrators to manage users in the directory. For example, help desk employees might use it to unlock a user’s account or reset their password. Administrators can be restricted to managing only a defined subset of users (based on things like their location in the DIT, entry content, or group membership), and also restricted to a specified set of attributes.

 

Automatic Entry Purging

In the past, our server has had limited support for automatically deleting data after a specified length of time. The LDAP changelog and the replication database can be set to purge old data, and we also support automatically purging soft-deleted entries (entries that have been deleted as far as most clients are concerned, but are really just hidden so that they can be recovered if the need arises).

With the 7.0 release, we’re exposing a new “purge expired data” plugin that can be used to automatically delete entries that match a given set of criteria. At a minimum, this criteria involves looking at a specified attribute or JSON object field whose value represents some kind of timestamp, but it can also be further restricted to entries in a specified portion of the DIT or entries matching a given filter. And it’s got rate limiting built in so that the background purging won’t interfere with client processing.

For example, say that you’ve got an application that generates data that represents some kind of short-lived token. You can create an instance of the purge expired data plugin with a base DN and filter that matches those types of entries, and configure it to delete entries with a createTimestamp value that is more than a specified length of time in the past.

 

Better Control over Unindexed Searches

Despite the variety of indexes defined in the server, there may be cases in which a client issues a search request that the server cannot use indexes to process efficiently. There are a variety of reasons that this may happen, including because there isn’t any applicable index defined in the server, because there are so many entries that match the search criteria that the server has stopped maintaining the applicable index, or because the search targets a virtual attribute that doesn’t support efficient searching.

An unindexed search can be very expensive to process because the server needs to iterate across each entry in the scope of the search to determine whether it matches the search criteria. Processing an unindexed search can tie up a worker thread for a significant length of time, so it’s important to ensure that the server only actually processes the unindexed searches that are legitimately authorized. We already required clients to have the unindexed-search privilege, limited the number of unindexed searches that can be active at any given time, and provided an option to disable unindexed searches on a per-client-connection-policy basis.

In the 7.0 release, we’ve added additional features for limiting unindexed searches. They include:

  • We’ve added support for a new “reject unindexed searches” request control that can be included in a search request to indicate that the server should reject the request if it happens to be unindexed, even if it would have otherwise been permitted. This is useful for a client that has the unindexed-search privilege but wants a measure of protection against inadvertently requesting an unindexed search.
  • We’ve added support for a new “permit unindexed searches” request control, which can be used in conjunction with a new “unindexed-search-with-control” privilege. If a client has this privilege, then only unindexed search requests that include the permit unindexed searches control will be allowed.
  • We’ve updated the client connection policy configuration to make it possible to only allow unindexed searches that include the permit unindexed searches request control, even if the requester has the unindexed-search privilege.

 

GSSAPI Improvements

The GSSAPI SASL mechanism can be used to authenticate to the Directory Server using Kerberos V. We’ve always supported this mechanism, but the 7.0 server adds a couple of improvements to that support.

First, it’s now possible for the client to request an authorization identity that is different from the authentication identity. In the past, it was only possible to use GSSAPI if the authentication identity string exactly matched the authorization identity. Now, the server will permit the authorization identity to be different from the authentication identity (although the user specified as the authentication identity must have the proxied-auth privilege if they want to be able to use a different authorization identity).

We’ve also improved support for using GSSAPI through a hardware load balancer, particularly in cases where the server uses a different FQDN than was used in the client request. This generally wasn’t an issue for the case in which a Ping Identity Directory Proxy Server was used to perform the load balancing, but it could have been a problem in some cases with hardware load balancers or other cases in which the client might connect to the server with a different name than the server thinks it’s using.

 

Tool Invocation Logging

We’ve updated our tool frameworks to add support for tool invocation logging, which can be used to record the arguments and result for any command-line tools provided with the server. By default, this feature is only enabled for tools that are likely to change the state of the server or the data contained in the server, and by default, all of those tools will use the same log file. However, you can configure which (if any) tools should be logged, and which files should be used.

Invocation logging includes two types of log messages:

  • A launch log message, which is recorded whenever the tool is first run but before it performs its actual processing. The launch log message includes the name of the tool, any arguments provided on the command line, any arguments automatically supplied from a properties file, the time the tool was run, and the username for the operating system account that ran the tool. The values of any sensitive arguments (for example, those that might be used to supply passwords) will be redacted so that information will not be recorded in the log.
  • A completion log message, which is recorded whenever the tool completes its processing, regardless of whether it completed successfully or exited with an error. This will at least include the tool’s numeric exit code, but in some cases, it might also include an exit message with additional information about the processing performed by the tool. Note that there may be some circumstances in which the completion log message may not be recorded (for example, if the tool is forcefully terminated with something like a “kill -9”).