Configuring Two-Factor Authentication in the Ping Identity Directory Server

Passwords aren’t going anywhere anytime soon, but they’re just not good enough on their own. It is entirely possible to choose a password that is extremely resistant to dictionary and brute force attacks, but the fact is that most people pick really bad passwords. They also tend to reuse the same passwords across multiple sites, which further increases the risk that their account could be compromised. And even those people who do choose really strong passwords might still be tricked into giving that password to a lookalike site via phishing or DNS hijacking, or they may fall victim to a keylogger. For these reasons and more, it’s always a good idea to combine a password with some additional piece of information when authenticating to a site.

The Ping Identity Directory Server has included support for two-factor authentication since 2012 (back when it was still the UnboundID Directory Server). Out of the box, we currently offer four types of two-factor authentication:

  • You can combine a static password with a time-based one-time password using the standard TOTP mechanism described in RFC 6238. These are the same kind of one-time passwords that are generated by apps like Google Authenticator or Authy.
  • You can combine a static password with a one-time password generated by a YubiKey device.
  • You can combine a static password with a one-time password that gets delivered to the user through some out-of-band mechanism like a text or email message, voice call, or app notification.
  • You can combine a static password with an X.509 certificate presented to the server during TLS negotiation.

In this post, I’ll describe the process for configuring the server to enable support for each of these types of authentication. We also provide the ability to create custom extensions to implement support for other types of authentication if desired, but that’s not going to be covered here.

Time-Based One-Time Passwords

The Ping Identity Directory Server supports time-based one-time passwords through the UNBOUNDID-TOTP SASL mechanism, which is enabled by default in the server. For a user to authenticate with this mechanism, their account must contain the ds-auth-totp-shared-secret attribute whose value is the base32-encoded representation of the shared secret that should be used to generate the one-time passwords. This shared secret must be known to both the client (or at least to an app in the client’s possession, like Google Authenticator) as well as to the server.
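To make the TOTP mechanics concrete, here's a minimal Python sketch of how an RFC 6238 one-time password is derived from a base32-encoded shared secret. This mirrors what both the server and an authenticator app compute; it's an illustration of the algorithm, not the server's actual implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def generate_totp(shared_secret_base32, timestamp=None, interval=30, digits=6):
    # Derive the time-based counter: the number of 30-second intervals
    # that have elapsed since the Unix epoch.
    if timestamp is None:
        timestamp = time.time()
    counter = struct.pack(">Q", int(timestamp) // interval)

    # HMAC-SHA1 the counter with the decoded shared secret, then apply
    # the dynamic truncation described in RFC 4226/RFC 6238.
    key = base64.b32decode(shared_secret_base32)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both sides compute the code from the current time, a verifier will typically also accept codes from adjacent time windows to tolerate clock skew.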

To facilitate generating and encoding the TOTP shared secret, the Directory Server provides a “generate TOTP shared secret” extended operation. The UnboundID LDAP SDK for Java provides support for this extended operation, and the class-level Javadoc describes the encoding for this operation in case you need to implement support for it in some other API. We also offer a generate-totp-shared-secret command-line tool that can be used for testing (or I suppose you could invoke it programmatically if you’d rather do that than use the UnboundID LDAP SDK or implement support for the extended operation yourself). For the sake of convenience, I’ll use this tool for the demonstration.

There are actually a couple of ways that you can use the generate TOTP shared secret operation: for a user to generate a shared secret for their own account (in which case the user’s static password must be provided), or for an administrator (who must have the password-reset privilege) to generate a shared secret for another user. I’d expect the most common use case to be a user generating a shared secret for their own account, so that’s the approach we’ll take for this example.

Note that while the generate TOTP shared secret extended operation is enabled out of the box, the shared secrets that it generates by default are not encrypted, which could make them easier for an attacker to steal if they got access to a copy of the user data. To prevent this, if the server is configured with data encryption enabled, then you should also enable the “Encrypt TOTP Secrets and Delivered Tokens” plugin. That can be done with the following configuration change:

dsconfig set-plugin-prop \
     --plugin-name "Encrypt TOTP Secrets and Delivered Tokens" \
     --set enabled:true

If we assume that our account has a username of “jdoe”, then the command to generate a shared secret for that user would be something like:

$ bin/generate-totp-shared-secret --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --authID u:jdoe \
     --promptForUserPassword
Enter the static password for user 'u:jdoe':
Successfully generated TOTP shared secret 'KATLTK5WMUSZIACLOMDP43KPSG2LUUOB'.

If we were using a nice web application to invoke the generate TOTP shared secret operation, we’d probably want to have it generate a QR code with that shared secret embedded in it so that it could be easily scanned and imported into an app like Google Authenticator (and you’d want to embed it in a URL like “otpauth://totp/jdoe%20in%20ds.example.com?secret=KATLTK5WMUSZIACLOMDP43KPSG2LUUOB”). For the sake of testing, we can either manually generate an appropriate QR code (for example, using an online utility like https://www.qr-code-generator.com), or you can just type the shared secret into the authenticator app.
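Building that otpauth:// URL correctly mostly comes down to percent-encoding the account label. A small Python sketch (the label and issuer values here are just illustrative):

```python
from urllib.parse import quote

def build_otpauth_url(account_label, shared_secret_base32, issuer=None):
    # The label identifies the account in the authenticator app and must
    # be percent-encoded; the base32-encoded secret is already URL-safe.
    url = "otpauth://totp/" + quote(account_label) + "?secret=" + shared_secret_base32
    if issuer is not None:
        url += "&issuer=" + quote(issuer)
    return url

url = build_otpauth_url("jdoe in ds.example.com",
                        "KATLTK5WMUSZIACLOMDP43KPSG2LUUOB")
# otpauth://totp/jdoe%20in%20ds.example.com?secret=KATLTK5WMUSZIACLOMDP43KPSG2LUUOB
```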

Now that the account is configured for TOTP authentication, we can use the UNBOUNDID-TOTP SASL mechanism to authenticate to the server. As with the generate TOTP shared secret operation, this SASL mechanism is supported by the UnboundID LDAP SDK for Java, and most of our command-line tools also support it, so we can test it with a utility like ldapsearch. You’ll need to use the “--saslOption” command-line argument to specify a number of parameters, including “mech” (the name of the SASL mechanism to use, which should be “UNBOUNDID-TOTP”), “authID” (the authentication ID for the user that’s trying to authenticate), and “totpPassword” (the one-time password generated by the authenticator app). For example:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-TOTP \
     --saslOption authID=u:jdoe \
     --saslOption totpPassword={one-time-password} \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

YubiKey One-Time Passwords

If you’ve got a YubiKey device capable of generating one-time passwords in the Yubico OTP format (which should be most YubiKey devices except the ones that only support FIDO authentication), then you can use that device to generate one-time passwords to use in conjunction with your static password.

The Directory Server supports this type of authentication through the UNBOUNDID-YUBIKEY-OTP SASL mechanism. To enable support for this SASL mechanism, you first need an API key from Yubico, which is available for free from https://upgrade.yubico.com/getapikey/. When you do this, you’ll get a client ID and a secret key, which give you access to the Yubico authentication servers. Note that you only need this for the server (and you can share the same key across all server instances); end users don’t need to worry about this. Alternately, you could stand up your own authentication server if you’d rather not rely on the Yubico servers, but we won’t go into that here.

Once you’ve got the client ID and secret key, you can enable support for the SASL mechanism with the following configuration change:

dsconfig set-sasl-mechanism-handler-prop \
     --handler-name UNBOUNDID-YUBIKEY-OTP \
     --set enabled:true \
     --set yubikey-client-id:{client-id} \
     --set yubikey-api-key:{secret-key}

To be able to authenticate a user with this mechanism, you’ll need to update their account to include a ds-auth-yubikey-public-id attribute with one or more values that represent the public IDs of the YubiKey devices that you want to use (and it might not be a bad idea to have multiple devices registered for the same account, so that you have a backup key in case you lose or break the primary key).

To get the public ID for a YubiKey device, you can use it to generate a one-time password and strip off the last 32 characters. This isn’t considered secret information, so no encryption is necessary when storing it in an entry, and you can use a simple LDAP modify operation to manage the public IDs for a user account. Alternately, you can use the “register YubiKey OTP device” extended operation (supported and documented in the UnboundID LDAP SDK for Java) or use the register-yubikey-otp-device command-line tool. In the case of the command-line tool, you can register a device like:

$ bin/register-yubikey-otp-device --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --authenticationID u:jdoe \
     --promptForUserPassword \
     --otp {one-time-password}
Enter the static password for user u:jdoe:
Successfully registered the specified YubiKey OTP device for user u:jdoe

Note that when using this tool (and the register YubiKey OTP device extended operation in general), you should provide a complete one-time password and not just the public ID.
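Since the public ID is just the leading portion of any OTP the device emits, extracting it yourself is trivial. A Python sketch (the sample OTP below is fabricated for illustration):

```python
def yubikey_public_id(otp):
    # A Yubico OTP ends with a 32-character modhex-encoded ciphertext;
    # everything before that is the device's public ID.
    if len(otp) <= 32:
        raise ValueError("OTP is too short to contain a public ID")
    return otp[:-32]

# A typical OTP is 44 modhex characters: a 12-character public ID
# followed by the 32-character encrypted portion.
sample_otp = "ccccccbcgujh" + "dcbdefghijklnrtuvcbdefghijklnrtu"
```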

That should be all that is necessary to allow the user to authenticate with a YubiKey one-time password. We can test it with ldapsearch like so:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-YUBIKEY-OTP \
     --saslOption authID=u:jdoe \
     --saslOption otp={one-time-password} \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

Delivered One-Time Passwords

Delivered one-time passwords are more convenient than either time-based or YubiKey-generated one-time passwords because there’s less burden on the user. There’s no need to install an app or have any special hardware to generate one-time passwords. Instead, the server generates a one-time password and then sends it to the user through some out-of-band mechanism (that is, the user gets the one-time password through some mechanism other than LDAP). The server provides direct support for delivering these generated one-time passwords over SMS (using the Twilio service) or via email, and the UnboundID Server SDK provides an API that allows you to create your own delivery mechanisms. Note, however, that while it may be more convenient to use, it’s also generally considered less secure (especially if you’re using SMS).
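The delivered codes themselves are typically just short random digit strings. As an illustration (this is not the server's actual password generator), a cryptographically secure numeric OTP can be produced like this:

```python
import secrets

def generate_numeric_otp(digits=6):
    # secrets.choice draws from a CSPRNG, so each digit is uniformly
    # random and unpredictable -- suitable for a one-time password.
    return "".join(secrets.choice("0123456789") for _ in range(digits))
```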

There’s also more effort involved in enabling support for delivered one-time passwords than either time-based or YubiKey-generated one-time passwords. The first thing you should do is configure the server to ensure that the generated one-time password values will be encrypted (unless you already did it above for encrypting TOTP shared secrets), which you can do as follows:

dsconfig set-plugin-prop \
     --plugin-name "Encrypt TOTP Secrets and Delivered Tokens" \
     --set enabled:true

Next, we need to configure one or more delivery mechanisms. These are configured in the “OTP Delivery Mechanism” section of dsconfig. For example, to configure a delivery mechanism for email, you could use something like:

dsconfig create-otp-delivery-mechanism \
     --mechanism-name Email \
     --type email \
     --set enabled:true \
     --set 'sender-address:otp@example.com' \
     --set "message-text-before-otp:Your one-time password is '" \
     --set "message-text-after-otp:'."

If you’re using email, you’ll also need to configure one or more SMTP external servers and set the smtp-server property in the global configuration.

Alternately, if you’re using SMS, then you’ll need to have a Twilio account and fill in the appropriate values for the SID, auth token, and phone number fields, like:

dsconfig create-otp-delivery-mechanism \
     --mechanism-name SMS \
     --type twilio \
     --set enabled:true \
     --set twilio-account-sid:{sid} \
     --set twilio-auth-token:{auth-token} \
     --set sender-phone-number:{phone-number} \
     --set "message-text-before-otp:Your one-time password is '" \
     --set "message-text-after-otp:'."

Once the delivery mechanism(s) are configured, you can enable the delivered one-time password SASL mechanism handler as follows:

dsconfig create-sasl-mechanism-handler \
     --handler-name UNBOUNDID-DELIVERED-OTP \
     --type unboundid-delivered-otp \
     --set enabled:true \
     --set "identity-mapper:Exact Match"

You’ll also need to enable support for the deliver one-time password extended operation, which is used to request that the server generate and deliver a one-time password for a user. You can do that like:

dsconfig create-extended-operation-handler \
     --handler-name "Deliver One-Time Passwords" \
     --type deliver-otp \
     --set enabled:true \
     --set "identity-mapper:Exact Match" \
     --set "password-generator:One-Time Password Generator" \
     --set default-otp-delivery-mechanism:Email \
     --set default-otp-delivery-mechanism:SMS

The process for authenticating with a delivered one-time password involves two steps. First, you request that the server generate and deliver a one-time password, which can be accomplished with the “deliver one-time password” extended operation (supported and documented in the UnboundID LDAP SDK for Java, and testable with the deliver-one-time-password command-line tool). Then, once you have that one-time password, you can use the UNBOUNDID-DELIVERED-OTP SASL mechanism to complete the authentication.

If you have multiple delivery mechanisms configured in the server, then there are several ways that the server can decide which one to use to send a one-time password to a user.

  • The server will only attempt to use a delivery mechanism that applies to the target user. For example, if a user entry has an email address but not a mobile phone number, then it won’t try to deliver a one-time password to that user via SMS.
  • The deliver one-time password extended request can be used to indicate which delivery mechanism(s) should be attempted, and in which order they should be attempted. If you’re using the deliver-one-time-password command-line tool, then you can use the --deliveryMechanism argument to specify this.
  • If the extended request doesn’t indicate which mechanisms to use, then the server will check the user’s entry to see if it has a ds-auth-preferred-otp-delivery-mechanism operational attribute. If so, then it will be used to specify the desired delivery mechanism.
  • If nothing else, then the server will use the order specified in the default-otp-delivery-mechanism property of the extended operation handler configuration.
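That selection precedence can be sketched as a small function. This is an illustrative model of the rules above, not the server's actual code, and the names are hypothetical:

```python
def choose_delivery_mechanisms(requested, user_preferred, defaults, applicable):
    """Order delivery mechanisms by precedence: mechanisms named in the
    extended request, then the user's ds-auth-preferred-otp-delivery-mechanism
    value, then the handler's configured defaults. Only mechanisms that
    actually apply to the user (e.g., the entry has an email address for
    the Email mechanism) are kept."""
    if requested:
        ordered = requested
    elif user_preferred:
        ordered = [user_preferred]
    else:
        ordered = list(defaults)
    return [mech for mech in ordered if mech in applicable]
```

For example, a user with only an email address on file would get `["Email"]` even when SMS is listed first among the defaults.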

As an example, let’s demonstrate the process of authenticating as user jdoe with a one-time password delivered via email. We can start the process using the deliver-one-time-password command-line tool as follows:

$ bin/deliver-one-time-password --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --userName jdoe \
     --promptForBindPassword \
     --deliveryMechanism Email
Enter the static password for the user:

Successfully delivered a one-time password via mechanism 'Email' to 'jdoe@example.com'

Now, we can check our email, and there should be a message with the one-time password. Once we have it, we can use a tool like ldapsearch to authenticate with that one-time password using the UNBOUNDID-DELIVERED-OTP SASL mechanism, like:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-DELIVERED-OTP \
     --saslOption authID=u:jdoe \
     --saslOption otp={one-time-password} \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

Combining Certificates and Passwords

When establishing a TLS-based connection, the server will always present its certificate to the client, and the client will decide whether it wants to trust that certificate and continue establishing the secure connection. Further, the server may optionally ask the client to provide its own certificate, and the client may then optionally provide one. If the server requests a client certificate, and if the client provides one, then the server will determine whether it wants to trust that client certificate and continue the negotiation process.

If a client has provided its own certificate to the directory server and the server has accepted it, then the client can use a SASL EXTERNAL bind to request that the server use the information in the certificate to identify and authenticate the client. Most LDAP servers support this, and it can be a very strong form of single-factor authentication. However, the Ping Identity Directory Server also offers an UNBOUNDID-CERTIFICATE-PLUS-PASSWORD SASL mechanism that takes this even further by combining the client certificate with a static password.

Certificate-based authentication (regardless of whether you also include a static password) isn’t something that has really caught on because of the hassle and complexity of dealing with certificates. It’s honestly probably not a great option for most end users, although it may be an attractive option for more advanced users like server administrators. But one big benefit that the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD mechanism has over the two-factor mechanisms that rely on one-time passwords is that it can be used in a completely non-interactive manner. That makes it suitable for use in authenticating one application to another.

As with the EXTERNAL mechanism, the Ping Identity Directory Server has support for the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD mechanism enabled out of the box. Just about the only thing you’re likely to want to configure is the certificate-mapper property in its configuration, which is used to uniquely identify the account for the user that is trying to authenticate based on the contents of the certificate. The certificate mapper that is configured by default will only work if the certificate’s subject DN matches the DN of the corresponding user entry. Other certificate mappers can be used to identify the user in other ways, including searching based on attributes in the certificate subject or searching for the owner of a certificate based on the fingerprint of that certificate.
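To illustrate what the default mapper's subject-DN match amounts to, here's a deliberately simplified Python comparison. Real DN matching follows the LDAP DN syntax rules (attribute-type aliases, escaping, multi-valued RDNs), which this sketch ignores:

```python
def subject_dn_matches_entry(cert_subject_dn, entry_dn):
    # Naive normalization: split on commas, trim whitespace, lowercase.
    # Sufficient for simple DNs like "CN=jdoe,OU=People,DC=example,DC=com",
    # but not a general-purpose DN comparison.
    def normalize(dn):
        return ",".join(rdn.strip().lower() for rdn in dn.split(","))
    return normalize(cert_subject_dn) == normalize(entry_dn)
```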

Due to an unfortunate oversight, command-line tools currently shipped with the server do not include support for the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD SASL mechanism. That will be corrected in the next release, but if you want to test with it now, you can check out the UnboundID LDAP SDK for Java from its GitHub project and build it for yourself. That will allow you to test certificate+password authentication like so:

$ tools/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --keyStorePath client-keystore \
     --promptForKeyStorePassword \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-CERTIFICATE-PLUS-PASSWORD \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the key store password:

Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 42a82498-93c6-4e62-9c6c-8fe6b33e1550
startTime: 20190107072612Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

The Movies I Watched in 2018

I didn’t watch as many movies in 2018 as I had in previous years, but I still did pretty well by most standards. I ended up watching 413 movies in a theater and 378 outside of a theater, for a total of 791.

The only movies I saw in a theater were at an Alamo Drafthouse (335 movies) or at the Austin Film Society (78 movies). 267 of those movies were projected digitally, 137 on film (126 in 35mm, 6 in 16mm, and 5 in 70mm), and 9 were on VHS.

The best new releases I saw in the theater include:

  • Bad Times at the El Royale
  • Border
  • Green Book
  • The Guilty
  • Hold the Dark
  • I Don’t Feel at Home in This World Anymore
  • Incredibles 2
  • Juliet, Naked
  • Love, Simon
  • Lowlife
  • Paddington 2
  • Princess Cyd
  • Ready Player One
  • RBG
  • A Simple Favor
  • The Shape of Water
  • Sweet Country
  • Three Identical Strangers
  • Thunder Road
  • Won’t You Be My Neighbor?

I also saw some great new movies at the Fantastic Fest film festival, but I’m not sure that they have been officially released yet. The best of them include:

  • The Boat
  • Chained for Life
  • Goliath
  • Slut in a Good Way
  • The Standoff at Sparrow Creek
  • The Unthinkable
  • The World Is Yours

And even if I didn’t think they were the best in a conventional sense, I had a heck of a good time watching a few other new releases:

  • Between Worlds (though I don’t think it’s actually out yet)
  • The Meg
  • Twisted Pair

And these surprised me by being quite a bit better than I expected, even if they’re not among the best of the year:

  • Blockers
  • Game Night
  • Tag

On the other hand, these were the new releases that everyone else seemed to love, but I strongly disliked:

  • The Favourite
  • Ghost Stories
  • Hereditary
  • Mandy
  • Mission: Impossible — Fallout
  • A Quiet Place

And one of the great things about the film scene in Austin is that we get to watch a ton of repertory films on the big screen. The best older movies I saw for the first time in 2018 included:

  • The Awful Truth (1937)
  • The Big Heat (1953)
  • Bunny Lake Is Missing (1965)
  • Colossus: The Forbin Project (1970)
  • Coming Home (1978)
  • Deadly Games (aka Dial Code Santa Claus; 1989)
  • Girlfriends (1978)
  • I Walk Alone (1947)
  • Jawbreaker (1999)
  • Kitten With a Whip (1964)
  • Lady Snowblood (1973)
  • Lake of Dracula (1971)
  • Last Night at the Alamo (1983)
  • The Last Temptation of Christ (1988)
  • The Man Who Cheated Himself (1950)
  • A Matter of Life and Death (1946)
  • Midnight Express (1978)
  • Out of the Past (1947)
  • Quiet Please, Murder (1942)
  • The Raven (1935)
  • Roadgames (1981)
  • Starstruck (1982)
  • Stranger in Our House (aka Summer of Fear; 1978)
  • Summertime (1955)
  • To Have and Have Not (1944)
  • The Unsuspected (1947)

As far as watching movies outside of a theater, I kind of went down a rabbit hole of Lifetime and Hallmark movies (especially at Christmas), so I didn’t watch as many classic films as I probably should have. Even though I enjoy them, it’s hard to put them in the same class as most movies with higher production values and less predictable plots (but I do thoroughly enjoy the Murder She Baked, Aurora Teagarden, and Jane Doe mystery series, and I thought that Dangerous Child, Evil Nanny, and Killer Body aka The Wrong Patent were among the better ones). Nevertheless, the following were among my favorite first-time watches outside of a theater in 2018:

  • Bachelor Mother (1939)
  • The Cheyenne Social Club (1970)
  • Dark Passage (1947)
  • Father Goose (1964)
  • First Reformed (2017)
  • From Beyond the Grave (1974)
  • Hearts Beat Loud (2018)
  • Psychos in Love (1987)
  • Seven Chances (1925)
  • Too Many Husbands (1940)

Ping Identity Directory Server 7.2.0.0

We have just released the Ping Identity Directory Server version 7.2.0.0, available for download at https://www.pingidentity.com/en/resources/downloads/pingdirectory-downloads.html. This new release offers a lot of new features, some substantial performance improvements, and a number of bug fixes. The release notes provide a pretty comprehensive overview of the changes, but here are some of the highlights:

  • Added a REST API (using JSON over HTTP) for interacting with the server data. Although we already supported the REST-based SCIM protocol, our new REST API is more feature-rich, requires less administrative overhead, and isn’t constrained by the requirements of SCIM compliance. SCIM remains supported.

  • Dramatically improved the logic that the server uses for evaluating complex filters. It now uses a number of additional metrics to make more intelligent decisions about the order in which components should be evaluated to get the biggest bang for the buck.

  • Expanded our support for composite indexes to cover ANDs of multiple components (for example, “(&(givenName=?)(sn=?))”). These filters may consist entirely of equality components, or they may combine one or more equality components with either a greater-or-equal filter, a less-or-equal filter, a bounded range filter, or a substring filter.

  • When performing a new install, the server is now configured to automatically export data to LDIF every day at 1:05 a.m. These exports will be compressed and encrypted (if encryption is enabled during the setup process), and they will be rate limited to minimize the impact on performance. We have also updated the LDIF export task to support exporting the contents of multiple backends in the same invocation.

  • Added support for a new data recovery log and a new extract-data-recovery-log-changes command-line tool. This can help administrators revert or replay a selected set of changes if the need arises (for example, if a malfunctioning application applies one or more incorrect changes to the server).

  • Added support for delaying the response to a failed bind operation, during which time no other operations will be permitted on the client connection. This can be used as an alternative to account lockout as a means of substantially inhibiting password guessing attacks without the risk of locking out the legitimate user who has the right credentials. It can also be used in conjunction with account lockout if desired.

  • Updated client connection policy support to make it possible to customize the behavior that the server exhibits if a client exceeds a configured maximum number of concurrent requests. Previously, it was only possible to reject requests with a “busy” result. It is now possible to use additional result codes when rejecting those requests, or to terminate the client connection and abandon all of its outstanding requests.

  • Added support for a new “exec” task (and recurring task) that can be used to invoke a specified command on the server system, either as a one-time event or at recurring intervals. There are several safeguards in place to prevent unauthorized use: the task must be enabled in the server (it is not by default), the command to be invoked must be contained in a whitelist file (no commands are whitelisted by default), and the user scheduling the task must have a special privilege that permits its use (no users, not even root users, have this privilege by default). We have also added a new schedule-exec-task tool that can make it easier to schedule an exec task.

  • Added support for a new file retention task (and recurring task) that can be used to remove files with names matching a given pattern that fall outside a provided set of retention criteria. The server is configured with instances of this task that can be used to clean up expensive operation dumps, lock conflict details files, and work queue backlog thread dumps (any files of each type other than the 100 most recent that are over 30 days old will be automatically removed).

  • Added support for new tasks (and recurring tasks) that can be used to force the server to enter and leave lockdown mode. While in lockdown mode, the server reports itself as unavailable to the Directory Proxy Server (and other clients that look at its availability status) and only accepts requests from a restricted set of clients.

  • Added support for a new delay task (and recurring task) that can be used to inject a delay between other tasks. The delay can be for a fixed period of time, can wait until the server is idle (that is, there are no outstanding requests and all worker threads are idle), or until a given set of search criteria matches one or more entries.

  • Added support for a new constructed virtual attribute type that can be used to dynamically construct values for an attribute using a combination of static text and the values of other attributes from the entry.

  • Improved user and group management in the delegated administration web application. Delegated administrators can create users and control group membership for selected users.

  • Added support for encrypting TOTP shared secrets, delivered one-time passwords, password reset tokens, and single-use tokens.

  • Updated the work queue implementation to improve performance and reduce contention under extreme load.

  • Updated the LDAP-accessible changelog backend to add support for searches that include the simple paged results control. This control was previously only available for searches in local DB backends.

  • Improved the server’s rebuild-index performance, especially in environments with encrypted data.

  • Added a new time limit log retention policy to support removing log files older than a specified age.

  • Updated the audit log to support including a number of additional fields, including the server product name, the server instance name, request control OIDs, details of any intermediate client or operation purpose controls in the request, the origin of the operation (whether it was replicated, an internal operation, requested via SCIM, etc.), whether an add operation was an undelete, whether a delete operation was a soft delete, and whether a delete operation was a subtree delete.

  • Improved trace logging for HTTP-based services (e.g., the REST API, SCIM, the consent API, etc.) to make it easier to correlate events across trace logs, HTTP access logs, and general access logs.

  • Updated the replication database so that it is possible to specify a minimum number of changes to retain. Previously, it was only possible to specify the minimum age for changes to retain.

  • Updated the purge expired data plugin to support deleting expired non-leaf entries. If enabled, the expired entry and all of its subordinate entries will be removed.

  • Added support for additional equality matching rules that may be used for attributes with a JSON object syntax. Previously, the server always used case-sensitive matching for field names and case-insensitive matching for string values. The new matching rules make it possible to configure any combination of case sensitivity for these components.

  • Added the ability to configure multiple instances of the SCIM servlet extension in the server, which allows multiple SCIM service configurations in the same server.

  • Updated the server to prevent the possibility of a persistent search client that is slow to consume results from interfering with other clients and operations in the server.

  • Fixed an issue in which global sensitive attribute restrictions could be imposed on replicated operations, which could cause some types of replicated changes to be rejected.

  • Fixed an issue that could make it difficult to use third-party tasks created with the Server SDK.

  • Fixed an issue in which the correct size and time limit constraints may not be imposed for search operations processed with an alternate authorization identity.

  • Fixed an issue with the get effective rights request control that could cause it to incorrectly report that an unauthenticated client could have read access to an entry if there were any ACIs making use of the “ldap:///all” bind rule. Note that this only affected the response to a get effective rights request, and the server did not actually expose any data to unauthorized clients.

  • Fixed an issue with the dictionary password validator that could cause case-insensitive validation to behave incorrectly if the provided dictionary file contained passwords with uppercase characters.

  • Fixed an issue in servers with an account status notification handler enabled. In some cases, an administrative password reset could cause a notification to be generated on each replica instead of just the server that originally processed the change.

  • Fixed a SCIM issue in which the totalResults value for a paged request could be incorrect if the SCIM resources XML file had multiple base DNs defined.

  • Added support for running on Java 11, using either the Oracle or OpenJDK implementation, and the garbage-first garbage collector (G1GC) algorithm will be configured by default when installing the server with a Java 11 JVM. Java 8 (Oracle and OpenJDK distributions) remains supported.

  • Added support for the RedHat 7.5, CentOS 7.5, and Ubuntu 18.04 LTS Linux distributions. We also support RedHat 6.6, RedHat 6.8, RedHat 6.9, RedHat 7.4, CentOS 6.9, CentOS 7.4, SUSE Enterprise 11 SP4, SUSE Enterprise 12 SP3, Ubuntu 16.04 LTS, Amazon Linux, Windows Server 2012 R2, and Windows Server 2016. Supported virtualization platforms include VMware vSphere 6.0, VMware ESX 6.0, KVM, Amazon EC2, and Microsoft Azure.

UnboundID LDAP SDK for Java 4.0.9

We have just released version 4.0.9 of the UnboundID LDAP SDK for Java. It is available for download from the releases page of our GitHub repository, from the Files page of our SourceForge repository, and from the Maven Central Repository.

The most significant changes included in this release are:

  • Updated the command-line tool framework to allow tools to have descriptions that are comprised of multiple paragraphs.
  • Updated the support for passphrase-based encryption to work around an apparent JVM bug in the support for some MAC algorithms that could cause them to create an incorrect MAC.
  • Updated all existing ArgumentValueValidator instances to implement the Serializable interface. This can help avoid errors when trying to serialize an argument configured with one of those validators.
  • Updated code used to create HashSet, LinkedHashSet, HashMap, LinkedHashMap, and ConcurrentHashMap instances with a known set of elements to use better algorithms for computing the initial capacity for the map to make it less likely to require the map to be dynamically resized.
  • Updated the LDIF change record API to make it possible to obtain a copy of a change record with a given set of controls.
  • Added additional methods for obtaining a normalized string representation of JSON objects and value components. The new methods provide more control over case sensitivity of field names and string values, and over array order.
  • Improved support for running in a JVM with a security manager that prevents setting system properties (which also prevents access to the System.getProperties method because the returned map is mutable).

UnboundID LDAP SDK for Java 4.0.8

We have just released version 4.0.8 of the UnboundID LDAP SDK for Java. It is available for download from the releases page of our GitHub repository (https://github.com/pingidentity/ldapsdk/releases), from the Files page of our SourceForge repository (https://sourceforge.net/projects/ldap-sdk/files/), and from the Maven Central Repository (https://search.maven.org/search?q=g:com.unboundid%20AND%20a:unboundid-ldapsdk&core=gav).

The most significant changes included in this release are:

  • Fixed a bug in the modrate tool that could cause it to use a fixed string instead of a randomly generated one as the value to use in modifications.
  • Fixed an address caching bug in the RoundRobinDNSServerSet class. An inverted comparison could cause it to keep using cached addresses after they had expired, and to ignore cached addresses that had not yet expired.
  • Updated the ldapmodify tool to remove the restriction that prevented using arbitrary controls with an LDAP transaction or the Ping-proprietary multi-update extended operation.
  • Updated a number of locations in the code that caught Throwable so that they re-throw the original Throwable instance (after performing appropriate cleanup) if that instance was an Error or a RuntimeException.
  • Added a number of JSONObject convenience methods to make it easier to get the value of a specified field as a string, Boolean, number, object, array, or null value.
  • Added a StaticUtils.toArray convenience method that can be useful for converting a collection to an array when the type of element in the collection isn’t known at compile time.
  • Added support for parsing audit log messages generated by the Ping Identity Directory Server for versions 7.1 and later, including generating LDIF change records that can be used to revert change records (if the audit log is configured to record changes in a reversible form).
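The inverted-comparison caching bug described above is a common failure mode in expiration checks. A hypothetical stand-alone sketch (not the RoundRobinDNSServerSet code itself) of the correct test:

```java
public class CacheExpiry {
    // A cached value is usable only while the current time is strictly
    // before its expiration time.
    static boolean isExpired(long nowMillis, long expirationTimeMillis) {
        return nowMillis >= expirationTimeMillis;
        // The bug class described above is the inverted form
        // (nowMillis <= expirationTimeMillis), which treats expired
        // entries as fresh and fresh entries as expired.
    }

    public static void main(String[] args) {
        System.out.println(isExpired(1000L, 2000L)); // prints false (still fresh)
        System.out.println(isExpired(3000L, 2000L)); // prints true (expired)
    }
}
```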

Ping Identity Directory Server 7.0.1.0

The Ping Identity Directory Server version 7.0.1.0 has been released and is available for download from the Ping Identity website, along with the Directory Proxy Server, Data Synchronization Server, Data Metrics Server, Server SDK, and Delegated User Admin.

The release notes include a summary of the changes included in this release, but the major enhancements include:

  • Updates to the Delegated Admin application, including managing group memberships.
  • The mirror virtual attribute has been updated to make it possible to mirror the values of a specified attribute in another entry whose DN is computed in a manner that is relative to the target entry’s DN.
  • The Directory Proxy Server’s failover load-balancing algorithm has been updated to make it possible to consistently route requests targeting different branches to different sets of servers. This is useful to help distribute load more evenly across servers while still avoiding potential problems due to propagation delay.
  • Added a new replication state detail virtual attribute that provides more detailed information about an entry’s replication state.
  • Improved the server’s behavior when attempts to write to a client are blocked.
  • Added support for unbound GSSAPI connections that are not tied to any specific server instance and work better in some kinds of load-balanced environments.
  • Updated JMX MBean support so that keys and values better conform to best practices by default.

UnboundID LDAP SDK for Java 4.0.7

We have just released the UnboundID LDAP SDK for Java version 4.0.7, available for download from the releases page of our GitHub repository, from the Files page of our SourceForge project, and from the Maven Central Repository. The most significant changes in this release include:

  • Fixed an issue in the LDAPConnectionPool and LDAPThreadLocalConnectionPool classes when created with a connection that is already established and authenticated (as opposed to being created from a server set and bind request). Internally, the LDAP SDK created its own server set and bind request from the provided connection’s state information, but it incorrectly included bind credentials in the server set. Under most circumstances, this would merely cause the LDAP SDK to send two bind requests (the second a duplicate of the first) when establishing a new connection as part of the pool. However, it caused a bigger problem when using the new setBindRequest methods that were introduced in the 4.0.6 release. Because the server set was created with bind credentials, the pool would create a connection that tried to use those old credentials before sending a second bind request with the new credentials, and this would fail if the old credentials were no longer valid.
  • Fixed an issue with the behavior that the LDAP SDK exhibited when configured to automatically follow referrals. If the server returned a search result reference that the LDAP SDK could not follow (for example, because none of the URLs were valid, none of the servers could be reached, none of the searches succeeded in those servers, etc.), the LDAP SDK would assign a result code of “referral” to the search operation, which would cause it to throw an exception when the search completed (as is the case for most non-success result codes). The LDAP SDK will no longer override the result code for the search operation, but will instead use whatever result code the server returned in its search result done message. Any search result references that the LDAP SDK could not automatically follow will be made available to the caller through the same mechanism that would have been used if the SDK had not been configured to automatically follow referrals (that is, either hand them off to a search result listener or collect them in a list to include in the search result object). The LDAP SDK was already making the unfollowable search result references available in this manner, but the client probably wouldn’t have gotten to the point of looking for them because of the exception resulting from the overridden operation result code.
  • Added a new LDAPConnectionPoolHealthCheck.performPoolMaintenance method that can be used to perform processing on the pool itself (rather than on any individual connection) at regular intervals as specified by the connection pool’s health check interval. This method will be invoked by the health check thread after all other periodic health checking is performed.
  • Added a new PruneUnneededConnectionsLDAPConnectionPoolHealthCheck class that can be used to monitor the size of a connection pool over time, and if the number of available (that is, not currently in use) connections is consistently greater than a specified minimum for a given length of time, then the number of connections in the pool can be reduced to that minimum. This can be used to automatically shrink the size of the pool during periods of reduced activity.
  • Updated the Schema class to provide additional constructors and methods that can be used to attempt to retrieve the schema without silently ignoring errors about unparsable elements. Previously, if a schema entry contained one or more unparsable elements, they would be silently ignored. It is now possible to more easily obtain information about unparsable elements or to have the LDAP SDK throw an exception if it encounters any unparsable elements.
  • Added createSubInitialFilter, createSubAnyFilter, and createSubFinalFilter methods to the Filter class that are more convenient to use than the existing createSubstringFilter methods for substring filters that only have one type of component.
  • Updated the Entry.diff method when operating in reversible mode so that when altering the values of an existing attribute, the delete modifications will be ordered before the add modifications. Previously, the adds came before the deletes, but this could cause problems in some directory servers, especially when the modifications are intended to change the case of a value in a case-insensitive attribute (for example, the add could be ignored or rejected because the value already exists in the entry, or the delete could end up removing the value entirely). Ordering the deletes before the adds should provide much more reliable results.
  • Updated the modrate tool to add a new “--valuePattern” argument that can be used to specify the pattern to use to generate new values. This argument is an alternative to the “--valueLength” and “--characterSet” arguments and allows for more flexibility in the types of values that can be generated.
  • Updated the manage-account tool so that the arguments related to TOTP secrets are marked sensitive. This will ensure that the value is not displayed in the clear in certain cases like interactive mode output or tool invocation logging.
  • Added a new “streamfile” value pattern component that operates like the existing “sequentialfile” component except that it limits the amount of the file that is read into memory at any given time, so it is more suitable for reading values from very large files.
  • Added a new “timestamp” value pattern component that can be used to include either the current time or a randomly selected time from a given range in a variety of formats.
  • Added a new “uuid” value pattern component that can be used to include a randomly generated universally unique identifier (UUID).
  • Added a new “random” value pattern component that can be used to include a specified number of randomly selected characters from a given character set.
  • Added a StaticUtils.toUpperCase method to complement the existing StaticUtils.toLowerCase method.
  • Added Validator.ensureNotNullOrEmpty methods that work for collections, maps, arrays, and character sequences.
  • Added LDAPTestUtils methods that can be used to make assertions about the diagnostic message of an LDAP result or an LDAP exception.
  • Added client-side support for a new exec task that can be used to invoke a specified command in the Ping Identity Directory Server (subject to security restrictions imposed by the server).
  • Added client-side support for a new file retention task that can be used to examine files in a specified directory, identify files matching a given pattern, and delete any of those files that do not match count-based, age-based, or size-based criteria.
  • Added client-side support for a new delay task that can be used to sleep for a specified period of time, until the server work queue reports that all worker threads are idle and there are no pending operations, or until a given search or set of searches match at least one entry. The delay task is primarily intended to be used as a spacer between other tasks in a dependency chain.
  • Updated support for the ignore NO-USER-MODIFICATION request control to make it possible to set the criticality when creating an instance of the control. Previously, new instances were always critical.
  • Updated the ldapmodify tool to include the ignore NO-USER-MODIFICATION request control in both add and modify requests if the --ignoreNoUserModification argument was provided. Previously, that argument only caused the control to be included in add requests. Further, the control will now be marked non-critical instead of critical.
  • Updated the task API to add support for a number of new properties, including the email addresses of users to notify on task start and successful completion (in addition to the existing properties specifying users to email on error or on any type of completion), and flags indicating whether the server should alert on task start, successful completion, or failure.
  • Updated the argument parser’s properties file support so that it expects the file to use the ISO 8859-1 encoding, and to support Unicode escape sequences that are comprised of a backslash followed by the letter u and four hexadecimal digits.
  • Updated the tool invocation logger to add a failsafe mechanism for preventing passwords from being included in the log. Although it will already redact the values of any arguments that are declared sensitive, it will now also redact the values of any arguments whose name suggests that their value is a password.
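The properties-file behavior described above matches the standard java.util.Properties format, which also reads ISO 8859-1 bytes and decodes \uXXXX escape sequences. A quick stdlib illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class PropertiesEscapes {
    // Loads a properties file from its ISO 8859-1 byte representation,
    // decoding any \uXXXX escape sequences along the way.
    static Properties load(String fileContents) {
        Properties props = new Properties();
        try {
            props.load(new ByteArrayInputStream(
                    fileContents.getBytes(StandardCharsets.ISO_8859_1)));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return props;
    }

    public static void main(String[] args) {
        // "café" written with a Unicode escape, as an ISO 8859-1
        // properties file must store it.
        Properties props = load("greeting=caf\\u00e9\n");
        System.out.println(props.getProperty("greeting")); // prints café
    }
}
```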

Ping Identity Directory Server 7.0.0.0

We have just released the Ping Identity Directory Server version 7.0.0.0, along with supporting products including the Directory Proxy Server, Data Synchronization Server, and Data Metrics Server. They’re available to download at https://www.pingidentity.com/en/resources/downloads/pingdirectory-downloads.html.

Full release notes are available at https://documentation.pingidentity.com/pingdirectory/7.0/relnotes/, and there are a lot of enhancements, fixes, and performance improvements, but some of the most significant new features are described below.


Improved Encryption for Data at Rest

We have always supported TLS to protect data in transit, and we carefully select from the set of available cipher suites to ensure that we only use strong encryption, preferring forward secrecy when it’s available. We also already offered protection for data at rest in the form of whole-entry encryption, encrypted backups and LDIF exports, and encrypted changelog and replication databases. In the 7.0 release, we’re improving upon this encryption for data at rest with several enhancements, including:

  • Previously, if you wanted to enable data encryption, you had to first set up the server without encryption, create an encryption settings definition, copy that definition to all servers in the topology, and export the data to LDIF and re-import it to ensure that any existing data got encrypted. With the 7.0 release, you can easily enable data encryption during the setup process, and you can provide a passphrase to use to generate the encryption key. If you supply the same passphrase when installing all of the instances, then they’ll all use the same encryption key.
  • Previously, if you enabled data encryption, the server would encrypt entries, but indexes and certain other database metadata (for example, information needed to store data compactly) remained unencrypted. In the 7.0 release, if you enable data encryption, we now encrypt index keys and that other metadata so that no potentially sensitive data is stored in the clear.
  • It was already possible to encrypt backups and LDIF exports, but you had to explicitly indicate that they should be encrypted, and the encryption was performed using a key that was shared among servers in the topology but that wasn’t available outside of the topology. In the 7.0 release, you have the option to automatically encrypt backups and LDIF exports, and that’s enabled by default if you configure encryption at setup. You also have more control over the encryption key, so encrypted backups and LDIF exports can be used outside of the topology.
  • We now support encrypted logging. Log-related tools like search-logs, sanitize-log, and summarize-access-log have been updated to support working with encrypted logs, and the UnboundID LDAP SDK for Java has been updated to support programmatically reading and parsing encrypted log files.
  • Several other tools that support reading from and writing to files have also been updated so that they can handle encrypted files. For example, tools that support reading from or writing to LDIF files (ldapsearch, ldapmodify, ldifsearch, ldifmodify, ldif-diff, transform-ldif, validate-ldif) now support encrypted LDIF.


Parameterized ACIs

Our server offers a rich access control mechanism that gives you fine-grained control over who has access to what data. You can define access control rules in the configuration, but it’s also possible to store rules in the data, which ensures that they are close to the data they govern and are replicated across all servers in the topology.

In many cases, it’s possible to define a small number of access control rules at the top of the DIT that govern access to all data. But there are other types of deployments (especially multi-tenant directories) where the data is highly branched, and users in one branch should have a certain amount of access to data in their own branch but no access to data in other branches. In the past, the only way to accomplish this was to define access control rules in each of the branches. This was fine from a performance and scalability perspective, but it was a management hassle, especially when creating new branches or if it became necessary to alter the rules for all of those branches.

In the 7.0 release, parameterized ACIs address many of these concerns. Parameterized ACIs make it possible to define a pattern that is automatically interpreted across a set of entries that match the parameterized content.

For example, say your directory has an “ou=Customers,dc=example,dc=com” entry, and each customer organization has its own branch below that entry. Each of those branches might have a common structure (for example, users might be below an “ou=People” subordinate entry, and groups might be below “ou=Groups”). The structure for an Acme organization might look something like:

  • dc=example,dc=com
    • ou=Customers
      • ou=Acme
        • ou=People
          • uid=amanda.adams
          • uid=bradley.baker
          • uid=carol.collins
          • uid=darren.dennings
        • ou=Groups
          • cn=Administrators
          • cn=Password Managers

If you want to create a parameterized ACI so that members of the “ou=Password Managers,ou=Groups,ou={customerName},ou=Customers,dc=example,dc=com” group have write access to the userPassword attribute in entries below “ou=People,ou={customerName},ou=Customers,dc=example,dc=com”, you might create a parameterized ACI that looks something like the following:

(target="ldap:///ou=People,ou=($1),ou=Customers,dc=example,dc=com")(targetattr="userPassword")(version 3.0; acl "Password Managers can manage passwords"; allow (write) groupdn="ldap:///cn=Password Managers,ou=Groups,ou=($1),ou=Customers,dc=example,dc=com";)


Recurring Tasks

The Directory Server supports a number of different types of administrative tasks, including:

  • Backing up one or more server backends
  • Restoring a backup
  • Exporting the contents of a backend to LDIF
  • Importing data from LDIF
  • Rebuilding the contents of one or more indexes
  • Forcing a log file rotation

Administrative tasks can be scheduled to start immediately or at a specified time in the future, and you can define dependencies between tasks so that one task won’t be eligible to start until another one completes.

In previous versions, when you scheduled an administrative task, it would only run once. If you wanted to run it again, you needed to schedule it again. In the 7.0 release, we have added support for recurring tasks, which allow you to define a schedule that causes them to be processed on a regular basis. We have some pretty flexible scheduling logic that allows you to specify when they get run, and it’s able to handle things like daylight saving time and months with different numbers of days.

Although you can schedule just about any kind of task as a recurring task, we have enhanced support for backup and LDIF export tasks, since they’re among the most common types of tasks that we expect administrators will want to run on a recurring basis. For example, we have built-in retention support so that you can keep only the most recent backups or LDIF exports (based on either the number of older copies to retain or the age of those copies) so that you don’t have to manually free up disk space.


Equality Composite Indexes

The server offers a number of types of indexes that can help you ensure that various types of search operations can be processed as quickly as possible. For example, an equality attribute index maps each of the values for a specified attribute type to a list of the entries that contain that attribute value.
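Conceptually, an equality index behaves like an inverted map from normalized attribute values to entry IDs, so an equality search becomes a single lookup. This is only an illustration of the idea, not the server's on-disk representation:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class EqualityIndexSketch {
    // Maps each normalized attribute value to the IDs of the entries
    // containing that value.
    private final Map<String, Set<Long>> index = new HashMap<>();

    void add(String normalizedValue, long entryID) {
        index.computeIfAbsent(normalizedValue, v -> new HashSet<>())
             .add(entryID);
    }

    Set<Long> lookup(String normalizedValue) {
        return index.getOrDefault(normalizedValue, Set.of());
    }

    public static void main(String[] args) {
        EqualityIndexSketch idx = new EqualityIndexSketch();
        idx.add("jsmith", 1L);
        idx.add("jsmith", 7L);
        System.out.println(idx.lookup("jsmith")); // the IDs of matching entries
    }
}
```

A composite index adds a filter pattern (and optionally a base DN pattern) in front of this mapping, which is what lets the server keep the candidate ID sets smaller and more tightly scoped.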

In the 7.0 release, we have introduced a new type of index called a composite index. When you configure a composite index, you need to define at least a filter pattern that describes the kinds of searches that will be indexed, and you can also define a base DN pattern that restricts the index to a specified portion of the DIT.

At present, we only support equality composite indexes, which allow you to index values for a single attribute, much like an equality attribute index. However, there are two key benefits of an equality composite index over an equality attribute index:

  • As previously stated, you can combine the filter pattern with a base DN pattern. This is very useful in directories that have a lot of branches (for example, a multi-tenant deployment) where searches are often constrained to one of those branches. By combining a filter pattern with a base DN pattern, the server can maintain smaller ID sets that are more efficient to process and more tightly scoped to the search being issued.
  • The way in which the server maintains the ID sets in a composite index is much more efficient for keys that match a very large number of entries than the way it maintains the ID set for an attribute index. In an attribute index, you can optimize for either read performance or write performance of a very large ID set, but not both. A composite index is very efficient for both reads and writes of very large ID sets.

In the future, we intend to offer support for additional types of composite indexes that can improve the performance for other types of searches. For example, we’re already working on AND composite indexes that allow you to index combinations of attributes.


Delegated Administration

We have added a new delegated administration web application that integrates with the Ping Identity Directory Server and Ping Federate products to allow a selected set of administrators to manage users in the directory. For example, help desk employees might use it to unlock a user’s account or reset their password. Administrators can be restricted to managing only a defined subset of users (based on things like their location in the DIT, entry content, or group membership), and also restricted to a specified set of attributes.


Automatic Entry Purging

In the past, our server has had limited support for automatically deleting data after a specified length of time. The LDAP changelog and the replication database can be set to purge old data, and we also support automatically purging soft-deleted entries (entries that have been deleted as far as most clients are concerned, but are really just hidden so that they can be recovered if the need arises).

With the 7.0 release, we’re exposing a new “purge expired data” plugin that can be used to automatically delete entries that match a given set of criteria. At a minimum, these criteria involve looking at a specified attribute or JSON object field whose value represents some kind of timestamp, but they can be further restricted to entries in a specified portion of the DIT or entries matching a given filter. And it’s got rate limiting built in so that the background purging won’t interfere with client processing.

For example, say that you’ve got an application that generates data that represents some kind of short-lived token. You can create an instance of the purge expired data plugin with a base DN and filter that matches those types of entries, and configure it to delete entries with a createTimestamp value that is more than a specified length of time in the past.
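To make the expiration criteria concrete, here is a client-side sketch of the kind of matching the plugin performs (the objectClass name and 24-hour cutoff are hypothetical examples, and this builds a search filter string rather than the plugin's actual configuration). It computes an LDAP generalized-time cutoff and matches token entries created before it:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

public class ExpirationFilterSketch {
    // LDAP generalized time, e.g. 20240101123045.000Z
    private static final DateTimeFormatter GENERALIZED_TIME =
            DateTimeFormatter.ofPattern("yyyyMMddHHmmss.SSS'Z'")
                    .withZone(ZoneOffset.UTC);

    static String expiredTokenFilter(Instant now, long maxAgeHours) {
        String cutoff = GENERALIZED_TIME.format(
                now.minus(maxAgeHours, ChronoUnit.HOURS));
        // Matches hypothetical token entries created before the cutoff.
        return "(&(objectClass=shortLivedToken)(createTimestamp<=" +
                cutoff + "))";
    }

    public static void main(String[] args) {
        System.out.println(expiredTokenFilter(Instant.now(), 24));
    }
}
```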


Better Control over Unindexed Searches

Despite the variety of indexes defined in the server, there may be cases in which a client issues a search request that the server cannot use indexes to process efficiently. There are a variety of reasons that this may happen, including because there isn’t any applicable index defined in the server, because there are so many entries that match the search criteria that the server has stopped maintaining the applicable index, or because the search targets a virtual attribute that doesn’t support efficient searching.

An unindexed search can be very expensive to process because the server needs to iterate across each entry in the scope of the search to determine whether it matches the search criteria. Processing an unindexed search can tie up a worker thread for a significant length of time, so it’s important to ensure that the server only actually processes the unindexed searches that are legitimately authorized. We already required clients to have the unindexed-search privilege, limited the number of unindexed searches that can be active at any given time, and provided an option to disable unindexed searches on a per-client-connection-policy basis.

In the 7.0 release, we’ve added additional features for limiting unindexed searches. They include:

  • We’ve added support for a new “reject unindexed searches” request control that can be included in a search request to indicate that the server should reject the request if it happens to be unindexed, even if it would otherwise have been permitted. This is useful for a client that has the unindexed-search privilege but wants a measure of protection against inadvertently requesting an unindexed search.
  • We’ve added support for a new “permit unindexed searches” request control, which can be used in conjunction with a new “unindexed-search-with-control” privilege. If a client has this privilege, then only unindexed search requests that include the permit unindexed searches control will be allowed.
  • We’ve updated the client connection policy configuration to make it possible to only allow unindexed searches that include the permit unindexed searches request control, even if the requester has the unindexed-search privilege.


GSSAPI Improvements

The GSSAPI SASL mechanism can be used to authenticate to the Directory Server using Kerberos V. We’ve always supported this mechanism, but the 7.0 server adds a couple of improvements to that support.

First, it’s now possible for the client to request an authorization identity that is different from the authentication identity. In the past, it was only possible to use GSSAPI if the authentication identity string exactly matched the authorization identity. Now, the server will permit the authorization identity to be different from the authentication identity (although the user specified as the authentication identity must have the proxied-auth privilege if they want to be able to use a different authorization identity).

We’ve also improved support for using GSSAPI through a hardware load balancer, particularly in cases where the server uses a different FQDN than was used in the client request. This generally wasn’t an issue when a Ping Identity Directory Proxy Server was used to perform the load balancing, but it could have been a problem in some cases with hardware load balancers or other cases in which the client might connect to the server with a different name than the server thinks it’s using.


Tool Invocation Logging

We’ve updated our tool frameworks to add support for tool invocation logging, which can be used to record the arguments and result for any command-line tools provided with the server. By default, this feature is only enabled for tools that are likely to change the state of the server or the data contained in the server, and by default, all of those tools will use the same log file. However, you can configure which (if any) tools should be logged, and which files should be used.

Invocation logging includes two types of log messages:

  • A launch log message, which is recorded whenever the tool is first run but before it performs its actual processing. The launch log message includes the name of the tool, any arguments provided on the command line, any arguments automatically supplied from a properties file, the time the tool was run, and the username for the operating system account that ran the tool. The values of any sensitive arguments (for example, those that might be used to supply passwords) will be redacted so that information will not be recorded in the log.
  • A completion log message, which is recorded whenever the tool completes its processing, regardless of whether it completed successfully or exited with an error. This will at least include the tool’s numeric exit code, but in some cases, it might also include an exit message with additional information about the processing performed by the tool. Note that there may be some circumstances in which the completion log message may not be recorded (for example, if the tool is forcefully terminated with something like a “kill -9”).
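The redaction failsafe mentioned above (also noted for the LDAP SDK's tool invocation logger) can be sketched as follows. This is a hypothetical illustration of the idea, not the actual logger's logic:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class InvocationLogSketch {
    // Redacts the value following any argument whose name suggests that
    // its value is a password, even if the argument was never explicitly
    // declared sensitive.
    static List<String> redactArguments(List<String> args) {
        List<String> logged = new ArrayList<>();
        boolean redactNext = false;
        for (String arg : args) {
            if (redactNext) {
                logged.add("*****");
                redactNext = false;
            } else {
                logged.add(arg);
                if (arg.startsWith("--") &&
                        arg.toLowerCase(Locale.ROOT).contains("password")) {
                    redactNext = true;
                }
            }
        }
        return logged;
    }

    public static void main(String[] args) {
        System.out.println(redactArguments(List.of(
                "--bindDN", "cn=admin", "--bindPassword", "secret123")));
        // prints [--bindDN, cn=admin, --bindPassword, *****]
    }
}
```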

UnboundID LDAP SDK for Java 4.0.6

We have just released the UnboundID LDAP SDK for Java version 4.0.6, available for download from the releases page of our GitHub repository, from the Files page of our SourceForge project, and from the Maven Central Repository. The most significant changes in this release include:

  • We fixed a number of issues in the way that the LDAP SDK handled characters whose UTF-8 representation requires more than two bytes, including characters outside the Basic Multilingual Plane, which require two Java chars (a surrogate pair) to represent a single character. Issues related to these characters were found in code for matching rules, DNs and RDNs, and search filters.
  • We fixed an issue in the ldapsearch tool that could cause it to use an incorrect scope when constructing search requests from LDAP URLs that were read from a file.
  • We fixed a bug in schema handling that could arise if an object class definition did not explicitly specify an object class type (STRUCTURAL, AUXILIARY, or ABSTRACT). In some cases, the type could be incorrectly inherited from the superclass rather than assuming the default type of STRUCTURAL.
  • We updated the LDAPConnectionPool and LDAPThreadLocalConnectionPool classes to add new setServerSet and setBindRequest methods. These new methods make it possible to update an existing pool to change the logic that it uses for establishing and authenticating new connections.
  • We added a new LDAPRequest.setReferralConnector method that makes it possible to set a custom referral connector on a per-request basis. We also added a new RetainConnectExceptionReferralConnector class that makes it easier to obtain the exception (if any) that was caught on the last attempt to establish a connection for the purpose of following a referral.
  • We updated the in-memory directory server to better handle any java.lang.Errors that occur while interacting with a client connection. These kinds of errors should not happen under normal circumstances but may be generated by third-party code (for example, an InMemoryOperationInterceptor), and it is possible for the JVM to generate them in extraordinary circumstances like running out of memory. Previously, the thread responsible for interacting with that client would exit without returning a response for the operation being processed and without closing the connection. The LDAP SDK will now attempt to return an error (if appropriate for the type of operation being processed) and close the connection.
  • We updated the manage-certificates tool to fix an incorrect interpretation of the path length element of a basic constraints extension.
  • We updated manage-certificates to add support for importing PEM-encoded RSA private keys that are not wrapped in a PKCS #8 envelope (that is, from a file whose header contains “BEGIN RSA PRIVATE KEY” instead of “BEGIN PRIVATE KEY”). Previously, it was only possible to import private keys in the PKCS #8 format.
  • We updated manage-certificates to add an --allow-sha-1-signature-for-issuer-certificates argument to the check-certificate-usability subcommand. If this argument is provided, the tool will continue to call out issuer certificates whose signature is based on the now-considered-weak SHA-1 digest algorithm, but it will no longer exit with an error just because of that issue. The argument has no effect for certificates that use a signature based on the extremely weak MD5 digest, and it also has no effect if the certificate at the head of the chain (that is, the server certificate rather than the root certificate) has a SHA-1-based signature.
  • We added client-side support for a new “reload HTTP connection handler certificates” task that may be used in some Ping Identity server products to request that the server dynamically reload the certificate key and trust stores used by all HTTP connection handler instances that provide support for HTTPS.
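On the first of those fixes: the reason a single character can need two Java chars is that Java Strings store code points outside the Basic Multilingual Plane as a surrogate pair. A quick self-contained illustration (plain Java, no LDAP SDK required):

```java
public class SurrogatePairDemo {
    public static void main(String[] args) {
        // U+1F600 requires four bytes in UTF-8, and a Java String stores
        // it as a surrogate pair: two chars, but only one code point.
        String s = new String(Character.toChars(0x1F600));
        System.out.println(s.length());                      // prints 2
        System.out.println(s.codePointCount(0, s.length())); // prints 1
    }
}
```

Code that iterates over such a String one char at a time (as the affected matching rule, DN, and filter code did) will see two halves of a character rather than the character itself, which is exactly the class of bug fixed in this release.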

CVE-2018-1000134 and the UnboundID LDAP SDK for Java

On Friday, March 16, 2018, CVE-2018-1000134 was published, describing a vulnerability in the UnboundID LDAP SDK for Java. The vulnerability has been fixed in LDAP SDK version 4.0.5, which is available for immediate download from the LDAP.com website, from the releases page of our GitHub repository, from the Files page of our SourceForge project, and from the Maven Central Repository.

This post will explain the issue in detail (see the release notes for information about other changes in LDAP SDK version 4.0.5). However, to quickly determine whether your application is vulnerable, you should check to see if all of the following conditions are true:

  • You are using the LDAP SDK in synchronous mode. Although this mode is recommended for applications that do not require asynchronous functionality, the LDAP SDK does not use this mode by default.
  • You use the LDAP SDK to perform simple bind operations for the purpose of authenticating users to a directory server. This is a very common use case for LDAP-enabled applications.
  • Your application does not attempt to verify whether the user actually provided a password. This is unfortunately all too common for LDAP-enabled applications.
  • The simple bind requests are sent to a directory server that does not follow the RFC 4513 section 5.1.2 recommendation to reject simple bind requests with a non-empty DN and an empty password. Although this recommendation is part of the revised LDAPv3 specification published in 2006, there are apparently some directory servers that still do not follow this recommendation by default.

If your application meets all of these criteria, then you should take action immediately to protect yourself. The simplest way to fix the vulnerability in your application is to update it to use the 4.0.5 release of the LDAP SDK. However, you should also ensure that your applications properly validate all user input, and it may also be a good idea to consider switching to a more modern directory server.
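As a concrete example of the input validation mentioned above, here is a minimal sketch of a guard an application might apply before ever constructing a simple bind request. The class and method names are illustrative, not part of the LDAP SDK API:

```java
public class BindGuard {
    /**
     * Returns true only if the DN/password pair is safe to send as a
     * simple bind: either a true anonymous bind (both values empty) or
     * an authenticated bind with a non-empty password.
     */
    public static boolean isSafeSimpleBind(String dn, String password) {
        if (password.isEmpty()) {
            // An empty password is only acceptable when the DN is also
            // empty; otherwise the server may treat the request as an
            // anonymous bind and return success.
            return dn.isEmpty();
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isSafeSimpleBind("", ""));                         // anonymous bind: allowed
        System.out.println(isSafeSimpleBind("uid=jdoe,dc=example,dc=com", "")); // rejected
    }
}
```

A check like this belongs in the login handler itself, so an empty password is rejected before any LDAP traffic is generated.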

The Vulnerability in LDAPv3

The original LDAPv3 protocol specification was published as RFC 2251 in December 1997. LDAPv3 is a very impressive protocol in most regards, but perhaps the most glaring problem in the specification lies in the following paragraph in section 4.2.2:

If no authentication is to be performed, then the simple authentication option MUST be chosen, and the password be of zero length. (This is often done by LDAPv2 clients.) Typically the DN is also of zero length.

It’s the word “typically” in that last sentence that has been the source of a great many vulnerabilities in LDAP-enabled applications. Usually, when you want to perform an anonymous simple bind, you provide an empty string for both the DN and the password. However, according to the letter of the specification above, you don’t have to provide an empty DN. As long as the password is empty, the server will treat it as an anonymous simple bind.

In applications that use an LDAP simple bind to authenticate users, it’s a very common practice to provide two fields on the login form: one for the username (or email address or phone number or some other kind of identifier), and one for the password. The application first performs a search to see if it can map that username to exactly one user in the directory, and if so, it performs a simple bind with the DN of that user’s entry and the provided password. As long as the server returns a “success” response to the bind request, the application considers the user authenticated and grants them whatever access that user is supposed to have.

However, a problem can arise if the application just blindly takes whatever password was provided in the login form and plugs it into the simple bind request without actually checking to see whether the user provided any password at all. In such cases, if the user provided a valid username but an empty password, then the application will perform a simple bind request with a valid DN but no password. The directory server will interpret that as an anonymous simple bind and will return a success result, and the application will assume that the user is authenticated even though they didn’t actually provide any password at all.

This is such a big problem in LDAP-enabled applications that it was specifically addressed in the updated LDAPv3 specification published in June 2006. RFC 4513 section 5.1.2 states the following:

Unauthenticated Bind operations can have significant security issues (see Section 6.3.1). In particular, users intending to perform Name/Password authentication may inadvertently provide an empty password and thus cause poorly implemented clients to request Unauthenticated access. Clients SHOULD be implemented to require user selection of the Unauthenticated Authentication Mechanism by means other than user input of an empty password. Clients SHOULD disallow an empty password input to a Name/Password Authentication user interface. Additionally, Servers SHOULD by default fail Unauthenticated Bind requests with a resultCode of unwillingToPerform.

Further, section 6.3.1 of the same RFC states:

Operational experience shows that clients can (and frequently do) misuse the unauthenticated access mechanism of the simple Bind method (see Section 5.1.2). For example, a client program might make a decision to grant access to non-directory information on the basis of successfully completing a Bind operation. LDAP server implementations may return a success response to an unauthenticated Bind request. This may erroneously leave the client with the impression that the server has successfully authenticated the identity represented by the distinguished name when in reality, an anonymous authorization state has been established. Clients that use the results from a simple Bind operation to make authorization decisions should actively detect unauthenticated Bind requests (by verifying that the supplied password is not empty) and react appropriately.

In directory servers that follow the recommendation from RFC 4513 section 5.1.2, clients can perform an anonymous simple bind by providing an empty DN and an empty password, but an attempt to bind with a non-empty DN and an empty password will be rejected. This very good recommendation was made over ten years ago, and the code change needed to implement it is probably very simple. However, for some reason, there are directory server implementations out there that haven’t been updated to follow this recommendation, and therefore leave client applications open to this inadvertent vulnerability.
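The recommended server-side behavior boils down to a simple classification of incoming simple bind requests. Here is a plain-Java sketch of it (the class, method, and return values are illustrative, not any server’s actual implementation):

```java
public class SimpleBindPolicy {
    // Sketch of RFC 4513 section 5.1.2: an empty DN with an empty
    // password is an anonymous bind, while a non-empty DN with an empty
    // password is an unauthenticated bind that a compliant server
    // rejects by default with a result code of unwillingToPerform.
    public static String classify(String dn, String password) {
        if (!password.isEmpty()) {
            return "verify-credentials";
        }
        return dn.isEmpty() ? "anonymous" : "unwillingToPerform";
    }
}
```

The change needed to implement this in a server really is as small as the conditional above, which makes it all the more surprising that some implementations still haven’t adopted it.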

The Vulnerability in the UnboundID LDAP SDK for Java

Ever since its initial release, the UnboundID LDAP SDK for Java has attempted to protect against simple bind requests that include a non-empty DN with an empty password. The LDAPConnectionOptions class provides a setBindWithDNRequiresPassword(boolean) method that you can use to indicate whether the LDAP SDK will reject a simple bind request that has a non-empty DN with an empty password. If you don’t explicitly use this option, then the LDAP SDK will assume a default value of true. If you try to send a simple bind request that includes a non-empty DN and an empty password, then the LDAP SDK won’t actually send any request to the server but will instead throw an LDAPException with a result code of ResultCode.PARAM_ERROR and a message of “Simple bind operations are not allowed to contain a bind DN without a password.”

Or at least, that’s the intended behavior. And that is the behavior that you’ll get if you send the bind request in the asynchronous mode that the LDAP SDK uses by default. However, Stanis Shkel created GitHub issue #40 (“processSync in SimpleBindRequest allows empty password with set bindDN”), which points out that this check was skipped for connections operating in synchronous mode.

LDAP is an asynchronous protocol. With a few exceptions, it’s possible to have multiple operations in progress simultaneously over the same LDAP connection. To support that asynchronous capability, the LDAP SDK maintains an extra background thread that constantly reads data from the connection and makes sure that any data sent by the server gets delivered to whichever thread is waiting for it. This is just fine most of the time, but it does come at the cost of increased resource consumption, and a small performance hit from handing off data from one thread to another. To minimize this impact for applications that don’t take advantage of the asynchronous capabilities that LDAP provides, we added a synchronous mode to the LDAP SDK way back in version 0.9.10 (released in July of 2009). In this mode, the same thread that sends a request to the server is the one that waits for and reads the response. This can provide better performance and lower resource consumption, but you have to explicitly enable it using the LDAPConnectionOptions.setUseSynchronousMode(boolean) method before establishing a connection.

In the course of implementing support for the synchronous mode for a simple bind request, we incorrectly put the check for synchronous mode before the check for an empty password. For a connection operating in synchronous mode, we branched off to another part of the code and skipped the check for an empty password. The fix was simple: move the check for an empty password above the check for synchronous mode. It was committed about three and a half hours after the issue was reported, along with a unit test to ensure that a simple bind request with a non-empty DN and an empty password is properly rejected when operating in synchronous mode (there was already a test to ensure the correct behavior in the default asynchronous mode).
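The shape of both the bug and the fix can be sketched in a few lines of plain Java. This is an illustration of the corrected control flow, not the LDAP SDK’s actual internals; the class and method names and return values are hypothetical:

```java
public class SimpleBindSketch {
    public static String processBind(String dn, String password,
                                     boolean synchronousMode) {
        // The empty-password check runs before the mode branch, so
        // connections in synchronous mode get the same protection as
        // connections in the default asynchronous mode. The bug was
        // that the synchronousMode branch came first and skipped this.
        if (!dn.isEmpty() && password.isEmpty()) {
            throw new IllegalArgumentException("Simple bind operations " +
                "are not allowed to contain a bind DN without a password.");
        }
        // Only after validation do we branch on the connection mode.
        return synchronousMode ? "processed synchronously"
                               : "processed asynchronously";
    }
}
```

With the validation hoisted above the branch, there is no code path on which a non-empty DN with an empty password can reach the server.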

Conditions Necessary for the Vulnerability

Although there was unquestionably a bug in the LDAP SDK that created the possibility for this vulnerability, a number of factors could have prevented an application from being susceptible to it. Only an application that meets all of the following conditions would have been vulnerable:

  • The application must have explicitly enabled the use of synchronous mode when creating an LDAP connection or connection pool. If the application was using the default asynchronous mode, it would not have been vulnerable.
  • The application must have created simple bind requests from untrusted and unverified user input. If the application did not create simple bind requests (for example, because it did not perform binds at all, or because it used SASL authentication instead of simple binds), then it would not have been vulnerable. Alternately, if the application validated the user input to ensure that it would not attempt to bind with an empty password, then it would not have been vulnerable.
  • The application must have sent the simple bind request to a server that does not follow the RFC 4513 recommendations. If the server is configured to reject simple bind requests that contain a non-empty DN with an empty password, then an application communicating with that server would not have been vulnerable.

While we strongly recommend updating to LDAP SDK version 4.0.5, which no longer has the bug described in CVE-2018-1000134, we also strongly recommend ensuring that applications properly validate all user input as additional mitigation against problems like this. And if you’re using a directory server that hasn’t been updated to apply a very simple update to avoid a problem that has been well known and clearly documented for well over a decade, then perhaps you should consider updating to a directory server that takes security and standards compliance more seriously.