Ping Identity Directory Server 7.3.0.0

We have just released version 7.3.0.0 of the Ping Identity Directory Server, and it’s available for download now, along with the companion Directory Proxy Server, Data Synchronization Server, Metrics Engine, and Server SDK products. The release notes contain a blow-by-blow listing of the new features, enhancements, and fixes that it contains, but here are some of the highlights:

  • Added support for Red Hat Enterprise Linux 7.6, CentOS 7.6, Amazon Linux 2, and Windows Server 2019.
  • Added support for Docker 18.09.0 on Ubuntu 18.04 LTS.
  • Enabled support for TLSv1.3 by default on JVMs that support it (which should be Java 11 and higher).
  • Added support for server profiles and a new manage-profile tool that can help install and manage the server using the DevOps “infrastructure as code” principle. A server profile can encapsulate the setup commands, configuration changes, Server SDK extensions, additional server root files, and other components of an installation, and it can be used in conjunction with orchestration frameworks to create a new instance with a given profile or to update an existing instance (for example, in a Blue/Green deployment) to apply a new profile.
  • Updated the server to support encrypting the contents of PIN files needed to unlock the certificate key and trust stores. If data encryption is enabled during setup, then the default PIN files will be automatically encrypted. We also updated the command-line tool framework so that files containing passwords can be encrypted, as well as the entire tools.properties file.
  • Added a cipher stream provider that can be used to protect the contents of the encryption settings database with a key from the Amazon Key Management Service.
  • Added a cipher stream provider that can be used to protect the contents of the encryption settings database with a passphrase obtained from a HashiCorp Vault server.
  • Added a pass-through authentication plugin that can make it possible to authenticate accounts with credentials from the PingOne for Customers service.
  • Added support for insignificant configuration archive attributes. Updates to the configuration that only involve one or more of these attributes may be omitted from the configuration archive to prevent it from growing too large over time. For example, if last login time tracking is enabled and does not explicitly exclude root users, then any time a root user authenticated, their entry would be updated with the new last login time, and that update would previously have been stored in the configuration archive. Such updates will no longer be archived by default.
  • Enabled assured replication by default for all add, delete, and modify DN operations. Enabled assured replication by default for all modify operations that alter passwords or key password policy state attributes. With these changes, the server will now delay the response to a matching write operation until it has confirmed that the change has been replicated to all local servers, up to a maximum delay of one second.
  • Updated the server to automatically remove references to obsolete replicas. A replica is obsolete when it has been disabled and all changes from it are older than the replication purge delay.
  • Updated the changelog and replication databases to add a target-database-size configuration property that makes it possible to control purging based on the size of the database in addition to the age of the changes that they contain.
  • Updated the behavior the server exhibits when it encounters a database reference to an attribute type definition that has been removed from the schema. In the 7.2 release, the server could fail when attempting to open a backend with a reference to a nonexistent attribute type. The server will now try to prevent the removal of attribute type definitions that are referenced by one or more backends, but if it does encounter a reference to a no-longer-existent attribute type, it will raise an administrative alert and continue processing the operation under the assumption that the missing attribute type uses a directory string syntax with case-ignore matching.
  • Added an HTTP servlet extension that can be used to retrieve the server’s current availability state. It accepts GET, POST, or HEAD requests sent to a specified endpoint and returns a minimal response whose HTTP status code may be used to determine whether the server considers itself to be AVAILABLE, DEGRADED, or UNAVAILABLE. This may be useful when routing HTTP requests through a load-balancer that can use requests of this type to assess the health of backend servers. It may also be helpful for orchestration frameworks that may wish to destroy and replace instances that become unavailable.
  • Added support for new plugin types that can be used to clean up directory entries for expired or inactive PingFederate persistent sessions.
  • Updated the server to prevent creating virtual attributes that attempt to generate values for the aci or ds-cfg-global-aci attribute types, as access control rules cannot be defined as virtual attributes. Also, prevented creating virtual attributes that attempt to generate values for the member or uniqueMember attribute types, as static group membership cannot be altered using virtual attributes.
  • Added logging for DNS lookups that take longer than a warning threshold (10 seconds by default). DNS resolution timing is also available in a new monitor entry.
  • Updated the topology management framework to make it easier to diagnose connection errors, including adding monitoring information for all failed outbound connections, and raising alarms and alerts when a server fails to connect to a peer server within a configured grace period.
  • Updated the dsreplication tool so that it can work with a node that is currently out of sync with the topology master.
  • Updated the dsreplication tool to allow removing a defunct server even if that server is currently online. Also, added the ability to automatically retry a failed attempt to remove a defunct server.
  • Updated the server to automatically remove a server from the topology when dsreplication disable is used to disable replication for the last non-schema domain.
  • Updated the encrypt-file tool to display a notice recommending the use of the --decompress-input argument when decrypting a file that also appears to be GZIP-compressed.
  • Updated the result code map to make it possible to override the default result code that the server returns when a client tries to perform a password-based bind against an account that does not have a password.
  • Updated the ldapdelete command-line tool to add support for client-side subtree delete, following referrals, deleting entries that match search filters, recording failures in a rejects file, rate limiting, and a variety of additional controls.
  • Updated the HTTP configuration so that the server no longer includes stack traces in generated error pages by default.
  • Updated the “Debug Trace Logger” and “File-Based Trace Logger” log publishers so that they exclude Admin Console activity by default.
  • Updated trace log publishers to support recording events related to access token validation.
  • Updated the file retention recurring task so that it no longer logs an informational message if there are no matching files to delete.
  • Added a correlation-id-response-header property to HTTP servlet extension configuration objects that can be used to set the response header used for correlation IDs. If set for a servlet extension, this value will override the value that would have otherwise been inherited from the HTTP connection handler.
  • Added an indent-ldap-filter tool that can make it easier to visualize the structure and components of a complex search filter.
  • Updated the setup utility to add a --skipHostnameCheck argument that can be used to bypass validation of the provided server hostname.
  • Updated the docs/build-info.txt endpoint to remove version information. That version information is now available in the build-info.txt file in the server root directory.
  • Updated the Directory Proxy Server to change the default load-balancing algorithm that it uses for directing client requests to backend servers. Previously, it always used a fewest operations strategy to send each request to the server with the smallest number of outstanding requests. The new strategy still chooses the server with the fewest outstanding operations for read operations but uses a failover strategy to consistently send write operations to the same server. When combined with the Directory Server’s default use of assured replication, this load-balancing strategy can dramatically reduce the likelihood of replication or uniqueness conflicts while minimizing the performance impact of a purely failover-based approach.
  • Updated the Directory Proxy Server to reduce the default maximum connection age from one hour to ten minutes. This should help avoid problems resulting from firewalls or other networking equipment that silently close connections that have been open for too long.
  • Updated the Directory Proxy Server to add an index-priming-idle-listener-timeout property to the entry-balancing request processor configuration. This property specifies the maximum length of time that the server will wait for a response to an attempt to prime the global index before it will give up and retry the attempt.
  • Updated the Directory Proxy Server to reduce the likelihood of lock contention in health checks used to check the status of replication in a backend server.
  • Updated the Data Synchronization Server to support the PingOne for Customers service as a sync source. It was already possible to use PingOne for Customers as a sync destination.
  • Updated the Data Synchronization Server’s support for PingOne for Customers as a sync destination so that it is possible to specify a default population using the name of the population as an alternative to its ID. Population names can also be used in attribute mappings.
  • Updated the Data Synchronization Server to support Apache Kafka as a sync destination. Changes are provided as JSON-formatted representations of the entries before and after the change was applied.
  • Updated the Data Synchronization Server to make it possible to impose a rate limit on a sync pipe so that it does not adversely impact the performance of the destination server.
  • Updated the Data Synchronization Server to make it easier to construct JSON objects to store in the value of a specified attribute in the sync destination.
  • Updated the Data Synchronization Server to support LDAP filters that use extensible-match or approximate-match components. The Data Synchronization Server supports the same set of matching rules as the Directory Server.
  • Updated the Data Synchronization Server to add an attribute-comparison-method configuration property to sync classes. This property can be used to indicate whether to perform syntax-based or byte-for-byte comparisons when identifying what content was updated by a change.
  • Updated the Data Synchronization Server to add base64-encode-value and base64-decode-value properties to direct attribute mappings to facilitate synchronizing binary data.
  • Updated the Delegated Administration configuration. Delegated Admin Resource Types have been removed and replaced by REST Resource Types. Delegated Administrators and Delegated Group Administrators were removed and replaced by Delegated Admin Rights and Delegated Admin Resource Rights. When updating an existing server, older definitions will automatically be converted to their appropriate new versions.
  • Updated the Server SDK to make it easier to read and write data encrypted with keys from the server’s encryption settings database, and for obtaining information about the set of encryption settings definitions available in the server.
  • Updated the Server SDK to make it possible for a pre-parse bind plugin to convert a bind request from simple to SASL, or vice-versa. Added an example SASL mechanism handler that can be used to provide details of a successful or failed bind using attachments to an internal operation. Fixed an issue in the SASL bind result factory that could prevent a matched DN from being included in the response. Added the ability to include additional text in the access log message for an operation without having that text included in the response to the client.
  • Updated the Server SDK to provide support for access token validators for all types of products. It was previously only available for the Data Governance Server.
  • Improved the diagnostic message that the server returns when rejecting a proxied authorization attempt because the target account’s password policy state does not permit that user to authenticate.
  • Fixed an issue in which the server could incorrectly reject an attempt to change a user’s password in a single modify operation that included a delete modification with no values (indicating that all existing password values should be removed) followed by an add modification to supply the desired new password.
  • Fixed an issue that could cause an error when generating an encrypted LDIF export of a data set with a very large number of non-leaf entries. In such cases, the data is written to multiple files that are merged at the end of the process, but a problem could have prevented those files from being properly merged. This did not affect the usability or integrity of the exported data; it merely required the administrator to explicitly specify each of the files in the appropriate order when performing the import.
  • Fixed an issue in the access control handler in which it could incorrectly require the “export” and “import” rights for a modify DN request that includes a newSuperior that matches the DN of the entry’s current parent (which matches the behavior it exhibited for modify DN requests that did not include the newSuperior element). The export and import rights should only be required if the entry is being moved beneath a new parent.
  • Fixed an issue that could allow a modify operation to alter an entry in a way that left it without one or more of the superior object classes that it should have.
  • Fixed an issue in which changes to a dynamic group’s memberURL attribute sometimes did not take effect until after a restart.
  • Fixed an issue in which the server may not enter lockdown mode if it is missing replication changes that are no longer available in the topology, and if it has been restarted without addressing that problem.
  • Fixed an issue that could interfere with the operation of the stop-server.bat command on Windows systems configured with a locale that uses a comma instead of a period as the decimal separator.
  • Fixed issues that could interfere with the parsability of the periodic stats logger output when the server is run on systems configured with a locale that uses a comma instead of a period as the decimal separator.
  • Fixed an issue in which certain component initialization failure messages were written with a log level that was too low for them to be recorded in the server’s error log by default, making it difficult to diagnose problems with those components.
  • Fixed the ordering of the consent-service-cfg.dsconfig batch commands so that bearer token authentication is enabled after the unprivileged consent on which it depends.
  • Fixed an issue that could cause a negative etime to appear in the access log when using assured replication.
  • Fixed an issue that could prevent dsreplication disable from removing replica IDs from the topology when one or more replication domains are disabled.
  • Fixed an issue that could cause the server to report an error when enabling or disabling a backend if there were any disabled notification managers defined in the server.
  • Fixed an issue in which the Directory Proxy Server could reject add attempts if all servers in an entry-balancing backend set had a health check state of DEGRADED or UNAVAILABLE.
  • Fixed an issue in which backups of the encryption settings database could be encrypted with a key from the encryption settings database.
  • Fixed an issue that could interfere with the ability to assign privileges via the mirror virtual attribute if the values to mirror were contained in another entry and were not accessible to unauthenticated clients.
  • Fixed an issue that could interfere with the ability to delete an entry containing uncached content if the LDAP changelog was enabled and configured to record changes in reversible form.
  • Fixed an issue that could prevent JMX clients from establishing SSL-encrypted connections.
  • Fixed an issue that could prevent HTTP-based connections from being associated with a client connection policy.
  • Fixed an issue in which a SCIM client was not permitted to add a member to a groupOfNames or groupOfEntries group.
  • Fixed an issue in which the startIndex value for a SCIM request could be incorrect if the server was configured with more than one base DN in the scim-resources.xml file.
  • Fixed an issue in which the config-diff tool may not identify differences that result from changing the order of values in an order-dependent property.
  • Fixed an issue in the Data Synchronization Server in which operational attributes may not be requested from an LDAP sync source if the LDAP filter was a nested filter.
  • Fixed an issue in the Data Synchronization Server in which a resync attempt could fail against Active Directory or PingOne for Customers when run with multiple passes.
  • Fixed an issue with the client-side validation properties that the haystack password validator would return in a get password quality requirements extended response. The values were human-readable descriptions of the validation properties rather than machine-parsable values.
  • Fixed an issue that could cause the server to encounter an internal error when processing a set subtree accessibility extended operation against an empty backend.
  • Fixed an issue in which a cryptographic error could interfere with inter-server authentication for sharing mirrored configuration data.
  • Fixed an installer issue in which the Admin Console’s trust store type could be set incorrectly if it was different from the key store type.
  • Disabled the fingerprint and subject attribute to user attribute certificate mappers by default for new installations (upgrades of existing installations will not be affected). These certificate mappers are rarely used and require the server to be configured with additional indexing before they can be used, and the lack of those indexes caused internal errors to be raised in the server on startup.

Ping Identity Directory Server versions 7.2.1.1 and 7.0.1.3

Ping Identity Directory Server versions 7.2.1.1 and 7.0.1.3 have been released. These are security updates, and customers running 7.x versions are strongly encouraged to upgrade.

The most important update included in these releases is a fix for a critical security issue introduced in the 7.0.0.0 version that could cause certain passwords to be recorded in the clear on the server filesystem. There are two instances in which this could have occurred:

  • When creating an encrypted backup of the alarms, alerts, configuration, encryption settings, schema, tasks, or trust store backends, the backup descriptor was supposed to include the identifier of the encryption settings definition that was used to protect the contents of the backup. Instead of this identifier, the server would incorrectly include the password that backed that encryption settings definition. This issue did not affect backups of local DB backends (like userRoot), the LDAP-accessible changelog, or the replication database.

  • The server maintains a tool invocation log (logs/tools/tool-invocation.log), which keeps track of certain commands that are run on the system, especially those that may be used to alter the server configuration or data. Among other things, this log includes the name of the tool and the arguments used to run it. Sensitive arguments, like those used to provide passwords, should automatically be redacted. However, if a tool is run with an argument that provides the path to a file containing a password, a bug could have caused the tool invocation log to record the contents of the first line of that file (which usually contains the password itself) rather than the path to that file. The following command-line tools were affected by this issue:

    • backup
    • create-initial-config
    • create-initial-proxy-config
    • dsreplication
    • enter-lockdown-mode
    • export-ldif
    • import-ldif
    • ldappasswordmodify
    • leave-lockdown-mode
    • manage-tasks
    • manage-topology
    • migrate-ldap-schema
    • parallel-update
    • prepare-endpoint-server
    • prepare-external-server
    • realtime-sync
    • rebuild-index
    • re-encode-entries
    • reload-http-connection-handler-certificates
    • reload-index
    • remove-defunct-server
    • restore
    • rotate-log
    • stop-server

Other tools were not affected by this second issue. Also note that this issue only involved passwords provided in files that were directly referenced as arguments on the command line. Passwords that were provided directly on the command line, and passwords that were automatically included because of their presence in a tools.properties file, were properly redacted. Because of the nature of this issue, regular user passwords are not likely to have been exposed, but the passwords of administrators that may have run commands on the server system could have been recorded.

In both issues above, the passwords were written to a file on the server filesystem with permissions that made them only accessible to the account used to run the server. Other accounts on the system should not have been able to read the contents of those files. Nevertheless, if you believe that any passwords may have been compromised, we recommend taking the following steps to mitigate the risk:

  1. Update the server to a version that includes the fix for this issue. If you’re running a 7.2 version, then you should upgrade to the 7.2.1.1 release. If you’re running a 7.0 version, then you should upgrade to either version 7.2.1.1 or version 7.0.1.3.
  2. If you believe that any user passwords may have been exposed in the logs/tools/tool-invocation.log file, then change the passwords for those users and sanitize or delete that log file.
  3. If you believe that an encryption settings definition password may have been exposed in a backup descriptor, then create a new encryption settings definition, set it as the preferred definition for all subsequent encryption operations, export your data to LDIF, and re-import the data so that it is re-encrypted with the new definition. Create new backups, and destroy old backups with the compromised password.

In addition to fixing the bugs that led to the potential exposure of these passwords, we have added additional automated tests to help ensure that other problems like this do not occur in the future.

Other Changes Included in the 7.2.1.1 Release

The following additional fixes have been included in the 7.2.1.1 release:

  • Updated the behavior that the server exhibits if an attribute type is removed from the schema while that attribute type is still referenced by one or more server backends. In earlier releases, the server could fail to open a backend that referenced an attribute type that is no longer defined in the schema. The server will now permit the backend to be opened, but will generate an alert about any missing attribute type definitions on startup, and will also generate an alert on any access to an entry that contains a reference to a missing attribute type. The server will also attempt to prevent the removal of an attribute type that is still referenced by any of the backends.
  • Fixed an issue in which the stop-server.bat batch file may not function properly on Windows systems with a locale that uses a character other than a period as a decimal separator.
  • Fixed an issue in which the periodic stats logger output could have been difficult to parse on systems with a locale that uses a character other than the period as a decimal separator.
  • Fixed an issue that prevented creating a constructed virtual attribute for an attribute that was marked SINGLE-VALUE in the server schema.
  • Fixed an issue in which backups of the server’s encryption settings database could have been (automatically or explicitly) encrypted with a key from the encryption settings database.

Other Changes Included in the 7.0.1.3 Release

The following additional fixes have been included in the 7.0.1.3 release:

  • Added debug logging for DNS lookups that take longer than a configured length of time (10 seconds by default). A new “DNS Resolution” monitor entry is available to provide information about DNS lookups performed by the server.
  • Fixed an issue in which SCIM searches could have an incorrect startIndex value if the scim-resources.xml file was configured with multiple base DNs.
  • Fixed an issue that could cause an error while performing an encrypted LDIF export of a directory with a very large number of non-leaf entries. In such cases, the LDIF export will be split into multiple files, but the attempt to merge those files at the end of processing would fail. This error would not result in any data loss or exposure, and the exported data could still be imported by either providing all of the files to the import-ldif utility with separate --ldifFile arguments or by manually merging the files.

Ping Identity Directory Server 7.2.1.0

We have just released the Ping Identity Directory Server version 7.2.1.0, available for download at https://www.pingidentity.com/en/resources/downloads/pingdirectory-downloads.html. This is primarily a bugfix release, but it does offer a couple of significant new features. The release notes provide a pretty comprehensive overview of the changes, but the most significant updates are:

  • Fixed an issue that could cause an error during an LDIF export of a data set with a large number of non-leaf entries. In such cases, the LDIF data may be split into multiple files to make the LDIF process faster. If the data is split into multiple files, and if the LDIF export was encrypted, then an error may have prevented merging those files at the end of the export process. The exported data was still valid and could still be successfully imported, but with additional effort required.
  • Updated the LDAP pass-through authentication plugin to add an option to construct the DN to use to authenticate to the remote server from information in the local entry. Further, it is now possible to authenticate to the remote server with a bind DN value that may not be a valid LDAP distinguished name (for example, using the user principal name when passing through authentication to an Active Directory server).
  • Updated the LDAP pass-through authentication plugin to add an included-local-entry-base-dn configuration property that makes it easier to identify the local users for which pass-through authentication may be attempted. If pass-through authentication is enabled, it will no longer be attempted by default for root users or topology administrators.
  • Fixed a number of issues in the LDAP pass-through authentication plugin. It will now use separate connections for search and bind operations. It will now make better use of multiple servers for improved availability, and can retry a failed operation when only a single server is configured. Improved the troubleshooting information that is available when a problem is encountered during pass-through authentication processing.
  • Fixed an issue that could cause entryUUID mismatches across servers if the server is configured to automatically use entryUUID as the naming attribute for entries matching a given set of criteria.
  • Updated the server to ensure that information about missing replication changes persists across restarts. If the server has been offline for longer than the replication purge delay, then replication will be unable to automatically bring that server back in sync with the other servers in the topology. However, if the server had been restarted after that problem was identified, the record of the missing changes could be inadvertently cleared.
  • Updated the dsreplication tool to allow enabling replication on a node whose topology information is out of sync with the topology master.
  • Updated the topology manager to make it easier to diagnose connection errors between servers in the topology.
  • Added logging for DNS lookups that take longer than expected to complete (10 seconds by default). This can make it easier to identify cases in which DNS issues cause connectivity problems or slowness.
  • The delegated administration configuration has changed significantly. When updating an existing installation, the update tool will automatically convert the old configuration model to the new one.
  • The Data Synchronization Server has been updated to support bidirectional synchronization with the PingOne for Customers hosted directory service. The 7.2.0.0 release added support for the PingOne for Customers service as a sync destination. With the 7.2.1.0 release, it is now also possible to use PingOne for Customers as a sync source.

Password Retirement in the Ping Identity Directory Server

Changing the password for an account stored in an LDAP directory server can sometimes be a race against time, especially for accounts that are used by applications. For example, let’s say that you’ve got a web application that uses a directory server to authenticate users and store their profile information. That application probably has its own account that it uses to authenticate to the directory server, and you’ve probably got several instances of that same application running on different servers all sharing that same account. If you need to change the password for that application account, then you risk breaking any instances of the application that need to authenticate to the server between the time that you change the password and the time that you can update the application with the new password.

It would be nice if there were some kind of grace period around password changes, in which the new password is immediately available to use, but the old password still works for a limited period of time. It just so happens that the Ping Identity Directory Server provides this capability through a feature that we call password retirement. It’s disabled by default, but you can enable it by adding one or more values for the password-retirement-behavior property in the password policy that governs the desired user account. The allowed values for this property are:

  • retire-on-self-change — Indicates that the server should automatically retire a user’s previous password whenever they change their own password.
  • retire-on-administrative-reset — Indicates that the server should automatically retire a user’s previous password whenever an administrator resets their password.
  • retire-on-request-with-control — Indicates that the server should retire a user’s previous password whenever the operation used to change the password includes the retire password request control.

The password policy also offers a max-retired-password-age configuration property, which specifies the length of time that a retired password should be considered valid.

As an example, let’s say that you want to enable automatic password retirement whenever a user changes their own password and when a client issues a request that includes the retire password request control, and you want the previous password to remain valid for one hour. If you want to make that change in the default password policy, the command to do that would be:

dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set password-retirement-behavior:retire-on-self-change \
     --set password-retirement-behavior:retire-on-request-with-control \
     --set "max-retired-password-age:1 h"

Note that if you successfully authenticate with a retired password, the server will include the password expiring request control (as described in draft-vchu-ldap-pwd-policy-00.txt) in the bind response. This response control indicates that the password is only valid for a limited period of time, and its value specifies the number of seconds that the password will remain valid.

The Retire Password Request Control

The retire password request control can be included in either an LDAP modify request or in a password modify extended request. It explicitly indicates that the server should retire the user’s old password so that it can continue to be used for a limited period of time. The control has an OID of “1.3.6.1.4.1.30221.2.5.31”, and it does not take a value. The UnboundID LDAP SDK for Java provides support for this control via the RetirePasswordRequestControl class, but since it doesn’t require a value, it’s easy to use in any other LDAP API using just the OID.

For the server to honor the retire password request control, the target user’s password policy does need to be configured with retire-on-request-with-control as one of the values for the password-retirement-behavior property. If the password policy’s retirement behavior would have automatically retired the former password anyway, then including the retire password request control in the request used to change the password isn’t necessary, but it won’t hurt anything.
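As a rough illustration of the programmatic approach, here is a minimal sketch using the UnboundID LDAP SDK for Java. The hostname, port, entry DN, and passwords are placeholders, and the example performs a self-change that mirrors the ldapmodify example later in this post:

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPResult;
import com.unboundid.ldap.sdk.Modification;
import com.unboundid.ldap.sdk.ModificationType;
import com.unboundid.ldap.sdk.ModifyRequest;
import com.unboundid.ldap.sdk.unboundidds.controls.RetirePasswordRequestControl;

public final class RetirePasswordExample
{
  public static void main(final String... args) throws Exception
  {
    // Placeholder connection details for this sketch.
    final LDAPConnection connection = new LDAPConnection("ldap.example.com",
         389, "uid=jdoe,ou=People,dc=example,dc=com", "originalPassword");
    try
    {
      // A self-change that removes the current password and adds the new one.
      final ModifyRequest modifyRequest = new ModifyRequest(
           "uid=jdoe,ou=People,dc=example,dc=com",
           new Modification(ModificationType.DELETE, "userPassword",
                "originalPassword"),
           new Modification(ModificationType.ADD, "userPassword",
                "secondPassword"));

      // Ask the server to retire the former password instead of discarding it.
      modifyRequest.addControl(new RetirePasswordRequestControl(true));

      final LDAPResult result = connection.modify(modifyRequest);
      System.out.println("Password change result: " + result.getResultCode());
    }
    finally
    {
      connection.close();
    }
  }
}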

The Purge Password Request Control

The purge password request control can also be included in either an LDAP modify request or in a password modify extended request. It explicitly indicates that the server should purge the user’s former password when setting a new one. This can be useful, for example, if you suspect that the user’s password might have been compromised and you don’t want to allow it to be used after the password change. The purge password request control has an OID of “1.3.6.1.4.1.30221.2.5.32”, and it does not require a value. The UnboundID LDAP SDK for Java provides support for this control through the PurgePasswordRequestControl class, but it’s easy to use the control in other LDAP APIs with just the request OID.

If it is present in a request, then the purge password request control will override the password-retirement-behavior configuration in the password policy. You can use it to ensure that the former password won’t be retired, even if the server would have automatically retired the password without this control.
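Here is a comparable sketch for the purge control, this time attached to a password modify extended request via the UnboundID LDAP SDK for Java. The connection details, DN, and passwords are again placeholders, and the "dn:" authorization ID form matches the --authzID usage shown later in this post:

import com.unboundid.ldap.sdk.Control;
import com.unboundid.ldap.sdk.ExtendedResult;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.extensions.PasswordModifyExtendedRequest;
import com.unboundid.ldap.sdk.unboundidds.controls.PurgePasswordRequestControl;

public final class PurgePasswordExample
{
  public static void main(final String... args) throws Exception
  {
    // Placeholder connection details for this sketch.
    final LDAPConnection connection = new LDAPConnection("ldap.example.com",
         389, "uid=jdoe,ou=People,dc=example,dc=com", "secondPassword");
    try
    {
      // A self-change via the password modify extended operation, with the
      // purge password request control attached so that the former password
      // will not be retired.
      final PasswordModifyExtendedRequest request =
           new PasswordModifyExtendedRequest(
                "dn:uid=jdoe,ou=People,dc=example,dc=com", // authorization ID
                "secondPassword",                          // current password
                "thirdPassword",                           // new password
                new Control[] { new PurgePasswordRequestControl(true) });

      final ExtendedResult result =
           connection.processExtendedOperation(request);
      System.out.println("Password modify result: " + result.getResultCode());
    }
    finally
    {
      connection.close();
    }
  }
}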

Using the Retire and Purge Password Controls With ldapmodify or ldappasswordmodify

Both the ldapmodify tool (which allows for requesting add, delete, modify, and modify DN operations) and the ldappasswordmodify tool (which allows for requesting the password modify extended operation) support both the retire password request control and the purge password request control. The controls can be included in applicable requests using the --retireCurrentPassword or --purgeCurrentPassword arguments, respectively.

For example, let’s say that the user “uid=jdoe,ou=People,dc=example,dc=com” currently has a password of “originalPassword”. If we want to use an LDAP modify operation to perform a self-password change to make it “secondPassword”, and if we want to include the retire password request control in the modify request, then we can do that with the following command:

$ bin/ldapmodify --hostname ldap.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --bindDN "uid=jdoe,ou=People,dc=example,dc=com" \
     --bindPassword originalPassword \
     --retireCurrentPassword
# Successfully connected to ldap.example.com:636.

dn: uid=jdoe,ou=People,dc=example,dc=com
changetype: modify
delete: userPassword
userPassword: originalPassword
-
add: userPassword
userPassword: secondPassword
-

# Modifying entry uid=jdoe,ou=People,dc=example,dc=com ...
# Result Code:  0 (success)

We can use the ldapsearch tool to verify that the user can now use either the new password or the former password to authenticate:

$ bin/ldapsearch --hostname ldap.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --bindDN "uid=jdoe,ou=People,dc=example,dc=com" \
     --bindPassword secondPassword \
     --baseDN "dc=example,dc=com" \
     --scope base \
     "(objectClass=*)"
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example

# Result Code:  0 (success)
# Number of Entries Returned:  1


$ bin/ldapsearch --hostname ldap.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --bindDN "uid=jdoe,ou=People,dc=example,dc=com" \
     --bindPassword originalPassword \
     --baseDN "dc=example,dc=com" \
     --scope base \
     "(objectClass=*)"
# Bind Result:
#      Result Code:  0 (success)
#      Password Expiring Response Control:
#           OID:  2.16.840.1.113730.3.4.5
#           Seconds Until Expiration:  3317

dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example

# Result Code:  0 (success)
# Number of Entries Returned:  1

As you can see, in this second case when we used the original password rather than the new one, the server returned the password expiring response control indicating that the former password was only valid for another 3317 seconds.

If we wanted to perform another self-change, this time using the password modify extended operation to use a self-change for a new password of “thirdPassword”, and we wanted to include the purge password request control, we could accomplish that as follows:

$ bin/ldappasswordmodify --hostname ldap.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --authzID "dn:uid=jdoe,ou=People,dc=example,dc=com" \
     --currentPassword "secondPassword" \
     --newPassword "thirdPassword" \
     --purgeCurrentPassword
The LDAP password modify operation was successful

After this, ldapsearch shows that we can successfully authenticate with the new password, but not with either of the previous old passwords because they have been purged:

$ bin/ldapsearch --hostname ldap.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --bindDN "uid=jdoe,ou=People,dc=example,dc=com" \
     --bindPassword thirdPassword \
     --baseDN "dc=example,dc=com" \
     --scope base \
     "(objectClass=*)"
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example

# Result Code:  0 (success)
# Number of Entries Returned:  1


$ bin/ldapsearch --hostname ldap.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --bindDN "uid=jdoe,ou=People,dc=example,dc=com" \
     --bindPassword secondPassword \
     --baseDN "dc=example,dc=com" \
     --scope base \
     "(objectClass=*)"
# Bind Result:
# Result Code:  49 (invalid credentials)

# An error occurred while attempting to create a connection pool to communicate with the directory server:
# LDAPException(resultCode=49 (invalid credentials), errorMessage='invalid credentials', ldapSDKVersion=4.0.10,
# revision=c8659b0364e0ccaec7a4925f47c184907557a5db)


$ bin/ldapsearch --hostname ldap.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --bindDN "uid=jdoe,ou=People,dc=example,dc=com" \
     --bindPassword originalPassword \
     --baseDN "dc=example,dc=com" \
     --scope base \
     "(objectClass=*)"
# Bind Result:
# Result Code:  49 (invalid credentials)

# An error occurred while attempting to create a connection pool to communicate with the directory server:
# LDAPException(resultCode=49 (invalid credentials), errorMessage='invalid credentials', ldapSDKVersion=4.0.10,
# revision=c8659b0364e0ccaec7a4925f47c184907557a5db)

The Get Password Policy State Issues Control in the Ping Identity Directory Server

In the Ping Identity Directory Server, we’re very serious when it comes to security. We make it easy to encrypt all your data, including the database contents (and the in-memory database cache), network communication, backups, LDIF exports, and even log files. We’ve got lots of password policy features, like strong password encoding, many password validation options, and ways to help thwart password guessing attempts. We offer several two-factor authentication options. We have a powerful access control subsystem that is augmented with additional features like sensitive attributes and privileges. We have lots of monitoring and alerting features so that you can be notified of any problems as soon as (or, in many cases, before) they arise so that your service remains available. Security was a key focus back when I started writing OpenDS (which is the ancestor of the Ping Identity Directory Server), and it’s still a key focus today.

One small aspect of this focus on security is that, by default, we don’t divulge any information about the reason for a failed authentication attempt. Maybe the account doesn’t exist, or maybe it’s locked or administratively disabled. Maybe the password was wrong, or maybe it’s expired. Maybe the user isn’t allowed to authenticate from that client system. In all of these cases, and for other types of authentication failures, the server will just return a bind result with a result code of invalidCredentials and no diagnostic message. The server will include the exact reason for the authentication failure in the audit log so that it’s available for administrators, but we won’t return it to the client so that a malicious user can’t use that to better craft their attack.

Now, if you don’t care about this and want the server to just go ahead and provide the message to the client, then you can do that with the following configuration change:

dsconfig set-global-configuration-prop --set return-bind-error-messages:true

However, that may not be the best option because it applies equally to all authentication requests for all clients, and because the output is human-readable but not very machine parseable. It’s not easy for a client to programmatically determine what the reason for the failure is. For that, your best option is the get password policy state issues control.

The get password policy state issues control indicates that you want the server to return information about the nature of the authentication failure, and details of the user’s password policy state that might interfere with authentication either now or in the future. This information is easy to consume programmatically, but it also contains user-friendly representations of those conditions as well. We intend for this control to be used by applications that authenticate users, and that can decide what information they want to make available to the end user.

Restrictions Around the Control’s Use

As previously mentioned, we might not always want to divulge the reason for a failed authentication attempt to the end user. As such, if we allowed just anyone to use this control, then that would get thrown out the window since a malicious client could just always include that control and get some helpful information in the response. So we don’t do that. Instead, this control will only be permitted if all of the following conditions are met:

  • The server’s access control handler must allow the get password policy state issues request control to be included in bind requests. This control is allowed in bind requests by default, but you can disable it if you want to.
  • A bind request that includes the get password policy state issues request control must be received on a connection that is already authenticated as a user who has the permit-get-password-policy-state-issues privilege.

Since we intend this feature to be used by applications that authenticate users, we expect that any application that is to be authorized to use it will have an account with the necessary privilege. And since the get password policy state issues control is a proprietary feature, we expect that any application that knows how to use it can also easily include the retain identity request control in those same bind requests.
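Granting that privilege to an application account is just a matter of adding a value to its ds-privilege-name attribute. Here is a minimal sketch with the UnboundID LDAP SDK for Java; the application account DN is hypothetical, and the modification is assumed to be performed by an administrative account that is itself allowed to change privileges:

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.Modification;
import com.unboundid.ldap.sdk.ModificationType;
import com.unboundid.ldap.sdk.ModifyRequest;

public final class GrantStateIssuesPrivilege
{
  public static void main(final String... args) throws Exception
  {
    // Placeholder administrative connection for this sketch.
    final LDAPConnection connection = new LDAPConnection("ldap.example.com",
         389, "cn=Directory Manager", "adminPassword");
    try
    {
      // Grant the privilege to a hypothetical application account.
      connection.modify(new ModifyRequest(
           "cn=Auth App,ou=Applications,dc=example,dc=com",
           new Modification(ModificationType.ADD, "ds-privilege-name",
                "permit-get-password-policy-state-issues")));
    }
    finally
    {
      connection.close();
    }
  }
}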

The Get Password Policy State Issues Request Control

The get password policy state issues request control is very simple: it’s got a request OID of 1.3.6.1.4.1.30221.2.5.46 and no value. This control is only intended to be included in bind requests, and it’s really just asking the server to include the corresponding response control in the bind result message.

It’s easy enough to use this request control in any LDAP API, but if you’re using the UnboundID LDAP SDK for Java, then we provide the GetPasswordPolicyStateIssuesRequestControl class to make it even easier.
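For instance, attaching the control to a simple bind might look like the following sketch. The connection details, user DN, and passwords are placeholders, the application account is assumed to hold the privilege described above, and the retain identity request control is included as recommended earlier:

import com.unboundid.ldap.sdk.BindResult;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SimpleBindRequest;
import com.unboundid.ldap.sdk.unboundidds.controls.GetPasswordPolicyStateIssuesRequestControl;
import com.unboundid.ldap.sdk.unboundidds.controls.RetainIdentityRequestControl;

public final class BindWithStateIssuesControl
{
  public static void main(final String... args) throws Exception
  {
    // Placeholder application account; it is assumed to have the
    // permit-get-password-policy-state-issues privilege.
    final LDAPConnection connection = new LDAPConnection("ldap.example.com",
         389, "cn=Auth App,ou=Applications,dc=example,dc=com", "appPassword");
    try
    {
      // Attempt the user's bind with the request control attached.  The
      // retain identity request control keeps the connection authenticated
      // as the application account regardless of the outcome.
      final SimpleBindRequest bindRequest = new SimpleBindRequest(
           "uid=jdoe,ou=People,dc=example,dc=com", "userPassword",
           new GetPasswordPolicyStateIssuesRequestControl(),
           new RetainIdentityRequestControl());

      BindResult bindResult;
      try
      {
        bindResult = connection.bind(bindRequest);
      }
      catch (final LDAPException le)
      {
        // A failed bind is reported as an exception; wrap it so that the
        // response controls can still be examined.
        bindResult = new BindResult(le);
      }

      System.out.println("Bind result: " + bindResult.getResultCode());
    }
    finally
    {
      connection.close();
    }
  }
}

The resulting BindResult can then be examined for the response control described in the next section.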

The Get Password Policy State Issues Response Control

The get password policy state issues response control is more complicated than the request control. It has an OID of 1.3.6.1.4.1.30221.2.5.47 and a value with the following ASN.1 encoding:

GetPasswordPolicyStateIssuesResponse ::= SEQUENCE {
     notices               [0] SEQUENCE OF SEQUENCE {
          type        INTEGER,
          name        OCTET STRING,
          message     OCTET STRING OPTIONAL } OPTIONAL,
     warnings              [1] SEQUENCE OF SEQUENCE {
          type        INTEGER,
          name        OCTET STRING,
          message     OCTET STRING OPTIONAL } OPTIONAL,
     errors                [2] SEQUENCE OF SEQUENCE {
          type        INTEGER,
          name        OCTET STRING,
          message     OCTET STRING OPTIONAL } OPTIONAL,
     authFailureReason     [3] SEQUENCE {
          type        INTEGER,
          name        OCTET STRING,
          message     OCTET STRING OPTIONAL } OPTIONAL,
     ... }

If you’re using the UnboundID LDAP SDK for Java, then you can use the GetPasswordPolicyStateIssuesResponseControl class to do all the heavy lifting for you. If you’re using some other API, then you’ll probably have to decode the value for yourself.

There are four basic components to the get password policy state issues response control:

  • A set of error conditions in the user’s password policy state that will either prevent that user from authenticating, or that will prevent them from using their account until they take some action. In the UnboundID LDAP SDK for Java, we offer the PasswordPolicyStateAccountUsabilityError class to make it easier to interpret these errors. Possible password policy state error conditions include:

    • The account is administratively disabled.
    • The account has expired.
    • The account is not yet active.
    • The account is permanently locked (or at least until an administrator unlocks it) after too many failed authentication attempts.
    • The account is temporarily locked after too many failed authentication attempts.
    • The account is locked because it’s been idle for too long.
    • The account is locked because the password was administratively reset, but the user didn’t choose a new password quickly enough.
    • The password is expired.
    • The password is expired, but there are one or more grace logins remaining. Authenticating with a grace login will only permit them to bind for the purpose of changing the password.
    • The password has been administratively reset and must be changed before the user will be allowed to do anything else.
    • The password policy was configured so that all users governed by that policy must change their passwords by a specified time, but the user attempting to authenticate failed to do so.
  • A set of warning conditions in the user’s password policy state that won’t immediately impact their ability to use their account, but that may impact their ability to use the account in the near future unless they take some action. In the UnboundID LDAP SDK for Java, we offer the PasswordPolicyStateAccountUsabilityWarning class to make it easier to interpret these warnings. Possible password policy state warning conditions include:

    • The account will expire in the near future.
    • The password will expire in the near future.
    • The account has been idle for too long and will be locked unless they successfully authenticate in the near future.
    • The account has outstanding authentication failures and may be locked if there are too many more failed attempts.
    • The password policy was configured so that all users governed by that policy must change their password by a specified time, but the user attempting to authenticate has not yet done so.
  • A set of notice conditions that provide additional information about the user’s password policy state that may be helpful for applications or the end user to know. The UnboundID LDAP SDK for Java provides the PasswordPolicyStateAccountUsabilityNotice class to make it easier to interpret these notices. Possible password policy state notices include:

    • A minimum password age has been configured in the password policy governing the user, and it has been less than that length of time since the user last changed their password. The user will not be permitted to change their password again until the minimum age period has elapsed.
    • The account does not have a static password, so it will not be allowed to authenticate using any password-based authentication mechanism.
    • The account has an outstanding delivered one-time password that has not yet been consumed and is not yet expired.
    • The account has an outstanding password reset token that has not yet been consumed and is not yet expired.
    • The account has an outstanding retired password that has not yet expired and may still be used to authenticate.
  • An authentication failure reason, which provides information about the reason that the bind attempt failed. The UnboundID LDAP SDK for Java offers the AuthenticationFailureReason class to help make it easier to use this information. Possible authentication failure reasons include:

    • The server could not find the account for the user that is trying to authenticate (e.g., the user doesn’t exist, or the authentication ID does not uniquely identify the user).
    • The password or other provided credentials were not correct.
    • There was something wrong with the SASL credentials provided by the client (e.g., they were malformed or out of sequence).
    • The account isn’t configured to support the requested authentication type (e.g., they attempted a password-based bind, but the user doesn’t have a password).
    • The account is in an unusable state. The password policy error conditions should encapsulate the reasons that the account is not usable.
    • The server is configured to require the client to authenticate securely, but the authentication attempt was not secure.
    • The account is not permitted to authenticate in the requested manner (e.g., from the client address or using the attempted authentication type).
    • The bind request was rejected by the server’s access control handler.
    • The authentication attempt failed because a problem was encountered while processing one of the controls included in the bind request.
    • The server is currently in lockdown mode and will only permit a limited set of users to authenticate.
    • The server could not assign a client connection policy to the account.
    • The authentication attempt used a SASL mechanism that was implemented in a third-party extension, and that extension encountered an error while processing the bind request.
    • The server encountered an internal error while processing the bind request.

Each password policy state error, warning, and notice, as well as the authentication failure reason, is identified by a name and a numeric type, and also includes a human-readable message suitable for displaying to the user if you decide that it is appropriate.
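To tie those pieces together, here is a hedged sketch of a helper that pulls the response control out of the BindResult obtained in the earlier bind example and prints whatever it finds. It assumes the static get method and the getErrors, getWarnings, getNotices, and getAuthenticationFailureReason accessors described in the LDAP SDK Javadoc, so double-check the exact signatures for your SDK version:

import com.unboundid.ldap.sdk.BindResult;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.unboundidds.controls.GetPasswordPolicyStateIssuesResponseControl;

public final class PasswordPolicyStateIssuesReporter
{
  // Prints any password policy state issues included in a bind result.  The
  // bind request must have included the get password policy state issues
  // request control, as in the earlier sketch.
  public static void reportIssues(final BindResult bindResult)
       throws LDAPException
  {
    final GetPasswordPolicyStateIssuesResponseControl responseControl =
         GetPasswordPolicyStateIssuesResponseControl.get(bindResult);
    if (responseControl == null)
    {
      System.out.println("No password policy state issues control returned.");
      return;
    }

    if (responseControl.getAuthenticationFailureReason() != null)
    {
      System.out.println("Failure reason: " +
           responseControl.getAuthenticationFailureReason());
    }

    responseControl.getErrors().forEach(
         e -> System.out.println("Error: " + e));
    responseControl.getWarnings().forEach(
         w -> System.out.println("Warning: " + w));
    responseControl.getNotices().forEach(
         n -> System.out.println("Notice: " + n));
  }
}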

The Password Policy State Extended Operation

Although it’s not the focus of this blog post (maybe I’ll write another one about it in the future), I should also point out that you can use the password policy state extended operation to obtain the list of usability errors, warnings, and notices for a user, along with a heck of a lot more information about the state of the account. You can also use it to alter the state if desired. Since it’s an extended operation, you can’t use it in the course of attempting a bind to get the authentication failure reason. However, you could use it in conjunction with the get password policy state issues control if you feel like you need additional information about the user’s account state after parsing the get password policy state issues response control.

An Example Using the UnboundID LDAP SDK for Java

I’ve written a simple program that demonstrates the use of the get password policy state issues control to obtain the authentication failure reason and password policy state issues for a specified user. You can find that example at https://github.com/dirmgr/blog-example-source-code/tree/master/password-policy-state-issues.

The Retain Identity Request Control in the Ping Identity Directory Server

Many LDAP-enabled applications use a directory server to authenticate users, which often consists of a search to find the user’s entry based on the provided login ID, followed by a bind to verify the provided credentials. Some of these applications may purely use the directory server for authentication, while others may then go ahead and perform additional operations on behalf of the logged-in users.

If the application is well designed, then it will probably maintain a pool of connections that it can repeatedly reuse rather than establishing a new connection each time it needs to perform an operation in the server. Usually, the search to find a user is performed on a connection bound as an account created for that application. And if the application performs operations on behalf of the authenticated users, then it often does so while authenticated under that same application account, using something like the proxied authorization request control to request that the server process those operations under the appropriate user’s authority.

The problem, though, is that performing a bind operation changes the authentication identity of the connection on which it is processed. If the bind is successful, then subsequent operations on that connection will be processed under the authority of the user identified by that bind request. If the bind fails, then the connection becomes unauthenticated, so subsequent requests are processed anonymously. There are a couple of common ways to work around this problem:

  • The application can maintain two different connection pools: one to use just for bind operations, and the other for all other types of operations.
  • After attempting a bind to verify a user’s credentials (whether successful or not), the application can re-authenticate as its own account.

In the Ping Identity Directory Server, we offer a third option: the bind request can include the retain identity request control. This control tells the server that it should perform all of the normal processing associated with the bind (verify the user’s credentials, update any password policy state information for that user, etc.), but not change the authentication identity of the underlying connection. Regardless of whether the bind succeeds or fails, the connection will end up with the same authentication/authorization identity that it had before the bind was attempted. This allows you to use just a single connection pool that stays authenticated as the application’s account, while still being able to verify credentials without fear of interfering with access control evaluation for operations following those binds.

The retain identity request control is very easy to use. If you’re using the UnboundID LDAP SDK for Java, you can just use the RetainIdentityRequestControl class, and the Javadoc includes an example demonstrating its use. If you’re using some other API, then you just need to specify an OID of “1.3.6.1.4.1.30221.2.5.3”, and you don’t need to provide a value. We recommend making the control critical so that the bind attempt will fail if the server doesn’t support it (although we added support for this control back in 2008 when it was still the UnboundID Directory Server, so it’s been around for more than a decade).
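As a rough sketch of that pattern with the UnboundID LDAP SDK for Java (the method names and the way the connection is obtained are up to your application), credential verification on the application’s own connection might look like this:

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SimpleBindRequest;
import com.unboundid.ldap.sdk.unboundidds.controls.RetainIdentityRequestControl;

public final class VerifyCredentialsExample
{
  // Returns true if the provided credentials are valid.  The bind is sent on
  // the application's own connection, but the retain identity request control
  // keeps that connection authenticated as the application account whether
  // the bind succeeds or fails.
  public static boolean credentialsAreValid(final LDAPConnection connection,
       final String userDN, final String password)
  {
    try
    {
      connection.bind(new SimpleBindRequest(userDN, password,
           new RetainIdentityRequestControl()));
      return true;
    }
    catch (final LDAPException le)
    {
      // The bind failed (for example, invalid credentials).  The connection
      // is still authenticated as the application account.
      return false;
    }
  }
}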

I’m not aware of any other directory server that supports the retain identity request control (aside from the LDAP SDK’s in-memory directory server), but it’s very simple and very useful, so if you’re using some other type of server you might inquire about whether they’d implement it or something similar. Of course, you could also switch to the Ping Identity Directory Server and get support for this and lots of other helpful features that other servers don’t provide.

Programmatically Retrieving Password Quality Requirements in the Ping Identity Directory Server

When changing a user’s password, most LDAP directory servers provide some way to determine whether the new password is acceptable. For example, when allowing a user to choose a new password, you might want to ensure that the new password has at least some minimum number of characters, that it’s not found in a dictionary of commonly used passwords, and that it’s not too similar to the user’s current password.

It’s important to be able to tell the user what the requirements are so that they don’t keep trying things that the server will reject. And you might also want to provide some kind of password strength meter or indicator of acceptability to let them visually see how good their password is. But you don’t want to do this with hard-coded logic in the client because different sets of users might have different password quality requirements, and because the server configuration can change, so even the requirements for a given user may change over time. What you really want is a way to programmatically determine what requirements the server will impose.

Fortunately, the Ping Identity Directory Server provides a “get password quality requirements” extended operation that can provide this information. We also have a “password validation details” control that you can use when changing a password to request information about how well the proposed password satisfies those requirements. These features were added in the 5.2.0.0 release back in 2015, so they’ve been around for several years. The UnboundID LDAP SDK for Java makes it easy to use them in Java clients, but you can make use of them in other languages if you’re willing to do your own encoding and decoding.

The Get Password Quality Requirements Extended Request

The get password quality requirements extended request allows a client to ask the server what requirements it will impose when setting a user’s password. It’s best to use it before prompting the user for a new password so that you can display the requirements to them and potentially provide client-side feedback as to whether the proposed password is acceptable.

Since the server can enforce different requirements under different conditions, you need to tell it the context for the new password. Those contexts include:

  • Adding a new entry that includes a password. You can either indicate that the new entry will use the server’s default password policy, or that it will use a specified policy.
  • A user changing their own password. It doesn’t matter whether the password change is done by a standard LDAP modify operation that targets the password attribute or with the password modify extended operation; the requirements for a self change will be the same in either case.
  • An administrator resetting another user’s password. Again, it doesn’t matter whether it’s a regular LDAP modify or a password modify extended operation. You just need to indicate which user’s password is being reset so the server can determine which requirements will be enforced.

The UnboundID LDAP SDK for Java provides support for this request through the GetPasswordQualityRequirementsExtendedRequest class, but if you need to implement support for it in some other API, it has an OID of 1.3.6.1.4.1.30221.2.6.43 and a value with the following ASN.1 encoding:

GetPasswordQualityRequirementsRequestValue ::= SEQUENCE {
     target     CHOICE {
          addWithDefaultPasswordPolicy           [0] NULL,
          addWithSpecifiedPasswordPolicy         [1] LDAPDN,
          selfChangeForAuthorizationIdentity     [2] NULL,
          selfChangeForSpecifiedUser             [3] LDAPDN,
          administrativeResetForUser             [4] LDAPDN,
          ... },
     ... }
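
To make that encoding concrete, here is a hedged sketch that builds the request by hand with the UnboundID LDAP SDK’s generic ExtendedRequest and ASN.1 classes, much as you would in another API; the target user DN is just an example. If you’re using the LDAP SDK for real, the GetPasswordQualityRequirementsExtendedRequest class does all of this for you.

// ASN1Sequence and ASN1OctetString come from com.unboundid.asn1;
// ExtendedRequest and ExtendedResult come from com.unboundid.ldap.sdk.
// The selfChangeForSpecifiedUser choice uses context-specific type [3],
// which is 0x83 with implicit tagging.
ASN1Sequence requestValue = new ASN1Sequence(
     new ASN1OctetString((byte) 0x83,
          "uid=jdoe,ou=People,dc=example,dc=com"));

ExtendedRequest request = new ExtendedRequest(
     "1.3.6.1.4.1.30221.2.6.43",
     new ASN1OctetString(requestValue.encode()));

ExtendedResult genericResult = connection.processExtendedOperation(request);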

The Get Password Quality Requirements Extended Response

The server uses the get password quality requirements extended response to tell the client what the requirements are for the target user in the indicated context. Each validator configured in a password policy can return its own password quality requirement structure, which includes the following components:

  • A human-readable description that describes the purpose of the validator in a user-friendly form. For example, “The password must contain at least 8 characters”.
  • A validation type that identifies the type of validator for client-side evaluation. For example, “length”.
  • A set of name-value pairs that provide information about the configuration of that password validator. For example, a name of “min-password-length” and a value of “8”.

A list of the validation types and corresponding properties for all the password validators included with the server is provided later in this post.

In addition to those requirements, the response may include additional information about the password change, including:

  • Whether the user will be required to provide their current password when choosing a new password. This is only applicable for a self change.
  • Whether the user will be required to choose a new password the first time they authenticate after the new password is set. This is only applicable for an add or an administrative reset, and it’s based on the password policy’s force-change-on-add or force-change-on-reset configuration.
  • The length of time that the newly set password should be considered valid. If the user will be required to change their password on the next authentication, then this will be the length of time they have before that temporary password becomes invalid. Otherwise, it specifies the length of time until the password expires.

The UnboundID LDAP SDK for Java provides support for the extended result through the GetPasswordQualityRequirementsExtendedResult class and the related PasswordQualityRequirement class. In case you need to implement support for this extended response in some other API, it has an OID of 1.3.6.1.4.1.30221.2.6.44 and a value with the following ASN.1 encoding:

GetPasswordQualityRequirementsResultValue ::= SEQUENCE {
     requirements                SEQUENCE OF PasswordQualityRequirement,
     currentPasswordRequired     [0] BOOLEAN OPTIONAL,
     mustChangePassword          [1] BOOLEAN OPTIONAL,
     secondsUntilExpiration      [2] INTEGER OPTIONAL,
     ... }

PasswordQualityRequirement ::= SEQUENCE {
     description                  OCTET STRING,
     clientSideValidationInfo     [0] SEQUENCE {
          validationType     OCTET STRING,
          properties         [0] SET OF SEQUENCE {
               name      OCTET STRING,
               value     OCTET STRING } OPTIONAL } OPTIONAL }
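
Continuing that sketch with the LDAP SDK’s typed classes, examining the result might look something like the following. The constructor and accessor names shown here reflect my recollection of the GetPasswordQualityRequirementsExtendedResult and PasswordQualityRequirement classes, so check the Javadoc for the authoritative API.

// GetPasswordQualityRequirementsExtendedResult and PasswordQualityRequirement
// come from com.unboundid.ldap.sdk.unboundidds.extensions; Map comes from
// java.util. The constructor that decodes a generic ExtendedResult is
// assumed here.
GetPasswordQualityRequirementsExtendedResult result =
     new GetPasswordQualityRequirementsExtendedResult(genericResult);

for (PasswordQualityRequirement requirement :
     result.getPasswordRequirements())
{
  // The human-readable description, e.g.,
  // "The password must contain at least 8 characters".
  System.out.println(requirement.getDescription());

  // The validation type (e.g., "length") and its properties (e.g.,
  // min-password-length=8) can drive client-side checks like the ones
  // described later in this post.
  String validationType = requirement.getClientSideValidationType();
  Map<String,String> properties =
       requirement.getClientSideValidationProperties();
}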

The Password Validation Details Request Control

As noted above, you should use the get password quality requirements extended operation before prompting a user for a new password so that they know what the requirements are in advance. But if the server rejects the proposed password, it’s useful for the client to be able to tell exactly why it was rejected. The Ping Identity Directory Server will include helpful information in the diagnostic message, but that’s just a blob of text. You might want something more parseable so that you can provide the user with the pertinent information with better formatting. And for that, we provide the password validation details request control.

This control can be included in an add request that includes a password, a modify request that attempts to alter a password, or a password modify extended request. It tells the server that the client would like a response control (outlined below) that includes information about each of the requirements for the new password and whether that requirement was satisfied.

The UnboundID LDAP SDK for Java provides support for this request control in the PasswordValidationDetailsRequestControl class, but if you want to use it in another API, then all you need to do is to create a request control with an OID of 1.3.6.1.4.1.30221.2.5.40. The criticality can be either true or false (but it’s probably better to be false so that the server won’t reject the request if that control is not available for some reason), and it does not take a value.
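
As a concrete illustration, here is a minimal sketch that attaches the control by OID (using the LDAP SDK’s generic Control class rather than the dedicated one) to a self password change; the DN and password values are hypothetical placeholders.

// ModifyRequest, Modification, ModificationType, Control, and LDAPResult all
// come from com.unboundid.ldap.sdk.
ModifyRequest modifyRequest = new ModifyRequest(
     "uid=jdoe,ou=People,dc=example,dc=com",
     new Modification(ModificationType.DELETE, "userPassword", "oldPassword"),
     new Modification(ModificationType.ADD, "userPassword", "newPassword"));

// Non-critical, so the request will not be rejected if the control happens to
// be unavailable.
modifyRequest.addControl(new Control("1.3.6.1.4.1.30221.2.5.40", false));

// If the new password is rejected, this will throw an LDAPException, and the
// password validation details response control (if any) will be among the
// response controls associated with that failure.
LDAPResult modifyResult = connection.modify(modifyRequest);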

The Password Validation Details Response Control

When the server processes an add, modify, or password modify request that includes the password validation details request control, the response that the server returns may include a corresponding password validation details response control with information about how well the proposed password satisfies each of the requirements. If present, the response control will include the following components:

  • One of the following:

    • Information about each of the requirements for the proposed password and whether that requirement was satisfied.
    • A flag that indicates that the request didn’t try to alter a password.
    • A flag that indicates that the request tried to set multiple passwords.
    • A flag that indicates that the request didn’t get to the point of trying to validate the password because some other problem was encountered first.
  • An optional flag that indicates that the server requires the user to provide their current password when choosing a new password, but the current password was not given. This is only applicable for self changes, and not for adds or administrative resets.
  • An optional flag that indicates whether the user will be required to change their password the next time they authenticate. This is applicable for adds and administrative resets, but not for self changes.
  • An optional value that specifies the length of time that the new password will be considered valid. If it was an add or an administrative reset and the user will be required to choose a new password the next time they authenticate, then this is the length of time that they have to do that. Otherwise, it will be the length of time until the new password expires.

The UnboundID LDAP SDK for Java provides support for this response control through the PasswordValidationDetailsResponseControl class, with the PasswordQualityRequirementValidationResult class providing information about whether each of the requirements was satisfied. If you need to implement support for this control in some other API, then it has a response OID of 1.3.6.1.4.1.30221.2.5.41 and a value with the following ASN.1 encoding:

PasswordValidationDetailsResponse ::= SEQUENCE {
     validationResult           CHOICE {
          validationDetails             [0] SEQUENCE OF
               PasswordQualityRequirementValidationResult,
          noPasswordProvided            [1] NULL,
          multiplePasswordsProvided     [2] NULL,
          noValidationAttempted         [3] NULL,
          ... },
     missingCurrentPassword     [3] BOOLEAN DEFAULT FALSE,
     mustChangePassword         [4] BOOLEAN DEFAULT FALSE,
     secondsUntilExpiration     [5] INTEGER OPTIONAL,
     ... }

PasswordQualityRequirementValidationResult ::= SEQUENCE {
     passwordRequirement      PasswordQualityRequirement,
     requirementSatisfied     BOOLEAN,
     additionalInfo           [0] OCTET STRING OPTIONAL }

Validation Types and Properties for Available Password Validators

The information included in the get password quality requirements extended response is enough for the client to display a user-friendly list of the requirements that will be enforced for a new password. However, it also includes information that can be used for some client-side evaluation of how well a proposed password satisfies those requirements. This can help the client tell the user when the password isn’t good enough without having to send the request to the server, or possibly provide feedback about the strength or acceptability of the new password while they’re still typing it. This is possible because of the validation type and properties components of each password quality requirement.

Of course, you can really only take advantage of this feature if you know what the possible validation types and properties are for each of the password validators. This section provides that information for each of the types of validators included with the Ping Identity Directory Server (or at least the ones available at the time of this writing; we may add more in the future).

Also note that for some types of password validators, you may not be able to perform client-side validation. For example, if the server is configured to reject any proposed password that it finds in a dictionary of commonly used passwords, the client can’t make that determination because it doesn’t have access to that dictionary. In such cases, it’s still possible to display the requirement to the user so that they’re aware of it in advance, and it may still be possible to perform client-side validation for other types of requirements, so there’s still benefit to using this information.

The Attribute Value Password Validator

The attribute value password validator can be used to prevent the proposed password from matching the value of another attribute in a user’s entry. You can specify which attributes to check, or it can check all user attributes in the entry (which is the default). It can be configured to reject the case in which the proposed password exactly matches a value for another attribute, but it can also be configured to reject based on substring matches (for example, if an attribute value is a substring of the proposed password, or if the proposed password is a substring of an attribute value). You can also optionally test the proposed password in reversed order.

You can perform client-side checking for this password validator if you have a copy of the target user’s entry. The validation type is “attribute-value”, and it offers the following validation properties:

  • match-attribute-{counter} — The name of an attribute whose values will be checked against the proposed password. The counter value starts at 1 and will increase sequentially for each additional attribute to be checked. For example, if the validator is configured to check the proposed password against the givenName, sn, mail, and telephoneNumber attributes, you would have a match-attribute-1 property with a value of givenName, a match-attribute-2 property with a value of sn, a match-attribute-3 property with a value of mail, and a match-attribute-4 property with a value of telephoneNumber.
  • test-password-substring-of-attribute-value — Indicates whether to check to see if the proposed password matches a substring of any of the target attributes. If this property has a value of true, then this substring check will be performed. If the property has a value of false, or if it is absent, then the substring check will not be performed.
  • test-attribute-value-substring-of-password — Indicates whether to check to see if any of the target attributes matches a substring of the proposed password. If this property has a value of true, then this substring check will be performed. If the property has a value of false, or if it is absent, then the substring check will not be performed.
  • test-reversed-password — Indicates whether to check the proposed password with the order of the characters reversed in addition to the order in which they were provided. If this property has a value of true, then both the forward and reversed password will be checked. If the property has a value of false, or if it is absent, then the password will only be checked in forward order.

The Character Set Password Validator

The character set password validator can be used to ensure that passwords have a minimum number of characters from each of a specified collection of character sets. For example, you could define one set with all of the lowercase letters, one with all the uppercase letters, one with all the numeric digits, and one with a set of symbols, and require that a password have at least one character from each of those sets.

You can perform client-side checking for this password validator just using the proposed password itself. The validation type is “character-set”, and it has the following validation properties:

  • set-{counter}-characters — A set of characters for which a minimum count will be enforced. The counter value starts at 1 and will increase sequentially for each additional set of characters that is defined. For example, if you had sets of lowercase letters, uppercase letters, and numbers, then you could have a set-1-characters property with a value of abcdefghijklmnopqrstuvwxyz, a set-2-characters property with a value of ABCDEFGHIJKLMNOPQRSTUVWXYZ, and a set-3-characters property with a value of 0123456789.
  • set-{counter}-min-count — The minimum number of characters that must be present from the character set identified with the corresponding counter value (so the property with a name of set-1-min-count specifies the minimum number of characters from the set-1-characters set). This will be an integer greater than or equal to zero (with a value of zero indicating that characters from that set are allowed, but not required; this is really only applicable if allow-unclassified-characters is false).
  • allow-unclassified-characters — Indicates whether passwords should be allowed to have any characters that are not defined in any of the character sets. If this property has a value of true, then passwords will be allowed to have unclassified characters as long as they meet the minimum number of required characters from all of the specified character sets. If this property has a value of false, then passwords will only be permitted to include characters from the given character sets.
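
Given those properties, here is a minimal client-side sketch of the character set check. It assumes the set-{counter}-characters and set-{counter}-min-count properties have already been parsed into parallel lists, and it is only an approximation of the logic described above, not the server’s actual implementation.

// List comes from java.util.
static boolean satisfiesCharacterSets(final String password,
     final List<String> characterSets, final List<Integer> minCounts,
     final boolean allowUnclassifiedCharacters)
{
  // Make sure the password has enough characters from each defined set.
  for (int i = 0; i < characterSets.size(); i++)
  {
    int count = 0;
    for (final char c : password.toCharArray())
    {
      if (characterSets.get(i).indexOf(c) >= 0)
      {
        count++;
      }
    }

    if (count < minCounts.get(i))
    {
      return false;
    }
  }

  // If unclassified characters are not allowed, make sure every character
  // appears in at least one of the defined sets.
  if (! allowUnclassifiedCharacters)
  {
    for (final char c : password.toCharArray())
    {
      boolean classified = false;
      for (final String set : characterSets)
      {
        if (set.indexOf(c) >= 0)
        {
          classified = true;
          break;
        }
      }

      if (! classified)
      {
        return false;
      }
    }
  }

  return true;
}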

The Dictionary Password Validator

The dictionary password validator is used to ensure that users can’t choose passwords that are found in a specified dictionary file. The Ping Identity Directory Server comes with two such files: one that contains a list of over 400,000 English words, and one that contains over 500,000 of the most commonly used passwords. You can also provide your own dictionary files.

Unless the client has a copy of the same dictionary file that the server is using, it’s not really possible for it to perform client-side validation for this validator. Nevertheless, the validator does have a validation type of “dictionary” and the following validation properties:

  • dictionary-file — The name (without path information) of the dictionary file that the password validator is using.
  • case-sensitive-validation — Indicates whether the validation should be case sensitive or insensitive. If the value is true, then a proposed password will be rejected only if it is found with exactly the same capitalization in the dictionary file. If it is false, then differences in capitalization will be ignored.
  • test-reversed-password — Indicates whether the validation should check the proposed password with the characters in reversed order as well as in the order the client provided them. If the value is true, then both the forward and reversed password will be checked. If the value is false, then the password will be checked only as it was provided by the client.

The Haystack Password Validator

The haystack password validator is based on the concept of password haystacks as described at https://www.grc.com/haystack.htm. This algorithm judges the strength of a password based on a combination of its length and the different classes of characters that it contains. For example, a password comprised of a mix of lowercase letters, uppercase letters, numeric digits, and symbols is, in general, more resistant to brute force attacks than a password of the same length made up of only lowercase letters, but a password made up of only lowercase letters can be very secure if it is long enough (and passphrases—passwords comprised of multiple words strung together—are a great example of this). The haystack validator lets users have a simpler password if it’s long enough, or a shorter password if it’s complex enough.

As long as you have a client-side implementation of the haystack logic (which is pretty simple), you can perform client-side checking for this password validator; a minimal sketch follows the property list below. The validation type is “haystack”, and it has the following validation properties:

  • assumed-password-guesses-per-second — The number of guesses that an attacker is assumed to be able to make per second. This value will be an integer, although it could be a very large integer, so it’s recommended to use at least a 64-bit variable to represent it.
  • minimum-acceptable-time-to-exhaust-search-space — The minimum length of time, in seconds, that is considered acceptable for an attacker to have to keep guessing (at the rate specified by the assumed-password-guesses-per-second property) before exhausting the complete search space of all possible passwords. This will also be an integer, and it’s also recommended that you use at least a 64-bit variable to hold its value.
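
Here is a minimal client-side sketch of that haystack logic. The character class sizes (26 lowercase letters, 26 uppercase letters, 10 digits, and an assumed 33 symbols) and the exact search space formula are assumptions based on the description above and the linked page, so treat this as an approximation of the server’s behavior rather than a faithful copy.

// BigInteger comes from java.math.
static boolean satisfiesHaystack(final String password,
     final BigInteger assumedGuessesPerSecond,
     final BigInteger minimumAcceptableSecondsToExhaustSearchSpace)
{
  // Determine the size of the alphabet implied by the character classes that
  // actually appear in the password.
  int alphabetSize = 0;
  if (password.chars().anyMatch(Character::isLowerCase))
  {
    alphabetSize += 26;
  }
  if (password.chars().anyMatch(Character::isUpperCase))
  {
    alphabetSize += 26;
  }
  if (password.chars().anyMatch(Character::isDigit))
  {
    alphabetSize += 10;
  }
  if (password.chars().anyMatch(c -> ! Character.isLetterOrDigit(c)))
  {
    alphabetSize += 33; // Assumed number of printable symbols.
  }

  // Sum the search spaces for every length up to and including the password's
  // length: alphabetSize^1 + alphabetSize^2 + ... + alphabetSize^n.
  BigInteger searchSpace = BigInteger.ZERO;
  for (int length = 1; length <= password.length(); length++)
  {
    searchSpace = searchSpace.add(
         BigInteger.valueOf(alphabetSize).pow(length));
  }

  final BigInteger secondsToExhaust =
       searchSpace.divide(assumedGuessesPerSecond);
  return secondsToExhaust.compareTo(
       minimumAcceptableSecondsToExhaustSearchSpace) >= 0;
}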

The Length Password Validator

The length-based password validator judges the quality of a proposed password based purely on the number of characters that it contains. Note that it counts the number of UTF-8 characters rather than the number of bytes, so a password with multi-byte characters will have fewer characters than it has bytes. You can configure either or both of a minimum required length and a maximum allowed length.

Client-side checking is very straightforward for this validator. It uses a validation type of “length” and the following validation properties:

  • min-password-length — The minimum number of characters that a password will be required to have. If present, the value will be an integer. If it is absent, then no minimum length will be enforced.
  • max-password-length — The maximum number of characters that a password will be permitted to have. If present, the value will be an integer. If it is absent, then no maximum length will be enforced.

The Regular Expression Password Validator

The regular expression password validator can be used to require that a password match a given pattern or to reject passwords that match a given pattern. As long as the client can perform regular expression matching, client-side validation should be pretty simple. It uses a validation type of “regular-expression” and the following validation properties:

  • match-pattern — The regular expression that will be evaluated against a proposed password.
  • match-behavior — A string that indicates the behavior that the validator should observe. A value of require-match means that the validator will reject any proposed password that does not satisfy the associated match-pattern. A value of reject-match means that the validator will reject any proposed password that does match the specified match-pattern.

The Repeated Characters Password Validator

The repeated characters password validator can be used to reject a proposed password if it contains the same character, or characters in the same set, more than a specified number of times in a row without a different type of character in between. By default, it treats each type of character separately, but you can define sets of characters that will be considered equivalent. In the former case, the validator will reject a password if it contains the same character too many times in a row, whereas in the latter case, it can reject a password if it contains too many characters of the same type in a row. For example, you could define sets of lowercase letters, uppercase letters, digits, and symbols, and prevent too many characters of each type in a row.

It should be pretty straightforward to perform client-side checking for this password validator. It uses a validation type of “repeated-characters” and the following validation properties:

  • character-set-{counter} — A set of characters that should be considered equivalent. The counter will start at 1 and increment sequentially for each additional character set. This property may be absent if each character is to be treated independently.
  • max-consecutive-length — The maximum number of times that each character (or characters from the same set) may appear in a row before a proposed password will be rejected. The value will be an integer.
  • case-sensitive-validation — Indicates whether to treat characters from the password in a case-sensitive manner. A value of true indicates that values should be case-sensitive, while a value of false indicates that values should be case-insensitive.
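
For the default case in which each character is treated independently (that is, no equivalent character sets are defined), a minimal client-side sketch might look like this:

static boolean satisfiesRepeatedCharacters(final String password,
     final int maxConsecutiveLength, final boolean caseSensitive)
{
  int consecutive = 1;
  for (int i = 1; i < password.length(); i++)
  {
    char previous = password.charAt(i - 1);
    char current = password.charAt(i);
    if (! caseSensitive)
    {
      previous = Character.toLowerCase(previous);
      current = Character.toLowerCase(current);
    }

    if (current == previous)
    {
      consecutive++;
      if (consecutive > maxConsecutiveLength)
      {
        // Too many identical characters in a row.
        return false;
      }
    }
    else
    {
      consecutive = 1;
    }
  }

  return true;
}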

The Similarity Password Validator

The similarity password validator can be used to reject a proposed password if it is too similar to the user’s current password. Similarity is determined by the Levenshtein distance algorithm, which is a measure of the minimum number of character insertions, deletions, or replacements needed to transform one string into another. For example, it can prevent a user from changing the password from something like “password1” to “password2”. This validator is only active for password self changes. It does not apply to add operations or administrative resets.

Because the Ping Identity Directory Server generally stores passwords in a non-reversible form, this validator can only be used if the request used to change the user’s password includes both the current password and the proposed new password. You can use the password-change-requires-current-password property in the password policy configuration to require this, and if that is configured, then the get password quality requirements extended response will indicate that the current password is required when a user is performing a self change. The password modify extended request provides a field for specifying the current password when requesting a new password, but to satisfy this requirement in an LDAP modify operation, the change should be processed as a delete of the current password (with the value provided in the clear) followed by an add of the new password (also in the clear), in the same modify operation, like:

dn: uid=john.doe,ou=People,dc=example,dc=com
changetype: modify
delete: userPassword
userPassword: oldPassword
-
add: userPassword
userPassword: newPassword
-

If a client has the user’s current password, the proposed new password, and an implementation of the Levenshtein distance algorithm, then it can perform client-side checking for this validator. The validation type is “similarity” and the validation properties are:

  • min-password-difference — The minimum acceptable distance, as determined by the Levenshtein distance algorithm, between the user’s current password and the proposed new password. It will be an integer.
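
If you need a client-side implementation of the Levenshtein distance, the classic dynamic programming version is short. A proposed password would then be acceptable for this validator if the computed distance from the current password is at least min-password-difference.

static int levenshteinDistance(final String s, final String t)
{
  // d[i][j] is the distance between the first i characters of s and the
  // first j characters of t.
  final int[][] d = new int[s.length() + 1][t.length() + 1];
  for (int i = 0; i <= s.length(); i++)
  {
    d[i][0] = i;
  }
  for (int j = 0; j <= t.length(); j++)
  {
    d[0][j] = j;
  }

  for (int i = 1; i <= s.length(); i++)
  {
    for (int j = 1; j <= t.length(); j++)
    {
      final int substitutionCost =
           (s.charAt(i - 1) == t.charAt(j - 1)) ? 0 : 1;
      d[i][j] = Math.min(
           Math.min(d[i - 1][j] + 1,              // deletion
                d[i][j - 1] + 1),                 // insertion
           d[i - 1][j - 1] + substitutionCost);   // replacement
    }
  }

  return d[s.length()][t.length()];
}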

The Unique Characters Password Validator

The unique characters password validator can be used to reject a proposed password that has too few unique characters. This can prevent users from choosing simple passwords like “aaaaaaaa” or “abcabcabcabc”.

It’s easy to perform client-side checking for this validator. It has a validation type of “unique-characters” and the following properties:

  • min-unique-characters — The minimum number of unique characters that the password must contain for it to be acceptable. This is an integer value, with zero indicating no minimum (although the server will require passwords to contain at least one character).
  • case-sensitive-validation — Indicates whether the validator will treat uppercase and lowercase versions of the same letter as different characters or the same. A value of true indicates that the server will perform case-sensitive validation, while a value of false indicates case-insensitive validation.
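
A minimal client-side sketch of this check:

// Set and HashSet come from java.util.
static boolean satisfiesUniqueCharacters(final String password,
     final int minUniqueCharacters, final boolean caseSensitive)
{
  final Set<Character> uniqueCharacters = new HashSet<>();
  for (final char c : password.toCharArray())
  {
    uniqueCharacters.add(caseSensitive ? c : Character.toLowerCase(c));
  }

  return uniqueCharacters.size() >= minUniqueCharacters;
}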

Access Control Considerations

The get password quality requirements extended operation probably isn’t something that you’ll want to open up to the world, since it could give an attacker information that they shouldn’t have, like hints that could help them better craft their attack, or information about whether a user exists or not. Since you’ll probably want some restriction on who can use this operation, and since we at Ping Identity have no idea who that might be, the server’s access control configuration does not permit anyone (or at least anyone without the bypass-acl or the bypass-read-acl privilege) to use it. If you want to make it available, then you’ll need to add a global ACI to grant access to an appropriate set of users. The same goes for the password validation details request control.

As an example, let’s say that you want to allow members of the “cn=Password Administrators,ou=Groups,dc=example,dc=com” group to use these features. To do that, you can add the following global ACIs:

(extop="1.3.6.1.4.1.30221.2.6.43")(version 3.0; acl "Allow password administrators to use the get password quality requirements extended operation"; allow (read) groupdn="ldap:///cn=Password Administrators,ou=Groups,dc=example,dc=com";)

(targetcontrol="1.3.6.1.4.1.30221.2.5.40")(version 3.0; acl "Allow password administrators to use the password validation details request control"; allow (read) groupdn="ldap:///cn=Password Administrators,ou=Groups,dc=example,dc=com";)
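
If you’re making the change with dsconfig, adding the global ACIs should look something like the following (shown here for the first ACI; the second is added the same way):

dsconfig set-access-control-handler-prop \
     --add 'global-aci:(extop="1.3.6.1.4.1.30221.2.6.43")(version 3.0; acl "Allow password administrators to use the get password quality requirements extended operation"; allow (read) groupdn="ldap:///cn=Password Administrators,ou=Groups,dc=example,dc=com";)'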

Also note that while the server does make the password modify extended operation available to anyone, there are additional requirements that must be satisfied before it can be used. In order to change your own password, you need to at least have write access to the password attribute in your own entry. And in order to change someone else’s password, not only do you need write access to the password attribute in that user’s entry, but you also need the password-reset privilege. You can grant that privilege by adding the ds-privilege-name attribute to the password resetter’s entry using a change like:

dn: uid=password.admin,ou=People,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: password-reset
-

Example Usage With the UnboundID LDAP SDK for Java

I’ve created a simple Java program that demonstrates the use of the get password quality requirements extended operation and the password validation details control. It’s not very flashy, and it doesn’t currently attempt to perform any client-side validation of the proposed new password before sending it to the directory server, but it’s at least a good jumping-off point for someone who wants to build this functionality into their own application.

You can find this example at https://github.com/dirmgr/blog-example-source-code/tree/master/password-quality-requirements.

Configuring Two-Factor Authentication in the Ping Identity Directory Server

Passwords aren’t going anywhere anytime soon, but they’re just not good enough on their own. It is entirely possible to choose a password that is extremely resistant to dictionary and brute force attacks, but the fact is that most people pick really bad passwords. They also tend to reuse the same passwords across multiple sites, which further increases the risk that their account could be compromised. And even those people who do choose really strong passwords might still be tricked into giving that password to a lookalike site via phishing or DNS hijacking, or they may fall victim to a keylogger. For these reasons and more, it’s always a good idea to combine a password with some additional piece of information when authenticating to a site.

The Ping Identity Directory Server has included support for two-factor authentication since 2012 (back when it was still the UnboundID Directory Server). Out of the box, we currently offer four types of two-factor authentication:

  • You can combine a static password with a time-based one-time password using the standard TOTP mechanism described in RFC 6238. These are the same kind of one-time passwords that are generated by apps like Google Authenticator or Authy.
  • You can combine a static password with a one-time password generated by a YubiKey device.
  • You can combine a static password with a one-time password that gets delivered to the user through some out-of-band mechanism like a text or email message, voice call, or app notification.
  • You can combine a static password with an X.509 certificate presented to the server during TLS negotiation.

In this post, I’ll describe the process for configuring the server to enable support for each of these types of authentication. We also provide the ability to create custom extensions to implement support for other types of authentication if desired, but that’s not going to be covered here.

Time-Based One-Time Passwords

The Ping Identity Directory Server supports time-based one-time passwords through the UNBOUNDID-TOTP SASL mechanism, which is enabled by default in the server. For a user to authenticate with this mechanism, their account must contain the ds-auth-totp-shared-secret attribute whose value is the base32-encoded representation of the shared secret that should be used to generate the one-time passwords. This shared secret must be known to both the client (or at least to an app in the client’s possession, like Google Authenticator) as well as to the server.

To facilitate generating and encoding the TOTP shared secret, the Directory Server provides a “generate TOTP shared secret” extended operation. The UnboundID LDAP SDK for Java provides support for this extended operation, and the class-level Javadoc describes the encoding for this operation in case you need to implement support for it in some other API. We also offer a generate-totp-shared-secret command-line tool that can be used for testing (or I suppose you could invoke it programmatically if you’d rather do that than use the UnboundID LDAP SDK or implement support for the extended operation yourself). For the sake of convenience, I’ll use this tool for the demonstration.

There are actually a couple of ways that you can use the generate TOTP shared secret operation: for a user to generate a shared secret for their own account (in which case the user’s static password must be provided), or for an administrator (who must have the password-reset privilege) to generate a shared secret for another user. I’d expect the most common use case to be a user generating a shared secret for their own account, so that’s the approach we’ll take for this example.

Note that while the generate TOTP shared secret extended operation is enabled out of the box, the shared secrets that it generates by default are not encrypted, which could make them easier for an attacker to steal if they got access to a copy of the user data. To prevent this, if the server is configured with data encryption enabled, then you should also enable the “Encrypt TOTP Secrets and Delivered Tokens” plugin. That can be done with the following configuration change:

dsconfig set-plugin-prop \
     --plugin-name "Encrypt TOTP Secrets and Delivered Tokens" \
     --set enabled:true

If we assume that our account has a username of “jdoe”, then the command to generate a shared secret for that user would be something like:

$ bin/generate-totp-shared-secret --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --authID u:jdoe \
     --promptForUserPassword
Enter the static password for user 'u:jdoe':
Successfully generated TOTP shared secret 'KATLTK5WMUSZIACLOMDP43KPSG2LUUOB'.

If we were using a nice web application to invoke the generate TOTP shared secret operation, we’d probably want to have it generate a QR code with that shared secret embedded in it so that it could be easily scanned and imported into an app like Google Authenticator (and you’d want to embed it in a URL like “otpauth://totp/jdoe%20in%20ds.example.com?secret=KATLTK5WMUSZIACLOMDP43KPSG2LUUOB”). For the sake of testing, we can either manually generate an appropriate QR code (for example, using an online utility like https://www.qr-code-generator.com), or you can just type the shared secret into the authenticator app.

Now that the account is configured for TOTP authentication, we can use the UNBOUNDID-TOTP SASL mechanism to authenticate to the server. As with the generate TOTP shared secret operation, this SASL mechanism is supported by the UnboundID LDAP SDK for Java, but most of our command-line tools should support this mechanism, so we can test it with a utility like ldapsearch. You’ll need to use the “--saslOption” command-line argument to specify a number of parameters, including “mech” (the name of the SASL mechanism to use, which should be “UNBOUNDID-TOTP”), “authID” (the authentication ID for the user that’s trying to authenticate), and “totpPassword” (for the one-time password generated by the authenticator app). For example:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-TOTP \
     --saslOption authID=u:jdoe \
     --saslOption totpPassword={one-time-password} \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1
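
If you’re building this into an application rather than testing from the command line, the UnboundID LDAP SDK provides a SingleUseTOTPBindRequest class for this mechanism. The constructor arguments shown below (authentication ID, optional authorization ID, the current TOTP code, and the static password) reflect my recollection of that class, so double-check the class-level Javadoc before relying on it.

// SingleUseTOTPBindRequest comes from com.unboundid.ldap.sdk.unboundidds.
// The TOTP code would typically be entered by the user from their
// authenticator app just before binding.
SingleUseTOTPBindRequest bindRequest = new SingleUseTOTPBindRequest(
     "u:jdoe",        // authentication ID
     null,            // authorization ID (null to use the same identity)
     totpCode,        // the current one-time password, e.g., "123456"
     staticPassword); // the user's static password

BindResult bindResult = connection.bind(bindRequest);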

YubiKey One-Time Passwords

If you’ve got a YubiKey device capable of generating one-time passwords in the Yubico OTP format (which should be most YubiKey devices except the ones that only support FIDO authentication), then you can use that device to generate one-time passwords to use in conjunction with your static password.

The Directory Server supports this type of authentication through the UNBOUNDID-YUBIKEY-OTP SASL mechanism. To enable support for this SASL mechanism, you first need to get an API key from Yubico, which you can get for free from https://upgrade.yubico.com/getapikey/. When you do this, you’ll get a client ID and a secret key, which gives you access to use their authentication servers. Note that you only need this for the server (and you can share the same key for all server instances); end users don’t need to worry about this. Alternately, you could stand up your own authentication server if you’d rather not rely on the Yubico servers, but we won’t go into that here.

Once you’ve got the client ID and secret key, you can enable support for the SASL mechanism with the following configuration change:

dsconfig set-sasl-mechanism-handler-prop \
     --handler-name UNBOUNDID-YUBIKEY-OTP \
     --set enabled:true \
     --set yubikey-client-id:{client-id} \
     --set yubikey-api-key:{secret-key}

To be able to authenticate a user with this mechanism, you’ll need to update their account to include a ds-auth-yubikey-public-id attribute with one or more values that represent the public IDs of the YubiKey devices that you want to use (and it might not be a bad idea to have multiple devices registered for the same account, so that you have a backup key in case you lose or break the primary key).

To get the public ID for a YubiKey device, you can use it to generate a one-time password and strip off the last 32 characters. This isn’t considered secret information, so no encryption is necessary when storing it in an entry, and you can use a simple LDAP modify operation to manage the public IDs for a user account. Alternately, you can use the “register YubiKey OTP device” extended operation (supported and documented in the UnboundID LDAP SDK for Java) or use the register-yubikey-otp-device command-line tool. In the case of the command-line tool, you can register a device like:

$ bin/register-yubikey-otp-device --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --authenticationID u:jdoe \
     --promptForUserPassword \
     --otp {one-time-password}
Enter the static password for user u:jdoe:
Successfully registered the specified YubiKey OTP device for user u:jdoe

Note that when using this tool (and the register YubiKey OTP device extended operation in general), you should provide a complete one-time password and not just the public ID.

That should be all that is necessary to allow the user to authenticate with a YubiKey one-time password. We can test it with ldapsearch like so:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-YUBIKEY-OTP \
     --saslOption authID=u:jdoe \
     --saslOption otp={one-time-password} \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

Delivered One-Time Passwords

Delivered one-time passwords are more convenient than either time-based or YubiKey-generated one-time passwords because there’s less burden on the user. There’s no need to install an app or have any special hardware to generate one-time passwords. Instead, the server generates a one-time password and then sends it to the user through some out-of-band mechanism (that is, the user gets the one-time password through some mechanism other than LDAP). The server provides direct support for delivering these generated one-time passwords over SMS (using the Twilio service) or via email, and the UnboundID Server SDK provides an API that allows you to create your own delivery mechanisms. Note, however, that while it may be more convenient to use, it’s also generally considered less secure (especially if you’re using SMS).

There’s also more effort involved in enabling support for delivered one-time passwords than either time-based or YubiKey-generated one-time passwords. The first thing you should do is configure the server to ensure that the generated one-time password values will be encrypted (unless you already did it above for encrypting TOTP shared secrets), which you can do as follows:

dsconfig set-plugin-prop \
     --plugin-name "Encrypt TOTP Secrets and Delivered Tokens" \
     --set enabled:true

Next, we need to configure one or more delivery mechanisms. These are configured in the “OTP Delivery Mechanism” section of dsconfig. For example, to configure a delivery mechanism for email, you could use something like:

dsconfig create-otp-delivery-mechanism \
     --mechanism-name Email \
     --type email \
     --set enabled:true \
     --set 'sender-address:otp@example.com' \
     --set "message-text-before-otp:Your one-time password is ‘" \
     --set "message-text-after-otp:’."

If you’re using email, you’ll also need to configure one or more SMTP external servers and set the smtp-server property in the global configuration.
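
As a rough sketch (the property names are what I recall for the SMTP external server type and the global configuration, so verify them against your version of dsconfig), that configuration might look like:

dsconfig create-external-server \
     --server-name "SMTP Server" \
     --type smtp \
     --set server-host-name:smtp.example.com

dsconfig set-global-configuration-prop \
     --set "smtp-server:SMTP Server"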

Alternately, if you’re using SMS, then you’ll need to have a Twilio account and fill in the appropriate values for the SID, auth token, and phone number fields, like:

dsconfig create-otp-delivery-mechanism \
     --mechanism-name SMS \
     --type twilio \
     --set enabled:true \
     --set twilio-account-sid:{sid} \
     --set twilio-auth-token:{auth-token} \
     --set sender-phone-number:{phone-number} \
     --set "message-text-before-otp:Your one-time password is '" \
     --set "message-text-after-otp:'."

Once the delivery mechanism(s) are configured, you can enable the delivered one-time password SASL mechanism handler as follows:

dsconfig create-sasl-mechanism-handler \
     --handler-name UNBOUNDID-DELIVERED-OTP \
     --type unboundid-delivered-otp \
     --set enabled:true \
     --set "identity-mapper:Exact Match"

You’ll also need to enable support for the deliver one-time password extended operation, which is used to request that the server generate and deliver a one-time password for a user. You can do that like:

dsconfig create-extended-operation-handler \
     --handler-name "Deliver One-Time Passwords" \
     --type deliver-otp \
     --set enabled:true \
     --set "identity-mapper:Exact Match" \
     --set "password-generator:One-Time Password Generator" \
     --set default-otp-delivery-mechanism:Email \
     --set default-otp-delivery-mechanism:SMS

The process for authenticating with a delivered one-time password involves two steps. In the first step, you need to request that the server generate and deliver a one-time password, which can be accomplished with the “deliver one-time password” extended operation, which is supported and documented in the UnboundID LDAP SDK for Java and can be tested with the deliver-one-time-password command-line tool. Then, once you have that one-time password, you can use the UNBOUNDID-DELIVERED-OTP SASL mechanism to complete the authentication.

If you have multiple delivery mechanisms configured in the server, then there are several ways that the server can decide which one to use to send a one-time password to a user.

  • The server will only attempt to use a delivery mechanism that applies to the target user. For example, if a user entry has an email address but not a mobile phone number, then it won’t try to deliver a one-time password to that user via SMS.
  • The deliver one-time password extended request can be used to indicate which delivery mechanism(s) should be attempted, and in which order they should be attempted. If you’re using the deliver-one-time-password command-line tool, then you can use the --deliveryMechanism argument to specify this.
  • If the extended request doesn’t indicate which mechanisms to use, then the server will check the user’s entry to see if it has a ds-auth-preferred-otp-delivery-mechanism operational attribute. If so, then it will be used to specify the desired delivery mechanism.
  • If nothing else, then the server will use the order specified in the default-otp-delivery-mechanism property of the extended operation handler configuration.

As an example, let’s demonstrate the process of authenticating as user jdoe with a one-time password delivered via email. We can start by using the deliver-one-time-password command-line tool as follows:

$ bin/deliver-one-time-password --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --userName jdoe \
     --promptForBindPassword \
     --deliveryMechanism Email
Enter the static password for the user:

Successfully delivered a one-time password via mechanism 'Email' to 'jdoe@example.com'

Now, we can check our email, and there should be a message with the one-time password. Once we have it, we can use a tool like ldapsearch to authenticate with that one-time password using the UNBOUNDID-DELIVERED-OTP SASL mechanism, like:

$ bin/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-DELIVERED-OTP \
     --saslOption authID=u:jdoe \
     --saslOption otp={one-time-password} \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 9d48d347-cd9e-428a-bedc-e6027b30b8ac
startTime: 20190107014535Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

Combining Certificates and Passwords

When establishing a TLS-based connection, the server will always present its certificate to the client, and the client will decide whether it wants to trust that certificate and continue establishing the secure connection. Further, the server may optionally ask the client to provide its own certificate, and the client may optionally provide one. If the server requests a client certificate and the client provides one, then the server will determine whether it wants to trust that client certificate and continue the negotiation process.

If a client has provided its own certificate to the directory server and the server has accepted it, then the client can use a SASL EXTERNAL bind to request that the server use the information in the certificate to identify and authenticate the client. Most LDAP servers support this, and it can be a very strong form of single-factor authentication. However, the Ping Identity Directory Server also offers an UNBOUNDID-CERTIFICATE-PLUS-PASSWORD SASL mechanism that takes this even further by combining the client certificate with a static password.

Certificate-based authentication (regardless of whether you also include a static password) isn’t something that has really caught on because of the hassle and complexity of dealing with certificates. It’s honestly probably not a great option for most end users, although it may be an attractive option for more advanced users like server administrators. But one big benefit that the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD mechanism has over the two-factor mechanisms that rely on one-time passwords is that it can be used in a completely non-interactive manner. That makes it suitable for use in authenticating one application to another.

As with the EXTERNAL mechanism, the Ping Identity Directory Server has support for the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD mechanism enabled out of the box. Just about the only thing you’re likely to want to configure is the certificate-mapper property in its configuration, which is used to uniquely identify the account for the user that is trying to authenticate based on the contents of the certificate. The certificate mapper that is configured by default will only work if the certificate’s subject DN matches the DN of the corresponding user entry. Other certificate mappers can be used to identify the user in other ways, including searching based on attributes in the certificate subject or searching for the owner of a certificate based on the fingerprint of that certificate.

Due to an unfortunate oversight, command-line tools currently shipped with the server do not include support for the UNBOUNDID-CERTIFICATE-PLUS-PASSWORD SASL mechanism. That will be corrected in the next release, but if you want to test with it now, you can check out the UnboundID LDAP SDK for Java from its GitHub project and build it for yourself. That will allow you to test certificate+password authentication like so:

$ tools/ldapsearch --hostname ds.example.com \
     --port 636 \
     --useSSL \
     --keyStorePath client-keystore \
     --promptForKeyStorePassword \
     --trustStorePath config/truststore \
     --saslOption mech=UNBOUNDID-CERTIFICATE-PLUS-PASSWORD \
     --promptForBindPassword \
     --baseDN "" \
     --scope base \
     "(objectClass=*)"
Enter the key store password:

Enter the bind password:

dn:
objectClass: top
objectClass: ds-root-dse
startupUUID: 42a82498-93c6-4e62-9c6c-8fe6b33e1550
startTime: 20190107072612Z

# Result Code:  0 (success)
# Number of Entries Returned:  1

Ping Identity Directory Server 7.2.0.0

We have just released the Ping Identity Directory Server version 7.2.0.0, available for download at https://www.pingidentity.com/en/resources/downloads/pingdirectory-downloads.html. This new release offers a lot of new features, some substantial performance improvements, and a number of bug fixes. The release notes provide a pretty comprehensive overview of the changes, but here are some of the highlights:

  • Added a REST API (using JSON over HTTP) for interacting with the server data. Although we already supported the REST-based SCIM protocol, our new REST API is more feature-rich, requires less administrative overhead, and isn’t constrained by the limitations that SCIM compliance imposes. SCIM remains supported.

  • Dramatically improved the logic that the server uses for evaluating complex filters. It now uses a number of additional metrics to make more intelligent decisions about the order in which components should be evaluated to get the biggest bang for the buck.

  • Expanded our support for composite indexes to provide support for ANDs of multiple components (for example, “(&(givenName=?)(sn=?))”). These filters can be comprised entirely of equality components, or they may combine one or more equality components with either a greater-or-equal filter, a less-or-equal filter, a bounded range filter, or a substring filter.

  • When performing a new install, the server is now configured to automatically export data to LDIF every day at 1:05 a.m. These exports will be compressed and encrypted (if encryption is enabled during the setup process), and they will be rate limited to minimize the impact on performance. We have also updated the LDIF export task to support exporting the contents of multiple backends in the same invocation.

  • Added support for a new data recovery log and a new extract-data-recovery-log-changes command-line tool. This can help administrators revert or replay a selected set of changes if the need arises (for example, if a malfunctioning application applies one or more incorrect changes to the server).

  • Added support for delaying the response to a failed bind operation, during which time no other operations will be permitted on the client connection. This can be used as an alternative to account lockout as a means of substantially inhibiting password guessing attacks without the risk of locking out the legitimate user who has the right credentials. It can also be used in conjunction with account lockout if desired.

  • Updated client connection policy support to make it possible to customize the behavior that the server exhibits if a client exceeds a configured maximum number of concurrent requests. Previously, it was only possible to reject requests with a “busy” result. It is now possible to use additional result codes when rejecting those requests, or to terminate the client connection and abandon all of its outstanding requests.

  • Added support for a new “exec” task (and recurring task) that can be used to invoke a specified command on the server system, either as a one-time event or at recurring intervals. There are several safeguards in place to protect against unauthorized use: the task must be enabled in the server (it is not by default), the command to be invoked must be contained in a whitelist file (no commands are whitelisted by default), and the user scheduling the task must have a special privilege that permits its use (no users, not even root users, have this privilege by default). We have also added a new schedule-exec-task tool that can make it easier to schedule an exec task.

  • Added support for a new file retention task (and recurring task) that can be used to remove files with names matching a given pattern that are outside of a provided set of retention criteria. The server is configured with instances of this task that can be used to clean up expensive operation dump, lock conflict details, and work queue backlog thread dumps (any files of each type other than the 100 most recent that are over 30 days old will be automatically removed).

  • Added support for new tasks (and recurring tasks) that can be used to force the server to enter and leave lockdown mode. While in lockdown mode, the server reports itself as unavailable to the Directory Proxy Server (and other clients that look at its availability status) and only accepts requests from a restricted set of clients.

  • Added support for a new delay task (and recurring task) that can be used to inject a delay between other tasks. The delay can be for a fixed period of time, can wait until the server is idle (that is, there are no outstanding requests and all worker threads are idle), or until a given set of search criteria matches one or more entries.

  • Added support for a new constructed virtual attribute type that can be used to dynamically construct values for an attribute using a combination of static text and the values of other attributes from the entry.

  • Improved user and group management in the delegated administration web application. Delegated administrators can create users and control group membership for selected users.

  • Added support for encrypting TOTP shared secrets, delivered one-time passwords, password reset tokens, and single-use tokens.

  • Updated the work queue implementation to improve performance and reduce contention under extreme load.

  • Updated the LDAP-accessible changelog backend to add support for searches that include the simple paged results control. This control was previously only available for searches in local DB backends.

  • Improved the server’s rebuild-index performance, especially in environments with encrypted data.

  • Added a new time limit log retention policy to support removing log files older than a specified age.

  • Updated the audit log to support including a number of additional fields, including the server product name, the server instance name, request control OIDs, details of any intermediate client or operation purpose controls in the request, the origin of the operation (whether it was replicated, an internal operation, requested via SCIM, etc.), whether an add operation was an undelete, whether a delete operation was a soft delete, and whether a delete operation was a subtree delete.

  • Improved trace logging for HTTP-based services (e.g., the REST API, SCIM, the consent API, etc.) to make it easier to correlate events across trace logs, HTTP access logs, and general access logs.

  • Updated the replication database so that it is possible to specify a minimum number of changes to retain. Previously, it was only possible to specify the minimum age for changes to retain.

  • Updated the purge expired data plugin to support deleting expired non-leaf entries. If enabled, the expired entry and all of its subordinate entries will be removed.

  • Added support for additional equality matching rules that may be used for attributes with a JSON object syntax. Previously, the server always used case-sensitive matching for field names and case-insensitive matching for string values. The new matching rules make it possible to configure any combination of case sensitivity for these components.

  • Added the ability to configure multiple instances of the SCIM servlet extension in the server, which allows multiple SCIM service configurations in the same server.

  • Updated the server to prevent the possibility of a persistent search client that is slow to consume results from interfering with other clients and operations in the server.

  • Fixed an issue in which global sensitive attribute restrictions could be imposed on replicated operations, which could cause some types of replicated changes to be rejected.

  • Fixed an issue that could make it difficult to use third-party tasks created with the Server SDK.

  • Fixed an issue in which the correct size and time limit constraints may not be imposed for search operations processed with an alternate authorization identity.

  • Fixed an issue with the get effective rights request control that could cause it to incorrectly report that an unauthenticated client could have read access to an entry if there were any ACIs making use of the “ldap:///all” bind rule. Note that this only affected the response to a get effective rights request, and the server did not actually expose any data to unauthorized clients.

  • Fixed an issue with the dictionary password validator that could cause case-insensitive validation to behave incorrectly if the provided dictionary file contained passwords with uppercase characters.

  • Fixed an issue in servers with an account status notification handler enabled. In some cases, an administrative password reset could cause a notification to be generated on each replica instead of just the server that originally processed the change.

  • Fixed a SCIM issue in which the totalResults value for a paged request could be incorrect if the SCIM resources XML file had multiple base DNs defined.

  • Added support for running on Java 11 with both the Oracle and OpenJDK distributions, and the garbage-first garbage collector (G1GC) will be configured by default when installing the server with a Java 11 JVM. Java 8 (Oracle and OpenJDK distributions) remains supported.

  • Added support for the RedHat 7.5, CentOS 7.5, and Ubuntu 18.04 LTS Linux distributions. We also support RedHat 6.6, RedHat 6.8, RedHat 6.9, RedHat 7.4, CentOS 6.9, CentOS 7.4, SUSE Enterprise 11 SP4, SUSE Enterprise 12 SP3, Ubuntu 16.04 LTS, Amazon Linux, Windows Server 2012 R2 and Windows Server 2016. Supported virtualization platforms include VMWare vSphere 6.0, VMWare ESX 6.0, KVM, Amazon EC2, and Microsoft Azure.
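
As an illustration of the paged changelog searches mentioned above, here is a minimal sketch using the UnboundID LDAP SDK for Java that pages through cn=changelog 100 entries at a time. The address, port, and credentials are placeholders, and the example assumes an account that is allowed to read the changelog.

import com.unboundid.asn1.ASN1OctetString;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchScope;
import com.unboundid.ldap.sdk.controls.SimplePagedResultsControl;

public final class PagedChangelogSearch
{
  public static void main(final String... args) throws Exception
  {
    // The address, port, and credentials are placeholders.
    final LDAPConnection conn = new LDAPConnection("ds.example.com", 389,
         "cn=Directory Manager", "password");
    try
    {
      ASN1OctetString cookie = null;
      do
      {
        // Request the next page of up to 100 changelog entries.
        final SearchRequest request = new SearchRequest("cn=changelog",
             SearchScope.SUB, "(objectClass=changeLogEntry)");
        request.setControls(new SimplePagedResultsControl(100, cookie));

        final SearchResult result = conn.search(request);
        System.out.println("Retrieved " + result.getEntryCount() + " entries");

        // The response control carries the cookie needed to request the next page.
        final SimplePagedResultsControl response =
             SimplePagedResultsControl.get(result);
        cookie = ((response != null) && response.moreResultsToReturn())
             ? response.getCookie() : null;
      }
      while (cookie != null);
    }
    finally
    {
      conn.close();
    }
  }
}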

Ping Identity Directory Server 7.0.0.0

We have just released the Ping Identity Directory Server version 7.0.0.0, along with supporting products including the Directory Proxy Server, Data Synchronization Server, and Data Metrics Server. They’re available to download at https://www.pingidentity.com/en/resources/downloads/pingdirectory-downloads.html.

Full release notes are available at https://documentation.pingidentity.com/pingdirectory/7.0/relnotes/, and there are a lot of enhancements, fixes, and performance improvements, but some of the most significant new features are described below.

 

Improved Encryption for Data at Rest

We have always supported TLS to protect data in transit, and we carefully select from the set of available cipher suites to ensure that we only use strong encryption, preferring forward secrecy when it’s available. We also already offered protection for data at rest in the form of whole-entry encryption, encrypted backups and LDIF exports, and encrypted changelog and replication databases. In the 7.0 release, we’re improving upon this encryption for data at rest with several enhancements, including:

  • Previously, if you wanted to enable data encryption, you had to first set up the server without encryption, create an encryption settings definition, copy that definition to all servers in the topology, and export the data to LDIF and re-import it to ensure that any existing data got encrypted. With the 7.0 release, you can easily enable data encryption during the setup process, and you can provide a passphrase to use to generate the encryption key. If you supply the same passphrase when installing all of the instances, then they’ll all use the same encryption key.
  • Previously, if you enabled data encryption, the server would encrypt entries, but indexes and certain other database metadata (for example, information needed to store data compactly) remained unencrypted. In the 7.0 release, if you enable data encryption, we now encrypt index keys and that other metadata so that no potentially sensitive data is stored in the clear.
  • It was already possible to encrypt backups and LDIF exports, but you had to explicitly indicate that they should be encrypted, and the encryption was performed using a key that was shared among servers in the topology but that wasn’t available outside of the topology. In the 7.0 release, you have the option to automatically encrypt backups and LDIF exports, and that option is enabled by default if you configure encryption during setup. You also have more control over the encryption key so that encrypted backups and LDIF exports can be used outside of the topology.
  • We now support encrypted logging. Log-related tools like search-logs, sanitize-log, and summarize-access-log have been updated to support working with encrypted logs, and the UnboundID LDAP SDK for Java has been updated to support programmatically reading and parsing encrypted log files.
  • Several other tools that support reading from and writing to files have also been updated so that they can handle encrypted files. For example, tools that support reading from or writing to LDIF files (ldapsearch, ldapmodify, ldifsearch, ldifmodify, ldif-diff, transform-ldif, validate-ldif) now support encrypted LDIF.

 

Parameterized ACIs

Our server offers a rich access control mechanism that gives you fine-grained control over who has access to what data. You can define access control rules in the configuration, but it’s also possible to store rules in the data, which ensures that they are close to the data they govern and are replicated across all servers in the topology.

In many cases, it’s possible to define a small number of access control rules at the top of the DIT that govern access to all data. But there are other types of deployments (especially multi-tenant directories) where the data is highly branched, and users in one branch should have a certain amount of access to data in their own branch but no access to data in other branches. In the past, the only way to accomplish this was to define access control rules in each of the branches. This was fine from a performance and scalability perspective, but it was a management hassle, especially when creating new branches or if it became necessary to alter the rules for all of those branches.

In the 7.0 release, parameterized ACIs address many of these concerns. Parameterized ACIs make it possible to define a pattern that is automatically interpreted across a set of entries that match the parameterized content.

For example, say your directory has an “ou=Customers,dc=example,dc=com” entry, and each customer organization has its own branch below that entry. Each of those branches might have a common structure (for example, users might be below an “ou=People” subordinate entry, and groups might be below “ou=Groups”). The structure for an Acme organization might look something like:

  • dc=example,dc=com
    • ou=Customers
      • ou=Acme
        • ou=People
          • uid=amanda.adams
          • uid=bradley.baker
          • uid=carol.collins
          • uid=darren.dennings
        • ou=Groups
          • cn=Administrators
          • cn=Password Managers

If you want to create a parameterized ACI so that members of the “ou=Password Managers,ou=Groups,ou={customerName},ou=Customers,dc=example,dc=com” group have write access to the userPassword attribute in entries below “ou=People,ou={customerName},ou=Customers,dc=example,dc=com”, you might create a parameterized ACI that looks something like the following:

(target="ldap:///ou=People,ou=($1),ou=Customers,dc=example,dc=com")(targetattr="userPassword")(version 3.0; acl "Password Managers can manage passwords"; allow (write) groupdn="ldap:///cn=Password Managers,ou=Groups,ou=($1),ou=Customers,dc=example,dc=com";)
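
If you manage ACIs over LDAP rather than through the configuration, a minimal sketch of adding that value with the UnboundID LDAP SDK for Java might look like the following. The connection details are placeholders, and storing the ACI on the ou=Customers entry is just one reasonable choice for this example.

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.Modification;
import com.unboundid.ldap.sdk.ModificationType;

public final class AddParameterizedACI
{
  public static void main(final String... args) throws Exception
  {
    final String aci =
         "(target=\"ldap:///ou=People,ou=($1),ou=Customers,dc=example,dc=com\")" +
         "(targetattr=\"userPassword\")" +
         "(version 3.0; acl \"Password Managers can manage passwords\"; " +
         "allow (write) groupdn=\"ldap:///cn=Password Managers,ou=Groups," +
         "ou=($1),ou=Customers,dc=example,dc=com\";)";

    // The address, port, and credentials are placeholders.
    final LDAPConnection conn = new LDAPConnection("ds.example.com", 389,
         "cn=Directory Manager", "password");
    try
    {
      // Add the parameterized ACI to the entry above all of the customer branches.
      conn.modify("ou=Customers,dc=example,dc=com",
           new Modification(ModificationType.ADD, "aci", aci));
    }
    finally
    {
      conn.close();
    }
  }
}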

 

Recurring Tasks

The Directory Server supports a number of different types of administrative tasks, including:

  • Backing up one or more server backends
  • Restoring a backup
  • Exporting the contents of a backend to LDIF
  • Importing data from LDIF
  • Rebuilding the contents of one or more indexes
  • Forcing a log file rotation

Administrative tasks can be scheduled to start immediately or at a specified time in the future, and you can define dependencies between tasks so that one task won’t be eligible to start until another one completes.

In previous versions, when you scheduled an administrative task, it would only run once. If you wanted to run it again, you needed to schedule it again. In the 7.0 release, we have added support for recurring tasks, which allow you to define a schedule that causes them to be processed on a regular basis. We have some pretty flexible scheduling logic that allows you to specify when they get run, and it’s able to handle things like daylight saving time and months with different numbers of days.

Although you can schedule just about any kind of task as a recurring task, we have enhanced support for backup and LDIF export tasks, since they’re among the most common types of tasks that we expect administrators will want to run on a recurring basis. For example, we have built-in retention support so that you can keep only the most recent backups or LDIF exports (based on either the number of older copies to retain or the age of those copies) so that you don’t have to manually free up disk space.

 

Equality Composite Indexes

The server offers a number of types of indexes that can help you ensure that various types of search operations can be processed as quickly as possible. For example, an equality attribute index maps each of the values for a specified attribute type to a list of the entries that contain that attribute value.

In the 7.0 release, we have introduced a new type of index called a composite index. When you configure a composite index, you need to define at least a filter pattern that describes the kinds of searches that will be indexed, and you can also define a base DN pattern that restricts the index to a specified portion of the DIT.

At present, we only support equality composite indexes, which allow you to index values for a single attribute, much like an equality attribute index. However, there are two key benefits of an equality composite index over an equality attribute index:

  • As previously stated, you can combine the filter pattern with a base DN pattern. This is very useful in directories that have a lot of branches (for example, a multi-tenant deployment) where searches are often constrained to one of those branches. By combining a filter pattern with a base DN pattern, the server can maintain smaller ID sets that are more efficient to process and more tightly scoped to the search being issued.
  • The way in which the server maintains the ID sets in a composite index is much more efficient for keys that match a very large number of entries than the way it maintains the ID set for an attribute index. In an attribute index, you can optimize for either read performance or write performance of a very large ID set, but not both. A composite index is very efficient for both reads and writes of very large ID sets.

In the future, we intend to offer support for additional types of composite indexes that can improve the performance for other types of searches. For example, we’re already working on AND composite indexes that allow you to index combinations of attributes.

 

Delegated Administration

We have added a new delegated administration web application that integrates with the Ping Identity Directory Server and Ping Federate products to allow a selected set of administrators to manage users in the directory. For example, help desk employees might use it to unlock a user’s account or reset their password. Administrators can be restricted to managing only a defined subset of users (based on things like their location in the DIT, entry content, or group membership), and also restricted to a specified set of attributes.

 

Automatic Entry Purging

In the past, our server has had limited support for automatically deleting data after a specified length of time. The LDAP changelog and the replication database can be set to purge old data, and we also support automatically purging soft-deleted entries (entries that have been deleted as far as most clients are concerned, but are really just hidden so that they can be recovered if the need arises).

With the 7.0 release, we’re exposing a new “purge expired data” plugin that can be used to automatically delete entries that match a given set of criteria. At a minimum, these criteria involve looking at a specified attribute or JSON object field whose value represents some kind of timestamp, but they can also be further restricted to entries in a specified portion of the DIT or entries matching a given filter. And it’s got rate limiting built in so that the background purging won’t interfere with client processing.

For example, say that you’ve got an application that generates data that represents some kind of short-lived token. You can create an instance of the purge expired data plugin with a base DN and filter that matches those types of entries, and configure it to delete entries with a createTimestamp value that is more than a specified length of time in the past.
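
The plugin itself is configured in the server, but as a rough illustration of the work it automates, here’s a client-side sketch using the UnboundID LDAP SDK for Java that finds and deletes token entries created more than a day ago. The base DN, object class, and connection details are assumptions for the example; the plugin adds rate limiting and recurring scheduling on top of this basic idea.

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

import com.unboundid.ldap.sdk.Filter;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchResultEntry;
import com.unboundid.ldap.sdk.SearchScope;

public final class PurgeExpiredTokens
{
  public static void main(final String... args) throws Exception
  {
    // Compute a generalized-time cutoff 24 hours in the past.
    final SimpleDateFormat generalizedTime =
         new SimpleDateFormat("yyyyMMddHHmmss'Z'");
    generalizedTime.setTimeZone(TimeZone.getTimeZone("UTC"));
    final String cutoff = generalizedTime.format(
         new Date(System.currentTimeMillis() - 86_400_000L));

    // The base DN, object class, and connection details below are assumptions
    // chosen for this example.
    final Filter filter = Filter.createANDFilter(
         Filter.createEqualityFilter("objectClass", "exampleToken"),
         Filter.createLessOrEqualFilter("createTimestamp", cutoff));

    final LDAPConnection conn = new LDAPConnection("ds.example.com", 389,
         "cn=Directory Manager", "password");
    try
    {
      // Find the expired token entries (requesting no attributes) and delete them.
      final SearchResult result = conn.search(new SearchRequest(
           "ou=Tokens,dc=example,dc=com", SearchScope.SUB, filter, "1.1"));
      for (final SearchResultEntry entry : result.getSearchEntries())
      {
        conn.delete(entry.getDN());
      }
    }
    finally
    {
      conn.close();
    }
  }
}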

 

Better Control over Unindexed Searches

Despite the variety of indexes defined in the server, there may be cases in which a client issues a search request that the server cannot use indexes to process efficiently. There are a variety of reasons that this may happen, including because there isn’t any applicable index defined in the server, because there are so many entries that match the search criteria that the server has stopped maintaining the applicable index, or because the search targets a virtual attribute that doesn’t support efficient searching.

An unindexed search can be very expensive to process because the server needs to iterate across each entry in the scope of the search to determine whether it matches the search criteria. Processing an unindexed search can tie up a worker thread for a significant length of time, so it’s important to ensure that the server only actually processes the unindexed searches that are legitimately authorized. We already required clients to have the unindexed-search privilege, limited the number of unindexed searches that can be active at any given time, and provided an option to disable unindexed searches on a per-client-connection-policy basis.

In the 7.0 release, we’ve added additional features for limiting unindexed searches. They include:

  • We’ve added support for a new “reject unindexed searches” request control that can be included in a search request to indicate that the server should reject the request if it happens to be unindexed, even if it would otherwise have been permitted. This is useful for a client that has the unindexed-search privilege but wants a measure of protection against inadvertently requesting an unindexed search. (A brief sketch of using this control appears after this list.)
  • We’ve added support for a new “permit unindexed searches” request control, which can be used in conjunction with a new “unindexed-search-with-control” privilege. If a client has this privilege, then only unindexed search requests that include the permit unindexed searches request control will be allowed.
  • We’ve updated the client connection policy configuration to make it possible to only allow unindexed searches that include the permit unindexed searches request control, even if the requester has the unindexed-search privilege.
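
As a brief sketch of what the reject unindexed searches control looks like from a client’s perspective, here’s an example using the UnboundID LDAP SDK for Java. The control class name below is from the SDK’s unboundidds package and should be verified against the SDK version you’re using; the base DN and filter are placeholders.

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchScope;
import com.unboundid.ldap.sdk.unboundidds.controls.RejectUnindexedSearchRequestControl;

public final class GuardedSearchExample
{
  // Performs the given search, but asks the server to reject the request if it
  // turns out to be unindexed, even though the requester has the
  // unindexed-search privilege.  The control class name should be verified
  // against your SDK version.
  static SearchResult guardedSearch(final LDAPConnection conn,
                                    final String baseDN, final String filter)
         throws LDAPException
  {
    final SearchRequest request =
         new SearchRequest(baseDN, SearchScope.SUB, filter);
    request.setControls(new RejectUnindexedSearchRequestControl());
    return conn.search(request);
  }
}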

 

GSSAPI Improvements

The GSSAPI SASL mechanism can be used to authenticate to the Directory Server using Kerberos V. We’ve always supported this mechanism, but the 7.0 server adds a couple of improvements to that support.

First, it’s now possible for the client to request an authorization identity that is different from the authentication identity. In the past, it was only possible to use GSSAPI if the authentication identity string exactly matched the authorization identity. Now, the server will permit the authorization identity to be different from the authentication identity (although the user specified as the authentication identity must have the proxied-auth privilege if they want to be able to use a different authorization identity).
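
For example, a GSSAPI bind that requests an alternate authorization identity with the UnboundID LDAP SDK for Java might look something like the sketch below. The principal, password, and DNs are placeholders, the authenticating account needs the proxied-auth privilege, and depending on your environment you may also need to supply realm and KDC details in the bind request properties.

import com.unboundid.ldap.sdk.BindResult;
import com.unboundid.ldap.sdk.GSSAPIBindRequest;
import com.unboundid.ldap.sdk.GSSAPIBindRequestProperties;
import com.unboundid.ldap.sdk.LDAPConnection;

public final class GSSAPIWithAlternateAuthzID
{
  public static void main(final String... args) throws Exception
  {
    // The principal, password, and DNs are placeholders.  The authenticating
    // account must have the proxied-auth privilege to use a different
    // authorization identity.
    final GSSAPIBindRequestProperties properties =
         new GSSAPIBindRequestProperties("helpdesk@EXAMPLE.COM",
              "kerberosPassword");
    properties.setAuthorizationID(
         "dn:uid=amanda.adams,ou=People,ou=Acme,ou=Customers,dc=example,dc=com");

    final LDAPConnection conn = new LDAPConnection("ds.example.com", 389);
    try
    {
      final BindResult result = conn.bind(new GSSAPIBindRequest(properties));
      System.out.println("Bind result: " + result.getResultCode());
    }
    finally
    {
      conn.close();
    }
  }
}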

We’ve also improved support for using GSSAPI through a hardware load balancer, particularly in cases where the server uses a different FQDN than was used in the client request. This generally wasn’t an issue when a Ping Identity Directory Proxy Server was used to perform the load balancing, but it could have been a problem in some cases with hardware load balancers or other cases in which the client might connect to the server with a different name than the server thinks it’s using.

 

Tool Invocation Logging

We’ve updated our tool frameworks to add support for tool invocation logging, which can be used to record the arguments and result for any command-line tools provided with the server. By default, this feature is only enabled for tools that are likely to change the state of the server or the data contained in the server, and by default, all of those tools will use the same log file. However, you can configure which (if any) tools should be logged, and which files should be used.

Invocation logging includes two types of log messages:

  • A launch log message, which is recorded whenever the tool is first run but before it performs its actual processing. The launch log message includes the name of the tool, any arguments provided on the command line, any arguments automatically supplied from a properties file, the time the tool was run, and the username for the operating system account that ran the tool. The values of any sensitive arguments (for example, those that might be used to supply passwords) will be redacted so that sensitive information is not recorded in the log.
  • A completion log message, which is recorded whenever the tool completes its processing, regardless of whether it completed successfully or exited with an error. This will at least include the tool’s numeric exit code, but in some cases, it might also include an exit message with additional information about the processing performed by the tool. Note that there may be some circumstances in which the completion log message may not be recorded (for example, if the tool is forcefully terminated with something like a “kill -9”).