Ping Identity Directory Server 7.0.0.0

We have just released the Ping Identity Directory Server version 7.0.0.0, along with supporting products including the Directory Proxy Server, Data Synchronization Server, and Data Metrics Server. They’re available to download at https://www.pingidentity.com/en/resources/downloads/pingdirectory-downloads.html.

Full release notes are available at https://documentation.pingidentity.com/pingdirectory/7.0/relnotes/, and there are a lot of enhancements, fixes, and performance improvements, but some of the most significant new features are described below.


Improved Encryption for Data at Rest

We have always supported TLS to protect data in transit, and we carefully select from the set of available cipher suites to ensure that we only use strong encryption, preferring forward secrecy when it’s available. We also already offered protection for data at rest in the form of whole-entry encryption, encrypted backups and LDIF exports, and encrypted changelog and replication databases. In the 7.0 release, we’re improving upon this encryption for data at rest with several enhancements, including:

  • Previously, if you wanted to enable data encryption, you had to first set up the server without encryption, create an encryption settings definition, copy that definition to all servers in the topology, and export the data to LDIF and re-import it to ensure that any existing data got encrypted. With the 7.0 release, you can easily enable data encryption during the setup process, and you can provide a passphrase from which the encryption key is generated. If you supply the same passphrase when installing all of the instances, then they’ll all use the same encryption key.
  • Previously, if you enabled data encryption, the server would encrypt entries, but indexes and certain other database metadata (for example, information needed to store data compactly) remained unencrypted. In the 7.0 release, if you enable data encryption, we now encrypt index keys and that other metadata so that no potentially sensitive data is stored in the clear.
  • It was already possible to encrypt backups and LDIF exports, but you had to explicitly indicate that they should be encrypted, and the encryption was performed using a key that was shared among servers in the topology but wasn’t available outside of it. In the 7.0 release, backups and LDIF exports can be encrypted automatically, and that option is enabled by default if you configure encryption during setup. You also have more control over the encryption key, so encrypted backups and LDIF exports can be used outside of the topology.
  • We now support encrypted logging. Log-related tools like search-logs, sanitize-log, and summarize-access-log have been updated to support working with encrypted logs, and the UnboundID LDAP SDK for Java has been updated to support programmatically reading and parsing encrypted log files.
  • Several other tools that support reading from and writing to files have also been updated so that they can handle encrypted files. For example, tools that support reading from or writing to LDIF files (ldapsearch, ldapmodify, ldifsearch, ldifmodify, ldif-diff, transform-ldif, validate-ldif) now support encrypted LDIF (see the sketch after this list).
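
If you need to consume one of these encrypted files programmatically, the LDAP SDK offers passphrase-based stream wrappers. Here’s a minimal sketch of reading an encrypted LDIF export, assuming the PassphraseEncryptedInputStream class that accompanies this release (the file name and passphrase are hypothetical):

import java.io.FileInputStream;

import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldif.LDIFReader;
import com.unboundid.util.PassphraseEncryptedInputStream;

public class ReadEncryptedLDIF
{
  public static void main(final String[] args) throws Exception
  {
    // Wrap the encrypted export in a decrypting stream, then hand it to the
    // standard LDIFReader. The passphrase must match the one used when the
    // export was encrypted.
    try (LDIFReader reader = new LDIFReader(new PassphraseEncryptedInputStream(
         "export-passphrase", new FileInputStream("export.ldif"))))
    {
      Entry entry;
      while ((entry = reader.readEntry()) != null)
      {
        System.out.println(entry.getDN());
      }
    }
  }
}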


Parameterized ACIs

Our server offers a rich access control mechanism that gives you fine-grained control over who has access to what data. You can define access control rules in the configuration, but it’s also possible to store rules in the data, which ensures that they are close to the data they govern and are replicated across all servers in the topology.

In many cases, it’s possible to define a small number of access control rules at the top of the DIT that govern access to all data. But there are other types of deployments (especially multi-tenant directories) where the data is highly branched, and users in one branch should have a certain amount of access to data in their own branch but no access to data in other branches. In the past, the only way to accomplish this was to define access control rules in each of the branches. This was fine from a performance and scalability perspective, but it was a management hassle, especially when creating new branches or if it became necessary to alter the rules for all of those branches.

In the 7.0 release, parameterized ACIs address many of these concerns. Parameterized ACIs make it possible to define a pattern that is automatically interpreted across a set of entries that match the parameterized content.

For example, say your directory has an “ou=Customers,dc=example,dc=com” entry, and each customer organization has its own branch below that entry. Each of those branches might have a common structure (for example, users might be below an “ou=People” subordinate entry, and groups might be below “ou=Groups”). The structure for an Acme organization might look something like:

  • dc=example,dc=com
    • ou=Customers
      • ou=Acme
        • ou=People
          • uid=amanda.adams
          • uid=bradley.baker
          • uid=carol.collins
          • uid=darren.dennings
        • ou=Groups
          • cn=Administrators
          • cn=Password Managers

If you want to create a parameterized ACI so that members of the “cn=Password Managers,ou=Groups,ou={customerName},ou=Customers,dc=example,dc=com” group have write access to the userPassword attribute in entries below “ou=People,ou={customerName},ou=Customers,dc=example,dc=com”, you might create a parameterized ACI that looks something like the following:

(target="ldap:///ou=People,ou=($1),ou=Customers,dc=example,dc=com")(targetattr="userPassword")(version 3.0; acl "Password Managers can manage passwords"; allow (write) groupdn="ldap:///cn=Password Managers,ou=Groups,ou=($1),ou=Customers,dc=example,dc=com";)


Recurring Tasks

The Directory Server supports a number of different types of administrative tasks, including:

  • Backing up one or more server backends
  • Restoring a backup
  • Exporting the contents of a backend to LDIF
  • Importing data from LDIF
  • Rebuilding the contents of one or more indexes
  • Forcing a log file rotation

Administrative tasks can be scheduled to start immediately or at a specified time in the future, and you can define dependencies between tasks so that one task won’t be eligible to start until another one completes.

In previous versions, when you scheduled an administrative task, it would only run once. If you wanted to run it again, you needed to schedule it again. In the 7.0 release, we have added support for recurring tasks, which allow you to define a schedule that causes them to be processed on a regular basis. We have some pretty flexible scheduling logic that allows you to specify when they get run, and it’s able to handle things like daylight saving time and months with different numbers of days.

Although you can schedule just about any kind of task as a recurring task, we have enhanced support for backup and LDIF export tasks, since they’re among the most common types of tasks that we expect administrators will want to run on a recurring basis. For example, we have built-in retention support so that you can keep only the most recent backups or LDIF exports (based on either the number of older copies to retain or the age of those copies) so that you don’t have to manually free up disk space.
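
Administrative tasks can also be scheduled programmatically. The following is a minimal sketch of scheduling a one-time backup task with the LDAP SDK’s task API (the host, credentials, backup directory, and backend ID are hypothetical); recurring tasks build on this same framework, but their schedules are defined on the server side:

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.unboundidds.tasks.BackupTask;
import com.unboundid.ldap.sdk.unboundidds.tasks.TaskManager;

public class ScheduleBackupTask
{
  public static void main(final String[] args) throws Exception
  {
    // Authenticate as an account that holds the backend-backup privilege.
    final LDAPConnection connection = new LDAPConnection(
         "ds.example.com", 389, "uid=admin,dc=example,dc=com", "password");
    try
    {
      // Schedule a one-time backup of the userRoot backend that will start
      // as soon as the task scheduler picks it up.
      final BackupTask task = new BackupTask(
           "backup-userroot", "/path/to/bak", "userRoot");
      TaskManager.scheduleTask(task, connection);
    }
    finally
    {
      connection.close();
    }
  }
}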


Equality Composite Indexes

The server offers a number of types of indexes that can help you ensure that various types of search operations can be processed as quickly as possible. For example, an equality attribute index maps each of the values for a specified attribute type to a list of the entries that contain that attribute value.

In the 7.0 release, we have introduced a new type of index called a composite index. When you configure a composite index, you need to define at least a filter pattern that describes the kinds of searches that will be indexed, and you can also define a base DN pattern that restricts the index to a specified portion of the DIT.

At present, we only support equality composite indexes, which allow you to index values for a single attribute, much like an equality attribute index. However, there are two key benefits of an equality composite index over an equality attribute index:

  • As previously stated, you can combine the filter pattern with a base DN pattern. This is very useful in directories that have a lot of branches (for example, a multi-tenant deployment) where searches are often constrained to one of those branches. By combining a filter pattern with a base DN pattern, the server can maintain smaller ID sets that are more efficient to process and more tightly scoped to the search being issued (see the example search after this list).
  • The way in which the server maintains the ID sets in a composite index is much more efficient for keys that match a very large number of entries than the way it maintains the ID set for an attribute index. In an attribute index, you can optimize for either read performance or write performance of a very large ID set, but not both. A composite index is very efficient for both reads and writes of very large ID sets.
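
To make the scoping concrete, the sketch below shows the kind of tenant-constrained search that a composite index with both a filter pattern and a base DN pattern is designed to serve (the attribute name and tenant branch are hypothetical):

import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchScope;

public class TenantScopedSearch
{
  public static void main(final String[] args) throws Exception
  {
    // This search never leaves the Acme branch, so a composite index whose
    // base DN pattern is restricted to the per-tenant ou=People branches can
    // maintain much smaller ID sets than a DIT-wide attribute index.
    final SearchRequest request = new SearchRequest(
         "ou=People,ou=Acme,ou=Customers,dc=example,dc=com",
         SearchScope.SUB, "(departmentNumber=42)");
    // Send with an authenticated LDAPConnection via connection.search(request).
  }
}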

In the future, we intend to offer support for additional types of composite indexes that can improve the performance for other types of searches. For example, we’re already working on AND composite indexes that allow you to index combinations of attributes.


Delegated Administration

We have added a new delegated administration web application that integrates with the Ping Identity Directory Server and PingFederate products to allow a selected set of administrators to manage users in the directory. For example, help desk employees might use it to unlock a user’s account or reset their password. Administrators can be restricted to managing only a defined subset of users (based on things like their location in the DIT, entry content, or group membership), and also restricted to a specified set of attributes.


Automatic Entry Purging

In the past, our server has had limited support for automatically deleting data after a specified length of time. The LDAP changelog and the replication database can be set to purge old data, and we also support automatically purging soft-deleted entries (entries that have been deleted as far as most clients are concerned, but are really just hidden so that they can be recovered if the need arises).

With the 7.0 release, we’re exposing a new “purge expired data” plugin that can be used to automatically delete entries that match a given set of criteria. At a minimum, the criteria involve looking at a specified attribute or JSON object field whose value represents some kind of timestamp, but they can be further restricted to entries in a specified portion of the DIT or entries matching a given filter. The plugin also has rate limiting built in so that the background purging won’t interfere with client processing.

For example, say that you’ve got an application that generates data that represents some kind of short-lived token. You can create an instance of the purge expired data plugin with a base DN and filter that matches those types of entries, and configure it to delete entries with a createTimestamp value that is more than a specified length of time in the past.
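
To illustrate, here’s a sketch that constructs the kind of matching criteria described above as an LDAP filter (the token object class is hypothetical; the plugin itself is configured in the server rather than through this API):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

import com.unboundid.ldap.sdk.Filter;

public class ExpiredTokenCriteria
{
  public static void main(final String[] args)
  {
    // Compute a generalized-time cutoff for one hour ago.
    final SimpleDateFormat format = new SimpleDateFormat("yyyyMMddHHmmss'Z'");
    format.setTimeZone(TimeZone.getTimeZone("UTC"));
    final String cutoff = format.format(
         new Date(System.currentTimeMillis() - 3_600_000L));

    // Match short-lived token entries created before the cutoff. This mirrors
    // the combination of filter and timestamp attribute that a purge expired
    // data plugin instance would be configured with.
    final Filter filter = Filter.createANDFilter(
         Filter.createEqualityFilter("objectClass", "exampleToken"),
         Filter.createLessOrEqualFilter("createTimestamp", cutoff));
    System.out.println(filter);
  }
}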


Better Control over Unindexed Searches

Despite the variety of indexes defined in the server, there may be cases in which a client issues a search request that the server cannot use indexes to process efficiently. This may happen for a variety of reasons: there may not be any applicable index defined in the server, there may be so many entries matching the search criteria that the server has stopped maintaining the applicable index, or the search may target a virtual attribute that doesn’t support efficient searching.

An unindexed search can be very expensive to process because the server needs to iterate across each entry in the scope of the search to determine whether it matches the search criteria. Processing an unindexed search can tie up a worker thread for a significant length of time, so it’s important to ensure that the server only actually processes the unindexed searches that are legitimately authorized. We already require clients to have the unindexed-search privilege, limit the number of unindexed searches that can be active at any given time, and provide an option to disable unindexed searches on a per-client-connection-policy basis.

In the 7.0 release, we’ve added additional features for limiting unindexed searches. They include:

  • We’ve added support for a new “reject unindexed searches” request control that can be included in a search request to indicate that the server should reject the request if it happens to be unindexed, even if it would have otherwise been permitted. This is useful for a client that has the unindexed-search privilege but wants a measure of protection against inadvertently requesting an unindexed search (see the sketch after this list).
  • We’ve added support for a new “permit unindexed searches” request control, which can be used in conjunction with a new “unindexed-search-with-control” privilege. If a client has this privilege, then only unindexed search requests that include the permit unindexed searches control will be allowed.
  • We’ve updated the client connection policy configuration to make it possible to only allow unindexed searches that include the permit unindexed searches request control, even if the requester has the unindexed-search privilege.
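
Here’s a sketch of the first of these controls in action; the control class name follows the LDAP SDK’s naming conventions and should be treated as an assumption, as are the base DN and filter:

import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchScope;
import com.unboundid.ldap.sdk.unboundidds.controls.RejectUnindexedSearchRequestControl;

public class RejectUnindexedSearch
{
  public static void main(final String[] args) throws Exception
  {
    // Attach the reject unindexed searches control so that the server fails
    // this request outright if it cannot be processed with indexes, even if
    // the requester holds the unindexed-search privilege.
    final SearchRequest request = new SearchRequest(
         "dc=example,dc=com", SearchScope.SUB, "(description=*temp*)");
    request.addControl(new RejectUnindexedSearchRequestControl());
    // Send with an authenticated LDAPConnection via connection.search(request).
  }
}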


GSSAPI Improvements

The GSSAPI SASL mechanism can be used to authenticate to the Directory Server using Kerberos V. We’ve always supported this mechanism, but the 7.0 server adds a couple of improvements to that support.

First, it’s now possible for the client to request an authorization identity that is different from the authentication identity. In the past, it was only possible to use GSSAPI if the authentication identity string exactly matched the authorization identity. Now, the server will permit the authorization identity to be different from the authentication identity (although the user specified as the authentication identity must have the proxied-auth privilege to use a different authorization identity).
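
With the LDAP SDK, requesting an alternate authorization identity looks something like the following minimal sketch (the principal, password, and user names are hypothetical):

import com.unboundid.ldap.sdk.GSSAPIBindRequest;
import com.unboundid.ldap.sdk.GSSAPIBindRequestProperties;

public class GSSAPIAlternateAuthzID
{
  public static void main(final String[] args) throws Exception
  {
    // Authenticate as one Kerberos principal while requesting a different
    // authorization identity. The authenticating user must hold the
    // proxied-auth privilege for the server to accept this.
    final GSSAPIBindRequestProperties properties =
         new GSSAPIBindRequestProperties("help.desk@EXAMPLE.COM", "password");
    properties.setAuthorizationID("u:amanda.adams");
    final GSSAPIBindRequest bindRequest = new GSSAPIBindRequest(properties);
    // Use with an established LDAPConnection via connection.bind(bindRequest).
  }
}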

We’ve also improved support for using GSSAPI through a hardware load balancer, particularly in cases where the server uses a different FQDN than was used in the client request. This generally wasn’t an issue when a Ping Identity Directory Proxy Server was used to perform the load balancing, but it could have been a problem with hardware load balancers, or in other situations in which the client connects to the server using a different name than the server thinks it’s using.


Tool Invocation Logging

We’ve updated our tool frameworks to add support for tool invocation logging, which can be used to record the arguments and result for any command-line tools provided with the server. By default, this feature is only enabled for tools that are likely to change the state of the server or the data contained in the server, and by default, all of those tools will use the same log file. However, you can configure which (if any) tools should be logged, and which files should be used.

Invocation logging includes two types of log messages:

  • A launch log message, which is recorded whenever the tool is first run but before it performs its actual processing. The launch log message includes the name of the tool, any arguments provided on the command line, any arguments automatically supplied from a properties file, the time the tool was run, and the username for the operating system account that ran the tool. The values of any sensitive arguments (for example, those that might be used to supply passwords) will be redacted so that information will not be recorded in the log.
  • A completion log message, which is recorded whenever the tool completes its processing, regardless of whether it completed successfully or exited with an error. This will at least include the tool’s numeric exit code, but in some cases, it might also include an exit message with additional information about the processing performed by the tool. Note that there may be some circumstances in which the completion log message may not be recorded (for example, if the tool is forcefully terminated with something like a “kill -9”).

CVE-2018-1000134 and the UnboundID LDAP SDK for Java

On Friday, March 16, 2018, CVE-2018-1000134 was published, describing a vulnerability in the UnboundID LDAP SDK for Java. The vulnerability has been fixed in LDAP SDK version 4.0.5, which is available for immediate download from the LDAP.com website, from the releases page of our GitHub repository, from the Files page of our SourceForge project, and from the Maven Central Repository.

This post will explain the issue in detail (see the release notes for information about other changes in LDAP SDK version 4.0.5). However, to quickly determine whether your application is vulnerable, you should check to see if all of the following conditions are true:

  • You are using the LDAP SDK in synchronous mode. Although this mode is recommended for applications that do not require asynchronous functionality, the LDAP SDK does not use this mode by default.
  • You use the LDAP SDK to perform simple bind operations for the purpose of authenticating users to a directory server. This is a very common use case for LDAP-enabled applications.
  • Your application does not attempt to verify whether the user actually provided a password. This is unfortunately all too common for LDAP-enabled applications.
  • The simple bind requests are sent to a directory server that does not follow the RFC 4513 section 5.1.2 recommendation to reject simple bind requests with a non-empty DN and an empty password. Although this recommendation is part of the revised LDAPv3 specification published in 2006, there are apparently some directory servers that still do not follow this recommendation by default.

If your application meets all of these criteria, then you should take action immediately to protect yourself. The simplest way to fix the vulnerability in your application is to update it to use the 4.0.5 release of the LDAP SDK. However, you should also ensure that your applications properly validate all user input, and it may also be a good idea to consider switching to a more modern directory server.

The Vulnerability in LDAPv3

The original LDAPv3 protocol specification was published as RFC 2251 in December 1997. LDAPv3 is a very impressive protocol in most regards, but perhaps the most glaring problem in the specification lies in the following paragraph in section 4.2.2:

If no authentication is to be performed, then the simple authentication option MUST be chosen, and the password be of zero length. (This is often done by LDAPv2 clients.) Typically the DN is also of zero length.

It’s that word “typically” in this last sentence that has been the source of a great many vulnerabilities in LDAP-enabled applications. Usually, when you want to perform an anonymous simple bind, you provide an empty string for both the DN and the password. However, according to the letter of the specification above, you don’t have to provide an empty DN. As long as the password is empty, the server will treat it as an anonymous simple bind.

In applications that use an LDAP simple bind to authenticate users, it’s a very common practice to provide two fields on the login form: one for the username (or email address or phone number or some other kind of identifier), and one for the password. The application first performs a search to see if it can map that username to exactly one user in the directory, and if so, then it performs a simple bind with the DN of that user’s entry and the provided password. As long as the server returns a “success” response to the bind request, the application considers the user authenticated and will grant them whatever access that user is supposed to have.

However, a problem can arise if the application just blindly takes whatever password was provided in the login form and plugs it into the simple bind request without actually checking to see whether the user provided any password at all. In such cases, if the user provided a valid username but an empty password, then the application will perform a simple bind request with a valid DN but no password. The directory server will interpret that as an anonymous simple bind and will return a success result, and the application will assume that the user is authenticated even though they didn’t actually provide any password at all.
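
The defense is straightforward: never send a simple bind with an empty password. A minimal sketch of that validation using the LDAP SDK (the method and variable names are illustrative):

import com.unboundid.ldap.sdk.BindResult;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.ResultCode;
import com.unboundid.ldap.sdk.SimpleBindRequest;

public class SafeSimpleBind
{
  // Returns true only if the user actually authenticated. Rejecting an empty
  // password up front guarantees that the request can never be interpreted
  // by the server as an anonymous simple bind.
  public static boolean authenticate(final LDAPConnection connection,
                                     final String userDN,
                                     final String password)
  {
    if ((password == null) || password.isEmpty())
    {
      return false;
    }

    try
    {
      final BindResult result =
           connection.bind(new SimpleBindRequest(userDN, password));
      return (result.getResultCode() == ResultCode.SUCCESS);
    }
    catch (final LDAPException e)
    {
      return false;
    }
  }
}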

This is such a big problem in LDAP-enabled applications that it was specifically addressed in the updated LDAPv3 specification published in June 2006. RFC 4513 section 5.1.2 states the following:

Unauthenticated Bind operations can have significant security issues (see Section 6.3.1). In particular, users intending to perform Name/Password authentication may inadvertently provide an empty password and thus cause poorly implemented clients to request Unauthenticated access. Clients SHOULD be implemented to require user selection of the Unauthenticated Authentication Mechanism by means other than user input of an empty password. Clients SHOULD disallow an empty password input to a Name/Password Authentication user interface. Additionally, Servers SHOULD by default fail Unauthenticated Bind requests with a resultCode of unwillingToPerform.

Further, section 6.3.1 of the same RFC states:

Operational experience shows that clients can (and frequently do) misuse the unauthenticated access mechanism of the simple Bind method (see Section 5.1.2). For example, a client program might make a decision to grant access to non-directory information on the basis of successfully completing a Bind operation. LDAP server implementations may return a success response to an unauthenticated Bind request. This may erroneously leave the client with the impression that the server has successfully authenticated the identity represented by the distinguished name when in reality, an anonymous authorization state has been established. Clients that use the results from a simple Bind operation to make authorization decisions should actively detect unauthenticated Bind requests (by verifying that the supplied password is not empty) and react appropriately.

In directory servers that follow the recommendation from RFC 4513 section 5.1.2, clients can perform an anonymous simple bind by providing an empty DN and an empty password, but an attempt to bind with a non-empty DN and an empty password will be rejected. This very good recommendation was made over ten years ago, and the code change needed to implement it is probably very simple. However, for some reason, there are directory server implementations out there that haven’t been updated to follow this recommendation, and therefore leave client applications open to this inadvertent vulnerability.

The Vulnerability in the UnboundID LDAP SDK for Java

Ever since its initial release, the UnboundID LDAP SDK for Java has attempted to protect against simple bind requests that include a non-empty DN with an empty password. The LDAPConnectionOptions class provides a setBindWithDNRequiresPassword(boolean) method that you can use to indicate whether the LDAP SDK will reject a simple bind request that has a non-empty DN with an empty password. If you don’t explicitly use this option, then the LDAP SDK will assume a default value of true. If you try to send a simple bind request that includes a non-empty DN and an empty password, then the LDAP SDK won’t actually send any request to the server but will instead throw an LDAPException with a result code of ResultCode.PARAM_ERROR and a message of “Simple bind operations are not allowed to contain a bind DN without a password.”

Or at least, that’s the intended behavior. And that is the behavior that you’ll get if you send the bind request in the asynchronous mode that the LDAP SDK uses by default. However, Stanis Shkel created GitHub issue #40 (“processSync in SimpleBindRequest allows empty password with set bindDN”), which points out that this check was skipped for connections operating in synchronous mode.

LDAP is an asynchronous protocol. With a few exceptions, it’s possible to have multiple operations in progress simultaneously over the same LDAP connection. To support that asynchronous capability, the LDAP SDK maintains an extra background thread that constantly reads data from a connection and makes sure that any data sent from the server gets delivered to whichever thread is waiting for it. This is just fine most of the time, but it does come at the cost of increased resource consumption, and a small performance hit from handing off data from one thread to another. To minimize this impact for applications that don’t take advantage of the asynchronous capabilities that LDAP provides, we added a synchronous mode to the LDAP SDK way back in version 0.9.10 (released in July of 2009). In this mode, the same thread that sends a request to the server is the one that waits for and reads the response. This can provide better performance and lower resource consumption, but you have to explicitly enable it using the LDAPConnectionOptions.setUseSynchronousMode(boolean) method before establishing a connection.
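
For reference, here’s a minimal sketch of enabling synchronous mode, along with the bind-with-DN-requires-password option discussed above (the host and port are hypothetical):

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPConnectionOptions;

public class SynchronousModeExample
{
  public static void main(final String[] args) throws Exception
  {
    // Synchronous mode must be enabled before the connection is established.
    final LDAPConnectionOptions options = new LDAPConnectionOptions();
    options.setUseSynchronousMode(true);

    // This is already the default, but setting it explicitly documents the
    // intent: simple bind requests with a non-empty DN and an empty password
    // will be rejected by the LDAP SDK itself.
    options.setBindWithDNRequiresPassword(true);

    final LDAPConnection connection =
         new LDAPConnection(options, "ds.example.com", 389);
    try
    {
      // ... perform operations ...
    }
    finally
    {
      connection.close();
    }
  }
}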

In the course of implementing support for the synchronous mode for a simple bind request, we incorrectly put the check for synchronous mode before the check for an empty password. For a connection operating in synchronous mode, we branched off to another part of the code and skipped the check for an empty password. The fix for the problem was simple: move the check for an empty password above the check for synchronous mode. The fix was committed about three and a half hours after the issue was reported, along with a unit test to ensure that a simple bind request with a non-empty DN and an empty password is properly rejected when operating in synchronous mode (there was already a test to ensure the correct behavior in the default asynchronous mode).

Conditions Necessary for the Vulnerability

Although there was unquestionably a bug in the LDAP SDK that created the possibility for this vulnerability, there are a number of factors that could have prevented an application from being susceptible to it. Only an application that meets all of the following conditions would have been vulnerable:

  • The application must have explicitly enabled the use of synchronous mode when creating an LDAP connection or connection pool. If the application was using the default asynchronous mode, it would not have been vulnerable.
  • The application must have created simple bind requests from untrusted and unverified user input. If the application did not create simple bind requests (for example, because it did not perform binds at all, or because it used SASL authentication instead of simple), then it would not have been vulnerable. Alternately, if the application validated the user input to ensure that it would not attempt to bind with an empty password, then it would not have been vulnerable.
  • The application must have sent the simple bind request to a server that does not follow the RFC 4513 recommendations. If the server is configured to reject simple bind requests that contain a non-empty DN with an empty password, then an application communicating with that server would not have been vulnerable.

While we strongly recommend updating to LDAP SDK version 4.0.5, which no longer has the bug described in CVE-2018-1000134, we also strongly recommend ensuring that applications properly validate all user input as additional mitigation against problems like this. And if you’re using a directory server that hasn’t been updated to apply a very simple update to avoid a problem that has been well known and clearly documented for well over a decade, then perhaps you should consider updating to a directory server that takes security and standards compliance more seriously.

UnboundID LDAP SDK for Java 4.0.2

Happy 20th birthday, LDAPv3! The core LDAPv3 specifications, RFCs 2251 through 2256, were released on December 4, 1997. To celebrate, we’re releasing the UnboundID LDAP SDK for Java version 4.0.2. It is available now for download from the LDAP.com website, from our GitHub repository, from the SourceForge project, or from the Maven Central Repository.

The most significant changes included in this release are:

  • Added a new manage-certificates tool that can be used to interact with JKS and PKCS #12 keystores, generate certificates and certificate signing requests, sign certificates, and perform a number of other certificate-related operations. It’s like keytool, but it offers additional functionality, and it’s a lot more user-friendly. The LDAP SDK also provides classes for generating and parsing certificates and certificate signing requests programmatically.
  • Added a new variant of the Entry.diff method that can be used to perform a byte-for-byte comparison of attribute values instead of using the associated attribute syntax. This can help identify changes that result in logically equivalent values, like changing the value of a case-insensitive attribute in a way that only affects capitalization (see the sketch after this list).
  • Added a new PasswordReader.readPasswordChars method that can be used to read a password into a character array. Previously, it was only possible to read a password as a byte array.
  • Added a new LDAPConnection.closeWithoutUnbind method that can be used to close a connection without first sending an LDAP unbind request. While this isn’t usually recommended, it can be useful in cases where the connection is known to be invalid, and especially if there is the potential for sending the unbind request to cause the connection to block.
  • Improved support for validating object identifiers (OIDs). The LDAP SDK now offers a strict validation mode that requires the OID to be comprised of at least two components, that requires the first component to be between zero and two, and that requires the second component to be between zero and thirty-nine if the first component is zero or one. There is also a new OIDArgumentValueValidator class that can be used when requesting command-line arguments whose values are expected to be numeric OIDs.
  • Fixed a bug that could cause the LDAP SDK to leak a connection if it was configured with an SSLSocketVerifier and that verifier rejected the connection for some reason.
  • Fixed a bug that could cause the LDAP SDK to block for twice as long as it should in the event that a failure occurred while trying to send a simple bind request on a connection operating in synchronous mode and the attempt to send the request blocks.
  • Added support for new ASN.1 element types, including bit string, object identifier, generalized time, UTC time, UTF-8 string, IA5 string, printable string, and numeric string. Also added support for a new integer type that is backed by a BigInteger and can support values of any magnitude.
  • Added convenience methods that make it easier to determine the type class and primitive/constructed state of an ASN.1 element.
  • Added support for a new uniqueness request control that can be included in add, modify, and modify DN requests sent to the Ping Identity Directory Server. This control requests that the server identify attribute value conflicts that might arise as a result of the changes performed by the associated operation. The ldapmodify tool has also been updated to support this control.
  • Updated the searchrate tool to make it possible to set the search size limit, time limit, dereference policy, and typesOnly flag.
  • Updated the in-memory directory server to support the UnboundID/Ping-proprietary ignore NO-USER-MODIFICATION request control.
  • Updated the UnboundID/Ping-proprietary password policy state extended operation to make it possible to determine whether the target user has a static password.
  • Updated the argument parser to make it possible to hide subcommand names and argument identifiers so that they can be used but will not appear in generated usage information.
  • Improved the quality of LDAP request debug messages.
  • Updated the set of LDAP-related specifications to include updated versions of existing specifications, and to add a number of certificate-related specifications.
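
As an example of one of these changes, here’s a minimal sketch of the byte-for-byte Entry.diff variant mentioned above (the argument order follows the new overload and should be treated as an assumption; the entry content is hypothetical):

import java.util.List;

import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldap.sdk.Modification;

public class ByteForByteDiff
{
  public static void main(final String[] args) throws Exception
  {
    final Entry before = new Entry(
         "dn: uid=amanda.adams,ou=People,dc=example,dc=com",
         "objectClass: inetOrgPerson",
         "uid: amanda.adams",
         "cn: amanda adams",
         "sn: adams");
    final Entry after = before.duplicate();
    after.setAttribute("cn", "Amanda Adams");

    // With byte-for-byte comparison enabled (the final boolean argument), the
    // capitalization-only change to the case-insensitive cn attribute is
    // reported as a modification rather than being treated as equivalent.
    final List<Modification> mods =
         Entry.diff(before, after, false, true, true);
    System.out.println(mods);
  }
}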

A Baffling Android Security Update Policy

Google is maddeningly inconsistent when it comes to security. They were early adopters of two-factor authentication for their online services, and they offer multiple ways to obtain that second factor (including at least TOTP, SMS, voice call, and U2F). Yet you can’t always require two-factor authentication when logging into a Chromebook, and there doesn’t seem to be any two-factor authentication scheme for logging into mobile devices. I’d love the option to require both a passphrase (or at least a reasonably long PIN) and a fingerprint, but enabling the fingerprint reader for any purpose (even if you just want to use a fingerprint as a second factor in an app) automatically makes your phone unlockable with just a fingerprint. That’s insane.

Their lack of decent VPN support for Chromebooks also boggles the mind. Unless you’re willing to jump through some very ugly hoops (like using Crouton to set up a Linux sandbox or diving deep into Chrome OS configuration internals), you’re limited to L2TP, which is vulnerable to man-in-the-middle attacks. For devices that are intended to be used on the go with a network connection, presumably through some WiFi service that you don’t control (and that therefore carries a much greater risk of having someone snoop on your communication), good VPN support is absolutely critical. And even if you can get a working VPN and you’ve got a Chromebook that supports running Android apps (which I’ve got to admit does make Chrome OS much nicer), good luck getting those apps to use the VPN for their communication.

But today I encountered something that seems to take dumb to a new level. On Monday, Google released an Android Security Bulletin about new security vulnerabilities, including “a Critical security vulnerability that could enable remote code execution on an affected device through multiple methods such as email, web browsing, and MMS when processing media files.” So in theory, someone could send you a text message with a media attachment and take over your phone. That seems like a pretty big deal. So I went to see if there was a system update for my phone (a Google Pixel XL), and there was. So I clicked to download it, and I got an error message saying “This update can be downloaded via a WiFi network only until May 6. To continue download, connect to a WiFi network.”

What? This doesn’t make even the tiniest bit of sense. Why is this critical security update only available if you’re connected to WiFi? Why does Google care if I want to use my mobile data to download the patch? And what’s special about May 6 that will suddenly make it okay for me to download it then? It’s not like it costs Google any more money if the data ultimately ends up on the phone via 3G/4G/LTE than it does via WiFi. Even if I were using Google’s Fi service for my mobile data (and I’m not), I’d have to pay for the data that I used because Google Fi doesn’t have any kind of unlimited data plan (it’s a flat rate of ten dollars per gigabyte, and it would actually be in their best interests to get people to use as much data as possible).

I can absolutely understand warning the user about using mobile data for a potentially large file (this update was about 60 megabytes) and wanting to get the user’s okay before starting the download. That would be a good thing. If you have a mobile data cap, or if the size of your bill depends on how much mobile data you use, then of course you’d want to be warned before doing something that could use a significant amount of data. But this wasn’t a warning message that I could simply dismiss and get on with the download. This was an error message that told me that I just plain couldn’t get the update unless I connected to WiFi or unless I wanted to wait (and remain vulnerable) for several more days.

In the interest of security, I did kowtow to this stupid demand, and I downloaded the update over WiFi (secured by VPN). But if I’d been traveling somewhere where I had good mobile data coverage but WiFi wasn’t readily available, then I’d have been stuck either needing to find somewhere I could leech (or pay for) a connection, or remain vulnerable for a few more days (during which time I’m sure there would be plenty of bad guys trying to reverse-engineer the update and figure out how to exploit the vulnerabilities that it fixes).

Seriously, Google. Do security better.