LDAP SDK Features: LDIF Processing

The LDAP Data Interchange Format (LDIF) is a standard way of representing directory data in plain text files, as defined in RFC 2849.  LDIF records can represent entire entries, or they can represent changes to be made to directory contents (add new entries, or remove, modify, or rename existing entries).  The UnboundID LDAP SDK for Java provides rich support for interacting with LDIF records.  This post will describe the capabilities that it offers in this area.

Reading LDIF Records

The com.unboundid.ldif.LDIFReader class provides methods for reading data from LDIF files or from input streams.  Some of the methods it offers are:

  • readEntry() — This reads the next entry from the LDIF file or input stream.  This should be used when reading LDIF data known to contain only entries.
  • readChangeRecord() — This reads the next change record from the LDIF file or input stream.  This should only be used when reading LDIF data known to contain only change records.
  • readLDIFRecord() — This reads the next entry or change record from the LDIF file or input stream (both entries and change records implement the LDIFRecord interface).  This should be used when reading LDIF data that may contain a mix of entries and change records.
  • decodeEntry(String...) — This decodes the contents of the provided array as an LDIF entry.  The elements of the array should be the lines of the LDIF entry to be parsed.  This is a static method and doesn’t require an LDIFReader object to be created in order to use it.
  • decodeChangeRecord(String...) — This decodes the contents of the provided array as an LDIF change record.  The elements of the array should be the lines of the LDIF change record to be parsed.  This is a static method and doesn’t require an LDIFReader object to be created in order to use it.

When reading and parsing LDIF records, if an invalid record is encountered then the LDIF reader will throw an LDIFException.  It will include a message about the problem encountered, the line number in the LDIF data on which the problem was encountered, and a flag indicating whether or not it is possible to continue reading data from the LDIF source.  Methods used to read LDIF data from a file or input stream may also throw an IOException.
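As a brief sketch of how this fits together, the following reads entries from an in-memory LDIF document (reading from a file path works the same way) and handles a malformed record using the information the LDIFException provides:

```java
import java.io.ByteArrayInputStream;

import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldif.LDIFException;
import com.unboundid.ldif.LDIFReader;

public class ReadLDIFExample
{
  public static void main(String[] args) throws Exception
  {
    // A small LDIF document held in memory for illustration.
    String ldif =
         "dn: dc=example,dc=com\n" +
         "objectClass: top\n" +
         "objectClass: domain\n" +
         "dc: example\n";

    LDIFReader reader =
         new LDIFReader(new ByteArrayInputStream(ldif.getBytes("UTF-8")));
    try
    {
      Entry entry;
      while ((entry = reader.readEntry()) != null)
      {
        System.out.println("Read entry: " + entry.getDN());
      }
    }
    catch (LDIFException le)
    {
      // The exception reports where the problem occurred and whether it
      // is safe to keep reading from the same source.
      System.err.println("Malformed record at line " + le.getLineNumber() +
           " (may continue: " + le.mayContinueReading() + "): " +
           le.getMessage());
    }
    finally
    {
      reader.close();
    }
  }
}
```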

Writing LDIF Records

The com.unboundid.ldif.LDIFWriter class provides methods for writing LDIF content to files or output streams.  Some of those methods include:

  • writeEntry(Entry) — This method writes the provided entry to the LDIF file or output stream.
  • writeEntry(Entry,String) — This method writes the provided entry to the LDIF file or output stream, immediately preceded by the specified comment.
  • writeChangeRecord(LDIFChangeRecord) — This method writes the provided change record to the LDIF file or output stream.
  • writeChangeRecord(LDIFChangeRecord,String) — This method writes the provided change record to the LDIF file or output stream, immediately preceded by the specified comment.
  • writeLDIFRecord(LDIFRecord) — This method writes the given LDIF record (which may either be an entry or change record) to the LDIF file or output stream.
  • writeLDIFRecord(LDIFRecord,String) — This method writes the given LDIF record (which may either be an entry or change record) to the LDIF file or output stream, immediately preceded by the specified comment.

Methods used for writing LDIF data may throw an IOException.
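As a small sketch, the following writes a single entry (with a preceding comment) to an in-memory output stream; writing to a file works the same way:

```java
import java.io.ByteArrayOutputStream;

import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldif.LDIFWriter;

public class WriteLDIFExample
{
  public static void main(String[] args) throws Exception
  {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    LDIFWriter writer = new LDIFWriter(out);

    Entry entry = new Entry(
         "dn: uid=test.user,ou=People,dc=example,dc=com",
         "objectClass: top",
         "objectClass: person",
         "uid: test.user",
         "cn: Test User",
         "sn: User");

    // Write the entry immediately preceded by a comment.
    writer.writeEntry(entry, "A sample user entry");
    writer.close();

    System.out.print(out.toString("UTF-8"));
  }
}
```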

Parallel LDIF Processing

We’ve worked hard to make the process of reading and writing LDIF data as fast as possible.  While it’s pretty fast on its own as a serial process, portions of the processing may be parallelized for better performance on multi-CPU and/or multi-core systems.

When reading LDIF data, there are two phases of processing:  the process of reading raw data from the LDIF file or input stream, and the process of decoding that data as an entry or change record.  The first phase must be performed serially, but the second can be parallelized and multiple concurrent threads may be used to decode LDIF records.  As a result, when the LDIF reader is configured to use multiple concurrent threads, it may be the case that the limiting factor is the speed at which data can be read from the disk or input stream.

In order to use parallelism in the LDIF reader, it is only necessary to specify the number of additional threads to use when parsing entries or change records at the time the reader is created.  This parallelism will be provided completely behind the scenes, so that the caller may continue to use the readEntry(), readChangeRecord(), and readLDIFRecord() methods just as if all processing were performed serially.
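For example, the sketch below creates a reader that uses additional parse threads (this assumes the constructor variant that takes a thread count; check the LDIFReader Javadoc for your SDK version), and the read loop itself is unchanged from the serial case:

```java
import java.io.File;
import java.io.FileWriter;

import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldif.LDIFReader;

public class ParallelReadExample
{
  public static void main(String[] args) throws Exception
  {
    // Create a tiny sample file; a real data set would be much larger.
    File ldifFile = File.createTempFile("example", ".ldif");
    ldifFile.deleteOnExit();
    FileWriter fw = new FileWriter(ldifFile);
    fw.write("dn: dc=example,dc=com\n" +
             "objectClass: top\n" +
             "objectClass: domain\n" +
             "dc: example\n");
    fw.close();

    // The second argument is the number of additional threads used to
    // parse records behind the scenes (assumed constructor form).
    LDIFReader reader = new LDIFReader(ldifFile, 4);
    Entry entry;
    while ((entry = reader.readEntry()) != null)
    {
      System.out.println(entry.getDN());
    }
    reader.close();
  }
}
```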

Of course, introducing parallelism in the LDIF reader would be of limited usefulness if it introduced the possibility that entries or change records might be made available to the caller in a different order than they were contained in the original LDIF data.  As a result, the LDIF reader will ensure that the order of the data is preserved so that the only perceptible change between serial and parallel processing is the relative speed with which LDIF records may be made available to the caller.

The LDIF writer also provides support for parallel processing, in which case multiple entries or change records may be formatted into their LDIF representations in parallel, and then will be written serially to the LDIF file or output stream. As with the LDIF reader, parallelism may be enabled by specifying the number of threads to use to perform that processing at the time that the LDIF writer is created. However, parallel processing will only be performed in the LDIF writer if the entries or change records are provided to it using the writeLDIFRecords(List<? extends LDIFRecord>) method. The order that the records are provided in the list will be preserved when they are written to the LDIF file or output stream.

Transforming Entries Read from LDIF

When reading entries from an LDIF file or input stream, it may be useful to alter those entries in some way before they are made available to the caller, and in some cases it may be desirable to exclude entries altogether.  Either or both of these may be performed by providing a class which implements the com.unboundid.ldif.LDIFReaderEntryTranslator interface to the LDIF reader. If an entry translator is provided, then whenever an entry is read, the translate(Entry,long) method will be invoked to allow the translator to return a modified version of the provided entry, a completely new entry, or null to indicate that the entry should be omitted from the data made available to callers.

If the LDIF reader has been configured to perform parallel processing, then the entry translator will be invoked during the parallel portion of that processing, and as a result it may be faster to perform this transformation using the LDIFReaderEntryTranslator interface than performing it separately after the entry has been retrieved by the caller using a method like readEntry(). This does require the translator to be threadsafe, and it cannot depend on the order in which entries are processed.
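A minimal sketch of such a translator is shown below; the attribute names are hypothetical, and the translator instance would be handed to the LDIFReader at creation time. Note that it keeps no shared state, so it is safe for parallel use:

```java
import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldif.LDIFReaderEntryTranslator;

// A sketch of a translator that omits entries without a mail attribute
// and tags everything else with a (hypothetical) marker attribute.
public class MailOnlyTranslator implements LDIFReaderEntryTranslator
{
  public Entry translate(Entry original, long firstLineNumber)
  {
    if (! original.hasAttribute("mail"))
    {
      // Returning null omits this entry from the data given to callers.
      return null;
    }

    original.setAttribute("processedBy", "MailOnlyTranslator");
    return original;
  }
}
```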

Obtaining LDIF Representations of LDAP SDK Objects

There are a number of objects provided as part of the LDAP SDK which can be represented in LDIF form without the need for an LDIF writer. This includes the following types of objects:

  • com.unboundid.ldap.sdk.Entry — This can be represented as an LDIF entry.
  • com.unboundid.ldap.sdk.AddRequest — This can be represented as an LDIF add change record.
  • com.unboundid.ldap.sdk.DeleteRequest — This can be represented as an LDIF delete change record.
  • com.unboundid.ldap.sdk.ModifyRequest — This can be represented as an LDIF modify change record.
  • com.unboundid.ldap.sdk.ModifyDNRequest — This can be represented as an LDIF modify DN change record.

All of these objects provide the following methods that allow them to be represented in LDIF form:

  • toLDIF() — This returns a string array whose elements comprise the lines of the LDIF representation of the associated record.
  • toLDIFString() — This returns a string containing the LDIF representation of the associated record, including line breaks.

In addition, all of the above objects except Entry provide a toLDIFChangeRecord() method that allows them to be converted to the appropriate type of LDIF change record.
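For example, the following obtains both LDIF representations of an entry, and converts a delete request to its change record form:

```java
import com.unboundid.ldap.sdk.DeleteRequest;
import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldif.LDIFChangeRecord;

public class ToLDIFExample
{
  public static void main(String[] args) throws Exception
  {
    Entry entry = new Entry(
         "dn: dc=example,dc=com",
         "objectClass: top",
         "objectClass: domain",
         "dc: example");

    // As an array of lines, and as a single string with line breaks.
    String[] ldifLines = entry.toLDIF();
    String ldifString = entry.toLDIFString();
    System.out.println(ldifString);

    // Request objects can also be converted to LDIF change records.
    DeleteRequest deleteRequest = new DeleteRequest("dc=example,dc=com");
    LDIFChangeRecord changeRecord = deleteRequest.toLDIFChangeRecord();
    System.out.println(changeRecord.toLDIFString());
  }
}
```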

Using LDIF Records to Process Operations

The LDAP SDK also provides support for processing operations based on the LDIF representation.  There are two primary ways that this may be accomplished:

  • All of the LDIF change record types provide a processChange(LDAPConnection) method which can be used to directly process an operation from that change record.  Each change record type also provides a method for converting that change record object to an LDAP request (e.g., the LDIFAddChangeRecord class provides a toAddRequest() method to convert the change record to an equivalent add request).
  • Entries can be created from their LDIF representation using the Entry(String...) constructor, where the provided string array contains the lines that comprise the LDIF representation of that entry. Add requests can also be created from the LDIF representation of the entry to add using the AddRequest(String...) constructor. Modify requests can also be created from the LDIF representation of that modify request using the ModifyRequest(String...) constructor.
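Putting both approaches together, the sketch below decodes an add change record from its LDIF lines and converts it to an equivalent add request (applying it directly would require an established connection, shown only as a comment here):

```java
import com.unboundid.ldap.sdk.AddRequest;
import com.unboundid.ldif.LDIFAddChangeRecord;
import com.unboundid.ldif.LDIFChangeRecord;
import com.unboundid.ldif.LDIFReader;

public class ChangeRecordExample
{
  public static void main(String[] args) throws Exception
  {
    LDIFChangeRecord record = LDIFReader.decodeChangeRecord(
         "dn: dc=example,dc=com",
         "changetype: add",
         "objectClass: top",
         "objectClass: domain",
         "dc: example");

    // With an established connection, the change could be applied directly:
    //   record.processChange(connection);

    // Alternatively, convert it to an equivalent request object.
    if (record instanceof LDIFAddChangeRecord)
    {
      AddRequest addRequest = ((LDIFAddChangeRecord) record).toAddRequest();
      System.out.println("Add request for " + addRequest.getDN());
    }
  }
}
```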

LDAP SDK Features: Client-Side Processing

When writing a directory-enabled application, you can let the directory server do a lot of the work for you, and in many cases that’s appropriate.  However, there are times when it might be better to do some of that processing on the client rather than the server (e.g., because of the performance hit of the additional requests, or because some target servers might not support all of the features you want to use).  In other cases it might be useful to perform client-side validation prior to sending a request to the server to reduce the likelihood of a failure or to provide better information about what needs to be fixed.  The UnboundID LDAP SDK for Java provides support for several types of client side processing, as described in this post.

Comparing Entries

The com.unboundid.ldap.sdk.Entry class includes a diff method that can be used to compare two entries. The complete definition for this method is:

public static List<Modification> diff(Entry sourceEntry,
                                      Entry targetEntry,
                                      boolean ignoreRDN,
                                      String... attributes)

It can be used to compare the contents of the two provided entries and will return a list of the modifications that can be applied to the source entry in order to make it look like the target. You can optionally ignore differences in the RDN attribute, and you can optionally restrict the comparison to a specified set of attributes.

There are many uses for this capability. For example, if you have an application that allows a user to alter the contents of entries, then this method may be used to compare a local copy of the updated entry with the original entry in order to determine the modifications to apply to the server. Alternately, you could use this to compare an entry in two different servers to determine whether they are in sync.
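As a quick illustration of the updated-entry use case, the following compares two versions of an entry that differ only in telephone number and prints the resulting modifications:

```java
import java.util.List;

import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldap.sdk.Modification;

public class DiffExample
{
  public static void main(String[] args) throws Exception
  {
    Entry source = new Entry(
         "dn: uid=test.user,dc=example,dc=com",
         "objectClass: top",
         "objectClass: person",
         "sn: User",
         "cn: Test User",
         "telephoneNumber: +1 123 456 7890");

    Entry target = new Entry(
         "dn: uid=test.user,dc=example,dc=com",
         "objectClass: top",
         "objectClass: person",
         "sn: User",
         "cn: Test User",
         "telephoneNumber: +1 987 654 3210");

    // Modifications that transform the source entry into the target.
    List<Modification> mods = Entry.diff(source, target, false);
    for (Modification m : mods)
    {
      System.out.println(m);
    }
  }
}
```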

Schema Validation

The LDAP SDK provides enhanced support for parsing server schema definitions, and can use that to perform a number of types of client-side processing. One such capability is exposed through the com.unboundid.ldap.sdk.schema.EntryValidator class. This class may be initialized with schema read from the server and provides the following method:

public boolean entryIsValid(Entry entry, List<String> invalidReasons)

This method may be used to determine whether the given entry conforms to the schema constraints defined in the server. The basic checks that it can perform include:

  • Ensure that the entry has a valid DN
  • Ensure that the entry has exactly one structural object class
  • Ensure that all of the entry’s object classes are defined in the schema
  • Ensure that all of the entry’s attributes are defined in the schema
  • Ensure that all of the attributes required by any of the entry’s object classes are present
  • Ensure that all of the attributes contained in the entry are allowed by at least one of that entry’s object classes
  • Ensure that all of the attribute values conform to the syntax defined for that attribute
  • Ensure that all of the attributes with multiple values are defined as multi-valued in the server schema

In addition, if there is a DIT content rule associated with the entry’s structural object class, then it will be used to ensure that all of the auxiliary object classes in the entry are allowed and that the entry doesn’t contain any prohibited attributes, and it may also allow additional attributes not allowed by any of the object classes. If there is a name form associated with the entry’s structural object class, then the entry validator will use it to ensure that the entry’s RDN includes all of the required attributes, and that all attributes included in the entry’s RDN are allowed.

All of these checks may be individually enabled or disabled, which makes it possible to tailor the types of validation to perform. In addition, the entryIsValid method is threadsafe, so it can be called concurrently by multiple threads. If the entry is invalid, then this method will return false, and the provided list will include one or more human-readable messages detailing the problems identified with the entry. It will also collect summary statistics about all of the entries examined so that if a number of entries are processed it is easy to understand the types of problems (if any) that were encountered.
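The sketch below validates a single entry; it uses the SDK's bundled standard schema so the example is self-contained (in a real application the schema would typically be read from the server, e.g., with Schema.getSchema(connection), and the availability of getDefaultStandardSchema may depend on your SDK version):

```java
import java.util.ArrayList;
import java.util.List;

import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldap.sdk.schema.EntryValidator;
import com.unboundid.ldap.sdk.schema.Schema;

public class ValidateExample
{
  public static void main(String[] args) throws Exception
  {
    // Bundled standard schema used here for a self-contained example.
    Schema schema = Schema.getDefaultStandardSchema();
    EntryValidator validator = new EntryValidator(schema);

    Entry entry = new Entry(
         "dn: dc=example,dc=com",
         "objectClass: top",
         "objectClass: domain",
         "dc: example");

    List<String> invalidReasons = new ArrayList<String>();
    if (validator.entryIsValid(entry, invalidReasons))
    {
      System.out.println("The entry conforms to the schema.");
    }
    else
    {
      for (String reason : invalidReasons)
      {
        System.out.println("Invalid: " + reason);
      }
    }
  }
}
```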

One of the example programs provided with the LDAP SDK is a validate-ldif tool which uses the EntryValidator class to perform this validation for all entries in an LDIF file. This can provide a useful way to validate that the entries contained in an LDIF file are valid without needing to actually import the data into a server. If there are any problems, then the summary provided by the tool may be easier to interpret than the errors reported by the server.

Sorting

Many directory servers provide support for the server-side sort control as defined in RFC 2891. However, some servers may not support this control, and of the servers that do support it, some of them may require special configuration or indexing, and the client may require special permissions to be allowed to request it. In addition, for small result sets, it may be significantly more efficient (and less resource-intensive on the server) to perform sorting on the client rather than asking the server to do it.

The UnboundID LDAP SDK for Java provides a com.unboundid.ldap.sdk.EntrySorter class that can be used to accomplish this. It includes the following method that makes it possible to sort a collection of entries:

public SortedSet<Entry> sort(Collection<? extends Entry> entries)

This class also implements the Comparator<Entry> interface, so it can be used with any type of Java collection that can use comparators.

It is possible to define the sort criteria, including specifying which attributes should be used and indicating whether to sort in reverse order. It can also be configured to take the entry’s position in the DIT into consideration when sorting so that entries can be sorted by hierarchy instead of or in addition to performing attribute-based sorting.

For best results, the entry sorter is able to use schema read from the directory server in order to better understand which matching rules it should use for sorting, but if no schema is available it will simply assume that all sorting should be done using case-ignore ordering.
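The following sketch sorts two entries by surname without hierarchy-based ordering (this assumes the constructor form taking a hierarchy flag and sort keys; check the EntrySorter Javadoc for your SDK version):

```java
import java.util.Arrays;
import java.util.SortedSet;

import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldap.sdk.EntrySorter;
import com.unboundid.ldap.sdk.controls.SortKey;

public class SortExample
{
  public static void main(String[] args) throws Exception
  {
    Entry e1 = new Entry("dn: uid=a,dc=example,dc=com",
         "objectClass: person", "sn: Zimmer", "cn: A");
    Entry e2 = new Entry("dn: uid=b,dc=example,dc=com",
         "objectClass: person", "sn: Adams", "cn: B");

    // Sort by sn in ascending order, ignoring DIT hierarchy.
    EntrySorter sorter = new EntrySorter(false, new SortKey("sn"));
    SortedSet<Entry> sorted = sorter.sort(Arrays.asList(e1, e2));

    for (Entry e : sorted)
    {
      System.out.println(e.getAttributeValue("sn"));
    }
  }
}
```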

Filter Evaluation

There are a number of cases in which it might be useful to determine whether an entry matches a given set of criteria, and search filters provide a convenient way to express that criteria. If the entry exists in a directory server, then of course a search operation may be used to make the determination, but if the client has a local copy of the entry then it may be more convenient or efficient to perform the evaluation on the client. Further, there may be cases (e.g., for an entry read from an LDIF file) in which the entry isn’t available in a server. To provide this capability, the com.unboundid.ldap.sdk.Filter class includes the following methods:

public boolean matchesEntry(Entry entry)
public boolean matchesEntry(Entry entry, Schema schema)

If a schema object is provided, then it will be used to perform more accurate evaluation, but if no schema is available then matching will be performed using case-ignore rules. These methods currently support all types of filter evaluation except approximate and extensible matching.
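For instance, evaluating a filter against a locally held entry is a one-liner once the filter has been parsed:

```java
import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldap.sdk.Filter;

public class FilterMatchExample
{
  public static void main(String[] args) throws Exception
  {
    Entry entry = new Entry(
         "dn: uid=jsmith,dc=example,dc=com",
         "objectClass: person",
         "sn: Smith",
         "cn: John Smith");

    // Parse the filter string and evaluate it against the local entry.
    Filter filter = Filter.create("(&(objectClass=person)(sn=Smith))");
    System.out.println("Matches: " + filter.matchesEntry(entry));
  }
}
```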

LDAP SDK Features: Connection Pooling

In order to get the best performance and efficiency from a directory-enabled application, you will likely want to use some form of connection pooling.  Most LDAP APIs provide support for some level of connection pooling, but I’m not aware of any other API that provides anywhere near the power and convenience of the connection pooling capabilities of the UnboundID LDAP SDK for Java.  This post will examine many of the connection pooling features that it provides.

Connection Pool Implementations

The UnboundID LDAP SDK for Java actually provides two connection pool implementations.  The com.unboundid.ldap.sdk.LDAPConnectionPool class provides a single set of connections intended to be used for all types of operations. The com.unboundid.ldap.sdk.LDAPReadWriteConnectionPool class maintains two sets of connections, one set intended for use in processing write (add, delete, modify, and modify DN) operations, and the other for use in processing read (bind, compare, and search) operations. The former is the best choice for environments in which all of the directory servers have the same capabilities, while the latter is better for environments containing a mix of read-only and read-write servers.  The read-write connection pool implementation may also be useful in environments in which it is recommended to try to send all writes to the same server, even if all of the servers technically support read and write operations.

Failover and Load Balancing

Connection pools can be created so that all connections in the pool are established to the same server, or they can be spread across multiple servers.  This can be accomplished through the com.unboundid.ldap.sdk.ServerSet API, which is used to determine which server to use at the time that a connection is created.  There are three server set implementations provided in the LDAP SDK, and you can create your own with whatever logic you choose. The included implementations are:

  • com.unboundid.ldap.sdk.SingleServerSet — This is a simple implementation in which connections will always be established to the same server.
  • com.unboundid.ldap.sdk.RoundRobinServerSet — This is an implementation that allows you to specify a list of servers. The first attempt to create a connection will use the first server, the second will choose the second server, and so on, and once it hits the end of the list it will wrap back around to the beginning.   If the selected server is unavailable, it will skip ahead to the next server in the list, and will only fail if none of the servers are available.
  • com.unboundid.ldap.sdk.FailoverServerSet — This is an implementation that allows you to specify an ordered list of servers, and when attempting to establish a new connection it will always try the first server, but if it isn’t available then it will try the second, then the third, and so on. This implementation also provides the ability to fail over between server sets rather than individual servers, which can provide a lot of flexibility (e.g., try round-robin across all servers in the local data center, but if none of them are available then try round-robin across all the servers in a remote data center).

When a connection pool is created with a server set, then that server set will be used to establish all connections in that pool.  For example, if you create a pool with ten connections and you use the round-robin server set with a list of two servers, then five of the connections in that pool will be established to the first server and five to the second.  This provides a simple mechanism for client-side load balancing, since operations performed using pooled connections will be spread across the two servers.  It also provides better availability, since if either server becomes unavailable then connections established to it will fail over to the other (and in many cases it will be transparent to the application).
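The round-robin case from the paragraph above can be sketched as follows; the host names are hypothetical, and actually creating the pool (shown as a comment) requires reachable servers:

```java
import com.unboundid.ldap.sdk.RoundRobinServerSet;

public class ServerSetExample
{
  public static void main(String[] args) throws Exception
  {
    // Connections will be spread evenly across these two (hypothetical)
    // servers, in round-robin order.
    String[] addresses = { "ds1.example.com", "ds2.example.com" };
    int[] ports = { 389, 389 };
    RoundRobinServerSet serverSet =
         new RoundRobinServerSet(addresses, ports);

    // With reachable servers, a ten-connection pool could then be created
    // with five connections established to each:
    //   SimpleBindRequest bind =
    //        new SimpleBindRequest("cn=Directory Manager", "password");
    //   LDAPConnectionPool pool =
    //        new LDAPConnectionPool(serverSet, bind, 10);
  }
}
```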

If all of the servers defined for use in a connection pool are unavailable at the same time, then obviously any attempt to use that connection pool during that time will fail.  However, as soon as any of those servers is back online, then the pool will resume normal operation and operations will again be successful without the need to restart the application or re-create the pool.

Checking out and Releasing Connections

Traditionally, connection pools simply maintain a set of connections that the application can check out, use as needed, and return.  Although the implementations provided by the UnboundID LDAP SDK for Java go well beyond this, it is still possible to check out and release connections as needed.

In the LDAPConnectionPool implementation, a connection can be obtained using the getConnection() method. The application can then do the appropriate processing, and when complete it can release the connection back to the pool with the releaseConnection(LDAPConnection) method. If an error occurs during processing that indicates the connection is no longer viable, then the releaseDefunctConnection(LDAPConnection) method should be used instead, which will cause the provided connection to be closed and discarded and a new connection created in its place.

The LDAPReadWriteConnectionPool implementation provides similar methods for checking out and releasing connections, but there are separate methods for interacting with the read and write pools. The getReadConnection() method may be used to check out a connection from the read pool, and the getWriteConnection() method will check out a connection from the write pool. When returning connections, the releaseReadConnection(LDAPConnection) and releaseWriteConnection(LDAPConnection) methods may be used to return valid connections, and the releaseDefunctReadConnection(LDAPConnection) and releaseDefunctWriteConnection(LDAPConnection) methods may be used to return connections that are no longer valid.
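The check-out-and-release pattern for LDAPConnectionPool looks like the sketch below. To keep the example runnable without an external server, it uses the SDK's in-memory directory server, which was added in later SDK releases:

```java
import com.unboundid.ldap.listener.InMemoryDirectoryServer;
import com.unboundid.ldap.listener.InMemoryDirectoryServerConfig;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPConnectionPool;

public class CheckOutExample
{
  public static void main(String[] args) throws Exception
  {
    // A local in-memory server stands in for a real directory.
    InMemoryDirectoryServerConfig cfg =
         new InMemoryDirectoryServerConfig("dc=example,dc=com");
    InMemoryDirectoryServer ds = new InMemoryDirectoryServer(cfg);
    ds.startListening();

    LDAPConnectionPool pool = new LDAPConnectionPool(ds.getConnection(), 5);

    LDAPConnection conn = pool.getConnection();
    try
    {
      System.out.println("Connected to " + conn.getConnectedAddress());
      pool.releaseConnection(conn);
    }
    catch (Exception e)
    {
      // On an error that leaves the connection unusable, discard it and
      // let the pool create a replacement.
      pool.releaseDefunctConnection(conn);
    }

    pool.close();
    ds.shutDown(true);
  }
}
```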

Performing Operations in the Pool

One nice feature about the connection pools provided by the UnboundID LDAP SDK for Java is that they include convenience methods for performing operations using connections from that pool.  As a result, it is not necessary to check out and release connections in order to process an operation because the pool will do it on behalf of the requester.  It will also perform any necessary error handling, and evaluate the result code to determine whether the connection should be destroyed and a new connection created in its place.

The provided connection pool implementations implement the com.unboundid.ldap.sdk.LDAPInterface interface, which is the same interface implemented by the LDAPConnection class.  You can process a search using a pooled connection by invoking one of the LDAPConnectionPool.search or LDAPReadWriteConnectionPool.search methods just like you can call LDAPConnection.search.  Because they all implement the same interface, it’s relatively simple to write an application that can be used to work with either single connections or connection pools.
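The sketch below performs a search directly against the pool, with no explicit check-out or release; as before, it uses the SDK's in-memory directory server (a later SDK addition) so it can run without an external server:

```java
import com.unboundid.ldap.listener.InMemoryDirectoryServer;
import com.unboundid.ldap.listener.InMemoryDirectoryServerConfig;
import com.unboundid.ldap.sdk.LDAPConnectionPool;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchScope;

public class PoolSearchExample
{
  public static void main(String[] args) throws Exception
  {
    InMemoryDirectoryServerConfig cfg =
         new InMemoryDirectoryServerConfig("dc=example,dc=com");
    InMemoryDirectoryServer ds = new InMemoryDirectoryServer(cfg);
    ds.add("dn: dc=example,dc=com",
         "objectClass: top",
         "objectClass: domain",
         "dc: example");
    ds.startListening();

    LDAPConnectionPool pool = new LDAPConnectionPool(ds.getConnection(), 5);

    // The pool checks out a connection, runs the search, handles errors,
    // and releases the connection internally.
    SearchResult result =
         pool.search("dc=example,dc=com", SearchScope.BASE, "(objectClass=*)");
    System.out.println("Entries returned: " + result.getEntryCount());

    pool.close();
    ds.shutDown(true);
  }
}
```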

The LDAPConnectionPool class also provides a processRequests(List<LDAPRequest>,boolean) method that can be used to process multiple requests in succession on a single connection from the pool. In this case, the results of those operations will be returned in a list, and if any of those requests does not complete successfully then it may either continue attempting to process the subsequent requests in the list or stop at that point.

Connection Availability

It is important to understand the behavior that a connection pool exhibits whenever a connection is needed but none are immediately available. The UnboundID LDAP SDK for Java provides two settings that can be used to control the connection pool’s behavior in this case. The first setting is whether the connection pool should be allowed to create a new connection if one is needed but none are available, and the second is how long to wait for a connection to become available before either creating a new connection or returning an error. These settings are controlled by the setCreateIfNecessary(boolean) and setMaxWaitTimeMillis(long) methods, respectively.

With the combination of these settings, any of the following behaviors may be selected if a connection is needed but none are available:

  • The connection pool should immediately throw an exception
  • The connection pool should immediately create a new connection
  • The connection pool should wait for up to a specified length of time for a connection to become available, and if that time passes without a connection becoming available then it will throw an exception
  • The connection pool should wait for up to a specified length of time for a connection to become available, and if that time passes without a connection becoming available then it will create a new connection
  • The connection pool should wait as long as it takes for a connection to become available.

If the pool is configured to create new connections if none are available, then there may be periods of time in which there are more total connections associated with that connection pool than the maximum number of connections to maintain in the pool.  In this case, the behavior that the pool will exhibit when a connection is released will depend on the number of connections available in the pool at that time.  As long as the number of available connections in the pool is less than the maximum, then the connection will be released back to the pool and will be made available for use by subsequent requests.  If an attempt is made to release a connection when the pool already has the maximum number of available connections, then the released connection will be closed.  This provides a scalable and efficient way for the connection pool to grow in size as needed under heavy load while ensuring that it doesn’t hold onto too many connections during slower periods.
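Configuring the wait-then-create behavior described above takes only two setter calls; the sketch below again relies on the SDK's in-memory directory server (a later SDK addition) to obtain a pool:

```java
import com.unboundid.ldap.listener.InMemoryDirectoryServer;
import com.unboundid.ldap.listener.InMemoryDirectoryServerConfig;
import com.unboundid.ldap.sdk.LDAPConnectionPool;

public class PoolAvailabilityExample
{
  public static void main(String[] args) throws Exception
  {
    InMemoryDirectoryServerConfig cfg =
         new InMemoryDirectoryServerConfig("dc=example,dc=com");
    InMemoryDirectoryServer ds = new InMemoryDirectoryServer(cfg);
    ds.startListening();
    LDAPConnectionPool pool = new LDAPConnectionPool(ds.getConnection(), 2);

    // Wait up to five seconds for a pooled connection to become
    // available; if none does, create a new one rather than failing.
    pool.setMaxWaitTimeMillis(5000L);
    pool.setCreateIfNecessary(true);

    pool.close();
    ds.shutDown(true);
  }
}
```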

Making it Easier to Write Directory-Enabled Applications

I’ve been working with LDAP directory servers for about ten years, and for that entire time I’ve also been writing code.  I’ve written a lot of server-side code in the course of building directory servers, but I’ve written even more client-side code for interacting with them.  Unfortunately, I’ve always been a bit disappointed with the APIs that are available for LDAP communication, especially those for use in Java applications.

Java should be a great language for writing directory-enabled applications.  It’s fast, has a broad standard library, offers a number of frameworks for developing web-based and desktop applications, and it’s easy to write robust code in a short period of time.  Unfortunately, the APIs available for LDAP communication are pretty limited.  JNDI is a common choice because it’s part of the core Java runtime, but it’s very cumbersome and confusing and provides rather limited access to the elements of the LDAP protocol.  The Netscape Directory SDK for Java is more user friendly than JNDI, but it’s fairly buggy (especially under load), supports a pretty limited set of controls and extended operations, and is really showing its age after not having any new releases since 2002.  I’ve never actually used JLDAP, but it looks to expose pretty much the same API as the Netscape SDK and has also gone several years without a new release.

Today, UnboundID is correcting this problem with our release of the UnboundID LDAP SDK for Java.  It is a user-friendly, high-performance, feature-rich, and completely free Java API for communicating with LDAP directory servers and performing other directory-related tasks.  Some of the benefits that it provides include:

  • It is completely free to use and redistribute.  The LDAP SDK is available under either the GNU General Public License v2 (GPLv2) or the UnboundID LDAP SDK Free Use License.
  • It provides a broad feature set.  In addition to providing full support for the core LDAPv3 protocol, it also includes support for 17 standard controls, 4 standard extended operations, and 6 standard SASL mechanisms.  It also provides a number of related APIs for things like LDIF, base64 and ASN.1 parsing, working with root DSE and changelog entries, enhanced schema support, command line argument processing, and SSL/TLS communication.
  • It is much more convenient and easy to use than other LDAP APIs.  It is often possible to do what you want with quite a bit less code than the alternatives, and its use of Java features like generics, enums, annotations, and varargs can further enhance the development experience.
  • It provides support for connection pooling and client-side failover and load balancing.  Connections can be easily spread across multiple directory servers, and you can even have read and write operations sent to different sets of servers.  The connection pool classes implement the same interface as individual connections, so you can process operations using pooled connections without needing to deal with checking out and releasing connections and performing all of the necessary error handling (although that’s possible too if you need it).
  • It provides excellent performance and scalability.  My testing has shown it to be significantly faster than either JNDI or the Netscape SDK, and the searchrate tool that we include as an example can handily outperform the popular C-based version provided by another vendor.
  • It has no external dependencies.  Everything is included in a single jar file, and the only requirement is a Java SE 5.0 or higher runtime.
  • It is robust and reliable.  We have an extensive test suite for the SDK itself with over 26,000 test cases covering over 94% of the code.  The LDAP SDK is also used as an integral part of other UnboundID products, so it benefits from the testing we do for them as well.  It’s frequently subjected to very heavy and highly concurrent workloads so there shouldn’t be any surprises when moving your applications from testing into production (at least, not because of the LDAP SDK).
  • It includes extensive documentation.  In addition to a thorough overview of the benefits our LDAP SDK provides over the alternatives, it also includes a getting started guide, Javadoc documentation with lots of examples, and a number of sample programs demonstrating the use of various SDK components.
  • Commercial support is available.  This can help ensure fast access to patches for any problems found in the LDAP SDK, and may also be used to request enhancements and additional functionality.  Developer support is also available to assist you in using the LDAP SDK to create directory-enabled applications.

You can find the UnboundID LDAP SDK for Java available for download at http://www.unboundid.com/products/ldapsdk/.  All of the documentation is available there as well (and also in the product download), including some frequently asked questions and a more detailed list of the advantages it offers over other LDAP APIs.

Why I like Solaris

A recent InfoWorld article asks whether Solaris is going to be able to continue to compete with Linux.  Apparently someone from the Linux community pointed Paul Krill, the author, to the blog article I wrote earlier about an issue with OpenDS, and he asked me about it.  While I didn’t really want to bring that up again, I did tell him that I felt Solaris is a better OS than Linux.  I was briefly quoted in the article, but I would like to expand on that a bit.

First, let me say that I’ve got a pretty decent amount of experience with both Solaris and Linux.  I started using Linux in 1995 and Solaris around 1998.  Linux was my primary desktop operating system from probably around 1997 through about 2004, when I switched to Solaris.  I still run Linux quite a bit, including on my laptop out of necessity (primarily because Solaris doesn’t support the Broadcom Ethernet interface), but for the work that I do, which is primarily development, I find that Solaris is just more convenient.  On servers, there is no question that I prefer Solaris over Linux.

My fondness for Solaris definitely did not come easy, nor was it always warranted.  My first experiences were pretty unpleasant compared with Linux.  For many years, Solaris was anything but user friendly, providing only relatively antiquated shells and shipping without critical utilities like gzip or a C compiler.  CDE remained the default desktop environment for far too long, and it wasn’t easy to come by a lot of software that was commonly included with Linux distributions.  And of course, I’m sure that Sun’s decision to stop shipping Solaris for x86 systems for a period of time seriously hindered its usability and probably steered a number of potential customers away.

However, my opinion started to sway as Solaris 10 started to materialize.  Being a Sun employee, I had access to internal builds and I started to check them out.  Months before it was released to the public, I had changed my tune.  It was a serious leap forward in usability and convenience, and the new features were very compelling.  Since then, I’ve followed the development of Solaris 10, Nevada (the code name for what may become Solaris 11), and OpenSolaris.

It’s a lot easier to see why Linux may be more appealing than Solaris on the desktop.  Linux has better hardware support, so it will run on a lot more systems than Solaris.  Availability of applications can also be an issue, but I think in many cases that’s somewhat due to lack of publicity.  Many people haven’t heard about Nexenta (which is very much like Ubuntu Linux but with a Solaris kernel) or Blastwave (a site providing a large amount of Solaris software using a relatively simple interface that behaves like apt-get), and if you’re running Sun’s OpenSolaris distribution then you can use IPS to get access to a number of applications in a similar manner.  But even so, there is just more software that works on Linux (or works more easily on Linux) than on Solaris.

However, there are a lot of reasons that I prefer Solaris and am willing to overlook some of its shortcomings.  I’ve listed some of them below.  Note that I’m not trying to slam Linux.  I do like it and hope that it continues to improve.  Consider this a wish list.

Overall Feel and Consistency

Linux feels like it was written.  Solaris feels like it was designed.  While I think that Sun’s development processes can sometimes be a little heavyweight, and I think that Sun is trying to retain too much control over OpenSolaris, there is a lot to be said for having processes in place to guide development.

Unfortunately, this isn’t something that Linux can simply adopt.  Linux isn’t one thing and there isn’t any one organization controlling any significant part of it.  As Solaris integrates an increasing amount of third-party software, it will likely also begin to suffer from this somewhat although perhaps to a lesser extent because there are still some things that can be made a bit more common.

Interface Stability

This is an area where Solaris shines and Linux is just plain abysmal.  If an application runs on Solaris version X, then there’s an excellent chance it will work on version X+1.  This is far from true on Linux.  I’ve had countless experiences where an application that worked on one release of Red Hat or Ubuntu didn’t work on the next.  In some of those cases, recompiling the application was sufficient to get it running, but there are still a lot of cases where that’s not enough, and sometimes the source isn’t available to even try.

Seamless 32-bit and 64-bit Integration

Solaris has had 64-bit support since Solaris 7, and I know that I was testing 64-bit Solaris on x86-64 systems before the Linux kernel had 64-bit support for those systems.  Of course, Linux has included support for x86-64 systems for a while now, but the bigger issue with 32-bit versus 64-bit support in Linux is that it’s generally one or the other.  If you install a 64-bit Linux distribution, then it’s generally entirely 64-bit and may not even include the ability to run 32-bit applications at all, at least not without installing additional packages.

This leads to silliness like 64-bit web browsers and complaints about the lack of 64-bit plugins.  I don’t see much need for a browser to support over 4GB of memory, which is the primary benefit of running an application in 64-bit mode.  There are a few other cases where 64-bit support may be beneficial (e.g., having access to more CPU registers), but they primarily apply to applications that need to do a lot of calculation or cryptography.

It’s true that Sun should have provided a 64-bit Java plugin, and plugin providers are equally to blame for the same infraction.  However, if 32-bit and 64-bit support had been better integrated then only applications which truly benefit from 64-bit support would really need to be provided as such.

Process Rights Management

In both Linux and Solaris, root is an all-powerful user that can do pretty much anything.  Historically, there are a lot of things that only root can do.  Both Linux and Solaris provide mechanisms for granting a specified set of normal users the ability to run certain commands with root authority, but in such cases those applications have full root access and it is necessary to trust that they will behave properly and don’t have any security holes that could lead to unintended consequences.

Solaris improves upon this with process rights management, also known as least privilege.  Rather than granting an application full root access, it is possible to grant only those portions of root access that the application needs.  For example, a web server may need to listen on port 80, but there is no need to let it read or write any file on the system, load kernel modules, or halt or reboot the system.  Instead of granting it full root access (even if it gives it up later via setuid), you can just grant it the net_privaddr privilege so that it can listen on ports below 1024 without having access to any of the other things that might be possible if it were truly running as root.
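As a rough sketch of what this looks like in practice on Solaris 10 (the webservd user name is just an example), you can assign a privilege like net_privaddr directly to a user account and then verify it with ppriv:

    # As root: allow the webservd user to bind to privileged ports,
    # with nothing beyond the basic privilege set otherwise.
    usermod -K defaultpriv=basic,net_privaddr webservd

    # Inspect the privilege sets of a process (here, the current shell)
    # to confirm what it can and cannot do:
    ppriv -v $$

A server started by that user can then listen on port 80 without ever being root, and without any of root’s other powers.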

ZFS

ZFS is a phenomenal filesystem + volume manager.  Linux has good filesystems and a volume manager, but none of them have the combination of power, flexibility, and ease of use that ZFS offers.  It works well on a fraction of a drive, and it works well on a system like the Sun Fire x4500 with 48 drives.  It’s got good performance.  It’s got great data integrity features through redundancy, checksums, and multiple copies of data and metadata.  It’s got instantaneous atomic snapshots.  It never needs to be defragmented.  It never needs fsck.  The implementation of cryptographic support for encrypted filesystems is going to be integrated in the near future.

ZFS is great for servers, but it’s also very useful on desktop systems.  As a developer, I find the instant snapshot and rollback capability to be extremely useful for resetting an application to a known clean state when running tests.  Compression increases the amount of data that you can store and in many cases actually helps performance.  The ability to keep multiple copies can help prevent you from losing information even if you’ve only got a single disk.
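To sketch that snapshot-based testing workflow (the dataset names here are illustrative), the whole cycle is just a couple of commands:

    # Capture the known clean state before a test run:
    zfs snapshot tank/testdata@clean

    # ... run tests that modify the data ...

    # Atomically return to the clean state:
    zfs rollback tank/testdata@clean

    # On a single-disk desktop, compression and redundant copies
    # are each a one-line property change:
    zfs set compression=on tank/testdata
    zfs set copies=2 tank/testdata

Both the snapshot and the rollback are effectively instantaneous regardless of how much data the dataset holds.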

Zones

Virtualization is a pretty hot topic right now, and there isn’t really a shortage of options.  VMware, VirtualBox, Xen, QEMU, and other solutions make it easy to run one operating system on top of another, or another operating system on top of itself.  They provide excellent separation of environments, but they are all pretty heavyweight because they require a significant amount of memory and disk space for the virtualized environment.

In some cases, e.g., for security or application testing, you want something that looks like a separate system but you don’t need a different operating system.  For those cases,  Solaris offers an excellent alternative in the form of zones.  A zone provides excellent separation and for the most part looks like a completely separate system, but requires virtually no memory and in many cases a relatively small amount of disk space because it’s able to share a large amount of the filesystem from the host system (also called the global zone).  Zones are trivial to configure, and when used in conjunction with ZFS snapshotting and cloning it can be possible to create new zones nearly instantly.

In the default case, a Solaris Zone runs exactly the same OS and kernel version as the host system, which makes it extremely cheap.  However, because of the strong compatibility that Solaris provides between versions, alternative implementations have been added which make it possible to create zones which appear to be running Solaris 8 or 9 on Solaris 10.
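Creating a basic zone really is only a handful of commands (the zone name and path below are just examples):

    # Define the zone and where its files will live:
    zonecfg -z testzone 'create; set zonepath=/zones/testzone'

    # Install it, boot it, and log in:
    zoneadm -z testzone install
    zoneadm -z testzone boot
    zlogin testzone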

It’s also possible to run a Linux emulation layer in a zone, which is pretty impressive but unfortunately the support is pretty limited and it doesn’t really look like there’s much active development going on to improve it.  However, while I did make use of this capability when it was first released for a few specialty purposes (e.g., running Acrobat Reader), I haven’t used it in quite a while because I can run pretty much everything I need to natively in Solaris, and I do have Ubuntu installed in VirtualBox which provides a much better way to run Linux on Solaris.

Observability

Both Linux and Solaris provide pretty good ways of identifying what’s happening on the system, but Solaris tends to do a better job.  At a very high level, I’ve noticed that most Linux systems don’t ship with utilities like iostat and mpstat and require the installation of additional packages, whereas they are included by default on Solaris.

When looking into what a particular process is doing, the Solaris truss and Linux strace commands are pretty similar, and they allow you to see the system calls that the process is making.  However, at least some Linux distributions (like Ubuntu) don’t provide pstack or an equivalent command for getting a stack trace of all threads in a process.

Of course, the big feature that Solaris has in this arena is DTrace.  Linux does have SystemTap, but at the present time it appears to be limited to examining the kernel and doesn’t have any support for tracing user-space applications.  DTrace provides full support for debugging user-space applications, and there is also special support for tracing Java, Ruby, Perl, shell scripts, and other scripting languages.  And because Solaris provides stable interfaces, there’s a much better guarantee that DTrace scripts which use stable interfaces will continue to work in the future.
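As a small taste of what DTrace one-liners look like (the process ID below is hypothetical):

    # Count system calls by executable name, system-wide, until Ctrl-C:
    dtrace -n 'syscall:::entry { @[execname] = count(); }'

    # Watch the files one particular process opens:
    dtrace -n 'syscall::open*:entry /pid == $target/ { trace(copyinstr(arg0)); }' -p 12345

Both can be run safely on a live production system, which is a large part of DTrace’s appeal.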

The Future of SLAMD

Even to the casual observer it’s not too hard to notice that not much has happened with SLAMD in quite a while.  In fact, it’s been almost two years since my last commit to the public repository.  There are several reasons for this, including:

  • It works pretty doggone well in its current state.  I still use it quite a bit and hear that several other people do, too.  There are definitely ways that it could be improved, but so far it has been able to meet my needs.  Most of the SLAMD code that I have written since then has been in the form of new jobs that were pretty task-specific and not likely to be useful in other environments.
  • Most of the time, I have been really busy with other things.  I just haven’t had nearly as much time to invest in it as I would have liked.  Of course, I did have a forced two-month vacation near the end of last year, but the terms of my severance stated that I wasn’t allowed to do any work and I didn’t want to press my luck.  Since that period ended, I’ve been going full-steam ahead on something else.
  • There are parts of it that could do with a redesign.  The code used to generate the administrative interface is currently all held in one large file and could stand to be broken up.  There are also many areas in which updating the code to require Java 5 would allow it to be much more efficient and scalable, and the introduction of features like generics and enums would make the code easier and safer to edit.

Ultimately, I think that it’s at the point where it would be better to invest the effort in a clean rewrite than to try to build upon what’s there now, but so far I haven’t had much opportunity to do either one of them.  It’s definitely something that I would like to do and I’m hopeful that I might have the time to do it at some point in the future.  I have a lot of ideas for interesting and powerful enhancements, so I don’t want to count it out just yet.

Why the Dislike for CDDL?

Disclaimer: Although I was employed by Sun at the time the CDDL was created and chosen as the license for the OpenSolaris code base, I was not involved in either of those processes and was not privy to any of the related discussions. I am not making any attempt to speak for Sun, and all information provided in this post is either based on publicly-available information or my own personal opinion.

In reading discussions about what has happened with OpenDS, I’ve seen a wide range of reactions. This is to be expected, but one thing that I have found to be a bit surprising is that there have been some comments that are critical of the Common Development and Distribution License (CDDL). This isn’t the first time that such comments have been made, as I’ve heard them ever since the license was first created, but I am a little puzzled by them and the fact that they have persisted for so long. I think that the CDDL is an excellent open source license and that many of the negative comments stem from not really understanding it, while others may have something to do with the fact that the open source community in general has been and continues to be somewhat suspicious of Sun (who authored the CDDL).

The CDDL was originally created as a potential license for OpenSolaris. This drew a lot of criticism because many people, especially those in the Linux community, wanted Sun to use the GNU General Public License (GPL). Since GPLv3 was nowhere near complete at the time, if Sun did choose GPL then it would have to be GPLv2, but that would have been completely impossible for Sun to do in a reasonable way.

While Sun certainly owns copyright on most of the code in Solaris, there are parts of the code that Sun licenses from third parties. Since GPLv2 doesn’t play well with non-GPLv2 code, if Sun had chosen to use GPLv2 for OpenSolaris, then they wouldn’t have been able to include some of those third-party components (especially those that interact directly with the kernel), which would have made it a lot less attractive for potential users. In that case, about the only people that would have been happy would be those in the Linux community, because they would have been able to take the best parts of Solaris and pull them into Linux. OpenSolaris itself wouldn’t have been really useful until they had either re-written the third-party components or convinced their respective copyright owners to make them available under GPLv2.

Other operating systems which use non-GPL licenses (like the BSD-based variants, which have gotten a lot of benefit from the OpenSolaris code) wouldn’t have been able to use it, and third-party vendors (especially those that need kernel-level interaction, like hardware device drivers) would have also found it much less attractive. It is possible that some of these concerns could have been addressed by creating GPL exceptions, much like they have done with Java, but even still there would have been significant deficiencies that GPLv2 doesn’t address, like legal concerns about code which is covered by patents.

Rather than try to pigeonhole OpenSolaris into GPLv2, Sun chose to look at other options, including the possibility of using their own license, which ultimately led to the creation of the CDDL.

Before I go any further, let me briefly describe the primary types of licenses that exist in the open source world. They fall into three basic categories:

  • Licenses which preserve open source at all costs, like the GPLv2. These licenses require that any software that uses code under such a license must always be open source. In other words, you can’t use code licensed in this manner in an application with closed-source components. This is very good for the community that releases the code under this license, since it ensures that they will always have access to any improvements made to it, but it’s less friendly to downstream developers since it creates significant restrictions on how they might be able to use that code.
  • Licenses which preserve freedom at all costs, like the BSD and Apache licenses. These licenses place very few restrictions on how other developers can use the code, and it’s entirely possible for someone to take code under such a license and make changes to it without making those changes available to anyone else, even the original developers.
  • Licenses which attempt to strike a balance between open source and freedom, like the Mozilla Public License, the CDDL, and GPLv3. These licenses generally require that any changes to the existing code be made available under the terms of the original license, but any completely new code that is created can be under a different license, including one that is closed source.

As someone who has done a significant amount of both open source and closed source development, I really like licenses in this third category. If I make code that I have written available under an open source license, then I like the guarantee that this code will remain open. On the other hand, I also like giving others the freedom to do what they want with their own code, even if some of their code happens to interact with some of my code, and I know that commercial users are much more likely to shy away from licenses in the “open source at all costs” camp than licenses in the other two categories.

So what are the specifics of the CDDL? It’s based on the Mozilla Public License, but clarifies some things that the MPL doesn’t cover. The basic principles of the CDDL are as follows:

  • CDDL has been approved by OSI as an official open source license, which means that it meets all of the minimum requirements defined at http://www.opensource.org/docs/osd.
  • CDDL is a file-based license. This means that if you make any changes to CDDL-licensed software, any existing files that you modify need to remain under CDDL, but any new files that you create can be under whatever license you want as long as that license isn’t incompatible with CDDL.
  • Similar to the above point, CDDL is very friendly when interacting with code under other licenses. This makes it easy to mix CDDL-licensed code with libraries under other licenses, or to use CDDL-licensed libraries in a project under a different license.
  • CDDL includes an explicit patent grant clause, which means that if any of the code is covered by patents then anyone using or extending that code is also granted the right to use those patents. It also includes a clause that terminates the usage rights of anyone who brings patent-based litigation against the code.
  • CDDL isn’t a Sun-specific license, and is suitable for software written by anyone. The only mention of Sun in the license is to indicate that Sun is the license steward and the only entity able to create new versions of the license.

See http://www.sun.com/cddl/ and http://www.opensolaris.org/os/about/faq/licensing_faq/ for further information about CDDL license terms.

In my opinion, the CDDL is a very attractive license for open source software. It certainly doesn’t seem evil or unfair in any way, so I have a hard time understanding the bad reputation that it seems to have gotten. It is true that CDDL code can’t be mixed with GPLv2 code, but that’s not because CDDL is incompatible with GPLv2, but rather because GPLv2 is incompatible with CDDL. GPLv2 is incompatible with lots of other licenses, including other popular open source licenses like the Apache License, the BSD license, and the Mozilla Public License. In fact, the GPLv2 is even incompatible with the GPLv3 (as per http://www.gnu.org/philosophy/license-list.html#GNUGPL). It is unfortunate that the licenses used by OpenSolaris and Linux aren’t compatible with one another, but I think that it would have been a mistake to use GPLv2 for OpenSolaris and highly doubt that incompatibility with Linux was seen as a key benefit when CDDL was selected for OpenSolaris.

Our decision to use CDDL for OpenDS was made after careful consideration and was based on the merits of the license. We were certainly not pressured into using it by Sun, and in fact during discussions with Sun’s open source office they wanted to make sure that we weren’t choosing it just because we thought it was the company line but rather because it was the right license for the project. There are a number of other open source licenses out there, and they have their benefits as well, but if I were to be involved with the creation of a new open source software project, then I would imagine that CDDL would at least be in the running during the license selection process.

Clarifications on the Open Letter

It appears that there are some questions about the content in the open letter that I posted earlier this week. Simon Phipps (Sun’s chief open source officer) posted a comment on my blog that summarizes these questions, so I will use this post to reply to it. The original text from Simon’s post will be indented and italicized, and my responses will be in plain text.

Hi Neil,

Despite the fact you didn’t actually contact the Sun ombudsman service[1], I have had several referrals of your postings. I’ve done a little investigation and I have some questions about your story.

Actually, I did contact the Sun ombudsman service. The exact same text that was included in my blog post was also sent as an e-mail message. That message was sent from neil.a.wilson[at]directorymanager.org with a timestamp of “Wed, 28 Nov 2007 09:57:03 -0600” (9:57 AM US Central Time), and was addressed to neil.a.wilson[at]directorymanager.org. It was blind copied to the following recipients:

  • users[at]opends.dev.java.net
  • dev[at]opends.dev.java.net
  • jis[at]sun.com
  • ombudsman[at]sun.com

I did not receive any bounce messages in reply, and my mail server logs confirm that Sun’s mail server did in fact accept the message for delivery. If my message never made it into the ombudsman[at]sun.com inbox, then perhaps the problem is on your end (e.g., over-eager spam filtering, which happened to me on more than one occasion when I was a Sun employee).

It’s very regrettable that you were laid off, no question. That’s not a part of your narrative I can comment on for HR/legal reasons, but it’s always sad when business pressures force layoffs.

Thank you for the sentiment. While I wasn’t particularly happy about being laid off, I don’t hold a grudge against Sun because of it. Regardless of whether I think it was an intelligent move, Sun did have a justification for it (geographic consolidation). If the only thing that had happened was that I got laid off, then I fully expect that I would still be actively participating in the project. I believe I demonstrated that through my continued involvement in the project even after having received my layoff notification.

However, I do question how you characterize the requests to change the OpenDS governance. I note that the OpenDS governance was changed on April 28 by sshoaff[2] and that the original line reading:

“This Project Lead, who is appointed by Sun Microsystems, is responsible for managing the entire project”

was replaced by one reading

“This Project Lead, who is appointed and removed by a majority vote of the Project Owners, is responsible for managing the entire project”

I have not been able to find a discussion of this change anywhere, and I understand from your former managers that they were unaware of this change. While you characterize the request made of you as:

“demanded that the owners approve a governance change that would grant Sun full control of the OpenDS project”

it seems to me that what in fact happened was you were (collectively) asked to revert that change to its original state. On present data, it appears to me that far from Sun acting in bad faith over the governance, they were in fact making a reasonable request to correct an earlier error. Indeed, all that has happened to the governance document since then is to revert the change[3].

This is not the whole story.

First, the change to which you refer (committed in revision 1739 by Stephen Shoaff on April 28, 2007) was absolutely not unauthorized. Stephen Shoaff and Don Bowen both served as officers of the company (Stephen as the director of engineering for directory products, and Don as a director of product marketing for all identity products), and David Ely was the engineering manager and the Sun-appointed project lead for OpenDS under the original governance. This change was also discussed with Sun’s open source office, and while you (Simon) may not have been directly involved with those discussions, Don Bowen has informed me that there was a telephone conversation in which you told him that each project should make the decisions that are best for its respective community. We also involved the OpenDS and Identity Management communities in the process, although those conversations were on a personal basis with key members rather than at large on the public mailing lists. Unfortunately, none of us can currently produce any evidence to support these claims. When we received the layoff notification we were required to return or destroy any Sun property that we may have had, and since all of these discussions would be considered Sun-internal communication we no longer have access to any record of them in compliance with the notification requirement. However, full documentation to support all of these claims should exist within Sun should you feel the need to verify them.

Second, this was not the governance change to which I referred in my original post. In the meeting that the owners (including Ludovic) had on November 13, 2007, we were informed that it was Sun’s intention to replace the governance with something different and that the new governance would be chosen and managed by a Sun-selected committee. This change has not yet been applied, and as I am no longer involved with the project I cannot comment on whether there is still intent to make it. However, Eduardo referenced this future change on the OpenDS user mailing list today (https://opends.dev.java.net/servlets/ReadMsg?list=users&msgNo=627) when he said “We want to improve these governances, ideally in a consistent way.”

There was no discussion at all during the November 13 meeting of the change made in revision 1739, and it was not brought to our attention until the following evening. To the best of my knowledge the request to revert the change made in revision 1739 was never discussed with anyone other than Stephen Shoaff. I know that I personally never received any communication from anyone within Sun asking me to approve reverting this change.

Finally, I would ask Sun to justify their subsequent reversion of that change and how they believe that it was in the best interests of OpenDS, or how doing so was consistent with Sun’s public stance on the importance and value of community-led projects. Despite the fact that the change we made had more than sufficient authorization, I fail to see how reverting it is in any way an improvement. How is reverting to a Sun-appointed absolute authority better for the community than the consensus-driven model we thought Sun wanted?

I would be happy to continue to investigate this case, so if you would like to submit a complaint to ombudsman@sun.com with full data supporting your accusations I would be pleased to investigate further. I’m afraid I don’t usually read your blog so you’ll need to alert me (webmink@sun.com) to any postings here that need my attention.

Regards

Simon

[1] http://blogs.sun.com/webmink/entry/open_source_ombudsman
[2] http://tinyurl.com/ys5hf3
[3] http://tinyurl.com/yto9qs

I am afraid that there may not be any benefit to further investigation. It appears that you are using your position to attack my credibility and focus on damage control for Sun rather than acting impartially on my behalf as per your claim at http://blogs.sun.com/webmink/entry/open_source_ombudsman. Even if for some reason you did not receive the message that I originally sent to ombudsman[at]sun.com, I find it very discouraging and disappointing that Sun’s community advocate would choose to respond in such an inflammatory manner via e-mail messages and blog comments without even making an attempt to contact me for further clarification. You have accused me of launching an attack with partial facts but apparently have made no attempt to contact me to get the complete facts for yourself. I had gone out of my way to indicate that I felt that this was an isolated incident and not in-line with Sun’s true stance on open source, but it’s hard to continue to hold that position when Sun’s ombudsman and chief open source officer behaves in such a manner.

An Open Letter to the OpenDS Community and to Sun Microsystems

My name is Neil Wilson, and until recently I held the Owner and Committer roles in the open source OpenDS project. I helped found OpenDS, served as the project architect, and have contributed more code than anyone else. However, I must now regrettably inform you that I have been compelled to end all involvement with OpenDS. I have resigned all roles that I held in the project and have rescinded my Sun Contributor Agreement. I will no longer contribute code, documentation, bug reports, suggestions for improvement, or advice of any kind.

I joined Sun Microsystems in October of 2001, where I was directly involved with its proprietary directory products in addition to my later work with OpenDS. I wrote and analyzed code to provide new features, fix bugs, and improve performance, and I developed a number of tools to help improve the Directory Server experience. I had excellent working relationships with a number of customers, and I was instrumental in closing several deals worth many millions of dollars. I consistently received the top rating in annual performance reviews, and I worked with a number of other groups within Sun, as well as with Sun partners, to help ensure that the Directory Server products worked as well as possible with other Sun technologies, including Solaris, Java, and a number of other software products, as well as many different kinds of hardware.

On September 27, 2007, I was notified that Directory Server engineering, including OpenDS, was being consolidated in Grenoble, France, and that US-based positions were being eliminated.  Some individuals were reassigned to work on other software products, but among those laid off were the four OpenDS project owners (myself, Stephen Shoaff, Don Bowen, and David Ely), as well as the OpenDS community manager (Trey Drake).  We would technically remain Sun employees for the next two months, but we were not able to access any Sun-internal resources, were not required to do any work, and were encouraged to use that time to seek employment elsewhere.

This was certainly a very surprising move, but the shock wore off, and within a few days the OpenDS owners and community manager got together and decided that even though we were no longer working for Sun, we would like to continue our involvement with OpenDS and ensure that the project was in the best possible position moving forward.  To that end, we had face-to-face meetings, conference calls, and e-mail discussions with Sun employees still involved in the project to provide advice and knowledge transfers.  I also continued participation on the project mailing lists, committed code changes, and updated the project issue tracker and documentation wiki.

The project owners also decided, as an act of good faith (and without any prompting from Sun), that we should elect a fifth owner who was a Sun employee, since Sun had certainly made a significant contribution to the project. We appointed Ludovic Poitou to this position, as he had served as the architect for Sun’s proprietary Directory Server product for several years, and we further suggested amending the project governance to grant Sun Microsystems a permanent seat in the project ownership. On November 13, 2007, the OpenDS project owners (including Ludovic) met via conference call with the intention of discussing this governance change. However, during that meeting Ludovic informed us that Sun’s intention was to change the OpenDS governance policy so that the project was controlled entirely by a Sun-selected committee. This was a surprise to us, and we indicated that while we were willing to discuss this further to better understand what was involved, we were concerned that it was not necessarily in the best interests of the OpenDS project or its associated open source community. We noted that the current OpenDS governance policy stated that governance changes could only be made by a consensus of the project owners, and therefore we would be required to approve any potential change.

On November 14, 2007, a member of executive management within Sun’s software division contacted one of the recently laid-off OpenDS project owners and demanded that the owners approve a governance change that would grant Sun full control of the OpenDS project. During this call, we were threatened that if we did not make this change, we could face immediate termination and loss of all severance benefits. The four former-Sun owners discussed this and decided that we could not in good conscience approve the requested change, as we did not believe it would be in the best interests of the project, but we were also not willing to risk the considerable financial loss that could result if Sun decided to make good on that threat. After first trying to resolve the issue through more amicable avenues, we were ultimately compelled to resign our ownership and end our association with the project on November 19, 2007.

This was a very disappointing and hurtful turn of events. I believe that we acted only in good faith and in the best interests of the community, and we had clearly taken action to protect Sun’s position in the project even after our own jobs had been eliminated. OpenDS was founded as a community-focused “doacracy”, and no one has done more than I have to help ensure its success, or to ensure Sun’s success through OpenDS. However, Sun management has shown that, at least in this case, it is willing to resort to rather hostile tactics to preserve absolute control. This is most certainly not in the spirit of open source and open development that we tried to foster, or that Sun claims to embody.

Please note that I don’t feel this action was representative of Sun’s true open source strategy, but rather a relatively isolated incident brought on by middle management acting of its own accord. I believe, and certainly hope, that the public statements made by individuals like CEO Jonathan Schwartz and Chief Open Source Officer Simon Phipps are honest and that Sun truly does want to be a genuine community-focused open source company, and I have no reason to believe that they were aware of or involved with any of what happened with OpenDS. Similarly, I sympathize with the remaining Sun-employed OpenDS engineers who may have been unwittingly drawn into this turmoil, and I am disappointed that we will no longer be able to work together, but it was not my choice. Unfortunately, if Sun is unable to ensure that its middle management is on the same page as the senior management setting the open source strategy and the engineers making it happen, then it won’t take too many more incidents like this (or the Project Indiana / OpenSolaris Developer Preview naming fiasco) for people to start to question Sun’s true intentions.

In order to avoid potential retaliation from Sun, I have remained silent on this matter throughout the two-month period following the layoff notification, during which I was still technically a Sun employee. Now that this time has elapsed, I am no longer at risk of losing severance benefits, and I believe that it is important to clear the air. I have no desire to pursue this matter any further through legal or other channels; I simply wish to explain why I am no longer able to be involved with the OpenDS project.

I am passionate about the technology and hope to continue working in this area in the future, but I am not yet prepared to discuss where I’m going from here. You may watch my new blog at / for more information in the future.

Neil Wilson
neil.a.wilson[at]directorymanager.org