SLAMD 2.0.0-20090227

After quite a significant hiatus, I have just released a new version of SLAMD, which I am calling 2.0.0-20090227. It isn’t the official 2.0.0 release, but it should be pretty solid, and there are significant improvements over the previous 2.0.0-alpha1 build. You can get it at, and the source is hosted in a subversion repository at

This is the first public build of SLAMD since I left Sun, but the SLAMD codebase hasn’t exactly been collecting dust since then. One of the first things that I did when we founded UnboundID was to start working on the UnboundID LDAP SDK for Java, and within a few weeks of that I was already using it in SLAMD jobs. However, since until recently our LDAP SDK was not publicly available, I couldn’t really release a version of SLAMD that included it, and I didn’t want to maintain a separate private version of the code that made use of the SDK and a public version that didn’t. Now that our LDAP SDK is open source and publicly available, I’m able to use it in SLAMD without any concerns.

Since it has been over two years since the last public commit, and since I had problems with an older mail server, I have recreated all of the SLAMD mailing lists without any members. If you were previously subscribed to any of the mailing lists and want to keep it that way, then you’ll need to re-subscribe to the desired lists. Instructions for doing so are available at

Note that a number of changes have been made in this release that make it incompatible with earlier releases, and you probably won’t be able to export data from an older server and import it into this new version and have it all work seamlessly. I think that the enhancements in this version were worth it, especially if you’re using it to test LDAP directory servers, and hopefully the incompatibilities introduced now will help avoid the need to introduce further incompatibilities in the future.

Changes Since SLAMD 2.0.0-alpha1

There have been a lot of changes made to SLAMD since the 2.0.0-alpha1 release. Some of the most significant changes are outlined below. There have also been a number of less significant bugfixes and enhancements that probably aren’t worth mentioning individually.

Source Code Refactoring

The source code has been completely refactored in several ways. The code now uses a simplified package structure of “com.slamd.*”. The “example” sub-package has been renamed to “jobs”. All of the source code is now in a single directory structure rather than having separate source locations for a number of the tools.

The source code has also been fully updated to use Java 5 functionality. As a result, SLAMD now requires that you use Java 5.0 or later (I recommend the latest Java 6 build). The code has been updated to use generics and builds cleanly with all lint options enabled. As a result of updating to Java 5, I have also been able to take advantage of the better concurrency support and more accurate timing that it provides.

The job API has been updated so that it is now possible to provide both long and short descriptions. Further, where it previously used the destroy() method as a means of trying to forcefully stop a job, I have changed that to destroyThread(), since the former method overrode the deprecated Thread.destroy() method, which I shouldn’t have done in the first place.

Job Updates

As mentioned above, most of the LDAP-based jobs now use the UnboundID LDAP SDK for Java. This provides better performance and scalability than the Netscape Directory SDK that I had previously been using, and it’s also much more stable under load. All of the LDAP-based jobs using the UnboundID LDAP SDK now extend a common parent class which provides a number of benefits for these jobs, including:

  • It is now possible to perform client-side load balancing so that you can stress multiple servers at once with a single job.
  • All of these jobs support SSL and StartTLS for secure communication with the directory server, and I’ve simplified the interface so that it’s no longer necessary to worry about key or trust stores.
  • These jobs offer the ability to specify a response time threshold, which can be used to keep track of the number of times that an operation took longer than the specified length of time to complete.
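The response time threshold idea boils down to counting operations that run longer than a configured limit. A minimal sketch of that bookkeeping is below; the class and method names are my own illustration, not SLAMD’s actual implementation:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a response-time threshold counter: each
// operation reports its duration, and we count the ones that were
// slower than the configured threshold.
public class ThresholdTracker {
    private final long thresholdNanos;
    private final AtomicLong exceededCount = new AtomicLong();

    public ThresholdTracker(long thresholdMillis) {
        this.thresholdNanos = thresholdMillis * 1_000_000L;
    }

    // Record one operation's elapsed time, counting it if too slow.
    public void record(long elapsedNanos) {
        if (elapsedNanos > thresholdNanos) {
            exceededCount.incrementAndGet();
        }
    }

    public long getExceededCount() {
        return exceededCount.get();
    }
}
```

A job thread would call record() with the measured duration of each operation, and the total can be reported as a job statistic when the job completes.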

I have consolidated many of the LDAP-based jobs where it made sense to do so, so there are fewer jobs overall, but you should still have the same basic set of capabilities, in addition to some new features.

The LDAP Add and Delete Rate job now provides the ability to control how the operations are performed. You can choose to perform only adds, only deletes, or both adds and deletes. If you choose to perform both, then you can either have all of the adds done first and then all of the deletes, or you can choose to delete each entry immediately after adding it.

The LDAP AuthRate job now supports the ability to authenticate using the CRAM-MD5, DIGEST-MD5, or PLAIN SASL mechanisms in addition to using simple authentication.

The LDAP ModRate job allows you to specify the set of characters to use to create the modification, so it will work better with attributes whose syntaxes don’t accept arbitrary characters.

I’ve added a new LDAP Modify DN Rate job which can be used to measure modify DN performance. You can specify a range of entries to be renamed, and the job will actually rename them twice. The first time it will rename them to include the job ID in the RDN, and the second time it will rename them back to the original value.

I’ve added a new LDAP Multi-Connection SearchRate job which allows you to perform searches while maintaining large numbers of connections to the directory in order to test performance with lots of active connections.

I’ve added a new “Wait for Directory” job which allows you to wait for a directory server to become available and start accepting requests before continuing on with other jobs that need to communicate with the server.
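The core idea behind a wait-for-server job can be sketched as a simple polling loop: keep trying to open a TCP connection until one succeeds or a deadline passes. This is an illustration of the concept (class and method names are mine), not SLAMD’s actual implementation:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical sketch: poll until a TCP connection to the given host
// and port succeeds, or give up when the overall timeout expires.
public class WaitForServer {
    public static boolean waitFor(String host, int port, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket s = new Socket()) {
                // Attempt the connection with a short per-attempt timeout.
                s.connect(new InetSocketAddress(host, port), 1000);
                return true;  // the server accepted the connection
            } catch (IOException e) {
                Thread.sleep(500);  // not up yet; retry shortly
            }
        }
        return false;  // gave up without ever connecting
    }
}
```

A real job would presumably also want to verify that the server responds to an actual LDAP operation, not just that the port accepts connections, but the polling structure is the same.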

I have gotten rid of some older jobs that hadn’t been updated in a while and that targeted outdated software or software that I wasn’t in a position to support. This primarily included the jobs targeting an ancient version of the Sun Calendar Server, old versions of the Access Manager product, and Identity Synchronization for Windows. If this is a significant problem, then it shouldn’t be difficult to resurrect them, but I didn’t want to invest the time cleaning them up and I wasn’t in any position to test whether they still work properly.

Resource Monitor Updates

A new resource monitor has been provided that makes it possible to capture arbitrary information from an LDAP directory server that exposes monitoring information. This was written by Bertold Kolics and it is very useful because it can be used to collect all kinds of monitoring information about the state of the directory server while a job is running (if the directory that you’re targeting exposes the desired information).

I’ve also fixed some bugs in a couple of the resource monitors. In particular, the VMStat monitor has been updated to support newer versions of Linux, and the IOStat monitor has been updated to fix a problem where it didn’t always work properly on Solaris.

Other Updates

I have updated the TimeTracker so that it uses the high-resolution timer which can provide up to nanosecond-level accuracy. Previously the TimeTracker only supported millisecond-level accuracy, but this wasn’t fine-grained enough for timing things that took less than a few milliseconds to complete.
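The high-resolution timer in question is System.nanoTime(), introduced in Java 5. The difference from the millisecond clock is easy to see with a minimal sketch (the work being timed here is arbitrary):

```java
// Minimal illustration contrasting the two JDK clocks:
// System.currentTimeMillis() has millisecond resolution at best,
// while System.nanoTime() can resolve sub-millisecond work.
public class TimerDemo {
    // Time a small piece of work with both clocks and return the
    // nanosecond measurement.
    public static long timeWork() {
        long startMillis = System.currentTimeMillis();
        long startNanos = System.nanoTime();

        long sum = 0;
        for (int i = 0; i < 10_000; i++) {
            sum += i;  // work that finishes in well under a millisecond
        }

        long elapsedMillis = System.currentTimeMillis() - startMillis;
        long elapsedNanos = System.nanoTime() - startNanos;

        // The millisecond clock typically reports 0 for this loop; the
        // nanosecond timer still produces a meaningful measurement.
        System.out.println("millis=" + elapsedMillis
            + " nanos=" + elapsedNanos + " sum=" + sum);
        return elapsedNanos;
    }
}
```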

The SLAMD client and resource monitor clients have been updated so that it is now possible to explicitly specify the client ID and source IP address to use when communicating with the SLAMD server. This is particularly useful on multi-homed systems, especially when they might have multiple IP addresses on the same network.

I have updated the job scheduler so that it now uses a blocking queue that allows a new job to be seen immediately. If it’s eligible to start running at the time that it’s scheduled, then it will often be the case that the job will already be running before the administrative interface renders the page indicating that the job has been scheduled.

I have changed the layout of the tools directory so that instead of providing separate subdirectories for each of the major tools, the code for each of those tools is integrated into the main codebase and now there are just shell scripts and batch files for launching them. I have also added a new Tools Guide that summarizes the tools that are available and provides information about using them.

The UnboundID LDAP SDK for Java provides searchrate, modrate, and authrate command-line tools as example programs, and they are now exposed as tools available for use within SLAMD. They are also available through shell scripts or batch files in the tools directory.

I have made some changes to the way that statistics are displayed in the administrative interface so that they are less likely to require horizontal scrolling.

The descriptions for the available jobs have been improved, and it’s now easier to tell what a job does when you schedule it. On the page allowing you to pick which type of job to schedule, SLAMD now provides a pop-up hint that gives a short description of the job. The page allowing you to provide the parameters to use when scheduling a job has also been updated to provide a more detailed description of the job at the top of the page.

Why I like Solaris

A recent InfoWorld article asks whether Solaris is going to be able to continue to compete with Linux.  Apparently someone from the Linux community pointed Paul Krill, the author, to the earlier blog article that I wrote about an issue with OpenDS, and he asked me about it.  While I didn’t really want to bring that up again, I did tell him that I felt Solaris is a better OS than Linux.  I was briefly quoted in the article, but I would like to expand on that a bit.

First, let me say that I’ve got a pretty decent amount of experience with both Solaris and Linux.  I started using Linux in 1995 and Solaris around 1998.  Linux was my primary desktop operating system from probably around 1997 through about 2004, when I switched to Solaris.  I still run Linux quite a bit, including on my laptop out of necessity (primarily because Solaris doesn’t support the Broadcom Ethernet interface), but for the work that I do, which is primarily development, I find that Solaris is just more convenient.  On servers, there is no question that I prefer Solaris over Linux.

My fondness for Solaris definitely did not come easy, nor was it always warranted.  My first experiences were pretty unpleasant compared with Linux.  For many years, Solaris was anything but user friendly, providing only relatively antiquated shells and shipping without critical utilities like gzip or a C compiler.  CDE remained the default desktop environment for far too long, and it wasn’t easy to come by a lot of software that was commonly included with Linux distributions.  And of course, I’m sure that Sun’s decision to stop shipping Solaris for x86 systems for a period of time seriously hindered its usability and probably steered a number of potential customers away.

However, my opinion started to sway as Solaris 10 started to materialize.  Being a Sun employee, I had access to internal builds and I started to check them out.  Months before it was released to the public, I had changed my tune.  It was a serious leap forward in usability and convenience, and the new features were very compelling.  Since then, I’ve followed the development of Solaris 10, Nevada (the code name for what may become Solaris 11), and OpenSolaris.

It’s a lot easier to see why Linux may be more appealing than Solaris on the desktop.  Linux has better hardware support, so it will run on a lot more systems than Solaris.  Availability of applications can also be an issue, but I think in many cases that’s somewhat due to lack of publicity.  Many people haven’t heard about Nexenta (which is very much like Ubuntu Linux but with a Solaris kernel) or Blastwave (a site providing a large amount of Solaris software using a relatively simple interface that behaves like apt-get), and if you’re running Sun’s OpenSolaris distribution then you can use IPS to get access to a number of applications in a similar manner.  But even so, there is just more software that works on Linux (or works more easily on Linux) than on Solaris.

However, there are a lot of reasons that I prefer Solaris and am willing to overlook some of its shortcomings.  I’ve listed some of them below.  Note that I’m not trying to slam Linux.  I do like it and hope that it continues to improve.  Consider this a wish list.

Overall Feel and Consistency

Linux feels like it was written.  Solaris feels like it was designed.  While I think that Sun’s development processes can sometimes be a little heavyweight, and I think that Sun is trying to retain too much control over OpenSolaris, there is a lot to be said for having processes in place to guide development.

Unfortunately, this isn’t something that Linux can simply adopt.  Linux isn’t one thing, and there isn’t any one organization controlling any significant part of it.  As Solaris integrates an increasing amount of third-party software, it will likely begin to suffer from this somewhat as well, although perhaps to a lesser extent, because there are still some things that can be made a bit more common.

Interface Stability

This is an area where Solaris shines and Linux is just plain abysmal.  If an application runs on Solaris version X, then there’s an excellent chance it will work on version X+1.  This is far from true on Linux.  I’ve had countless experiences where an application that worked on one release of Red Hat or Ubuntu didn’t work on the next.  In some of those cases, recompiling the application was sufficient to get it running, but there are still a lot of cases where that’s not enough, and sometimes the source isn’t available to even try.

Seamless 32-bit and 64-bit Integration

Solaris has had 64-bit support since Solaris 7, and I know that I was testing 64-bit Solaris on x86-64 systems before the Linux kernel had 64-bit support for those systems.  Of course, Linux has included support for x86-64 systems for a while now, but the bigger issue with 32-bit versus 64-bit support in Linux is that it’s generally one or the other.  If you install a 64-bit Linux distribution, then it’s generally entirely 64-bit and may not even include the ability to run 32-bit applications at all, at least not without installing additional packages.

This leads to silliness like 64-bit web browsers and complaints about lack of 64-bit plugins.  I don’t see much need for a browser to support over 4GB of memory, which is the primary benefit for 64-bit applications.  There are a few other cases where 64-bit support may be beneficial (e.g., having access to more CPU registers), but they are primarily beneficial for applications that need to do a lot of calculation or cryptography.

It’s true that Sun should have provided a 64-bit Java plugin, and plugin providers are equally to blame for the same infraction.  However, if 32-bit and 64-bit support had been better integrated then only applications which truly benefit from 64-bit support would really need to be provided as such.

Process Rights Management

In both Linux and Solaris, root is an all-powerful user that can do pretty much anything.  Historically, there are a lot of things that only root can do.  Both Linux and Solaris provide mechanisms for granting a specified set of normal users the ability to run certain commands with root authority, but in such cases those applications have full root access, and it is necessary to trust that they will behave properly and that they don’t have any security holes that could lead to unintended consequences.

Solaris improves upon this with process rights management, also known as least privilege.  Rather than granting an application full root access, it is possible to grant only those portions of root access that the application needs.  For example, a web server may need to listen on port 80, but there is no need to let it read or write any file on the system, load kernel modules, or halt or reboot the system.  Instead of granting it full root access (even if it gives it up later via setuid), you can just grant it the net_privaddr privilege so that it can listen on ports below 1024 without having access to any of the other things that might be possible if it were truly running as root.


ZFS

ZFS is a phenomenal filesystem + volume manager.  Linux has good filesystems and a volume manager, but none of them have the combination of power, flexibility, and ease of use that ZFS offers.  It works well on a fraction of a drive, and it works well on a system like the Sun Fire x4500 with 48 drives.  It’s got good performance.  It’s got great data integrity features through redundancy, checksums, and multiple copies of data and metadata.  It’s got instantaneous atomic snapshots.  It never needs to be defragmented.  It never needs fsck.  The implementation of cryptographic support for encrypted filesystems is going to be integrated in the near future.

ZFS is great for servers, but it’s also very useful on desktop systems.  As a developer, I find the instant snapshot and rollback capability to be extremely useful for resetting an application to a known clean state when running tests.  Compression increases the amount of data that you can store and in many cases actually helps performance.  The ability to keep multiple copies can help prevent you from losing information even if you’ve only got a single disk.


Zones

Virtualization is a pretty hot topic right now, and there isn’t really a shortage of options.  VMware, VirtualBox, Xen, QEMU, and other solutions make it easy to run one operating system on top of another, or another operating system on top of itself.  They provide excellent separation of environments, but they are all pretty heavyweight because they require a significant amount of memory and disk space for the virtualized environment.

In some cases, e.g., for security or application testing, you want something that looks like a separate system but you don’t need a different operating system.  For those cases, Solaris offers an excellent alternative in the form of zones.  A zone provides excellent separation and for the most part looks like a completely separate system, but it requires virtually no memory and in many cases a relatively small amount of disk space, because it’s able to share a large amount of the filesystem from the host system (also called the global zone).  Zones are trivial to configure, and when used in conjunction with ZFS snapshotting and cloning it is possible to create new zones nearly instantly.

In the default case, a Solaris Zone runs exactly the same OS and kernel version as the host system, which makes it extremely cheap.  However, because of the strong compatibility that Solaris provides between versions, alternative implementations have been added which make it possible to create zones which appear to be running Solaris 8 or 9 on Solaris 10.

It’s also possible to run a Linux emulation layer in a zone, which is pretty impressive, but unfortunately the support is pretty limited and there doesn’t seem to be much active development going on to improve it.  However, while I did make use of this capability when it was first released for a few specialty purposes (e.g., running Acrobat Reader), I haven’t used it in quite a while because I can run pretty much everything I need natively in Solaris, and I have Ubuntu installed in VirtualBox, which provides a much better way to run Linux on Solaris.


Observability

Both Linux and Solaris provide pretty good ways of identifying what’s happening on the system, but Solaris tends to do a better job.  At a very high level, I’ve noticed that most Linux systems don’t ship with utilities like iostat and mpstat and require the installation of additional packages, whereas they are included by default on Solaris.

When looking into what a particular process is doing, the Solaris truss and Linux strace tools are pretty similar, and they allow you to see the system calls that the process is making.  However, at least some Linux distributions (like Ubuntu) don’t seem to provide pstack or an equivalent command for getting a stack trace of all threads in the process.

Of course, the big feature that Solaris has in this arena is DTrace.  Linux does have SystemTap, but at the present time it appears to be limited to examining the kernel, with no support for tracing user-space applications.  DTrace provides full support for debugging user-space applications, and there is also special support for tracing Java, Ruby, Perl, shell scripts, and other scripting languages.  And because Solaris provides stable interfaces, there’s a much better guarantee that DTrace scripts which use stable interfaces will continue to work in the future.

Sleeping with Solaris

I recently came across an interesting issue on Solaris.  I was trying to do something pretty simple:

  1. Check to see if there’s work to do.
  2. If there is work to do, then do it and go back to step 1.
  3. Sleep for a short length of time and go back to step 1.

This worked great on Linux, but not so well on Solaris.  Things were getting done in fractions of a millisecond on Linux, but it was taking over ten milliseconds on Solaris.  I hypothesized that the sleep might be the problem, and a quick test confirmed it.  It turns out that on Solaris in the default configuration you can’t sleep for less than ten milliseconds at a time (at least in Java).  This is because on Solaris, the Java sleep method is dependent on system clock ticks, and the default configuration uses 100 ticks per second, or one tick every ten milliseconds.  Any attempt to sleep for less than that time will get rounded up to ten milliseconds, and attempts to sleep for a length of time greater than that will be rounded up to the nearest multiple of ten milliseconds.
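A quick way to confirm this behavior is to time a series of one-millisecond sleeps and look at the average; a minimal sketch:

```java
// Measure the effective resolution of Thread.sleep(1): time a number
// of one-millisecond sleeps and compute the average. On Linux this
// averages a millisecond or two; on Solaris with the default 100Hz
// clock tick, each sleep is rounded up to roughly ten milliseconds.
public class SleepGranularity {
    public static double averageSleepMillis(int iterations)
            throws InterruptedException {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            Thread.sleep(1);
        }
        return (System.nanoTime() - start) / 1_000_000.0 / iterations;
    }
}
```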

You can change this behavior by adding the following to the /etc/system file and rebooting the system:

set hires_tick=1

This will change the default from 100 ticks per second to 1000 (which means you can sleep in increments of one millisecond instead of ten).  There is also a way to fine-tune how many ticks per second you want (via hires_hz), but changing that isn’t supported and every mention of it I can find indicates that you don’t want to increase the frequency anyway because it will probably degrade performance as it will become expensive to update the tick counter too often.

There are, of course, other ways to accomplish the task I had originally set out to do (e.g., blocking queues, semaphores, wait/notify, etc.), but the bigger takeaway here is that you shouldn’t depend on algorithms involving fine-grained sleeping, especially if your application needs to run on Solaris.
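For example, a blocking queue lets the worker block until work actually arrives instead of polling and sleeping, which removes any dependence on the platform’s sleep resolution. A minimal sketch of that approach (the class is my own illustration):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the blocking-queue alternative to a poll-and-sleep loop:
// the worker blocks until a task is available (or a timeout elapses),
// so work is picked up immediately with no tick-based rounding.
public class WorkLoop {
    private final LinkedBlockingQueue<Runnable> queue =
        new LinkedBlockingQueue<>();

    public void submit(Runnable task) {
        queue.add(task);
    }

    // Run one task if any arrives within the timeout; returns whether
    // a task was actually run.
    public boolean processOne(long timeoutMillis)
            throws InterruptedException {
        Runnable task = queue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        if (task == null) {
            return false;  // timed out with nothing to do
        }
        task.run();
        return true;
    }
}
```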

The Future of SLAMD

Even to the casual observer it’s not too hard to notice that not much has happened with SLAMD in quite a while.  In fact, it’s been almost two years since my last commit to the public repository.  There are several reasons for this, including:

  • It works pretty doggone well in its current state.  I still use it quite a bit and hear that several other people do, too.  There are definitely ways that it could be improved, but so far it has been able to meet my needs.  Most of the SLAMD code that I have written since then has been in the form of new jobs that were pretty task-specific and aren’t likely to be useful in other environments.
  • Most of the time, I have been really busy with other things.  I just haven’t had nearly as much time to invest in it as I would have liked.  Of course, I did have a forced two-month vacation near the end of last year, but the terms of my severance stated that I wasn’t allowed to do any work and I didn’t want to press my luck.  Since that period ended, I’ve been going full-steam-ahead on something else.
  • There are parts of it that could do with a redesign.  The code used to generate the administrative interface is currently all held in one large file and could stand to be broken up.  There are also many areas in which updating the code to require Java 5 would allow it to be much more efficient and scalable, and the introduction of features like generics and enums would make the code easier and safer to edit.

Ultimately, I think that it’s at the point where it would be better to invest the effort in a clean rewrite than to try to build upon what’s there now, but so far I haven’t had much opportunity to do either one of them.  It’s definitely something that I would like to do and I’m hopeful that I might have the time to do it at some point in the future.  I have a lot of ideas for interesting and powerful enhancements, so I don’t want to count it out just yet.

Back from the Dead

Well, the title isn’t exactly accurate.  I haven’t really been dead, so please don’t come after me with fire and pitchforks.  But I certainly haven’t been very visible for a while, and I have felt like I have been buried under a mountain of work, so it may not be all that far off.

I tried writing general interest posts for a while, but gave that up because my guess is that nobody cares.  It’s true that I see a lot of movies (although I’m not trying to match last year’s pace of 109 different new releases), but there are a lot of other places you can go to read better and more insightful reviews.  And I haven’t been playing my Wii very much recently either, so I don’t have much to say on that front.

Honestly, I think that the real value of my blog here is to keep things technical.  That’s what I did when I was at Sun, and it worked out well.  Unfortunately a lot of what I’m doing now is still a bit under wraps so I’m not quite ready to comment on it yet, but hopefully that will be coming in the not-too-distant future.  I do think that going without saying anything for too long was a mistake as well, and I’ll try to avoid that moving forward.

Wii Bowling Tips

Since I’ve had more free time than usual over the last couple of months, I spent some time with my Wii, and much of that has been in Wii Sports. I can’t seem to get the hang of boxing (actually, I’ve never lost any of the few bouts that I’ve had, but it seems like just flailing your arms about is enough to win, and a lot of my punches don’t seem to register properly in the training modes), but I’ve gotten pretty good at the rest of the games and for some reason I’m particularly attached to bowling. When I first started, the scores I got weren’t any better than I could achieve in real life, but over time I’ve been able to refine my skills and I’ve now had several perfect games. In the process, my approach has come to look a lot less like actual bowling, and I have to say that takes some of the fun out of it, but nevertheless it’s nice to see a top score of 300.

As far as the other sports go, the only advice I can give is that practice makes perfect, so you’ll just need to keep at it. Also, sometimes nontraditional approaches may be helpful. For example, I’ve found that I often have better luck in batting practice when I’m sitting down and just swing with a flick of the wrist rather than standing with a traditional batting stance or even using a full swing. Also, there are plenty of other tips and videos available, and YouTube seems to be a good starting point.

Now, onto the bowling tips.

The Obvious Stuff

There really isn’t any huge secret to doing well. The most important thing to do is to figure out what works well for you and then keep doing that. As long as you are consistent, you should get repeatable results. Consistency includes:

  • Starting in the same position on the floor and with the same angle. Use the arrows on the floor to help ensure that you have the same position, and if you press “up” on the cross button then it will zoom in so you can fine-tune your position. Press “up” again to zoom back out.
  • Using the same delivery every time. This includes the point in your approach at which you start the swing, the speed of the swing, the amount of spin you put on the ball, and the point at which you release the ball.

Of course, this isn’t necessarily as easy as it sounds and I usually find some way to screw it up. Before I got my first perfect game (and since then as well), I threw many 279, 289, and even 299 point games where I left one pin standing that I had to pick up with a spare. However, through lots of trial-and-error, I have managed to isolate a pretty minimal set of actions that can help you out. Note that I’m right-handed, so if you’re a lefty then some of this may or may not be helpful (or at the very least you’ll probably want to reverse it).

Another somewhat obvious tip is that if your main goal is to get a perfect game, then it doesn’t make sense to continue playing a game if you’ve already blown it. In that case, you can press the “+” button on the controller to bring up a menu that allows you to start the game over without the need to finish the current game. Clearly, this is only something that you’ll want to do when you’re playing solo as it would probably be quite annoying to others if they were playing along. And even when you are alone it’s not very sportsman-like because if you do this it will be like you hadn’t played that game at all so it won’t have any potential to impact your score (and if you get a high enough rating, anything less than a perfect game can actually hurt your score).

Consistently Getting Strikes

One thing that I found was that I kept getting messed up on the approach. This wasn’t always my fault, as I noticed that sometimes the animation at the beginning of the approach got a little choppy and that threw off my timing. However, I discovered a little trick to completely eliminate that from the equation. If you hold down the “B” button (the trigger on the underside of the controller) then it will start the approach, but if you keep holding it down you will stop once you reach the scratch line. If you wait to make your swing until after you’ve stopped then you’ll completely eliminate any variability from the approach.

With the approach problems solved, it’s much easier to maintain consistency in the rest of the process. The easiest area in which to remain consistent is in your positioning and aim, since you can take your time and always make that exactly the same. I’ve found that what works best for me (especially when I don’t have to worry about the approach, and when I use the swing technique outlined below) is to zoom in and align with the arrows on the floor. There are two rows of arrows, a continuous row further down the lane (i.e., higher up on the screen) and a second row closer to you that is split into the left and right sides. The perfect position for me is to align the left edge of the red stripe with the left edge of the leftmost arrow in the lower right group. I don’t change my angle at all, so I always keep the red stripe parallel with the gutters. When I’m done, the red stripe is pretty much aligned with the three pin (the first pin to the right of the head pin). Note that positioning can be sensitive, so it may take some adjustment to get it right, and you really do need to zoom in to ensure that your positioning is perfect.

The final piece of the puzzle is to ensure that your swing has a consistent speed, spin, and release. I can’t really separate these components easily, but what I do is to hold down the trigger until I have stopped at the scratch line, at which point my arm and the controller are pointing straight down and the front of the controller is parallel with my TV screen. Then, without moving my upper arm, I snap my forearm up until my hand is at my shoulder, and at the same time I twist my wrist so that the controller is turned 90 degrees and the face of the controller is perpendicular to the TV screen. I time it so that I start twisting my wrist at the same time I start raising my forearm, and complete the twist when my hand is at my shoulder. Also, at the same time my hand reaches my shoulder, I release the trigger. If everything is done correctly, the ball should end up spinning right into the one and three pins, and it should be a pretty explosive hit.

If you get the motion down, you should easily find yourself consistently scoring over 200; I pretty frequently get at least a 279 and have gotten a 300 several times. Of course, this doesn’t really look like real bowling, and you can even do it while sitting down, which is even more of an abomination. Other than the thrill of getting a 300-point game, I actually prefer a more traditional process, because it’s just more fun when you treat it like a sport than like a computer simulation where precision is everything.

I should also point out that this technique doesn’t work nearly as well in the “power throws” training game, where you start with ten pins but then there are more pins added each throw until you hit 91 pins on the tenth roll. It does pretty well, but you’re not going to get an 890-point game using it. The best that I’ve gotten with this technique (in combination with the 91-pin strike mentioned below) is a 651.

Rolling the Ball Straight

Using a “normal” approach, I’ve found it very hard to get the ball to roll straight down the lane, and I’ve heard others mention this as well. For some reason, my natural swing puts spin on the ball even when I’m trying to consciously avoid it. However, I have come up with a modified version of the swing above that allows me to consistently throw the ball in a straight line so that it doesn’t veer off to one side as it goes down the lane.

In this case, since the ball is going to be rolling straight down the lane, you’ll want to make sure that your position is such that you’re pointing right at where you want the ball to hit. I still hold down the trigger until I stop at the scratch line with my arm pointed straight down and the face of the controller parallel to the TV, but then instead of bending my arm at the elbow I keep my whole arm straight and pull it up until my arm is pointing upward at about a 45-degree angle, and I release the trigger just after my arm has come to a stop without twisting my wrist at all. The whole motion looks kind of similar to the well-known salute to an infamous German World War II dictator, and if you do it right, then you should find that the ball will head straight down the lane.

Note that I generally use this technique for picking up spares, especially with the seven or ten pin right along the gutter, since it’s easier to get a strike when you do have spin on the ball.

The 91-Pin Strike

When you’re doing the “Power Throws” training session, you start with ten pins and then more pins are added for each additional roll until you have 91 pins on the tenth roll. In this game, your score is equal to the sum of all the pins that you knock down in each roll, but if you get a strike with any of your rolls, then you get twice as many points for that roll (e.g., if you get nine pins on the first frame then it’s worth nine points, but if you get all ten pins then it’s worth twenty points). As a result, getting all 91 pins in the last frame is worth 182 points, which can really boost your score, but that’s not an easy feat to accomplish even when using the technique outlined above. However, there is a cheat/Easter egg built into the game that allows you to get a strike on the tenth roll without even touching a single pin.
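For the programmers out there, the scoring rule above can be sketched in a few lines of Python. This is just my own illustration of the rule as I described it (the function name and the roll encoding are mine, not anything from the game):

```python
def power_throws_score(pins_down):
    """Score a ten-roll Power Throws session.

    pins_down[i] is the number of pins knocked over on roll i (0-based).
    Roll n (1-based) racks up 9*n + 1 pins: 10, 19, 28, ..., 91.
    Each roll scores the pins knocked down, doubled if it clears the
    entire rack (i.e., it's a strike).
    """
    total = 0
    for roll, down in enumerate(pins_down, start=1):
        available = 9 * roll + 1
        total += down * 2 if down == available else down
    return total

# Nine pins on the first roll is worth 9 points; all ten is worth 20,
# and clearing all 91 pins on the tenth roll is worth 182.
```

So, for example, `power_throws_score([0] * 9 + [91])` comes out to 182, which is why the rail trick on the tenth roll is such a big deal.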

When you’re in the power throws game, there are rails along the sides to help prevent you from getting a gutter ball. However, with some careful positioning and the right throw, you can actually throw the ball so that it rolls along the top of the rail and stays on it the whole way down, in which case it counts as a strike and you’ll get 91×2 points for that roll. Note that this only works on the tenth frame, so don’t waste your time trying it on the first nine.

The approach that I’ve found works best for this is to move all the way to the right until you stop. Then, move back one click to the left and press “A” to allow you to change the angle, and move two clicks to the right so that you’re angled slightly toward the rail. Then you can hold down the trigger until you stop at the scratch line and then swing your arm and try to put clockwise spin on the ball so that it will counter the natural tendency for the ball to curve back to the left. It may take some practice to get it right, since it’s easy to not put enough spin on the ball and have it fall off the rail before the end, and you can even put too much spin on the ball and have it fall off the rail in the other direction.

I don’t claim to have discovered this trick. I first saw it in a video on YouTube, and you can find an example at

The Bowling Robot

The ultimate in repeatability would be to build a robot to bowl for you, and that’s exactly what someone did: a Lego robot that is able to play Wii Sports bowling. There’s even a video showing it roll a perfect game. This is an incredible feat, and it’s quite an innovative approach. If nothing else, it shows that if you’re able to deliver consistent throws then you really can get a perfect game.

Why the Dislike for CDDL?

Disclaimer: Although I was employed by Sun at the time the CDDL was created and chosen as the license for the OpenSolaris code base, I was not involved in either of those processes and was not privy to any of the related discussions. I am not making any attempt to speak for Sun, and all information provided in this post is either based on publicly-available information or my own personal opinion.

In reading discussions about what has happened with OpenDS, I’ve seen a wide range of reactions. This is to be expected, but one thing that I have found to be a bit surprising is that there have been some comments that are critical of the Common Development and Distribution License (CDDL). This isn’t the first time that such comments have been made, as I’ve heard them ever since the license was first created, but I am a little puzzled by them and the fact that they have persisted for so long. I think that the CDDL is an excellent open source license and that many of the negative comments stem from not really understanding it, while others may have something to do with the fact that the open source community in general has been and continues to be somewhat suspicious of Sun (who authored the CDDL).

The CDDL was originally created as a potential license for OpenSolaris. This drew a lot of criticism because many people, especially those in the Linux community, wanted Sun to use the GNU General Public License (GPL). Since GPLv3 was nowhere near complete at the time, if Sun had chosen the GPL it would have had to be GPLv2, and that would have been effectively impossible for Sun to do in a reasonable way. While Sun certainly owns the copyright on most of the code in Solaris, there are parts of the code that Sun licenses from third parties. Since GPLv2 doesn’t play well with non-GPLv2 code, if Sun had chosen GPLv2 for OpenSolaris, they wouldn’t have been able to include some of those third-party components (especially those that interact directly with the kernel), which would have made it a lot less attractive to potential users.

In that case, about the only people who would have been happy are those in the Linux community, because they would have been able to take the best parts of Solaris and pull them into Linux. OpenSolaris itself wouldn’t have been really useful until either the third-party components had been rewritten or their respective copyright owners had been convinced to make them available under GPLv2. Other operating systems that use non-GPL licenses (like the BSD-based variants, which have gotten a lot of benefit from the OpenSolaris code) wouldn’t have been able to use it, and third-party vendors (especially those that need kernel-level interaction, like hardware device driver authors) would also have found it much less attractive. It is possible that some of these concerns could have been addressed by creating GPL exceptions, much as Sun has done with Java, but even then there would have been significant deficiencies that GPLv2 doesn’t address, like legal concerns about code that is covered by patents.

Rather than try to pigeonhole OpenSolaris into GPLv2, Sun chose to look at other options, including the possibility of using their own license, which ultimately led to the creation of the CDDL.

Before I go any further, let me briefly describe the primary types of licenses that exist in the open source world. They fall into three basic categories:

  • Licenses which preserve open source at all costs, like the GPLv2. These licenses require that any software that uses code under such a license must always be open source. In other words, you can’t use code licensed in this manner in an application with closed-source components. This is very good for the community that releases the code under this license, since it ensures that they will always have access to any improvements made to it, but it’s less friendly to downstream developers since it creates significant restrictions on how they might be able to use that code.
  • Licenses which preserve freedom at all costs, like the BSD and Apache licenses. These licenses place very few restrictions on how other developers can use the code, and it’s entirely possible for someone to take code under such a license and make changes to it without making those changes available to anyone else, even the original developers.
  • Licenses which attempt to strike a balance between open source and freedom, like the Mozilla Public License, the CDDL, and GPLv3. These licenses generally require that any changes to the existing code be made available under the terms of the original license, but any completely new code that is created can be under a different license, including one that is closed source.

As someone who has done a significant amount of both open source and closed source development, I really like licenses in this third category. If I make code that I have written available under an open source license, then I like the guarantee that this code will remain open. On the other hand, I also like giving others the freedom to do what they want with their own code, even if some of their code happens to interact with some of my code, and I know that commercial users are much more likely to shy away from licenses in the “open source at all costs” camp than licenses in the other two categories.

So what are the specifics of the CDDL? It’s based on the Mozilla Public License, but clarifies some things that the MPL doesn’t cover. The basic principles of the CDDL are as follows:

  • CDDL has been approved by OSI as an official open source license, which means that it meets all of the minimum requirements defined at
  • CDDL is a file-based license. This means that if you make any changes to CDDL-licensed software, any existing files that you modify need to remain under CDDL, but any new files that you create can be under whatever license you want as long as that license isn’t incompatible with CDDL.
  • Similar to the above point, CDDL is very friendly when interacting with code under other licenses. This makes it easy to mix CDDL-licensed code with libraries under other licenses, or to use CDDL-licensed libraries in a project under a different license.
  • CDDL includes an explicit patent grant clause, which means that if any of the code is covered by patents then anyone using or extending that code is also granted the right to use those patents. It also includes a clause that terminates the usage rights of anyone who brings patent-based litigation against the code.
  • CDDL isn’t a Sun-specific license, and is suitable for software written by anyone. The only mention of Sun in the license is to indicate that Sun is the license steward and the only entity able to create new versions of the license.

See and for further information about CDDL license terms.

In my opinion, the CDDL is a very attractive license for open source software. It certainly doesn’t seem evil or unfair in any way, so I have a hard time understanding the bad reputation it seems to have gotten. It is true that CDDL code can’t be mixed with GPLv2 code, but that’s not because the CDDL is incompatible with GPLv2; rather, it’s because GPLv2 is incompatible with the CDDL. GPLv2 is incompatible with lots of other licenses, including other popular open source licenses like the Apache License, the BSD license, and the Mozilla Public License. In fact, GPLv2 is even incompatible with GPLv3. It is unfortunate that the licenses used by OpenSolaris and Linux aren’t compatible with one another, but I think that it would have been a mistake to use GPLv2 for OpenSolaris, and I highly doubt that incompatibility with Linux was seen as a key benefit when the CDDL was selected for OpenSolaris.

Our decision to use CDDL for OpenDS was made after careful consideration and was based on the merits of the license. We were certainly not pressured into using it by Sun, and in fact during discussions with Sun’s open source office they wanted to make sure that we weren’t choosing it just because we thought it was the company line but rather because it was the right license for the project. There are a number of other open source licenses out there, and they have their benefits as well, but if I were to be involved with the creation of a new open source software project, then I would imagine that CDDL would at least be in the running during the license selection process.

Clarifications on the Open Letter

It appears that there are some questions about the content in the open letter that I posted earlier this week. Simon Phipps (Sun’s chief open source officer) posted a comment on my blog that summarizes these questions, so I will use this post to reply to it. The original text from Simon’s post will be indented and italicized, and my responses will be in plain text.

Hi Neil,

Despite the fact you didn’t actually contact the Sun ombudsman service[1], I have had several referrals of your postings. I’ve done a little investigation and I have some questions about your story.

Actually, I did contact the Sun ombudsman service. The exact same text that was included in my blog post was also sent as an e-mail message. That message was sent from neil.a.wilson[at] with a timestamp of “Wed, 28 Nov 2007 09:57:03 -0600” (9:57 AM US Central Time), and was addressed to neil.a.wilson[at] It was blind copied to the following recipients:

  • users[at]
  • dev[at]
  • jis[at]
  • ombudsman[at]

I did not receive any bounce messages in reply, and my mail server logs confirm that Sun’s mail server did in fact accept the message for delivery. If my message never made it into the ombudsman[at] inbox, then perhaps the problem is on your end (e.g., over-eager spam filtering, which happened to me on more than one occasion when I was a Sun employee).

It’s very regrettable that you were laid off, no question. That’s not a part of your narrative I can comment on for HR/legal reasons, but it’s always sad when business pressures force layoffs.

Thank you for the sentiment. While I wasn’t particularly happy about being laid off, I don’t hold a grudge against Sun because of it. Regardless of whether I think it was an intelligent move, Sun did have a justification for it (geographic consolidation). If the only thing that had happened was that I got laid off, then I fully expect that I would still be actively participating in the project. I believe I demonstrated that through my continued involvement in the project even after having received my layoff notification.

However, I do question how you characterize the requests to change the OpenDS governance. I note that the OpenDS governance was changed on April 28 by sshoaff[2] and that the original line reading:

“This Project Lead, who is appointed by Sun Microsystems, is responsible for managing the entire project”

was replaced by one reading

“This Project Lead, who is appointed and removed by a majority vote of the Project Owners, is responsible for managing the entire project”

I have not been able to find a discussion of this change anywhere, and I understand from your former managers that they were unaware of this change. While you characterize the request made of you as:

“demanded that the owners approve a governance change that would grant Sun full control of the OpenDS project”

it seems to me that what in fact happened was you were (collectively) asked to revert that change to its original state. On present data, it appears to me that far from Sun acting in bad faith over the governance, they were in fact making a reasonable request to correct an earlier error. Indeed, all that has happened to the governance document since then is to revert the change[3].

This is not the whole story.

First, the change to which you refer (committed in revision 1739 by Stephen Shoaff on April 28, 2007) was absolutely not unauthorized. Stephen Shoaff and Don Bowen both served as officers of the company (Stephen as the director of engineering for directory products, and Don as a director of product marketing for all identity products), and David Ely was the engineering manager and the Sun-appointed project lead for OpenDS under the original governance. This change was also discussed with Sun’s open source office, and while you (Simon) may not have been directly involved with those discussions, Don Bowen has informed me that there was a telephone conversation in which you told him that each project should make the decisions that are best for its respective community. We also involved the OpenDS and Identity Management communities in the process, although those conversations happened on a personal basis with key members rather than at large on the public mailing lists. Unfortunately, none of us can currently produce any evidence to support these claims. When we received the layoff notification, we were required to return or destroy any Sun property in our possession, and since all of these discussions would be considered Sun-internal communication, complying with that requirement means we no longer have access to any record of them. However, full documentation to support all of these claims should exist within Sun, should you feel the need to verify it.

Second, this was not the governance change to which I referred in my original post. In the meeting that the owners (including Ludovic) had on November 13, 2007, we were informed that it was Sun’s intention to replace the governance with something different and that the new governance would be chosen and managed by a Sun-selected committee. This change has not yet been applied, and as I am no longer involved with the project I cannot comment on whether there is still intent to make it. However, Eduardo referenced this future change on the OpenDS user mailing list today when he said “We want to improve these governances, ideally in a consistent way.”

There was no discussion at all during the November 13 meeting of the change made in revision 1739, and it was not brought to our attention until the following evening. To the best of my knowledge the request to revert the change made in revision 1739 was never discussed with anyone other than Stephen Shoaff. I know that I personally never received any communication from anyone within Sun asking me to approve reverting this change.

Finally, I would ask Sun to justify their subsequent reversion of that change and how they believe that it was in the best interests of OpenDS, or how doing so was consistent with Sun’s public stance on the importance and value of community-led projects. Despite the fact that the change we made had more than sufficient authorization, I fail to see how reverting it is in any way an improvement. How is reverting to a Sun-appointed absolute authority better for the community than the consensus-driven model we thought Sun wanted?

I would be happy to continue to investigate this case, so if you would like to submit a complaint to with full data supporting your accusations I would be pleased to investigate further. I’m afraid I don’t usually read your blog so you’ll need to alert me to any postings here that need my attention.

I am afraid that there may not be any benefit to further investigation. It appears that you are using your position to attack my credibility and focus on damage control for Sun rather than acting impartially on my behalf as per your claim. Even if for some reason you did not receive the message that I originally sent to ombudsman[at], I find it very discouraging and disappointing that Sun’s community advocate would choose to respond in such an inflammatory manner via e-mail messages and blog comments without even making an attempt to contact me for further clarification. You have accused me of launching an attack with partial facts, but apparently have made no attempt to contact me to get the complete facts for yourself. I had gone out of my way to indicate that I felt this was an isolated incident and not in line with Sun’s true stance on open source, but it’s hard to continue to hold that position when Sun’s ombudsman and chief open source officer behaves in such a manner.

An Open Letter to the OpenDS Community and to Sun Microsystems

My name is Neil Wilson, and until recently I held the Owner and Committer roles in the open source OpenDS project. I helped found OpenDS, served as the project architect, and have contributed more code than anyone else. However, I must now regrettably inform you that I have been compelled to end all involvement with OpenDS. I have resigned all roles that I held in the project and have rescinded my Sun Contributor Agreement. I will no longer contribute code, documentation, bug reports, suggestions for improvement, or advice of any kind.

I joined Sun Microsystems in October of 2001, where I was directly involved with its proprietary directory products in addition to my later work with OpenDS. I wrote and analyzed code to provide new features, fix bugs, and improve performance, and I developed a number of tools to help improve the Directory Server experience. I had excellent working relationships with a number of customers, and I was instrumental in closing several deals worth many millions of dollars. I consistently received the top rating in annual performance reviews, and I worked with a number of other groups within Sun, as well as with Sun partners, to help ensure that the Directory Server products worked as well as possible with other Sun technologies, including Solaris, Java, and a number of other software products, as well as many different kinds of hardware.

On September 27, 2007, I was notified that Directory Server engineering, including OpenDS, was being consolidated in Grenoble, France, and that US-based positions were being eliminated. Some individuals were reassigned to work on other software products, but among those laid off were the four OpenDS project owners (myself, Stephen Shoaff, Don Bowen, and David Ely), as well as the OpenDS community manager (Trey Drake). We would technically remain Sun employees for the next two months, but we were not able to access any Sun-internal resources, were not required to work in any way, and were encouraged to use that time to seek employment elsewhere.

This was certainly a very surprising move, but the shock wore off, and within a few days the OpenDS owners and community manager got together and decided that even though we were no longer working for Sun, we would like to continue our involvement with OpenDS and ensure that the project was in the best possible position moving forward. To that end, we had face-to-face meetings, conference calls, and e-mail discussions with Sun employees still involved in the project to provide advice and knowledge transfer. I also continued participating on the project mailing lists, committed code changes, and updated the project issue tracker and documentation wiki.

The project owners also decided, as an act of good faith (and without any prompting from Sun), that we should elect a fifth owner who was a Sun employee, since Sun had certainly made a significant contribution to the project. We appointed Ludovic Poitou to this position, as he had served as the architect for Sun’s proprietary Directory Server product for several years, and we further suggested amending the project governance to grant Sun Microsystems a permanent seat in the project ownership. On November 13, 2007, the OpenDS project owners (including Ludovic) met via conference call with the intention of discussing this governance change. However, during that meeting Ludovic informed us that Sun’s intention was to change the OpenDS governance policy so that the project would be controlled entirely by a Sun-selected committee. This was a surprise to us, and we indicated that while we were willing to discuss it further to better understand what was involved, we were concerned that it was not necessarily in the best interests of the OpenDS project or its associated open source community. We noted that the current OpenDS governance policy stated that governance changes could only be made by a consensus of the project owners, and therefore we would be required to approve any potential change.

On November 14, 2007, a member of executive management within Sun’s software division contacted one of the recently-laid-off OpenDS project owners and demanded that the owners approve a governance change that would grant Sun full control of the OpenDS project. During this call, we were threatened that if we did not make this change we could face immediate termination and loss of all severance benefits. The four former-Sun owners discussed this and decided that we could not in good conscience approve the requested change as we did not believe that it would be in the best interests of the project, but we were also not willing to risk the considerable financial loss that could result if Sun decided to make good on that threat. After first trying to resolve the issue through more amicable avenues, we were ultimately compelled to resign our ownership and end our association with the project on November 19, 2007.

This was a very disappointing and hurtful turn of events. I believe that we acted only in good faith and in the best interests of the community, and we had clearly taken action to protect Sun’s position in the project even after our own jobs had been eliminated. OpenDS was founded as a community-focused “doacracy”, and no one has done more than I have to help ensure its success, or to ensure Sun’s success through OpenDS. However, Sun management has shown that at least in this case they are willing to resort to rather hostile tactics to preserve absolute control. This is most certainly not in the spirit of open source and open development that we tried to foster or that Sun claims to embody.

Please note that I don’t feel that this action was representative of Sun’s true open source strategy, but was a relatively isolated incident brought on by middle management acting of their own accord. I believe and certainly hope that the public statements made by individuals like CEO Jonathan Schwartz and Chief Open Source Officer Simon Phipps are honest and that Sun truly does want to be a genuine community-focused open source company, and I have no reason to believe that they were aware of or involved with any of what happened with OpenDS. Similarly, I sympathize with the remaining Sun-employed OpenDS engineers who may have been unwittingly drawn into this turmoil, and am disappointed that we will no longer be able to work together, but it was not my choice. Unfortunately, if Sun is unable to ensure that their middle management is on the same page as the senior management setting the open source strategy and the engineers making it happen, then it won’t take too many more incidents like this (or the Project Indiana / OpenSolaris Developer Preview naming fiasco) for people to start to question Sun’s true intentions.

In order to avoid potential retaliation from Sun, I have remained silent on this matter through the duration of the two-month period following the layoff notification during which I was still technically a Sun employee. Now that this time has elapsed, I am no longer at risk of losing severance benefits and I believe that it is important to clear the air. I have no desire to pursue this matter any further through legal or other channels, but simply wish to explain why I am no longer able to be involved with the OpenDS project.

I am passionate about the technology and hope to continue working in this area in the future, but I am not yet prepared to discuss where I’m going from here. You may watch my new blog at / for more information in the future.

Neil Wilson