Making it Easier to Write Directory-Enabled Applications

I’ve been working with LDAP directory servers for about ten years, and for that entire time I’ve also been writing code.  I’ve written a lot of server-side code in the course of building directory servers, but I’ve written even more client-side code for interacting with them.  Unfortunately, I’ve always been a bit disappointed with the APIs that are available for LDAP communication, especially those for use in Java applications.

Java should be a great language for writing directory-enabled applications.  It’s fast, has a broad standard library, offers a number of frameworks for developing web-based and desktop applications, and it’s easy to write robust code in a short period of time.  Unfortunately, the APIs available for LDAP communication are pretty limited.  JNDI is a common choice because it’s part of the core Java runtime, but it’s very cumbersome and confusing and provides rather limited access to the elements of the LDAP protocol.  The Netscape Directory SDK for Java is more user friendly than JNDI, but it’s fairly buggy (especially under load), supports a pretty limited set of controls and extended operations, and is really showing its age after not having any new releases since 2002.  I’ve never actually used JLDAP, but it appears to expose pretty much the same API as the Netscape SDK and has also gone several years without a new release.

Today, UnboundID is correcting this problem with our release of the UnboundID LDAP SDK for Java.  It is a user-friendly, high-performance, feature-rich, and completely free Java API for communicating with LDAP directory servers and performing other directory-related tasks.  Some of the benefits that it provides include:

  • It is completely free to use and redistribute.  The LDAP SDK is available under either the GNU General Public License v2 (GPLv2) or the UnboundID LDAP SDK Free Use License.
  • It provides a broad feature set.  In addition to providing full support for the core LDAPv3 protocol, it also includes support for 17 standard controls, 4 standard extended operations, and 6 standard SASL mechanisms.  It also provides a number of related APIs for things like LDIF, base64 and ASN.1 parsing, working with root DSE and changelog entries, enhanced schema support, command line argument processing, and SSL/TLS communication.
  • It is much more convenient and easy to use than other LDAP APIs.  It is often possible to do what you want with quite a bit less code than the alternatives, and its use of Java features like generics, enums, annotations, and varargs can further enhance the development experience.
  • It provides support for connection pooling and client-side failover and load balancing.  Connections can be easily spread across multiple directory servers, and you can even have read and write operations sent to different sets of servers.  The connection pool classes implement the same interface as individual connections, so you can process operations using pooled connections without needing to deal with checking out and releasing connections and performing all of the necessary error handling (although that’s possible too if you need it).
  • It provides excellent performance and scalability.  My testing has shown it to be significantly faster than either JNDI or the Netscape SDK, and the searchrate tool that we include as an example can handily outperform the popular C-based version provided by another vendor.
  • It has no external dependencies.  Everything is included in a single jar file, and the only requirement is a Java SE 5.0 or higher runtime.
  • It is robust and reliable.  We have an extensive test suite for the SDK itself with over 26,000 test cases covering over 94% of the code.  The LDAP SDK is also used as an integral part of other UnboundID products, so it benefits from the testing we do for them as well.  It’s frequently subjected to very heavy and highly concurrent workloads so there shouldn’t be any surprises when moving your applications from testing into production (at least, not because of the LDAP SDK).
  • It includes extensive documentation.  In addition to a more thorough overview of the benefits our LDAP SDK provides over the other alternatives, it also includes a getting started guide, Javadoc documentation with lots of examples, and a number of sample programs demonstrating the use of various SDK components.
  • Commercial support is available.  This can help ensure fast access to patches for any problems found in the LDAP SDK, and may also be used to request enhancements and additional functionality.  Developer support is also available to assist you in using the LDAP SDK to create directory-enabled applications.
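To give a feel for the pooling and failover support described above, here’s a minimal sketch of a pooled search using the SDK.  The host names, credentials, and base DN are placeholders, and it assumes the LDAP SDK jar is on the classpath and that the directory servers are reachable:

```java
import com.unboundid.ldap.sdk.*;

public class PooledSearchExample {
    public static void main(String[] args) throws LDAPException {
        // Hypothetical server addresses; replace with your own.
        String[] addresses = { "ds1.example.com", "ds2.example.com" };
        int[] ports = { 389, 389 };

        // Spread connections across both servers in round-robin fashion.
        RoundRobinServerSet serverSet = new RoundRobinServerSet(addresses, ports);
        SimpleBindRequest bindRequest =
             new SimpleBindRequest("cn=Directory Manager", "password");
        LDAPConnectionPool pool = new LDAPConnectionPool(serverSet, bindRequest, 10);

        // The pool implements the same interface as a single connection, so
        // check-out, check-in, and the associated error handling all happen
        // behind the scenes.
        SearchResult result =
             pool.search("dc=example,dc=com", SearchScope.SUB, "(uid=jdoe)");
        System.out.println(result.getEntryCount() + " entries returned");

        pool.close();
    }
}
```

Because the pool is used exactly like a single connection, swapping one for the other in existing code is generally a one-line change.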

You can find the UnboundID LDAP SDK for Java available for download at http://www.unboundid.com/products/ldapsdk/.  All of the documentation is available there as well (and also in the product download), including some frequently asked questions and a more detailed list of the advantages it offers over other LDAP APIs.

The UnboundID Website is Live

Since leaving Sun, I haven’t really said a whole lot about what I’ve been working on.  Shortly after our termination was finalized, the other OpenDS outcasts and I started a new company, named UnboundID, focusing on providing directory services solutions.

We have kind of been in “stealth mode” so far, and for a long time, we had only a simple “coming soon” page on our website, but as of last Friday we have an actual site (http://www.unboundid.com/).  There is still not a huge amount of content there, but we’ll be adding more in the upcoming weeks as we release more information about our products.

Why I like Solaris

A recent InfoWorld article asks whether Solaris is going to be able to continue to compete with Linux.  Apparently someone from the Linux community pointed Paul Krill, the author, to my earlier blog article about the OpenDS issue, and he asked me about it.  While I didn’t really want to bring that up again, I did tell him that I felt Solaris is a better OS than Linux.  I was briefly quoted in the article, but I would like to expand on that a bit.

First, let me say that I’ve got a pretty decent amount of experience with both Solaris and Linux.  I started using Linux in 1995 and Solaris around 1998.  Linux was my primary desktop operating system from probably around 1997 through about 2004, when I switched to Solaris.  I still run Linux quite a bit, including on my laptop out of necessity (primarily because Solaris doesn’t support the Broadcom Ethernet interface), but for the work that I do, which is primarily development, I find that Solaris is just more convenient.  On servers, there is no question that I prefer Solaris over Linux.

My fondness for Solaris definitely did not come easy, nor was it always warranted.  My first experiences were pretty unpleasant compared with Linux.  For many years, Solaris was anything but user friendly, providing only relatively antiquated shells and shipping without critical utilities like gzip or a C compiler.  CDE remained the default desktop environment for far too long, and it wasn’t easy to come by a lot of software that was commonly included with Linux distributions.  And of course, I’m sure that Sun’s decision to stop shipping Solaris for x86 systems for a period of time seriously hindered its usability and probably steered a number of potential customers away.

However, my opinion started to sway as Solaris 10 started to materialize.  Being a Sun employee, I had access to internal builds and I started to check them out.  Months before it was released to the public, I had changed my tune.  It was a serious leap forward in usability and convenience, and the new features were very compelling.  Since then, I’ve followed the development of Solaris 10, Nevada (the code name for what may become Solaris 11), and OpenSolaris.

It’s a lot easier to see why Linux may be more appealing than Solaris on the desktop.  Linux has better hardware support, so it will run on a lot more systems than Solaris.  Availability of applications can also be an issue, but I think in many cases that’s somewhat due to lack of publicity.  Many people haven’t heard about Nexenta (which is very much like Ubuntu Linux but with a Solaris kernel) or Blastwave (a site providing a large amount of Solaris software using a relatively simple interface that behaves like apt-get), and if you’re running Sun’s OpenSolaris distribution then you can use IPS to get access to a number of applications in a similar manner.  But even so, there is just more software that works on Linux (or works more easily on Linux) than on Solaris.

However, there are a lot of reasons that I prefer Solaris and am willing to overlook some of its shortcomings.  I’ve listed some of them below.  Note that I’m not trying to slam Linux.  I do like it and hope that it continues to improve.  Consider this a wish list.

Overall Feel and Consistency

Linux feels like it was written.  Solaris feels like it was designed.  While I think that Sun’s development processes can sometimes be a little heavyweight, and I think that Sun is trying to retain too much control over OpenSolaris, there is a lot to be said for having processes in place to guide development.

Unfortunately, this isn’t something that Linux can simply adopt.  Linux isn’t one thing, and there isn’t any one organization controlling any significant part of it.  As Solaris integrates an increasing amount of third-party software, it will likely begin to suffer from this as well, though perhaps to a lesser extent, since there are still some things that can be kept consistent.

Interface Stability

This is an area where Solaris shines and Linux is just plain abysmal.  If an application runs on Solaris version X, then there’s an excellent chance it will work on version X+1.  This is far from true on Linux.  I’ve had countless experiences where an application that worked on one release of Red Hat or Ubuntu didn’t work on the next.  In some of those cases, recompiling the application was sufficient to get it running, but there are still a lot of cases where that’s not enough, and sometimes the source isn’t available to even try.

Seamless 32-bit and 64-bit Integration

Solaris has had 64-bit support since Solaris 7, and I know that I was testing 64-bit Solaris on x86-64 systems before the Linux kernel had 64-bit support for those systems.  Of course, Linux has included support for x86-64 systems for a while now, but the bigger issue with 32-bit versus 64-bit support in Linux is that it’s generally one or the other.  If you install a 64-bit Linux distribution, then it’s generally entirely 64-bit and may not even include the ability to run 32-bit applications at all, at least not without installing additional packages.

This leads to silliness like 64-bit web browsers and complaints about the lack of 64-bit plugins.  I don’t see much need for a browser to support over 4GB of memory, which is the primary benefit of 64-bit applications.  There are a few other cases where 64-bit support may be beneficial (e.g., having access to more CPU registers), but those are primarily relevant for applications that need to do a lot of calculation or cryptography.

It’s true that Sun should have provided a 64-bit Java plugin, and plugin providers are equally to blame for the same infraction.  However, if 32-bit and 64-bit support had been better integrated then only applications which truly benefit from 64-bit support would really need to be provided as such.

Process Rights Management

In both Linux and Solaris, root is an all-powerful user that can do pretty much anything.  Historically, there are a lot of things that only root can do.  Both Linux and Solaris provide mechanisms for granting a specified set of normal users the ability to run certain commands with root authority, but in such cases those applications have full root access, and it is necessary to trust that they will behave properly and have no security holes that could lead to unintended consequences.

Solaris improves upon this with process rights management, also known as least privilege.  Rather than granting an application full root access, it is possible to grant only those portions of root access that the application needs.  For example, a web server may need to listen on port 80, but there is no need to let it read or write any file on the system, load kernel modules, or halt or reboot the system.  Instead of granting it full root access (even if it gives it up later via setuid), you can just grant it the net_privaddr privilege so that it can listen on ports below 1024 without having access to any of the other things that might be possible if it were truly running as root.
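As a rough sketch of what this looks like in practice (Solaris 10 syntax, run as root; the user and process names are hypothetical):

```shell
# Grant the webservd user the basic privilege set plus net_privaddr,
# rather than full root access.
usermod -K defaultpriv=basic,net_privaddr webservd

# Inspect the privilege sets of that user's running processes to confirm
# the result.
ppriv $(pgrep -u webservd httpd)
```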

ZFS

ZFS is a phenomenal filesystem + volume manager.  Linux has good filesystems and a volume manager, but none of them have the combination of power, flexibility, and ease of use that ZFS offers.  It works well on a fraction of a drive, and it works well on a system like the Sun Fire x4500 with 48 drives.  It’s got good performance.  It’s got great data integrity features through redundancy, checksums, and multiple copies of data and metadata.  It’s got instantaneous atomic snapshots.  It never needs to be defragmented.  It never needs fsck.  Cryptographic support for encrypted filesystems is also slated for integration in the near future.

ZFS is great for servers, but it’s also very useful on desktop systems.  As a developer, I find the instant snapshot and rollback capability to be extremely useful for resetting an application to a known clean state when running tests.  Compression increases the amount of data that you can store and in many cases actually helps performance.  The ability to keep multiple copies can help prevent you from losing information even if you’ve only got a single disk.
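The test-reset workflow I described can be sketched in a few commands (the dataset name is hypothetical):

```shell
# Snapshot a clean test dataset before running tests.
zfs snapshot tank/testdata@clean

# ... run tests that modify the data ...

# Instantly discard all changes and return to the known clean state.
zfs rollback tank/testdata@clean

# Desktop niceties: transparent compression, and redundant copies of
# data even on a single disk.
zfs set compression=on tank/testdata
zfs set copies=2 tank/testdata
```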

Zones

Virtualization is a pretty hot topic right now, and there isn’t really a shortage of options.  VMware, VirtualBox, Xen, QEMU, and other solutions make it easy to run one operating system on top of another, or another operating system on top of itself.  They provide excellent separation of environments, but they are all pretty heavyweight because they require a significant amount of memory and disk space for the virtualized environment.

In some cases, e.g., for security or application testing, you want something that looks like a separate system but you don’t need a different operating system.  For those cases, Solaris offers an excellent alternative in the form of zones.  A zone provides excellent separation and for the most part looks like a completely separate system, but requires virtually no memory and in many cases a relatively small amount of disk space because it’s able to share a large amount of the filesystem from the host system (also called the global zone).  Zones are trivial to configure, and when used in conjunction with ZFS snapshotting and cloning, you can create new zones almost instantly.

In the default case, a Solaris Zone runs exactly the same OS and kernel version as the host system, which makes it extremely cheap.  However, because of the strong compatibility that Solaris provides between versions, alternative implementations have been added which make it possible to create zones which appear to be running Solaris 8 or 9 on Solaris 10.
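To illustrate how little ceremony is involved, here’s a sketch of creating a zone (Solaris 10 syntax, run as root; the zone name and path are hypothetical):

```shell
# Configure a minimal zone.
zonecfg -z testzone 'create; set zonepath=/zones/testzone'

# Install and boot it.  The default sparse-root configuration shares most
# of the global zone's filesystem, so it consumes relatively little disk.
zoneadm -z testzone install
zoneadm -z testzone boot

# Connect to the new zone's console to complete initial configuration.
zlogin -C testzone
```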

It’s also possible to run a Linux emulation layer in a zone, which is pretty impressive, but unfortunately the support is pretty limited and there doesn’t seem to be much active development going on to improve it.  However, while I did make use of this capability when it was first released for a few specialty purposes (e.g., running Acrobat Reader), I haven’t used it in quite a while because I can run pretty much everything I need natively in Solaris, and I have Ubuntu installed in VirtualBox, which provides a much better way to run Linux on Solaris.

Observability

Both Linux and Solaris provide pretty good ways of identifying what’s happening on the system, but Solaris tends to do a better job.  At a very high level, I’ve noticed that most Linux systems don’t ship with utilities like iostat and mpstat and require the installation of additional packages, whereas they are included by default on Solaris.

When looking into what a particular process is doing, the Solaris truss and Linux strace commands are pretty similar, and they allow you to see the system calls that the process is making.  However, at least some Linux distributions (like Ubuntu) don’t provide pstack or an equivalent command for getting a stack trace of all threads in a process.

Of course, the big feature that Solaris has in this arena is DTrace.  Linux does have SystemTap, but at the present time it appears to be limited to examining the kernel and doesn’t have any support for tracing user-space applications.  DTrace provides full support for debugging user-space applications, and there is also special support for tracing Java, Ruby, Perl, shell scripts, and other scripting languages.  And because Solaris provides stable interfaces, DTrace scripts that stick to those interfaces are much more likely to continue to work in the future.
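A couple of illustrative DTrace one-liners (run as root on Solaris; the pid is a placeholder):

```shell
# Count system calls by process name, system-wide, until Ctrl-C.
dtrace -n 'syscall:::entry { @[execname] = count(); }'

# Trace a user-space application: count calls to functions in the main
# executable of process 1234.
dtrace -p 1234 -n 'pid$target:a.out::entry { @[probefunc] = count(); }'
```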

Sleeping with Solaris

I recently came across an interesting issue on Solaris.  I was trying to do something pretty simple:

  1. Check to see if there’s work to do.
  2. If there is work to do, then do it and go back to step 1.
  3. Sleep for a short length of time and go back to step 1.

This worked great on Linux, but not so well on Solaris.  Things were getting done in fractions of a millisecond on Linux, but it was taking over ten milliseconds on Solaris.  I hypothesized that the sleep might be the problem, and a quick test confirmed it.  It turns out that on Solaris in the default configuration you can’t sleep for less than ten milliseconds at a time (at least in Java).  This is because on Solaris, the Java sleep method is dependent on system clock ticks, and the default configuration uses 100 ticks per second, or one tick every ten milliseconds.  Any attempt to sleep for less than that time will get rounded up to ten milliseconds, and attempts to sleep for a length of time greater than that will be rounded up to the nearest multiple of ten milliseconds.
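Here’s a minimal sketch (plain Java, no special APIs) for measuring this yourself; on a stock Solaris system the average should come out near ten milliseconds, while on Linux it should be much closer to one:

```java
public class SleepResolution {
    // Measure how long Thread.sleep(1) actually takes, averaged over a
    // number of iterations, in milliseconds.
    public static double averageSleepMillis(int iterations)
            throws InterruptedException {
        long totalNanos = 0;
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            Thread.sleep(1);
            totalNanos += System.nanoTime() - start;
        }
        return totalNanos / (iterations * 1000000.0);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.printf("Thread.sleep(1) averaged %.2f ms%n",
                          averageSleepMillis(20));
    }
}
```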

You can change this behavior by adding the following to the /etc/system file and rebooting the system:

set hires_tick=1

This will change the default from 100 ticks per second to 1000 (which means you can sleep in increments of one millisecond instead of ten).  There is also a way to fine-tune how many ticks per second you want (via hires_hz), but changing that isn’t supported, and every mention of it I can find indicates that you don’t want to increase the frequency anyway, because updating the tick counter too often becomes expensive and will probably degrade performance.

There are, of course, other ways to accomplish the task I had originally set out to do (e.g., blocking queues, semaphores, wait/notify, etc.), but the bigger takeaway here is that you shouldn’t depend on algorithms involving fine-grained sleeping, especially if your application needs to run on Solaris.
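For instance, a BlockingQueue from java.util.concurrent wakes the worker as soon as work arrives, with no polling loop and no dependence on sleep resolution.  This is a minimal sketch, not the code from my original problem:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class WorkQueue {
    private final BlockingQueue<Runnable> queue =
         new LinkedBlockingQueue<Runnable>();

    // Add a unit of work to be processed.
    public void submit(Runnable task) {
        queue.add(task);
    }

    // Run tasks as they arrive.  poll() blocks until work is available or
    // the timeout elapses, so there is no busy-wait and no dependence on
    // the granularity of Thread.sleep().  Returns the number of tasks run.
    public int drainAndRun(long timeout, TimeUnit unit)
            throws InterruptedException {
        int processed = 0;
        Runnable task;
        while ((task = queue.poll(timeout, unit)) != null) {
            task.run();
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        WorkQueue q = new WorkQueue();
        q.submit(new Runnable() {
            public void run() { System.out.println("did some work"); }
        });
        System.out.println("processed "
             + q.drainAndRun(100, TimeUnit.MILLISECONDS) + " tasks");
    }
}
```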

The Future of SLAMD

Even to the casual observer it’s not too hard to notice that not much has happened with SLAMD in quite a while.  In fact, it’s been almost two years since my last commit to the public repository.  There are several reasons for this, including:

  • It works pretty doggone well in its current state.  I still use it quite a bit and hear that several other people do, too.  There are definitely ways that it could be improved, but so far it has been able to meet my needs.  Most of the SLAMD code that I have written since then has been in the form of new jobs that were pretty task-specific and not likely to be useful in other environments.
  • Most of the time, I have been really busy with other things.  I just haven’t had nearly as much time to invest in it as I would have liked.  Of course, I did have a forced two-month vacation near the end of last year, but the terms of my severance stated that I wasn’t allowed to do any work and I didn’t want to press my luck.  Since that period ended, I’ve been going full steam ahead on something else.
  • There are parts of it that could do with a redesign.  The code used to generate the administrative interface is currently all held in one large file and could stand to be broken up.  There are also many areas in which updating the code to require Java 5 would allow it to be much more efficient and scalable, and the introduction of features like generics and enums would make the code easier and safer to edit.

Ultimately, I think that it’s at the point where it would be better to invest the effort in a clean rewrite than to try to build upon what’s there now, but so far I haven’t had much opportunity to do either one.  It’s definitely something that I would like to do, and I’m hopeful that I might have the time at some point in the future.  I have a lot of ideas for interesting and powerful enhancements, so I don’t want to count it out just yet.

Back from the Dead

Well, the title isn’t exactly accurate.  I haven’t really been dead, so please don’t come after me with fire and pitchforks.  But I certainly haven’t been very visible for a while, and I have felt like I have been buried under a mountain of work, so it may not be all that far off.

I tried writing general interest posts for a while, but gave that up because my guess is that nobody cares.  It’s true that I see a lot of movies (although I’m not trying to match last year’s pace of 109 different new releases), but there are a lot of other places you can go to read better and more insightful reviews.  And I haven’t been playing my Wii very much recently either, so I don’t have much to say on that front.

Honestly, I think that the real value of my blog here is to keep things technical.  That’s what I did when I was at Sun, and it worked out well.  Unfortunately, a lot of what I’m doing now is still a bit under wraps, so I’m not quite ready to comment on that yet, but hopefully that will be coming in the not-too-distant future.  I do think that going without saying anything for too long was a mistake as well, and I’ll try to avoid that moving forward.

Before the Devil Knows You’re Dead

This weekend was a very slow weekend movie-wise. I had absolutely no desire to see The Golden Compass, and nothing else new was out around here. However, scanning over what was playing that I hadn’t seen yet, I came across Before the Devil Knows You’re Dead and saw that it had an 8.1 rating on IMDB, so I figured I would give it a shot. Overall, I rate it about a 6 out of 10.

The basic premise for the movie is that two brothers (played by Philip Seymour Hoffman and Ethan Hawke) find themselves in financial trouble. Philip Seymour Hoffman’s character Andy has been embezzling from his company and has a bit of a drug problem, and Ethan Hawke’s character Hank is behind on his child support. Their parents (played by Albert Finney and Rosemary Harris) own a jewelry store, and Andy suggests to Hank that they could make some easy money by robbing it. The store is insured, so it would be a victimless crime, and Andy figures that they should each be able to get about $60,000 out of it. Of course, things don’t go as smoothly as planned and they find themselves getting deeper and deeper into trouble.

This was a very non-linear movie. We learned that the robbery failed as early as the second scene, but we didn’t get all the details of its planning and the reasons behind it until later. Normally I don’t really care for stories that are told out of order, but in this case I think that it worked quite well. It felt like the audience was given the right information at the right times throughout the movie, and having the story told in order might not have been as effective. There were a few times when this approach was confusing, since it wasn’t immediately obvious where a scene fit in the story, but it usually didn’t take long to figure that out. I do think that the very first scene was hard to place correctly in the time line until the very end, but in general I think that the approach helped the movie because it kept you guessing.

However, in my opinion there were some significant problems with the movie:

  • There were a few critical aspects of the story line that seemed pretty implausible, and I don’t think that it would hold up that well under scrutiny. Not the least of which is why they would risk so much for what seemed to be a relatively small payout — $60K isn’t the kind of money that leaves you “set for life”.
  • Other aspects of the story seemed to be thrown in for no apparent reason and didn’t seem to have any relation to anything else that was going on. This didn’t really detract from the movie, but it didn’t really add anything to it.
  • The movie seemed to end very abruptly and left some pretty big questions unanswered. On the other hand, the non-linear manner in which the story was told led to a lot of overlap where we saw parts of the same scenes multiple times. It almost seemed like I ended up watching the movie twice. While it did sometimes help to establish the time line, I think the movie would have fared better if some of that duplication had been eliminated and more time had been spent tying up loose ends.
  • The very beautiful Marisa Tomei was in this movie, playing the part of Andy’s wife. She had several nude scenes, but none of them seemed very integral to the story line, and there were a couple of times when it just felt awkward.

I did like the movie more than I thought I would after having seen the trailer, but I do think that there are some small touches that could have really elevated it. It had the potential to be a great movie, but ultimately I think that it ended up as just pretty good.

Wii Bowling Tips

Since I’ve had more free time than usual over the last couple of months, I spent some time with my Wii, and much of that has been in Wii Sports. I can’t seem to get the hang of boxing (actually, I’ve never lost any of the few bouts that I’ve had, but it seems like just flailing your arms about is enough to win, and a lot of my punches don’t seem to register properly in the training modes), but I’ve gotten pretty good at the rest of the games and for some reason I’m particularly attached to bowling. When I first started, the scores I got weren’t any better than I could achieve in real life, but over time I’ve been able to refine my skills and I’ve now had several perfect games. In the process, my approach looks a lot less like actual bowling, and I have to say that it does take some of the fun out of it, but nevertheless it’s nice to see a top score of 300.

As far as the other sports go, the only advice I can give is that practice makes perfect, so you’ll just need to keep at it. Also, sometimes nontraditional approaches may be helpful. For example, I’ve found that I often have better luck in batting practice when I’m sitting down and just swing with a flick of the wrist rather than standing with a traditional batting stance or even using a full swing. Also, there are plenty of other tips and videos available, and YouTube seems to be a good starting point.

Now, onto the bowling tips.

The Obvious Stuff

There really isn’t any huge secret to doing well. The most important thing to do is to figure out what works well for you and then keep doing that. As long as you are consistent, you should get repeatable results. Consistency includes:

  • Starting in the same position on the floor and with the same angle. Use the arrows on the floor to help ensure that you have the same position, and if you press “up” on the cross button then it will zoom in so you can fine-tune your position. Press “up” again to zoom back out.
  • Using the same delivery every time.  This includes the point in your approach at which you start the swing, the speed of the swing, the amount of spin you put on the ball, and the point at which you release the ball.

Of course, this isn’t necessarily as easy as it sounds and I usually find some way to screw it up. Before I got my first perfect game (and since then as well), I threw many 279, 289, and even 299 point games where I left one pin standing that I had to pick up with a spare. However, through lots of trial-and-error, I have managed to isolate a pretty minimal set of actions that can help you out. Note that I’m right-handed, so if you’re a lefty then some of this may or may not be helpful (or at the very least you’ll probably want to reverse it).

Another somewhat obvious tip is that if your main goal is to get a perfect game, then it doesn’t make sense to continue playing a game if you’ve already blown it. In that case, you can press the “+” button on the controller to bring up a menu that allows you to start the game over without the need to finish the current game. Clearly, this is only something that you’ll want to do when you’re playing solo as it would probably be quite annoying to others if they were playing along. And even when you are alone it’s not very sportsman-like because if you do this it will be like you hadn’t played that game at all so it won’t have any potential to impact your score (and if you get a high enough rating, anything less than a perfect game can actually hurt your score).

Consistently Getting Strikes

One thing that I found was that I kept getting messed up on the approach. This wasn’t always my fault, as I noticed that sometimes the animation at the beginning of the approach got a little choppy and that threw off my timing. However, I discovered a little trick to completely eliminate that from the equation. If you hold down the “B” button (the trigger on the underside of the controller) then it will start the approach, but if you keep holding it down you will stop once you reach the scratch line. If you wait to make your swing until after you’ve stopped then you’ll completely eliminate any variability from the approach.

With the approach problems solved, it’s much easier to maintain consistency in the rest of the process. The easiest area in which to remain consistent is in your positioning and aim, since you can take your time and always make that exactly the same. I’ve found that what works best for me (especially when I don’t have to worry about the approach, and when I use the swing technique outlined below) is to zoom in and align with the arrows on the floor. There are two rows of arrows, a continuous row further down the lane (i.e., higher up on the screen) and a second row closer to you that is split into the left and right sides. The perfect position for me is to align the left edge of the red stripe with the left edge of the leftmost arrow in the lower right group. I don’t change my angle at all, so I always keep the red stripe parallel with the gutters. When I’m done, the red stripe is pretty much aligned with the three pin (the first pin to the right of the head pin). Note that positioning can be sensitive, so it may take some adjustment to get it right, and you really do need to zoom in to ensure that your positioning is perfect.

The final piece of the puzzle is to ensure that your swing has a consistent speed, spin, and release. I can’t really separate these components easily, but what I do is to hold down the trigger until I have stopped at the scratch line, at which point my arm and the controller are pointing straight down and the front of the controller is parallel with my TV screen. Then, without moving my upper arm, I snap my forearm up until my hand is at my shoulder, and at the same time I twist my wrist so that the controller is turned 90 degrees and the face of the controller is perpendicular to the TV screen. I time it so that I start twisting my wrist at the same time I start raising my forearm, and complete the twist when my hand is at my shoulder. Also, at the same time my hand reaches my shoulder, I release the trigger. If everything is done correctly, the ball should end up spinning right into the one and three pins, and it should be a pretty explosive hit.

If you get the motion down, you should easily find yourself consistently scoring over 200; I pretty frequently get at least a 279 and have gotten 300 several times. Of course, this doesn’t really look like real bowling, and you can even do it while sitting down, which makes it even more of an abomination. Other than the thrill of getting a 300-point game, I actually prefer a more traditional process, because it’s just more fun to treat it like a sport than like a computer simulation where precision is everything.

I should also point out that this technique doesn’t work nearly as well in the “power throws” training game, where you start with ten pins but then there are more pins added each throw until you hit 91 pins on the tenth roll. It does pretty well, but you’re not going to get an 890-point game using it. The best that I’ve gotten with this technique (in combination with the 91-pin strike mentioned below) is a 651.

Rolling the Ball Straight

Using a “normal” approach, I’ve found it very hard to get the ball to roll straight down the lane, and I’ve heard others mention this as well. For some reason, my natural swing puts spin on the ball even when I’m trying to consciously avoid it. However, I have come up with a modified version of the swing above that allows me to consistently throw the ball in a straight line so that it doesn’t veer off to one side as it goes down the lane.

In this case, since the ball is going to be rolling straight down the lane, you’ll want to make sure that your position is such that you’re pointing right at where you want the ball to hit. I still hold down the trigger until I stop at the scratch line with my arm pointed straight down and the face of the controller parallel to the TV. Then, instead of bending my arm at the elbow, I keep my whole arm straight and pull it up until it is pointing upward at about a 45-degree angle, and I release the trigger just after my arm comes to a stop, without twisting my wrist at all. The whole motion looks kind of similar to the well-known salute to an infamous German World War II dictator, and if you do it right, then you should find that the ball will head straight down the lane.

Note that I generally use this technique for picking up spares, especially with the seven or ten pin right along the gutter, since it’s easier to get a strike when you do have spin on the ball.

The 91-Pin Strike

When you’re doing the “Power Throws” training session, you start with ten pins and then more pins are added for each additional roll until you have 91 pins on the tenth roll. In this game, your score is equal to the sum of all the pins that you knock down in each roll, but if you get a strike with any of your rolls, then you get twice as many points for that roll (e.g., if you get nine pins on the first frame then it’s worth nine points, but if you get all ten pins then it’s worth twenty points). As a result, getting all 91 pins in the last frame is worth 182 points, which can really boost your score, but that’s not an easy feat to accomplish even when using the technique outlined above. However, there is a cheat/Easter egg built into the game that allows you to get a strike on the tenth roll without even touching a single pin.
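The scoring rule described above can be sketched in a few lines of Python. This is purely my own illustration of the rule as I understand it: each roll has nine more pins available than the last (10, 19, ..., 91), and knocking down all of them doubles that roll’s points. The function name and the list-of-rolls input are my own invention, not anything from the game.

```python
# Sketch of the Power Throws scoring rule described above (illustrative only).
# Roll n (1-10) has 10 + 9*(n-1) pins available; a strike doubles that roll's points.

def power_throws_score(pins_down):
    """pins_down: pins knocked down on each of the ten rolls, in order."""
    total = 0
    for roll, down in enumerate(pins_down, start=1):
        available = 10 + 9 * (roll - 1)
        # A strike (all available pins) is worth double.
        total += down * 2 if down == available else down
    return total

# A strike on the tenth roll alone is worth 91 * 2 = 182 points.
print(power_throws_score([0] * 9 + [91]))  # 182
```

By this rule, the tenth-roll strike is by far the biggest single swing in the game, which is why the rail trick below is worth the trouble.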

When you’re in the power throws game, there are rails along the sides to help prevent you from getting a gutter ball. However, with some careful positioning and the right throw, you can actually throw the ball so that it rolls down the top of the rail and stays on it the whole way; it will then count as a strike, and you’ll get 91×2 points for that roll. Note that this only works on the tenth frame, so don’t waste your time trying it on the first nine.

The approach that I’ve found works best for this is to move all the way to the right until you stop. Then, move back one click to the left, press “A” to allow you to change the angle, and move two clicks to the right so that you’re angled slightly toward the rail. Then you can hold down the trigger until you stop at the scratch line, swing your arm, and try to put clockwise spin on the ball to counter its natural tendency to curve back to the left. It may take some practice to get right, since it’s easy not to put enough spin on the ball and have it fall off the rail before the end, and it’s also possible to put on too much spin and have it fall off the rail in the other direction.

I don’t claim to have discovered this trick. I first saw it in a video on YouTube, and you can find an example at http://www.youtube.com/watch?v=gTsjnjsO-E0.

The Bowling Robot

The ultimate in repeatability would be to build a robot to bowl for you, and actually that’s exactly what someone did. See http://www.battlebricks.com/wiigobot/index.html for a page detailing how someone built a Lego robot that is able to play Wii Sports bowling. There’s even a video showing it roll a perfect game (http://www.youtube.com/watch?v=KUvind4t7Pk). This is an incredible feat, and it’s quite an innovative approach. If nothing else, it does show that if you’re able to deliver consistent throws then you really can get a perfect game.

Movies for the Weekend of 11/30/2007

Scrooged (7/10)

The first movie I saw this weekend was Scrooged, playing as the midnight movie at the Alamo Drafthouse Lake Creek. In it, Bill Murray plays the young and ambitious president of a television station that is preparing to air a live version of Charles Dickens’ A Christmas Carol on Christmas Eve, when he finds a modern-day version is playing out in his own life. It also includes Bobcat Goldthwait, Karen Allen, Carol Kane, three other Murray brothers, and a host of cameos.

Even though Bill Murray comedies (especially the more recent ones) have been hit-or-miss at best, this one definitely goes in the “funny” list. It’s not his funniest ever, but it’s far from his worst. There are several non-funny parts as well, but that’s to be expected based on the story being told. It’s certainly the best version of A Christmas Carol that I’ve ever seen (but I also loved the “Bah, Humbug” episode of WKRP in Cincinnati). However, Bill Murray’s brother Brian holds the honor for being in the best Christmas movie of all time (National Lampoon’s Christmas Vacation).

Awake (6/10)

The trailer for this movie reveals an interesting premise: a man (played by Hayden Christensen) undergoing surgery isn’t completely anesthetized and finds himself awake and aware of what’s going on, but completely paralyzed and unable to do anything about it. And as if that weren’t bad enough, he overhears the doctors talking about how they’re going to kill him.

Unfortunately, the trailer screams out “plot twists”, so you’re watching out for them and it’s hard to be surprised by them when they happen. In fact, one of the biggest twists is blatantly given away in the movie’s tagline, and even if you hadn’t read it, the movie’s opening scene offers some pretty obvious evidence to suggest it. Nevertheless, judging by the reactions in the theater I went to (the Regal Gateway 16, since the movie was unfortunately not being shown at any of the Alamo Drafthouse theaters), at least some people were surprised several times.

I was also disappointed by the number of inaccuracies in the movie. If you’re going to have a plot that deals with medicine, then it’s probably not that hard to find a doctor to consult on the project and make it a little more realistic. The plot didn’t depend on any of the medical inaccuracies, so in my opinion it’s inexcusable to have made such mistakes. To avoid spoilers, I won’t go into detail about what some of these problems were, and it’s probably true that they would go unnoticed by most people without as much of an interest in medicine, but they did impact my appreciation of the movie.

I did like the story itself, and despite the problems that I’ve mentioned I did find the movie enjoyable. It’s probably actually something that I would find more enjoyable watching a second time with complete knowledge of what happens, since anticipating the twists may have been a distraction. I was a little concerned that Hayden Christensen’s more prominent earlier role might detract from his performance in this movie, but that didn’t really seem to happen much, and it offers more hope for his upcoming role in Jumper. Jessica Alba’s performance seemed the same as in pretty much every other movie that she’s done, which is to say that she is pretty.

August Rush (7/10)

August Rush is in a rare class of movies that I find to be both utterly predictable and utterly enjoyable. If you’ve seen the trailer, then you shouldn’t expect any surprises. It’s what last year’s The Pursuit of Happyness should have been.

If you’ve seen the trailer, then you should know what happens, but if not, the short version is that Keri Russell and Jonathan Rhys Meyers play musicians who hook up for one night; Keri’s character gets pregnant but has to give the baby up (and the circumstances around her giving him up are probably about the only thing in the movie that you wouldn’t have guessed from the trailer). Her son, Evan Taylor, grows up hoping that his parents will one day be able to find him. He’s always loved music, and when he meets Robin Williams (who gives him the “stage name” of August Rush) we discover that he’s a musical prodigy, and ultimately that it’s music that will reunite him with his parents.

Even though I did like the story, there were a couple of things that I think could have been done better. The beginning of the movie was a bit too jumpy for my tastes, switching several times between the past and the present, and I think that a more linear presentation would have been better. There were also several pretty obvious mistakes in the movie (e.g., notes getting lower instead of higher as his hands move to the right on a piano) that really should have been caught. And at the time that I saw it, I felt that the ending was a little too abrupt, although after having thought about it I think that it was probably a safe decision and that there were many ways that they could have screwed it up if they had chosen to prolong it.

Why the Dislike for CDDL?

Disclaimer: Although I was employed by Sun at the time the CDDL was created and chosen as the license for the OpenSolaris code base, I was not involved in either of those processes and was not privy to any of the related discussions. I am not making any attempt to speak for Sun, and all information provided in this post is either based on publicly-available information or my own personal opinion.

In reading discussions about what has happened with OpenDS, I’ve seen a wide range of reactions. This is to be expected, but one thing that I have found to be a bit surprising is that there have been some comments that are critical of the Common Development and Distribution License (CDDL). This isn’t the first time that such comments have been made, as I’ve heard them ever since the license was first created, but I am a little puzzled by them and the fact that they have persisted for so long. I think that the CDDL is an excellent open source license and that many of the negative comments stem from not really understanding it, while others may have something to do with the fact that the open source community in general has been and continues to be somewhat suspicious of Sun (who authored the CDDL).

The CDDL was originally created as a potential license for OpenSolaris. This drew a lot of criticism because many people, especially those in the Linux community, wanted Sun to use the GNU General Public License (GPL). Since GPLv3 was nowhere near complete at the time, if Sun had chosen the GPL then it would have had to be GPLv2, and that would have been practically impossible for Sun to do in a reasonable way. While Sun certainly owns the copyright on most of the code in Solaris, there are parts of the code that Sun licenses from third parties. Since GPLv2 doesn’t play well with non-GPLv2 code, if Sun had chosen GPLv2 for OpenSolaris, then it wouldn’t have been able to include some of those third-party components (especially those that interact directly with the kernel), which would have made it a lot less attractive to potential users.

In that case, about the only people who would have been happy would be those in the Linux community, because they would have been able to take the best parts of Solaris and pull them into Linux. OpenSolaris itself wouldn’t have been really useful until Sun had either rewritten the third-party components or convinced their respective copyright owners to make them available under GPLv2. Other operating systems that use non-GPL licenses (like the BSD-based variants, which have gotten a lot of benefit from the OpenSolaris code) wouldn’t have been able to use it, and third-party vendors (especially those that need kernel-level interaction, like hardware device driver makers) would also have found it much less attractive. It is possible that some of these concerns could have been addressed by creating GPL exceptions, much like Sun has done with Java, but even then there would have been significant gaps that GPLv2 doesn’t address, like legal concerns about code covered by patents.

Rather than try to pigeonhole OpenSolaris into GPLv2, Sun chose to look at other options, including the possibility of using their own license, which ultimately led to the creation of the CDDL.

Before I go any further, let me briefly describe the primary types of licenses that exist in the open source world. They fall into three basic categories:

  • Licenses which preserve open source at all costs, like the GPLv2. These licenses require that any software that uses code under such a license must always be open source. In other words, you can’t use code licensed in this manner in an application with closed-source components. This is very good for the community that releases the code under this license, since it ensures that they will always have access to any improvements made to it, but it’s less friendly to downstream developers since it creates significant restrictions on how they might be able to use that code.
  • Licenses which preserve freedom at all costs, like the BSD and Apache licenses. These licenses place very few restrictions on how other developers can use the code, and it’s entirely possible for someone to take code under such a license and make changes to it without making those changes available to anyone else, even the original developers.
  • Licenses which attempt to strike a balance between open source and freedom, like the Mozilla Public License, the CDDL, and GPLv3. These licenses generally require that any changes to the existing code be made available under the terms of the original license, but any completely new code that is created can be under a different license, including one that is closed source.

As someone who has done a significant amount of both open source and closed source development, I really like licenses in this third category. If I make code that I have written available under an open source license, then I like the guarantee that this code will remain open. On the other hand, I also like giving others the freedom to do what they want with their own code, even if some of their code happens to interact with some of my code, and I know that commercial users are much more likely to shy away from licenses in the “open source at all costs” camp than licenses in the other two categories.

So what are the specifics of the CDDL? It’s based on the Mozilla Public License, but clarifies some things that the MPL doesn’t cover. The basic principles of the CDDL are as follows:

  • CDDL has been approved by OSI as an official open source license, which means that it meets all of the minimum requirements defined at http://www.opensource.org/docs/osd.
  • CDDL is a file-based license. This means that if you make any changes to CDDL-licensed software, any existing files that you modify need to remain under CDDL, but any new files that you create can be under whatever license you want as long as that license isn’t incompatible with CDDL.
  • Similar to the above point, CDDL is very friendly when interacting with code under other licenses. This makes it easy to mix CDDL-licensed code with libraries under other licenses, or to use CDDL-licensed libraries in a project under a different license.
  • CDDL includes an explicit patent grant clause, which means that if any of the code is covered by patents then anyone using or extending that code is also granted the right to use those patents. It also includes a clause that terminates the usage rights of anyone who brings patent-based litigation against the code.
  • CDDL isn’t a Sun-specific license, and it is suitable for software written by anyone. The only mention of Sun in the license is to indicate that Sun is the license steward and the only entity able to create new versions of the license.

See http://www.sun.com/cddl/ and http://www.opensolaris.org/os/about/faq/licensing_faq/ for further information about CDDL license terms.

In my opinion, the CDDL is a very attractive license for open source software. It certainly doesn’t seem evil or unfair in any way, so I have a hard time understanding the bad reputation that it seems to have gotten. It is true that CDDL code can’t be mixed with GPLv2 code, but that’s not because CDDL is incompatible with GPLv2, but rather because GPLv2 is incompatible with CDDL. GPLv2 is incompatible with lots of other licenses, including other popular open source licenses like the Apache License, the BSD license, and the Mozilla Public License. In fact, the GPLv2 is even incompatible with the GPLv3 (as per http://www.gnu.org/philosophy/license-list.html#GNUGPL). It is unfortunate that the licenses used by OpenSolaris and Linux aren’t compatible with one another, but I think that it would have been a mistake to use GPLv2 for OpenSolaris and highly doubt that incompatibility with Linux was seen as a key benefit when CDDL was selected for OpenSolaris.

Our decision to use CDDL for OpenDS was made after careful consideration and was based on the merits of the license. We were certainly not pressured into using it by Sun, and in fact during discussions with Sun’s open source office they wanted to make sure that we weren’t choosing it just because we thought it was the company line but rather because it was the right license for the project. There are a number of other open source licenses out there, and they have their benefits as well, but if I were to be involved with the creation of a new open source software project, then I would imagine that CDDL would at least be in the running during the license selection process.