Daily Archive: July 25, 2004

Open Solutions or Open Source?

Although not strictly contradictory, it makes for a nice title.  This article is about one of Microsoft’s reactions to Open Source and one way in which it is delivering on its “integrated innovation” marketing strategy.

 

The basic concept is that Microsoft takes a collection of its products and applies them to the solution of a particular business need.  It publishes, for free, standard architectures, processes, templates etc.  You can populate these architectures with some products of your own choice.  So whilst this is not Open Source, it is a sort of Open Solution.

 

The concept is quite interesting to me because one of the challenges with Open Source software, due in the main to the way it is created, is how to build a coherent solution from the many different components without some over-arching architectural vision.  Where does this vision get created in the current Open Source development model?  It happens within IBM, Red Hat, Novell etc, and it probably happens in a proprietary way.  Even if all of the source for the components in the architecture is Open, the architecture itself is likely to evolve in …

How does Open Source Software come to be?

This may seem like a simple question to answer, i.e. it is written, just like any other software!  It also might seem a strange sort of question to ask, but you will hopefully get my point if you read on!

 

NOTE: No thorough analysis supports the observations I report here.

 

It seems to me that the vast majority of the important Open Source Software comes to be through the following mechanisms:

 

  1. Cloning or reproducing in some way an existing design specification or similar.  Examples of this route are Mono (.NET), Linux (Unix) and Wine (Win32).  This technique is usually used to force a product or interface into the open by creating an alternative.

  2. Donating, i.e. some third party gifts a pre-existing code base to the community as Open Source; examples of this being OpenOffice, Zope and Niku.  This route is often taken by closed source product companies with an old product that is not generating much revenue.  The closed source company uses this old product line to improve its image, generate services revenue, stimulate demand for optional closed source products, kill off a competitor etc.  In some cases the original developer continues to have some involvement in the development, …

Solaris and Linux

Jonathan Schwartz writes a nice article about Sun’s dilemma, now resolved, about how to compete against Linux.  Linux is not a product, it’s a social movement that Sun applauds, so how can they compete?  He goes on to explain that in reality Linux is delivered as many incompatible distributions, and...

Looking to the future.

My previous two posts gave two examples of projects that would have benefited from being Open Sourced.  They happened a long time ago, when I worked for a different community, and both have since disappeared, so there is no problem with discussing them.  When these products were developed Open Source was just an emerging concept.  I am not going to discuss the history of Open Source; that has been well documented already, and I am not qualified.  However, I am going to start to build up a series of articles that describe some of the concerns and challenges I think the Open Source model faces in the future.  I am going to assess some of the established beliefs as documented in Open Source bibles like The Cathedral and the Bazaar, and I am going to do my bit to try and help.

Lost Opportunity 1

In a previous post I described a simple networked or standalone (depending on data definitions) document imaging and processing project I led.  This system would have been perfect as an Open Source project for the following reasons:

 

  1. The need for such capability to be provided as infrastructure was universal.  The main constraint at the time was the high cost and complexity of the existing applications, so adoption would probably have been quite rapid.

  2. The target user community tended to be very network centric, as they were often charged with providing distributed access to large central collections of information.

  3. My company had no interest in the software itself; its interest was in capturing and distributing its image data as efficiently as possible.

  4. The system was very extensible, allowing additional storage drivers, scanner drivers, printer drivers, viewers and databases to be added by other developers (see the sketch after this list).  The automatic maintenance of these components in line with new hardware advances, and the increased deployment reach, would have been very valuable.

  5. There were many areas of potential improvement.  Some of the concepts were not fully realised, and many functional/feature improvements such as OCR were missing…
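
Point 4 above is essentially a plug-in architecture: the core system defines a narrow interface, and other developers supply the drivers.  To make the idea concrete, here is a minimal sketch in Python; the original system was not written in Python, and every name here (StorageDriver, register_driver and so on) is hypothetical:

    import abc

    class StorageDriver(abc.ABC):
        """Narrow interface a contributed storage driver must implement."""

        @abc.abstractmethod
        def fetch(self, image_id: str) -> bytes:
            """Return the raw image bytes for the given identifier."""

        @abc.abstractmethod
        def store(self, image_id: str, data: bytes) -> None:
            """Persist the raw image bytes under the given identifier."""

    _DRIVERS = {}  # registered driver name -> driver class

    def register_driver(name, driver_class):
        """Called by each driver module at import time to make itself available."""
        _DRIVERS[name] = driver_class

    def open_driver(name, **config):
        """Instantiate a registered driver by name, e.g. 'local-disk' or 'cd-jukebox'."""
        return _DRIVERS[name](**config)

With interfaces this small, storage, scanner and printer drivers, viewers and database back ends could have been contributed and kept up to date by the community without anyone touching the core system.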

Lost Opportunity 2

I thought I would pick two previous projects and look at their potential as Open Source projects.  The second I picked is a bit wacky.  It was written almost entirely in DCL and provided an automated help desk job logging, analysis and reporting tool and knowledge base.  To write this in DCL is a testament to the flexibility and power of the VMS scripting environment and indexed files, and to the creativity of one of its developers (not me).  But the concept was mine, and its ‘conceptual integrity’ was maintained for many years.  What value could this have had in an Open Source context?

 

  1. The application itself had little commercial value, although it was trivial to deploy and could have proved popular with other small support teams.

  2. The tools developed to manage a complex system constructed from DCL would have been very valuable.

  3. The library of DCL routines would have been very useful to the community.

  4. The concepts used in the system, which allowed 4GL-like application development, might have inspired other developers to rapidly prototype similar applications (see the sketch after this list).
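
The 4GL-like idea in point 4 was essentially data-driven application definition: describe the records as data and let a generic engine do the prompting, validation and storage.  The sketch below is Python rather than DCL and is purely illustrative; the field list and function names are made up:

    # A help desk "job" record described as data rather than code (hypothetical fields).
    JOB_FIELDS = [
        # (field name, converter, required)
        ("caller",   str, True),
        ("summary",  str, True),
        ("priority", int, False),
    ]

    def capture_record(fields, read=input):
        """Generic engine: prompt for each declared field and build a record."""
        record = {}
        for name, convert, required in fields:
            raw = read(f"{name}: ").strip()
            if not raw and required:
                raise ValueError(f"{name} is required")
            record[name] = convert(raw) if raw else None
        return record

    # A new application is 'written' by declaring another field list,
    # not by writing new prompting, validation or storage code.

The original expressed the same idea with DCL routines and VMS indexed files, but the principle is identical.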

This system lived for well over a decade and died because the team that used …

Open Source, even then.

As I gradually migrated towards infrastructure and away from line-of-business applications, the reality of having to deliver applications to Windows and manage them on Windows began to dawn on me.  To a developer used to centralised computing, with remote access through X Windows or terminal clients, this was a considerable shock.  However, my first real Windows development project showed one of my most valuable character traits: I don’t give up easily!  Without going into the gory details, here are some of the attributes of that first application, a system for capturing, storing, accessing and viewing large image collections:

 

  1. Very easy deployment

  2. Self-maintaining code, i.e. a minimal start-up application compared what was installed with what should be installed according to a central manifest, and updated the installation accordingly (see the sketches after this list).

  3. A very flexible storage model built on the concept of logical storage units (a bit like VMS logical names on steroids), which handled the fact that images could be on removable CDs, local disks, media libraries, networked disks etc, and in different combinations (see the sketches after this list).

  4. Data-driven.  The whole system was configured through simple text files and metadata definitions that defined the actual data structures in the SQL database.

  5. Globally …
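
Point 2 amounts to a manifest-driven update check at start-up.  The sketch below is illustrative only, in Python, and assumes a hypothetical manifest format of one tab-separated name and version per line; none of these names come from the original application:

    from pathlib import Path

    def read_manifest(path):
        """Parse 'name<TAB>version' lines into a dictionary (hypothetical format)."""
        entries = {}
        for line in Path(path).read_text().splitlines():
            if line.strip():
                name, version = line.split("\t")
                entries[name] = version
        return entries

    def plan_updates(installed, required):
        """Return the components that are missing locally or have the wrong version."""
        return [name for name, version in required.items()
                if installed.get(name) != version]

    # At start-up the stub would read the local and central manifests, copy down
    # everything returned by plan_updates(), then launch the application proper.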
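
Point 3, the logical storage unit, is just a level of indirection between a symbolic name and an ordered search list of physical locations.  Again a purely illustrative Python sketch; the unit names and paths are made up:

    from pathlib import Path

    # A logical storage unit maps a symbolic name to an ordered list of candidate
    # physical locations: removable CD, local cache, network share (all hypothetical).
    LOGICAL_UNITS = {
        "ARCHIVE_1998": [
            Path("D:/"),
            Path("C:/imagecache/1998"),
            Path("//fileserver/archive/1998"),
        ],
    }

    def resolve(logical_name, relative_path):
        """Return the first physical location where the requested file actually exists."""
        for root in LOGICAL_UNITS[logical_name]:
            candidate = root / relative_path
            if candidate.exists():
                return candidate
        raise FileNotFoundError(f"{logical_name}:{relative_path} not found in any configured location")

Swapping a CD for a networked copy then becomes a configuration change rather than a code change, which is what made the model flexible.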