More Eating Your Own Dog Food And Daily Builds


I have just been reading an article on the importance of scheduling and of bug and feature tracking in software projects. It’s a good article and worth a read, but it’s basic stuff really. However, it’s often the basic stuff that gets neglected, so don’t dismiss it on that basis. Anyway, the article prompted me to think a bit more about the benefits of eating your own dog food and regular/daily builds.

The key thing I missed in the previous article was the importance of the process in managing compromise, and often that compromise takes the form of cutting or dropping features in order to deliver to time and budget. The daily build/dog food approach helps with this as follows:

  1. It’s pretty key on all projects to put the basic platform elements in place first. These are the foundation elements upon which everything is built; they need to be the most reliable and therefore tested for the longest period. They are also normally needed before any realistic dog food environment can be created. In my desktop example this basic building block would be a stable system image with a core set of applications.
  2. From that point onwards you are into the feature management game. Using the environment daily and releasing updates regularly helps because your user/testers will start to provide feedback on the features they really need in order to deploy, use and manage the environment. Where there is a mismatch between this feedback and the prioritised feature list (make sure you have one of these) that the development team is working to, this can be really useful.
  3. It also helps because when the development team provides an early release of some really sexy feature and the user/testers hate it or never use it, it’s pretty likely that you need to take a hard look at whether it’s worth proceeding with, at least in its current form. If you do the daily build process right you could get this early feedback with maybe only 20% of the effort that would be needed to finish the feature, so there is a good chance of making real savings.
  4. It’s also likely that the architects have set the engineering team some really tough challenges to achieve ‘elegance’ that no user or supporter of the environment is going to appreciate. Dog food users are a great community to help you spot this sort of thing. Sometimes it’s also called ‘over engineering’, although that’s slightly different. Dog food user/testers will spot over engineering too, because at some point you can elicit feedback on whether the current functionality is ‘good enough’; if people are using it and not complaining, that’s the time to ask the question.

Here are a couple of other points I missed as well:

  1. Another observation that’s VERY applicable to software development but also helpful in system integration is that it’s MUCH easier to fix a problem the day after you wrote the software than it is a month later when you do the integration test. In system integration it’s much easier to change a design the day after your users say they hate it than it is later, when that design’s implications are embedded in tens of documents and so many system dependencies that change has effectively become impractical.
  2. The final observation is that the objective of developers should be to have the minimum number of bugs outstanding or unknown. With daily builds, provided you fix everything as it happens, this is generally true, and so the system matures with only a handful of outstanding bugs at any point in time. This minimises the risk of a bug fix causing other unanticipated problems and makes regression testing much easier. If you wait until everything is finished you end up with hundreds of bugs/issues to resolve, and for every 10 you fix you create 3 more. In systems integration projects it’s not so much coding bugs that you minimise as the risks associated with not knowing whether systems actually perform as documented. Anyone that does a lot of systems integration knows that no application works as documented, and the more applications you string together the more difficult it is to get any complete business process to work. Even worse, if you are using a closed source product it’s quite unlikely that problems you find will get fixed in time, so you often have to find workarounds or change the architecture or design completely. Finding these issues as early as possible is key to delivering a quality product to time and cost. (A minimal sketch of a daily build loop appears after this list.)
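
To make that concrete, here is a minimal sketch of a daily build loop in Python. The `git pull` and `make` targets are hypothetical stand-ins; a real project would substitute its own update, build and smoke-test commands.

```python
#!/usr/bin/env python3
"""Minimal daily-build loop (illustrative sketch only)."""
import datetime
import subprocess
import sys

def run(step, cmd):
    """Run one build step, printing output on failure."""
    print(f"[{datetime.datetime.now():%Y-%m-%d %H:%M}] {step}: {' '.join(cmd)}")
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout)
        print(result.stderr)
    return result.returncode == 0

def daily_build():
    steps = [
        ("update", ["git", "pull", "--ff-only"]),  # pick up today's changes
        ("build", ["make", "build"]),              # hypothetical build target
        ("smoke", ["make", "smoke-test"]),         # hypothetical quick end-to-end checks
    ]
    for step, cmd in steps:
        if not run(step, cmd):
            # Fail loudly the same day the change was made, while the
            # author still remembers the code; that is the whole point.
            sys.exit(f"Daily build broke at step '{step}' - fix it today, not next month")
    print("Daily build green - publish it to the dog food environment")

if __name__ == "__main__":
    daily_build()
```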

This dog food idea works for documents as well. Often projects write documents that only the project team think they need, or that the managers of the people who will use them think they need. It’s a really good idea to send skeletons/outlines of your documents, and maybe a good example of a previous one, to the real customers of the document, so you get feedback from your customer. An associated tip is to deliver a website rather than a document. Websites have obvious advantages, some of which are worth repeating:

  1. They are easier to navigate
  2. There is less duplication
  3. You can read a high level outline of a topic/process and dip into the detail through a hyperlink
  4. They can link to referenced documents that most readers of normal documents would never be able to locate and therefore never read
  5. They can link to applications, utilities, file system folders etc.
  6. The overhead of writing a web page and getting it issued as part of a collection of pages that make up a site is MUCH less than the overhead of getting documents through a formal review process
  7. If you want to get clever, set up an RSS feed for your web site so that interested parties can subscribe to changes in its content (a minimal sketch of generating such a feed follows this list). Architects and engineers can subscribe to the changes during development, and also keep track of changes once the system goes live, to make sure operational changes do not compromise the system’s integrity.
  8. If you really want to get clever, deliver it as a wiki so that the end customers can enhance it with their own content and experience
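
As a sketch of the RSS idea above: the Python standard library is enough to publish a simple feed of site changes. The change entries and URLs below are hypothetical examples, not a real site.

```python
#!/usr/bin/env python3
"""Publish recent site changes as an RSS 2.0 feed (illustrative sketch)."""
import xml.etree.ElementTree as ET
from email.utils import formatdate

# Hypothetical recent changes; a real site would pull these from its content store
changes = [
    {"title": "Desktop build standard updated",
     "link": "https://example.internal/docs/build-standard"},
    {"title": "Application packaging guide - new section on streaming",
     "link": "https://example.internal/docs/app-packaging"},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Project documentation changes"
ET.SubElement(channel, "link").text = "https://example.internal/docs"
ET.SubElement(channel, "description").text = "What changed in the project web, and when"

for change in changes:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = change["title"]
    ET.SubElement(item, "link").text = change["link"]
    ET.SubElement(item, "pubDate").text = formatdate()  # RFC 2822 date, as RSS expects

# Write the feed where the web server can serve it; subscribers then see every change
ET.ElementTree(rss).write("changes.xml", encoding="utf-8", xml_declaration=True)
```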

The picture is of The Deep marine experience centre at Hull, close to where I created my first HTML document using an open source web browser, long before there was really an Internet to speak of.

Daily Builds Applied To Systems Integration Projects.

The last post has got me thinking more about the whole concept of daily builds. I mentioned in passing that the concept is not just applicable to software development, but I did not explain the comment. I went out for a walk and started to think through how the concept could be applied to a systems integration project. The one I chose is quite topical for me at the moment: a Windows XP desktop refresh and desktop management project.


So first let’s look at some characteristics of this sort of project:


  1. A standard system image that needs to be deployed tens of thousands of times to many different types of hardware
  2. The need to deploy thousands of applications on top of this standard system image, and to deploy these applications hundreds or thousands of times
  3. The need to seamlessly access thousands of file, print, authentication, management and application servers
  4. An environment that tens of thousands of users will use for perhaps 2-3 hours a day on average, which means hundreds of millions of pounds of deliverables depend on its usability and reliability

So let’s look at the daily (or perhaps regular) build process, at ensuring that the project team “eat their own dog food”, and at how these techniques help with a project of this type:


  1. It dramatically reduces risk, as deliverables evolve incrementally and at each build cycle the progress is the subject of widespread review.  Misunderstandings, omissions and errors rapidly come to light.
  2. The collective intellect of the project team can be directed at improving the solution
  3. The project team are forced to think, “automation, reliability, configuration control, reproducibility and quality”, on a daily basis
  4. Rapid and efficient build and deployment processes are tested and refined from the beginning
  5. Rapid application provisioning and deployment processes are tested and refined from the beginning; application packages are deployed many times, to many different build standards and hardware types, long before they are delivered to real users
  6. Architects get to verify daily that the conceptual and logical architecture are actually being implemented as envisioned, and that changes/optimisations that occur in the physical implementation get reflected back into the architecture
  7. The whole provisioning, deployment, usage and management infrastructure gets integrated and exercised in an evolutionary way, greatly reducing the risk that would otherwise occur when different teams wait until integration testing in the lab.  Prior to the lab the only integration occurred through document review.
  8. Every time the environment, or a device within it, is rebuilt (for physical desktops not every device is rebuilt every day of course, but it’s entirely practical to do this with virtual desktops), the quality of the upgrade and migration processes gets tested.
  9. Every time a device is rebuilt, the success of separating system state from user state gets tested; users get very unhappy when they keep losing user state information. (A sketch of this sort of check appears after this list.)
  10. User experience, i.e. how the whole environment fits together from the user’s perspective, gets tested. Otherwise user experience only gets tested in pilots, which is normally too late to really make significant improvements. This is a key issue: most projects are structured around creating bits of a solution, and architects are responsible for making sure the bits of the solution work together, but it’s rare to find sufficient focus on making sure that the bits not only work but integrate into an environment that’s effective, usable and discoverable
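
As a sketch of the system-state/user-state check in point 9: fingerprint the user-state locations before a rebuild and compare afterwards. The paths below are hypothetical examples; a real check would cover whatever the project defines as user state, and the snapshot would be stored off the device being rebuilt.

```python
#!/usr/bin/env python3
"""Check that user state survives a device rebuild (illustrative sketch)."""
import hashlib
import json
import sys
from pathlib import Path

# Hypothetical user-state locations; use whatever the project defines as user state
USER_STATE = [Path.home() / "Documents", Path.home() / "AppData" / "Roaming"]
# In practice the snapshot must live off the device that is being rebuilt
SNAPSHOT = Path("user_state_snapshot.json")

def fingerprint(roots):
    """Map every user-state file to a hash of its contents."""
    hashes = {}
    for root in roots:
        for f in root.rglob("*"):
            if f.is_file():
                hashes[str(f)] = hashlib.sha256(f.read_bytes()).hexdigest()
    return hashes

if sys.argv[1:] == ["before"]:
    SNAPSHOT.write_text(json.dumps(fingerprint(USER_STATE)))
    print("Snapshot taken - now rebuild the device")
else:
    before = json.loads(SNAPSHOT.read_text())
    after = fingerprint(USER_STATE)
    lost = [f for f, h in before.items() if after.get(f) != h]
    print("User state preserved" if not lost else f"User state LOST: {lost}")
```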

If you wanted to run a project this way, what would you need?

  1. You would need to create a “dog food” environment that the project team use. You might not start with the whole of the project team: just the core developers and architects first, then the project managers, then the programme managers etc.
  2. You would also need system and integration test labs, and probably a customer reference lab, because not all test scenarios can be exercised in a dog food lab with real users with work to do.
  3. You would need to implement the whole solution in this “dog food” lab, as close as possible to the user’s environment. At first the implementation might be un-configured, i.e. products as they are installed out of the box. Some elements may not exist at all; placeholders would be implemented instead, just to show that a bit is missing. For example, if “end user application provisioning” is not there yet, make sure the link is on the desktop and that a web page opens saying this bit is not done yet.
  4. Make sure you tell people every day what has changed.  A set of RSS feeds is ideal for this.
  5. Make it really easy for people to report problems, comments and suggestions, and make sure these get logged and that the project team classify and respond. A discussion database is ideal for this (a minimal sketch of such an issue log follows this list).
  6. Appoint someone in the lab team who is motivated to encourage and help the testers to identify and describe bugs, issues and comments. Don’t worry too much about making the testers log bugs etc. in a particular way; getting input should be the priority, regardless of whether it arrives via the phone, one to one, or logged on a web site. That’s why you need someone dedicated to helping the users to log the problems.
  7. Make this person also take responsibility for chasing the developers to make sure they respond.  Testers are much more motivated when they get responses.
  8. Make sure that the management organisation get involved in operation and management of the environment.  It’s a great opportunity for them to learn and to test out the “management experience”, in the same way as I described the user experience above.
  9. Once things start settling down, consider switching to testers logging issues through the normal help desk channels. In the early stages this would stifle the process, but in the late stages, when the number of events being logged is much reduced, it’s worth it to help test that process.
  10. Make sure the lab team responsible for the “dog food lab” think of themselves as customers of the development team. Make sure they drag capabilities out of development, make sure they are demanding and set down standards of quality, reliability and automation, and make sure they gather input from testers and chase up developers for responses. Make sure the lab team don’t get dragged too far into development, otherwise they will stop being effective customers and become too understanding of the developers’ challenges.
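
As a sketch of the “discussion database” idea in point 5: even a tiny SQLite-backed log is enough to start with, because the priority is getting input logged at all. The schema and commands below are illustrative, not a recommendation for a particular tracker.

```python
#!/usr/bin/env python3
"""A deliberately tiny issue log for dog food testers (illustrative sketch)."""
import sqlite3
import sys
from datetime import datetime

db = sqlite3.connect("dogfood_issues.db")
db.execute("""CREATE TABLE IF NOT EXISTS issues (
                  id INTEGER PRIMARY KEY,
                  logged_at TEXT,
                  reporter TEXT,
                  summary TEXT,
                  status TEXT DEFAULT 'new')""")

def log_issue(reporter, summary):
    """One line is enough; classification and response come later."""
    db.execute("INSERT INTO issues (logged_at, reporter, summary) VALUES (?, ?, ?)",
               (datetime.now().isoformat(timespec="minutes"), reporter, summary))
    db.commit()

def open_issues():
    """Everything not yet closed, so the team can classify and respond."""
    return db.execute(
        "SELECT id, logged_at, reporter, summary FROM issues "
        "WHERE status != 'closed'").fetchall()

if __name__ == "__main__":
    if len(sys.argv) >= 3:  # e.g. python issues.py alice "printing fails from the new image"
        log_issue(sys.argv[1], " ".join(sys.argv[2:]))
    for row in open_issues():
        print(row)
```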

The picture is of the big wheel next to the river at York; it’s relevant because this is where Steve Townend now lives. Steve worked with me to implement daily builds for an application we were developing, and after a couple of weeks figuring out how to make the daily build process work, it was awesome. The application we developed progressed incredibly well using this process, and every day we had a working, but not complete, system.

Eating Your Own Dog Food.

I’m a big fan of using the services you create while you create them, popularised by Microsoft as ‘eating your own dog food’.

Developing services this way improves many aspects of a project and I include it in my tips for improving the way we measure progress in a project.  I have another post that drills into the technique for a Desktop Infrastructure Project.

Joel writes up an interesting example of NOT eating your own dog food (i.e. not using the IT solutions you are developing yourself) until it was almost too late:

Eating your own dog food is the quaint name that we in the computer industry give to the process of actually using your own product. I had forgotten how well it worked, until a month ago, I took home a build of CityDesk (thinking it was about 3 weeks from shipping) and tried to build a site with it. Phew! There were a few bugs that literally made it impossible for me to proceed, so I had to fix those before I could even continue. All the testing we did, meticulously pulling down every menu and seeing if it worked right, didn’t uncover the showstoppers that made it impossible to do what the product was intended to allow. Trying to use the product, as a customer would, found these showstoppers in a minute.

And not just those. As I worked, not even exercising the features, just quietly trying to build a simple site, I found 45 bugs on one Sunday afternoon. And I am a lazy man, I couldn’t have spent more than 2 hours on this. I didn’t even try anything but the most basic functionality of the product.

Monday morning, when I got in to work, I gathered the team in the kitchen. I told them about the 45 bugs. (To be fair, many of these bugs weren’t actual defects but simply things that were not as convenient as they should have been). Then I suggested that everybody build at least one serious site using CityDesk to smoke out more bugs. That’s what is meant by eating your own dog food.

This is just SO IMPORTANT. As an architect of enterprise infrastructures I get really frustrated when I try to use them; if I did not get that frustrated I would not be motivated to fix the issues, in fact I would not even know they existed. In my ideal project, the project must use the environment it’s creating, even if it has to distort its processes a bit to fit the product; it’s key that it uses it if at all possible. It’s likely to be a real pain for the project to do this, because it means the project is running on, and depending on, something that’s a bit flaky, but it’s worth it.

As an architect it’s especially important, because it allows you to validate that your concept is indeed being implemented and that your concept is correct, which it never is! If you don’t use it, I don’t see much chance of either of these being achieved.

I really dread to think what a solution would look like if it didn’t go through this process, if the only way to define what something looked like was a specification, and a prayer that somehow the author of the specification and the 50-100 people who developed the solution from it were able to maintain not just conceptual integrity but also usability!

The picture is of the River Humber; close to this river is the office where I first introduced the idea of eating our own dog food, and it worked very well. This was long before Microsoft used the term, certainly over 20 years ago.

Some useful facts and predictions driving application delivery and mobility

I picked up a few useful bits of information during iForum this week:

  1. Citrix predict that between 30 and 50% of people will be mobile by 2010
  2. Some form of rights management is required when delivering to unmanaged PCs. For example, XenApp has a type of rights management, i.e. it can disable cut and paste, saving to the local PC disk, printing etc. based on the results of a NAC check. Microsoft have a much richer rights management solution, but it’s not currently integrated with NAC, nor can it be applied to all applications. My thought: perhaps the SoftGrid execution environment could be NAC and rights management enabled, and therefore prevent certain things on unmanaged PCs
  3. 10% of people polled in a couple of sessions had increasing IT budgets
  4. 60% of people are expected to be working either from home or in branch offices by 2010
  5. There were 1.2B mobile phones in 2007, expected to be 1B SmartPhones by 2010
  6. 47% of companies now consider data protection more important than perimeter security, another hint at the potential growth of rights management if it could be made seamless enough for people who have rights!
  7. An IDC study was quoted that predicted that knowledge workers would be working with 60% of their information sourced from outside the company within 5 years. I can really relate to this; I think I’m way beyond that ratio already, and this >60% is part of my personal knowledge management system, not my company’s, although some small part of it is relevant to share.

Credit Suisse – Case Study Note

Delivered by Steve Maytum – VP – End user platforms

  1. Today
    1. 54,000 managed XP desktops, two builds. Modified the GINA to add a “borrow” button to RDP to a CPS environment or RDP to the user’s desktop PC. This is similar to what CSC have done, but by modifying the GINA they have a solution that doesn’t force a locked session to log off – nice!
    2. 15,000 managed laptops
    3. 4,500 applications
  2. Investing in
    1. 50 unmanaged PCs
    2. 300 thin client devices
    3. 3,200 virtual workstations
    4. 700 seamless published applications, 4,500 concurrent users
    5. 70 streamed apps
    6. Lots of Blackberries
  3. Investment banking is all about agility and power and speed of delivery, 140 changes a week
  4. Private banking is about protection of data and stability, 2 big changes a year
  5. Drivers
    1. Cost reduction
    2. Strategic sourcing
    3. Increasing remote offices
    4. Mobile and nomadic users
    5. Home working
    6. Availability of power, heat and green concerns – in some buildings they are not able to deliver any more power
    7. Business continuity
    8. Regulatory requirements
    9. What their peers are doing
    10. Consumer experience & user capability is driving a need to raise the bar
    11. Increase in technology capability
  6. Remote access security framework
      1. A NAC check provides control over what you have access to, using an SSL VPN
    2. EPA Factory is used for the end point analysis
      1. Service pack
        2. AV running, with a signature that’s less than 2 weeks old
      3. Personal firewall running
        4. A new version is being developed to provide information on geographical location and on whether they are at the PC console or remoting to it, and to check for password-protected screen savers (a sketch of this sort of posture check appears at the end of these notes)
    3. Pass
      1. Access to your PC via RDP
      2. Local printing
      3. Line of business apps
      4. Long inactivity timer
    4. Fail
      1. Just access to email and office apps, plus a softphone
      2. Short inactivity timer
    5. Citrix Access Gateway – Advanced Edition sits behind an SSL VPN
      6. RSA SecurID
    7. Citrix web interface used
    8. Most users just use Citrix to provide access to their existing desktop PCs using RDP tunnelled through ICA
      9. They apparently have lots of users who bring in their personal laptops and RDP to their desktops
  7. Success so far
    1. 8,738 user connections a day
    2. After 6PM 1.26 years of work gets done every night
    3. At the weekend 3.33 years of work gets done
    4. Total of 500 years of productivity
    5. Peak usage is 9PM, with 7,000 users on a Sunday
    6. Number 1 requested service
  8. End state
    1. Citrix PS desktop – 112 sessions per blade
    2. VDI desktop – 40 desktops per HP C Class blade
    3. Trader private blades
    4. SoftGrid for application streaming
    5. IGEL thin clients
    6. Traditional PCs with app streaming
    7. Thin offices
    8. Remote users
    9. Considering putting all the clients on a “dirty” network and doing all client to data centre access over an SSL VPN
  9. Interesting point that I’ve made myself many times
    1. Yesterday – business demand outstripped technology opportunity
    2. Now – technology opportunity has exploded, way beyond business demand or even businesses’ ability to keep up
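
As a sketch of the pass/fail posture check described in the remote access security framework above: the decision logic is simple once the endpoint analysis results are available. The inputs below are hypothetical; a real implementation would take them from the EPA agent, and the service names are made up for illustration.

```python
#!/usr/bin/env python3
"""Pass/fail endpoint posture decision, EPA style (illustrative sketch)."""
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class EndpointPosture:
    """Hypothetical results from an endpoint analysis agent."""
    service_pack_ok: bool
    av_signature_date: date
    firewall_running: bool

# Made-up service names standing in for the real entitlements
FULL_ACCESS = {"rdp_to_own_pc", "local_printing", "line_of_business_apps"}
RESTRICTED = {"email", "office_apps", "softphone"}

def grant_access(p):
    """Full access with a long timer on pass; email-only with a short timer on fail."""
    av_fresh = date.today() - p.av_signature_date <= timedelta(weeks=2)
    if p.service_pack_ok and av_fresh and p.firewall_running:
        return FULL_ACCESS, "long inactivity timer"
    return RESTRICTED, "short inactivity timer"

if __name__ == "__main__":
    # AV signature is 20 days old, so this endpoint fails the freshness check
    posture = EndpointPosture(True, date.today() - timedelta(days=20), True)
    services, timer = grant_access(posture)
    print(sorted(services), "-", timer)
```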