Tuesday, 7 December 2010

Kanban for support

My last post covered two courses I attended on Scrum and Kanban, in which I mentioned the similarities and differences I saw between the two.
I wanted to go into a bit more detail about why I believe Kanban is ideally suited to support work, so let's start with what Kanban is in software terms.


Kanban is a pull-based system that focuses on restricting the amount of work in progress (WIP) at any time. It is characterised by a large board that shows the state of the work in the system at any time.
The board is very simple: it consists of 3 (or more) columns. The first column contains 'cards' with the work to be done in a prioritised list; the second column contains the items currently being worked on; and the third column holds the 'cards' that have been completed.
The 'cards' for the work items may be written on index cards, post-it notes, or anything else you want. Each card represents a piece of work to be done, and each piece of work should be roughly the same size as the others to enable consistent metrics to be gathered.
The image to the left shows a simple Kanban board with the 3 columns, and as you can see the WIP column is limited to 2 items.
One thing that is different about this board is the extra line at the top, which I'll come back to later.
That's all there is to Kanban. As I said in my previous post, it is a simple system with little or no management overhead; its one and only rule is limiting the amount of WIP.
What I've described here is basic Kanban, Kanban at its simplest. In development this will often be enhanced by adding extra columns (test and deploy are common) to better capture the steps undertaken in developing software, and you may also find a 'parked' area at the bottom of the WIP column for work that cannot currently be progressed because external factors are blocking the tasks.
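To make those mechanics concrete, here's a toy sketch in Python (the class and card names are all invented for illustration, not taken from any real tool) of a three-column board that enforces the pull behaviour and the WIP limit:

```python
class KanbanBoard:
    """Toy three-column board: work is *pulled* into WIP, never pushed."""

    def __init__(self, wip_limit=2):
        self.todo = []      # prioritised list, index 0 = highest priority
        self.wip = []       # currently being worked on
        self.done = []
        self.wip_limit = wip_limit

    def add(self, card):
        """New work always joins the bottom of 'To do'."""
        self.todo.append(card)

    def pull(self):
        """Pull the top-priority card into WIP, but only if under the limit."""
        if len(self.wip) >= self.wip_limit:
            raise RuntimeError("WIP limit reached: finish something first")
        card = self.todo.pop(0)
        self.wip.append(card)
        return card

    def finish(self, card):
        self.wip.remove(card)
        self.done.append(card)

board = KanbanBoard(wip_limit=2)
for item in ["fix login bug", "update report", "tidy css"]:
    board.add(item)
board.pull()
board.pull()
# A third pull would raise: the limit forces the team to finish work first.
```

The exception is the whole point of Kanban: when the limit is hit, the only legal move is to finish something, not to start something.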
The metrics I mentioned revolve around the time it takes for a card/task/piece of work to move around the board. You should be able to track how long it takes from an item being added to the bottom of the 'To do' column until it reaches the top, and then the time it takes to move across the board and get done.
These metrics can help with some basic planning, as you can start to have a discussion with the business about when they can expect any item to be done, possibly even generating some SLAs that can then be monitored.
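As a sketch of how those metrics could be gathered (Python, with purely illustrative dates), record when each card joins 'To do', enters WIP and reaches 'Done', then derive the waiting, cycle and lead times:

```python
from datetime import date

# (card, added to 'To do', pulled into WIP, done) -- illustrative timestamps
cards = [
    ("fix login bug", date(2010, 11, 1), date(2010, 11, 3), date(2010, 11, 5)),
    ("update report", date(2010, 11, 2), date(2010, 11, 8), date(2010, 11, 9)),
]

for name, added, started, done in cards:
    wait = (started - added).days    # time queued in 'To do'
    cycle = (done - started).days    # time moving across the board
    lead = (done - added).days       # total elapsed time, the figure an SLA cares about
    print(f"{name}: waited {wait}d, cycle {cycle}d, lead {lead}d")
```

The lead time (added to done) is the number to discuss with the business; the wait/cycle split tells you whether items are stuck queuing or stuck in progress.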

Using Kanban for support

So with a clear definition of Kanban, why does it suit support work?
Well, the board provides a very easy and visible way to see how much support work is outstanding and what is currently being worked on. If you add additional columns to the board you can also see whether items are complete and waiting for testing or deployment.
The 'To do' column lets you know which items are the highest priority and should be done first. Although there isn't a rule against reordering the 'To do' column, doing so will skew the cycle time metrics, so it's best to limit how often it happens.
Kanban's rule limiting work in progress dovetails nicely with support and the need to focus on solving one problem at a time rather than trying to deal with multiple things at once.
When handling support you don't want a process that adds a lot of management overhead. Kanban, with no timeboxed iterations, task estimation, review meetings or scheduled retrospectives, provides a lightweight framework for scheduling the work to be done.
New items can easily be added to the 'To do' list at any time, and if you follow the normal basic Kanban rules you should be able to work out how long it will be before an item is completed.
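A rough way to make that "how long before it's done" estimate is Little's Law: average lead time is roughly the number of items in the system divided by throughput. A back-of-the-envelope sketch, with invented numbers:

```python
# Rough completion forecast for a newly added card, assuming throughput
# stays stable. All the numbers here are illustrative, not real data.
completed_last_4_weeks = 20                        # cards finished recently
throughput_per_day = completed_last_4_weeks / 28   # ~0.71 cards/day

queue_position = 5   # cards ahead of the new item in the system, plus itself
estimate_days = queue_position / throughput_per_day
print(f"expect completion in roughly {estimate_days:.0f} days")
# → expect completion in roughly 7 days
```

This is only as good as the assumption that cards are similarly sized and the 'To do' column isn't being reordered, which is exactly why those two rules matter.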

Critical support issues

Now, if you remember, I said I'd come back to that extra line at the top of the board shown above. This line is specifically for critical show stoppers and is limited to just a single card/task/work item; with the column limit set to 1, you can only handle one at any one time. This extra line allows you to deal with critical incidents without having to rearrange the whole board, and it is a useful visual tool: other people can see that there is a major problem that has to be dealt with now and should be expedited across the board, possibly involving all members of the team.

Integration with support ticket/request system

There is one aspect where Kanban falls down, and that's the physical board: it can't integrate with any electronic systems and requires manual updating. Now, I'm usually the biggest advocate of physical boards, as they make what's happening extremely visible in the office; the danger of a virtual board is that unless you have a screen showing it in the office, people don't tend to go and look at it unless they have a specific need to do so.
For support, though, I believe the need to integrate with the support system is paramount, and the only way to do this is via a virtual board, ideally with a link to your support ticket system. That way you get the best of both worlds, especially if you can get a large monitor or TV to display the Kanban board at all times. One such system is LeanKit Kanban, which exposes a nice REST API that you could develop code against to integrate with your support system and enable the use of Kanban to manage the support.
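As a sketch of what such an integration might look like; note that the URL, JSON shape and auth scheme below are entirely hypothetical, invented for illustration rather than taken from LeanKit's actual API, so check the vendor's documentation for the real contract:

```python
import json
import urllib.request

# Hypothetical endpoint and token -- placeholders, not a real API.
BOARD_URL = "https://kanban.example.com/api/boards/42/cards"
API_TOKEN = "replace-me"

def build_card(ticket_id, title):
    """Map a support ticket onto a card destined for the 'To do' lane."""
    return {"title": f"#{ticket_id}: {title}", "lane": "To do"}

def push_ticket(ticket_id, title):
    """POST the card to the virtual board (hypothetical endpoint)."""
    payload = json.dumps(build_card(ticket_id, title)).encode("utf-8")
    req = urllib.request.Request(
        BOARD_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Hook something like `push_ticket` into your ticket system's "new ticket" event and the board stays current without anyone manually copying cards across.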

So who’s doing this?

I did a little bit of research, far from exhaustive, and found links to some companies that are utilising Kanban for support.
I would suggest that you take the time to read these articles and see how other companies have implemented Kanban to handle their support work.


To sum up:
  • Lightweight process adds little or no overhead
  • Visual, easy to see how much support there is in the system
  • Rule limiting work in progress helps ensure people focus on the item they are working on
  • Physical board not a necessity; in fact you'll probably want a virtual/electronic board
Hopefully I've shown how I believe Kanban fits in with support work and how you could utilise it to better manage your own support.

Friday, 26 November 2010

Scrum, Lean and Kanban Training

So over the first 3 days of this week I attended a Certified ScrumMaster course and a Lean & Kanban course back to back, and it has been interesting to see the differences between them, which I'll get to in a bit.

Certified ScrumMaster

The course was run by Paul from Agilify, with some help from Geoff, and through some very good exercises very quickly demonstrated the benefits of an iterative approach to work and how retrospectives can drive continuous improvement.
Over the 2 days Paul & Geoff explained where Scrum came from, how it works, the Agile Manifesto, the 5 values of Scrum, etc., and how you can use all these things day to day. On the second day the value of iterative working, failing fast, etc. was demonstrated by splitting us into 2 teams and building a Lego city in four 11-minute sprints, including planning, review and retrospectives. This gave a compressed experience of Scrum and allowed the attendees to see how the various parts of the process fit together.
The course finished with Paul & Geoff going over some of the things a ScrumMaster does and doesn't do, 'Scrum Fact or Myth', and a competition between the teams covering everything we had learned over the previous couple of days.
I thought the course was excellent, and I believe the attendees who had never heard of Scrum before went away with a really good idea of what Scrum is about and how you go about doing it.

Lean and Kanban

In contrast, this was a single-day course covering Lean principles/practices and Kanban. It was run by Bazil Arden, who has a lot of experience in Scrum and Lean.
Bazil started by outlining how Lean has evolved from the Toyota Production System in the 1930s through to today, so you get the idea that this isn't some new fad methodology; the manufacturing industry has been using it all that time.
Lean comes with a toolkit of different principles, practices and techniques to help identify problems in your organisation and ways to tackle those problems.
Finally, Bazil took us through Kanban, covering its origin and how it is used today in software development. He outlined how Kanban is used in its most basic form and then how you can 'enhance' it to provide additional value for your organisation.

The differences & similarities

So the main difference between them is that Lean is a complete methodology that also includes a work scheduling system, whereas Scrum is a lightweight process just for project management with no prescribed techniques or practices for identifying and solving problems.
The main similarity is that both Lean and Scrum look to an empowered team to get the most out of people, including trusting the team to come up with solutions to any problems they face.
Both Scrum and Kanban make use of a board to track the work in progress, with the intention of making any problems very visible so that everybody can see there is a problem and can then look to tackle it.
After seeing how Kanban should work in practice, I believe the basic version of Kanban is not the best fit for the delivery of software projects, but I can see how it would be perfect for a team handling support.


I believe that for software development Scrum is the better fit for managing the work involved in a defined project, however, if you are dealing with support I can see how Kanban would provide the same sort of benefits as Scrum but with even less overhead.
There is absolutely no reason that you couldn’t take the Lean practices and principles such as 5 Whys and use them with Scrum.
I think it ultimately comes down to what works for you: take the time to look into the different tools and decide what works best.

Wednesday, 17 November 2010

Firebreak

So when you're doing Scrum (or another iterative process) the constant pressure to deliver business value can be a bit relentless. A side effect of this is that you don't get time to investigate new/alternative technologies unless there is a genuine business need, and you may also find that technical debt builds up due to a lack of time in the sprint to fix it.

So to remedy this situation you can implement a firebreak week/sprint, where the team is not tasked with delivering business value but instead given the time to investigate new technologies, clean up technical debt, etc.

The team can utilise this time in any way they see fit. It should still align in some way with what the business needs, so that the business still gets value from what the team does, but it is the team that decides what they do and how they organise themselves to do it.

For the business, the hardest part of allowing a firebreak is often the lack of control over what the team does and the concern that they'll sit there all day doing nothing. In reality most teams grab the opportunity to do things that will benefit their work now or in the future, e.g. cleaning up technical debt or investigating a technology that could be used.

During the firebreak there don't have to be any daily stand-ups, and you don't necessarily create stories and tasks for what you are doing, but as with all things agile you customise it your way; if the team still wants to stand up and tell everybody what they are doing, let them.

If the team are investigating many different things then it can be valuable for them to put together grok talks for the whole team, sharing any knowledge gained during the firebreak and ensuring it doesn't remain siloed with any individual(s).

The value to the business of allowing a firebreak is that it helps to re-energise and motivate the team, and it provides an easy way to allow for R&D without impacting the normal delivery of business value.

The firebreak can also be used to align development and business cycles. If you follow 2-week iterations you'll find that a sprint can finish a week before the end of a quarter, so what do you do: simply not worry about it and let another sprint start so that it overlaps, or have the next sprint start at the beginning of the next business quarter? If you need to align sprint cycles with business quarters, a regular firebreak at the end of each quarter allows the cycles to line up and keeps the team happy for the reasons outlined above.
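The arithmetic behind that alignment is trivial but worth seeing (assuming a 13-week quarter, which is the usual case):

```python
# A 13-week quarter doesn't divide evenly into 2-week sprints:
quarter_weeks = 13
sprint_weeks = 2

full_sprints, leftover = divmod(quarter_weeks, sprint_weeks)
print(f"{full_sprints} sprints of {sprint_weeks} weeks, "
      f"{leftover} week left for a firebreak")
# → 6 sprints of 2 weeks, 1 week left for a firebreak
```

The leftover week is exactly the gap that would otherwise force an overlapping sprint, so the firebreak soaks it up and every quarter starts with a fresh sprint.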

So if you are doing Scrum and finding the experience a bit like a hamster wheel, perhaps you need a firebreak.

Monday, 15 November 2010

Tech Ed – Post-conference thoughts

Ok, so the conference is over. I blogged about each day, and now it's time to reflect on the subjects covered:

  • The new members of the family
  • Agile
  • CMS
  • ASP.Net MVC / HTML 5
  • WCF
  • Silverlight/WPF
  • Parallelism

The new members of the family

I took time at the conference to ensure that I got an overview of the 2 newest members of the development family, WebMatrix and LightSwitch, as up to this point I hadn't seen either and wanted to know whether they might fit into my toolbox. The final verdict is that I don't see either of them making an appearance in my work any time soon.


Agile

I attended a couple of sessions on Agile, but although they were really good sessions I didn't take a lot away from them. This was through no fault of the presenters; I just happen to have a fair bit of experience in this area. See my post for details.


CMS

I got to see the Umbraco & Orchard CMSs demoed by people very knowledgeable in both environments, and it was interesting to see the differences between the two; it has given me a head start in evaluating them. It will be especially interesting next year, when Umbraco bring out an MVC version, to see how that stands up against Orchard's native implementation.

I believe Umbraco is the more usable at this time, simply because it's not pre-release code and has a very large community supporting it.


ASP.Net MVC / HTML 5

There were very few sessions on MVC, with the majority being about the newly released MVC 3, and I didn't manage to attend any of those. I did get to attend a session on 'Design Considerations for ASP.NET MVC Applications' which, whilst interesting, wasn't quite what I was expecting; again, see my post on it for more details.

The HTML 5 session was really good, giving me a very good understanding of what "HTML 5" is and some of its capabilities, but more importantly that whilst we're in a crossover period where HTML5 and HTML 4.01 co-exist, additional work will be required to give users a seamless experience whatever technology they are using.


WCF

I attended a couple of sessions on WCF RIA Services and WCF Data Services, both newish additions to the WCF family, and it was interesting to see the differences, with RIA Services effectively staying SOAP-based and Data Services moving to a REST-based model.

Whilst Data Services looks fantastic, I have seen people I know struggle to use it in quite the way that is promised. Until I have a go at it myself I can't say whether it is hard to build more than trivial applications; if I get to look at it I'll blog about it.


Silverlight/WPF

The majority of the sessions I attended ended up being about Silverlight and/or WPF. This wasn't explicitly by design, but after all the furore over whether it was dead or not, I wanted to find out what the future held for Silverlight.

I think, from the various sessions I attended and the people I spoke to, that the original idea of Silverlight as the cross-platform technology from Microsoft is no longer the case, but it is far from dead: it has a future in the creation of LOB applications for the enterprise and as 'islands of richness' in general web sites, much like the way Flash is used now.


Parallelism

There were a lot of sessions on parallel programming, which is not surprising given that .NET 4 has introduced the Task Parallel Library and the new CTP for the asynchrony functionality.

Due to various scheduling issues I only managed to attend one session, but it was a brilliant one: less about the actual technology and more about how you can apply it to gain the performance benefits it promises.


I found this conference interesting in that my team got to decide a fair number of the sessions I went to, rather than me simply picking the ones I might otherwise have attended.

I also prioritised these sessions because we are about to start Silverlight development at work, and I wanted to know as much as I could about the subject and ensure that we were making the right decision to do the development in this technology.

I think the conference was good but could perhaps do with a little more developer-focused content; in the past an entire week was given over to just developers.

Saturday, 13 November 2010

Tech Ed – Day 4

Ok, so the last day of the conference, and it was only a half day with 3 sessions. As expected, I had started to understand where everything was, so I didn't have to walk so much.

So the sessions that I attended today were:

  • Your Questions on MVVM Answered
  • Light up on Windows 7 with WPF and Silverlight
  • WCF Data Services A Practical Deep Dive

Your Questions on MVVM Answered – Laurent Bugnion

So the first session of the day was an interactive session all about the MVVM pattern. Laurent commented that he expected nobody to turn up, as it was the last day of the conference and the first session of the day, but the room was full of people wanting their questions answered.

Before taking questions, Laurent quickly restated the MVC pattern, then MVC with passive view, before describing the MVVM pattern and the advantages of using it.

Then he started taking questions, which ranged from better control over a drag 'n' drop data source to whether you should include code in the code-behind. Laurent answered all the questions and, where possible, showed code demos of the problem and solution so you could better grasp what scenarios you might come across and how to solve them.

The session seemed to be over all too quickly, but I hope I'll be able to sidestep some common issues when developing with MVVM.

Light up on Windows 7 with WPF and Silverlight – Pete Brown

So if you've been following my blog posts you'll have noticed that I've attended several sessions with Pete; this wasn't by design, rather Pete just happened to be presenting interesting topics in the various timeslots.

So this session focused on the extra things you can do when developing on Windows 7 and the extra UI features available.

From the Silverlight perspective it was all about 'Out Of Browser' (OOB) applications and what OOB offers you. To get the most out of OOB you really need it to run with elevated privileges, because that gives you access to more desktop functionality, such as the speech API and automating MS Office.

WPF has a lot more functionality, generally revolving around the task bar and what's available on the task bar item, such as showing a progress bar on the application's task bar icon, adding jump list functionality, etc.

Pete finished off the session by showing how easy it is to connect to and work with external sensor devices, like a GPS or a joystick, when you use .NET 4.0 and WPF.

WCF Data Services A Practical Deep Dive – Mario Szpuszta

The last session of the conference was all about WCF Data Services and OData, and it was obvious from the beginning that Mario knew his subject matter.

He began with a quick overview and demo of OData and how you can use it; with this done, he built his own OData service and a WP7 client, and showed the phone consuming the data.

With the WP7 client completed, he moved on to creating a console application that would allow not only consuming data but also updating and inserting it. Unfortunately he ran into a couple of problems, but it was interesting to see how he solved them, including using Fiddler to debug the service and the messages being passed around.

Unfortunately Mario didn't manage to cover everything he wanted to, but he promised to make all the slides and code available for download so we could look at them in more detail.


So today was short and sweet, and the majority of it revolved around Silverlight, WPF and MVVM. I found all the sessions useful and have definitely taken away things to look at and explore in the future.

As per the other days I have more extensive notes for my team but if you want them drop me a line and I’ll sort them out for you.

Friday, 12 November 2010

Tech Ed – Day 3

So I'm starting to find my way around a bit, which is typical when you've only got half a day left at the conference.

Today I almost managed to go to all the talks I originally planned for; I changed one at the last minute after checking the content. The sessions I attended were:

  • When working software is not enough: Death of an Agile Project
  • Architecture in Agile
  • Silverlight experts panel
  • Deep Dive into “HTML5”
  • Design Considerations for ASP.NET MVC Application

When working software is not enough: Death of an Agile Project – Mitch Lacey

Despite its title, this presentation wasn't really about agile at all. Mitch used a project he had worked on as a case study in how producing working software using agile can often be the least of your worries.

What Mitch highlighted is that no matter how good you are at developing software in an agile manner, when it comes to statements of work, contracts, etc., things aren't that easy, and if you aren't dealing with the right people in the client organisation, or worse still don't know everybody who is actually involved, you will run into a lot of issues.

The main thing I took away from this presentation was that if the customer wants a fixed schedule, fixed cost and fixed features, create a contract for doing just that, even if you know it's going to change, and then charge the customer for every single change they want to make. Also, if you estimate it's going to cost £1 million and the customer only wants to pay half that, walk away unless you have a really good reason to foot the bill for the other half of the work (Mitch said the company he was with absorbed the extra cost as they had another £5 million contract with the customer about to start).

Architecture in Agile – Mitch Lacey

So I went from one session with Mitch to another, and I was looking forward to hearing his thoughts on how architecture and architects fit into an agile environment.

I was a little disappointed in this session, as the majority of it seemed more like a sales pitch to architects who were perhaps not currently working in an agile environment. If I'd had no agile experience I believe it would have been a good introduction to agile principles and practices, with Mitch explaining how you slice the system, create throwaway code to get it working, etc.

For me, Mitch only covered what I had originally attended the session for right at the end, explaining how he saw an architect working in an agile team and utilising emergent design/architecture, working out any architecture needed in a just-in-time fashion.

One really good anecdote concerned an experienced architect who worked on a project with Mitch. After the project was successfully completed he looked at the architecture that had emerged and said it was a really elegant design, and that even with his 20 years of experience he didn't think he could have come up with something as elegant working the way he normally worked, i.e. big up-front design. This shows that simply having x years of experience isn't always what you need, or even helpful.

Silverlight experts panel – Gill Cleeren, Laurent Bugnion, Pete Brown

This was an interactive question and answer session with the presenters fielding questions that were asked by the audience.

I asked the first question, which was 'is Silverlight dead?', and felt I got a corporate answer, being told it's not dead and that Silverlight 5 is on the way, rather than the discussion I had hoped for about where Silverlight sits currently and where they see it in the future.

I won't relate all the questions and answers here, as there were about 20 of them, some with answers from all 3 presenters. During the course of the session I sort of got the answers I was looking for, so to summarise what I believe Silverlight is now being used for and where it's going in the future:

  • Silverlight's main focus is most likely to be internal line of business applications in the enterprise providing quicker development and ease of deployment and maintenance.
  • It may be used on the internet, but only to provide specific functionality, much the way Flash tends to be used now, e.g. playing video
  • Silverlight is intended to create web based applications not general sites for people to browse.
  • HTML 5, when it comes, will be the technology for creating sites/applications outside the enterprise, but you are still likely to find Silverlight fulfilling the web application niche where you need rich functionality.

Deep Dive into “HTML5” – Giorgio Sardo

This was a very well-timed session, coming after the Silverlight experts panel, as up to now I had not looked into HTML5 at all and so was very interested to hear all about it.

So HTML5 is the future of the web; well, most of us knew that already, but what I wasn't aware of is that what people are calling HTML5 is in fact a collection of 80-90 specifications that the W3C is looking at, a lot of them at different stages on the path to recommendation: some are only at the draft stage (the beginning) and others are at recommendation already, e.g. the next version of JavaScript.

Giorgio then went on to demonstrate the video and canvas elements, as well as CSS 3 and SVG 1.1, showing us some of the power that will be available to us in the future. He also mentioned that there is currently very little tooling available, although there is a download for VS2010 which could help you now, as well as some other libraries and utilities.

The one thing I did take away from this presentation is the amount of complexity web developers/designers will face when trying to use these new features whilst still making sites work with browsers that only support HTML 4.01, something there doesn't seem to be much discussion about.

Design Considerations for ASP.NET MVC Application – Dino Esposito

Based on the abstract for this session, I was expecting to hear about, and see demos of, how best to structure ASP.NET MVC applications to make the best use of the technology.

However, the session started with Dino talking about SOLID principles and how, at an abstract level, an ASP.NET MVC controller action method is no different from an ASP.NET WebForms event handler, with the only exception on closer inspection being ease of testing.

The talk then continued along the lines of how best to meet the Single Responsibility Principle in your controllers, what a service layer is, and the different ways to access data in the view (ViewData, Model & dynamic, in case you were wondering), with strongly typed views using a Model being the best approach for any reasonably sized project.

With the majority of the session gone, Dino briefly covered dependency injection via implementing your own IControllerFactory, before moving on to ASP.NET MVC's aspect orientation implementation via ActionFilters and how you could dynamically inject the filters at run time to introduce new aspects without having to recompile and deploy the application, which he then demonstrated using a framework he has created that does just this.


I found today a mixed bag, with the first couple of sessions interesting enough but not really telling me anything new.

The next 2 sessions helped me fill gaps in my knowledge, so I now have a far better idea of how and where to use Silverlight and the real impact HTML5 will have on developers.

The final session was a little disappointing, although it was interesting to see dynamic injection of cross-cutting concerns; I'd probably need more information about what problem this functionality actually solves before attempting to use it.

As per the other days I have more extensive notes for my team but if you want them drop me a line and I’ll sort them out for you.

Thursday, 11 November 2010

Tech Ed–Day 2

So, day 2 at Tech Ed: still not quite managing to find my way around, and getting sore feet from all the walking.

Due to some talks being far more popular than the organisers anticipated, I only managed to get to 4 sessions:

  1. Building Web Sites with Orchard
  2. Building Business Applications with Visual Studio LightSwitch
  3. Patterns for Parallel Programming
  4. Introduction to WCF RIA Services for Silverlight 4 Developers

The sessions that I missed out on were:

  1. How Frameworks Can Kill Your Projects and How Patterns Can Prevent You from Getting Killed
  2. Web Applications in Danger: Attacks Against Web Sites and RIAs

With any luck the sessions may be repeated, or failing that I should be able to view the videos online.

Building Web Sites with Orchard – Bradley Millington

This session was all about a CMS called Orchard that has been built on top of ASP.NET MVC. It was first announced at Tech Ed last year and is currently pre-release software with an expected release date of January 2011.

Orchard is built around the concepts of layers, zones & widgets, which gives the system a lot of flexibility in how pages are laid out and how/what content appears in them.

The team building Orchard hope to build a community around it, much as Umbraco has done, and have already created gallery functionality that will allow people to upload modules that extend Orchard's functionality.

Orchard has been designed to be localised from the ground up, so not only can you localise the admin UI, you can also add translations for the content. It currently doesn't have any functionality to detect a user's culture settings and automatically use the appropriate translations, but you can do that yourself, and hopefully it will be in the RTM.

Just as with Umbraco, you can use Live Writer as an interface to allow business users to enter content into the system.

Building Business Applications with Visual Studio LightSwitch

LightSwitch is the new product from Microsoft in the Visual Studio family, aimed primarily at business users who are currently using Excel & Access to create applications themselves rather than asking developers to do so.

LightSwitch has generated a lot of bad feeling in the developer community, as it is generally not seen as a 'professional' developer's tool. I believe the issue is that it is in the wrong product family: it should belong in the Office family, alongside Excel and Access, the tools these users are already using. If it were in the Office family I believe you would see a lot of the bad feeling disappear, and it may even get some traction from developers in getting business people to use it.

The product itself allows users to select from a data source, or sources, and it will then generate a Silverlight UI over the data. A data source can be a database, SharePoint, an RIA service, etc., so the user has a lot of flexibility over where they select data from and how they link it together. The only issue this could cause is that the user is unaware of the time it may take to query these sources, so performance could be a problem.

The generated screens are simple enough, and the product will allow you to customise them to give user-friendly names, add validation rules, etc., but the user will need to be able to write some VB.NET or C# to accomplish some of these tasks.

LightSwitch also comes with deployment support that lets the user decide how the application will be packaged and deployed: desktop app or browser app, the user decides. My question at this point is how this aligns with an enterprise that centrally controls all software deployed to the desktop.

It is possible to create your own extensions for LightSwitch and have them utilised within the product, and Microsoft seem to be looking to developers to do just this to enhance the whole LightSwitch ecosystem.

Ultimately I don't see this tool being used by developers, but it may gain some traction with the business users it's aimed at. All I can say is: please move it to the Office product family to avoid the tears and tantrums that abound from developers.

Patterns for Parallel Programming - Tiberiu Covaci

This session was not on the original session line up, the presenter for the original session was ill and so Tiberiu (Tibi) was drafted in to fill the gap and he did so admirably.

Tibi set off at a cracking pace and didn't let up the whole way through, explaining to us the benefits of being able to parallelise code and how to go about it.

To help demonstrate the various parallel techniques and discuss the patterns relating to them he used an example of cooking a meal and having code replicate the steps in the recipe, this provided a nice simple frame of reference that I believe the whole audience could relate to.

He then went on to put into practice how you can work out what can be parallelised and how you then implement that in the code using the functionality that is available in the .Net 4.0 framework.

To round off he gave us a load of links to resources to help learn how to do this and also a list of books that would help.

Introduction to WCF RIA Services for Silverlight 4 Developers – Peter Brown

The final session of the day for me was with Peter Brown which, coincidentally, was also the case yesterday.

Today Pete was explaining how you implement the RIA services that have been introduced in Silverlight 4 and what you can do with them.

Pete started off by explaining a little about the new Business Application template in Silverlight 4 and how it fits with the DomainService concept.  He then went on to create a DomainService and have it implemented in the client showing the client retrieving data through the service.

A note of caution was raised by Pete when he said that you don't get good separation of concerns and that testing could be difficult, although he felt that the functionality offered at least balanced out these issues.

Pete then went on to demonstrate how you could paginate, sort and filter the data simply by setting various properties on the Domain datasource that your UI is bound to.

Basic client-side validation is also possible using the DataAnnotations assembly that is also used in MVC: attributes on a metadata class for the entity linked to the DomainService indicate things such as maximum length, although it is not possible to do cross-field validation through this technique.
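As a rough illustration of the metadata-class approach, here is a sketch with a hypothetical Customer entity (the entity and property names are my own, not from the session):

```csharp
using System.ComponentModel.DataAnnotations;

// Hypothetical entity exposed through a DomainService; the MetadataType
// attribute points at a companion class carrying the validation rules,
// which RIA Services also propagates to the Silverlight client.
[MetadataType(typeof(Customer.CustomerMetadata))]
public partial class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }

    internal sealed class CustomerMetadata
    {
        [Required]
        [StringLength(50)]   // max length, enforced client-side too
        public string Name { get; set; }

        [RegularExpression(@".+@.+\..+")]
        public string Email { get; set; }
    }
}
```

Note that each attribute validates a single property in isolation, which is why cross-field rules need a different mechanism.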

Finally Pete showed how you can add authentication simply by decorating the DomainService with attributes and that both Windows and Forms authentication was supported.


Although I wasn’t in sessions all day (nice to have lunch at a more leisurely pace) I found the majority of the sessions interesting and I have enough information for me to perhaps look at some of the things I have seen in more depth.

As per yesterday I have more extensive notes for my team but if you want them drop me a line and I’ll sort them out for you.

Wednesday, 10 November 2010

Tech Ed 10 Berlin–Day 1

Whilst this post is entitled Day 1, it was day 1 for me, not for Tech Ed, which in fact started on Monday with pre-conference workshops and the opening keynote.
I must also mention how confusing the conference centre is; trying to work out exactly where you are is a bit of a challenge, but hopefully this will improve as the week goes on.
The sessions that I attended on the first day were:
  1. A Lap around Web Matrix
  2. The Future of C#
  3. Systematic Approaches to Project Wide Refactoring
  4. Code Like a Pro: Tips, Tricks and Techniques for Writing Killer Silverlight Applications
  5. Building Great Websites Fast using Umbraco, an Open Source ASP.NET CMS
  6. 10 Things Every New Silverlight and WPF Developer Must Know

A Lap around Web Matrix – Bill Staples

This session was the first time I’ve seen a presentation of the Web Matrix product which is primarily aimed at the ‘hobbyist’ programmer and I attended the session to see if the product could be useful to professional developers.
The presentation revolved around a set of demos showing how easy it is to create sites/applications with little or no work using the template systems. When Bill drilled into it a little it was obvious that this application is not suitable for developers creating enterprise LOB applications, but if you are a one-man web shop creating sites I can see that it could be of a lot of use, making you more productive.
A nice feature is the ability to open a Web Matrix project in Visual Studio and take the site further with more tooling support; the only caveat is that you have to have ASP.Net MVC 3 installed.

The Future of C# – Mads Torgersen

This session was partially eclipsed by Anders Hejlsberg's PDC session and the initial announcements he made there. It focused on the same subject: the new asynchrony functionality, which intends to make it easier for developers to create asynchronous code.
After a brief introduction Lucian Wischik took over to show some code, how the new functionality will work and how it makes life easier for you. This was fairly high-level stuff, and Jon Skeet has in fact put together a far more in-depth set of blog articles about the functionality.
What was interesting was a question from a member of the audience about whether the new asynchrony functionality would be available on the existing .Net 4.0 framework, to which the answer was that the CTP will run on it but when it reaches RTM it will be on a new version of the framework (.Net 4.5? .Net 5.0?).

Systematic Approaches to Project Wide Refactoring – Gary Short

Gary has been voted top presenter at many conferences so you know you’re going to see a good session when you go and see him.
Although this was only a short session I found it immensely interesting, as Gary set out a way to work out what to refactor when looking at an application/system in conjunction with Technical Debt (TD).
He outlined different types of TD (Known & Planned, Unknown & Unplanned, etc), combining that with a prioritisation: Urgent, Important and Urgent & Important. Gary then went on to how the business sees the application/system and the tension developers have with the business in relation to getting the business to agree to the refactoring in the first place.

Code Like a Pro: Tips, Tricks and Techniques for Writing Killer Silverlight Applications – Jeff Prosise

This was a really good session with an excellent presenter that focused on some of the more technical things you may want to do with Silverlight, as you would expect from a 400-level session.
Jeff covered subjects ranging from dynamic loading of XAPs, localised resources and self-referential pages (back button support & deep linking), to discretizing work so that the UI appears to have a long-running background thread updating it, and creating custom Behaviours for designers to use in Blend.
Jeff finished up with some small gotchas like culture-agnostic XAML parsing, thread-agnostic libraries and resolving the memory leak issue in Silverlight 4 by making users upgrade to the GDR1 release.
A good session with some interesting tips; I would say that his localisation solution isn't the only way to do it, and Guy Smith-Ferrier has other techniques that may suit your situation better.

Building Great Websites Fast using Umbraco, an Open Source ASP.NET CMS - Niels Hartvig

Niels was the original founder of Umbraco so he was well placed to comment on how you use it and how it has evolved.
Umbraco prides itself on the user having 100% control over how the product works, from the mark-up to the Admin UI, and it has a large user community who develop additional functionality for the main CMS system.
Umbraco is ASP.Net Webforms based, but an ASP.Net MVC version is planned for release hopefully next year. It utilises its own http module and custom server and user controls to render the content, and as you have 100% control you can create your own user controls and add them into the site; so if there is some functionality specific to your enterprise you can easily plug it into your site created with Umbraco.
There is a comprehensive admin UI that allows the creation of sites, adding content, meta data, etc.
Niels also demonstrated how he has created functionality to allow you to use Razor syntax instead of the default XSLT for generation of the mark-up.
One nice thing to note is that Umbraco now supports Live Writer as a content editor, which is good because at a previous job we didn't use Umbraco solely because the business users didn't like the UI and we had no time to build them a nice one.

10 Things Every New Silverlight and WPF Developer Must Know – Pete Brown

This was a good session that all developers who are starting development in Silverlight or WPF should attend or at least have a more experienced developer pull them to one side and go through with them.
The 10 things that Pete listed were:
  1. Hand code XAML
  2. Expression blend
  3. Layout System
  4. Dependency Properties
  5. Asynchronous Programming
  6. Binding
  7. Value Converters
  8. Architectural Pattern
  9. Threading
  10. Our limitations
If you have to ask about the number above you're probably not a developer ;-)
I’ve not listed out all the content under each heading otherwise this long post would be even longer but I’ve got it all stashed away.


A very full day with some nice sessions, I have taken more notes for my team and if you would like them let me know and I’ll let you have a copy but with most of the sessions videoed and slides, etc available online you can probably just watch the content yourself and draw your own conclusions.

Monday, 8 November 2010

Book Review: DSL’s in Action

After my last review Manning Books were kind enough to let me have another book to review, namely DSLs in Action by Debasish Ghosh.

When I started reading I discovered the book focuses on languages on the Java Virtual Machine (JVM), specifically Java, Ruby, Groovy, Clojure and Scala. I freely admit that I have no experience in any of these languages, so I would not only be learning what Debasish has to say about DSLs, their use and creation, but also getting an introduction to the languages mentioned.

The book is split into 3 parts over 9 chapters, with additional appendices. All the examples in the book are based on an equities trading domain (stocks & shares), and throughout the book Debasish provides the reader with information about the domain to help you grasp what the DSLs being discussed are trying to achieve and how different languages can implement the various concepts.

I did run into one problem quite quickly: the electronic copy of the book I had to review did not contain any appendices, and Debasish mentions that he has put a lot of the basic concepts he will be discussing in appendix 1; in fact he says that after chapter 1 you should read appendix 1 before continuing.

Part 1: Getting Started with Domain Specific Languages

Part one has 3 chapters which aim to introduce the reader to the world of DSLs: what they are, how they help, existing DSLs you may be unaware of (SQL is perhaps the best known) and the different types of DSL, internal and external. With this groundwork laid, Debasish finishes off part 1 by covering integration patterns for both internal and external DSLs. You will find that this part is light on code, dealing with the theory and background to DSLs rather than focusing on how you create them in the language of your choice.

Part 2: Implementing Domain Specific Languages

Part two consists of 5 chapters and is all about implementing DSLs with Debasish stating at the start of chapter 4:

Every architect has his own toolbox that contains recipes of designing beautiful artifacts. In this chapter we will build our own toolbox for architectural patterns that you can use for implementing DSLs.

He then starts with patterns for implementing internal DSLs before moving on to show how internal DSLs can be created in Ruby, Groovy, Clojure, and Scala; then Debasish moves on to creation of external DSLs with tools such as ANTLR and XText as well as how to build an external DSL in Scala.  This part is heavy on code with all five chapters revolving around showing the reader how to build a part of a DSL for the example domain (stocks & shares trading) in each of the languages and external tools.

Part 3: Going Forward

In part three Debasish takes a look at the future of DSLs: growing language support, DSL workbenches and the evolution of DSLs. This chapter has very little code as Debasish is trying to show the reader what he believes will happen with DSLs in the future and the sort of support you should expect to see in languages and tools to help you create them.


It would seem that Debasish is a fan of Scala, having dedicated 2 chapters in the book to explaining how to create internal and external DSLs with the language; having not developed with Scala I cannot comment on whether the focus on the language is justified, and can only bow to what I must assume is Debasish's greater knowledge in this matter.

You will find, though, that as you read through the book there are more and more references to material in the appendices which Debasish recommends you read; so unless you have a good working knowledge of DSLs and of all the languages mentioned, expect to be flicking backwards and forwards to ensure you understand all the material in the book.

DSLs in Action is a good book for anybody interested in DSLs, especially if you are developing on the JVM, and even more so if you develop in Scala; it should provide you with a good grounding in deciding the type of DSL to implement and the tools and/or skills to do so.

Many thanks to Manning books and Debasish Ghosh for allowing me to review this book.

Tuesday, 19 October 2010

Book review: C# in depth second edition

As the title of this post suggests, this is the second edition of the popular C# in Depth by the legend that is Jon Skeet, published by Manning Books. I never managed to read the first edition, so this review is from the perspective of a first-time reader rather than any sort of comparison.

The book is split into 4 parts plus appendices:

  • Part 1 - Preparing for the journey
  • Part 2 – C#2: solving the issues of C#1
  • Part 3 - C# 3: revolutionizing how we code
  • Part 4 - C# 4: playing nicely with others

This isn't a short book at 493 pages of core content and a further 30+ pages of appendices; I've read longer books, but with the content being all technical it may well feel longer due to the detail.

Part 1- Preparing for the journey

This part is made up of 2 chapters, the first of which takes a breathtaking dash through various language features and shows how the language has evolved from version 1 through to version 4. Jon states that his intention is not to explain everything he mentions at this point, but rather to give a flavour of how the language has changed over time.

Chapter 2 then takes us into delegates and the type system, spending time illuminating how reference and value types really work.

Part 2 – C#2: solving the issues of C#1

With all the changes that have been made to C# over the years, it isn't until you look back at C# 1 that you remember the pain we developers put up with and what a major change C# 2 was when it arrived.

The 5 chapters in this part cover topics ranging from the heavy weight changes such as generics and variance (co & contra) to the more lightweight like separate getter/setter properties and static classes.

It is also in this part that Jon first explains covariance and contravariance, which seems to be a subject that a lot of people (including myself) have struggled with. Once Jon has explained it you may well say what I did: "oh, I know about that, I just didn't know it was called co and contra variance".

Part 3 - C# 3: revolutionizing how we code

So we get to part 3 and in the evolution of C# we reach version 3 and as Jon states in the introduction:

“Almost every feature in C# 3 enables one specific technology: LINQ.”

C# 2 brought many different improvements but they weren't focused on any one thing, whereas C# 3 had many improvements which can be used independently but together are more than the sum of their parts.

Part 3 consists of 5 chapters just like part 2 with each chapter introducing a different feature and then showing how it builds on the previous to help provide the LINQ functionality.

The main focus is LINQ to Objects, but in the final chapter of this part, chapter 12, Jon explores subjects such as LINQ to SQL, LINQ to XML, reactive extensions, etc. He states he is not attempting to give in-depth coverage of these subjects, but rather to give the reader an idea of how far LINQ permeates and some of the things that you can do with it.
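The "sum of the parts" point is easy to see in a single query expression, where extension methods, lambda expressions and type inference all combine under the hood. A minimal LINQ to Objects sketch (my own example, not one from the book):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class LinqDemo
{
    // A query expression: the compiler translates it into calls to the
    // Where/OrderBy/Select extension methods, each taking a lambda.
    public static List<string> LongWords(IEnumerable<string> words)
    {
        var query = from w in words
                    where w.Length > 4
                    orderby w
                    select w;
        return query.ToList();
    }
}
```

For example, `LinqDemo.LongWords(new[] { "linq", "lambda", "delegate" })` filters out "linq" (only 4 characters) and returns "delegate" then "lambda" in alphabetical order.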

Part 4 - C# 4: playing nicely with others

By the time we reach part 4 we’ve covered, or at least mentioned, all the functionality that is available in C# 3 in a fair amount of depth and we move on to the new aspects of the language introduced with C# 4.

Part 4 has 4 chapters, with the main content in the first 2; the second 2 are more speculative, but I'll come back to that in a bit.

Chapter 13 covers a fair number of changes that have been made to the language, such as named & optional parameters, generic variance & COM interoperability. Again Jon shows the thought behind each of the changes and how they are useful separately but together build to provide a larger set of functionality; an example of this shows Word automation in C# 3 and the equivalent code in C# 4, which takes half the code when utilising C# 4 functionality such as named parameters and embedded COM primary interop assemblies.
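To give a flavour of the named & optional parameter syntax, here is a made-up method (not the book's Word automation example):

```csharp
public static class SaveDemo
{
    // Optional parameters carry default values, so callers no longer need
    // to pass every argument (or the Type.Missing noise of C# 3 COM code).
    public static string SaveAs(string fileName,
                                bool readOnly = false,
                                string format = "docx")
    {
        return string.Format("{0}.{1} (readOnly={2})", fileName, format, readOnly);
    }
}

// Callers can skip arguments and name just the ones they care about:
//   SaveDemo.SaveAs("report");                 // "report.docx (readOnly=False)"
//   SaveDemo.SaveAs("report", format: "pdf");  // "report.pdf (readOnly=False)"
```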

Jon originally introduced variance to us back in 'Part 2 – C#2: solving the issues of C#1', explaining the support C# already had for it. He now revisits variance from a generic perspective, explaining how co & contra variance has been added for generic interfaces and delegates to make them easier to use; for example, a method expecting a sequence of IShape can now be passed a sequence of any type implementing IShape, which previously you wouldn't have been able to do.
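A small sketch of generic covariance as I understand it from the chapter (the types are my own, not Jon's):

```csharp
using System;
using System.Collections.Generic;

public interface IShape { double Area { get; } }

public class Circle : IShape
{
    public double Radius { get; set; }
    public double Area { get { return Math.PI * Radius * Radius; } }
}

public static class VarianceDemo
{
    // IEnumerable<out T> is declared covariant in C# 4, so a sequence of
    // Circles can be passed where a sequence of IShapes is expected.
    public static double TotalArea(IEnumerable<IShape> shapes)
    {
        double total = 0;
        foreach (var s in shapes) total += s.Area;
        return total;
    }
}

// var circles = new List<Circle> { new Circle { Radius = 1 } };
// VarianceDemo.TotalArea(circles); // compiles in C# 4, an error in C# 3
```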

Chapter 14 then moves on to what could easily be considered the heavyweight change of C# 4: the dynamic keyword. Jon gives an overview of what dynamic typing is and how it is implemented within the framework through the dynamic keyword, then lifts the curtain to expose some of the inner workings with ExpandoObject, DynamicObject and IDynamicMetaObjectProvider. He also shows how you can utilise the Dynamic Language Runtime, with some examples of using IronPython within a C# application to potentially provide end users with a way to modify the behaviour of an application through scripting.
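A minimal sketch of dynamic and ExpandoObject together (my own example):

```csharp
using System;
using System.Dynamic;

public static class DynamicDemo
{
    // Members are added to the ExpandoObject at runtime; the dynamic
    // keyword defers member lookup from compile time to execution time.
    public static string BuildGreeting()
    {
        dynamic person = new ExpandoObject();
        person.Name = "Jon";
        person.Greet = (Func<string>)(() => "Hello " + person.Name);
        return person.Greet(); // resolved by the DLR at runtime
    }
}
```

Neither `Name` nor `Greet` exists at compile time; the compiler simply emits call sites that the DLR resolves when `BuildGreeting` runs, returning "Hello Jon".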

As I mentioned above the final 2 chapters are a bit speculative covering subjects that are either brand new now or may come about in the near future.

In chapter 15 Jon discusses Code Contracts and his belief that in the next few years they will become part of normal development, and that people will look back on code written before contracts and wonder how we managed without them. Code contracts are a way to explicitly state the behaviour of your code, effectively guaranteeing what it will or won't return when you call it, with the option in the Premium & Ultimate editions to enable static checking when building your code, catching errors before you have even written a test for them.
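A sketch of what such a contract looks like (this assumes the Code Contracts tools are installed; the method is my own example, not Jon's):

```csharp
using System.Diagnostics.Contracts;

public static class Calculator
{
    // Requires states what the method demands of its callers; Ensures
    // states what it guarantees in return. The static checker can verify
    // both at build time, and the binary rewriter can enforce them at runtime.
    public static int Divide(int dividend, int divisor)
    {
        Contract.Requires(divisor != 0);
        Contract.Ensures(Contract.Result<int>() == dividend / divisor);
        return dividend / divisor;
    }
}
```

Without the rewriter the contract calls are no-ops, so the declarations are safe to leave in code that is built without the tools.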

Chapter 16 is the final chapter and in the introduction Jon says:

Rather than leave you with an abrupt context switch from Code Contracts to the appendixes, I wanted to wind down with a few thoughts about how far we’ve come and where we might be going

So rather than cover subjects with his normal technical depth Jon sums up the journey we’ve taken and does a little crystal ball gazing to give us the benefit of his experience as to what he believes is likely to be important in the near future.


Once you've finished the 16 chapters, if that isn't enough, you can keep going through the 3 appendices:

  • Appendix A covers LINQ standard query operators explaining what to use the operators for and providing examples of each.
  • Appendix B is all about generic collections in .Net listing the various collections and providing additional information and insight from Jon into the collection and its use.
  • Appendix C provides summaries of the different versions of the .Net framework, the C# language, the CLR and some related frameworks such as the compact framework.


I will admit that I wish I had read the first edition of this book 3 years ago; I am probably one of the "alpha geeks" that Jon mentions in his summary, as I love to 'look under the covers' to learn a bit more about how things work, even if I may not get to use all of that knowledge.

It is interesting to see the evolution of C# through its various versions, and it is only when it's all brought together that you can see just how far and fast it has evolved. One thing that does stand out is how important delegates are within .Net and how fundamental they are to LINQ and, most likely, future developments within the language.

Although Jon states that this book is aimed at a level suitable for junior developers, I believe that even more experienced developers will gain something from reading it, even if it's just Jon's insight into some of the language features and their uses.

So would I recommend you buy this book? Yes, I believe you will find it invaluable.


My thanks to Manning books for allowing me to review this book and of course to Jon Skeet for updating his book, I’m already looking forward to reviewing the 3rd edition covering C# 5.

Saturday, 9 October 2010

Developer! Developer! Developer! Ireland 10

So I had a great time presenting at DDDIE10 today and as a bonus it was recorded and will be up on channel 9 at some point in the near future.

As I promised the slides are available here.  If you attended my presentation at DDD SW in June you’ll find that I’ve changed the slides (and slide order) slightly to try and make it more understandable.

If you view the slides online the links on the resource page won’t work but if you download it then you should be able to get to the links.

As I said in my presentation if you want any more information feel free to get in touch.

Thursday, 16 September 2010

The manager's role within agile – part 2: the agile manager

In my first post I outlined what a manager does in a traditional environment and how agile processes change that. Now I'm going to outline how the role has changed and the responsibilities a manager has once an agile process has been implemented.

What do I do?

When an organisation begins to implement an agile process, it is most often the managers who worry most about what their role will be once the new roles and responsibilities have been explained; this may be exacerbated by the organisation getting excited about the self-managing team and saying that there is no need for managers.

The actual role a manager plays will very much depend upon the organisation that they work for and the amount of agile adoption that the organisation wishes to achieve.  Frequently an organisation will simply replace existing project management processes with agile processes but as I mentioned in the previous post this isn’t all a manager does.

Just to recap, the responsibilities that I identified a manager as having, and the Scrum roles that take on those responsibilities, are:

  1. Interfacing between the team and the rest of the organisation – Team
  2. Managing work available for the team – Product Owner with Customer
  3. Overcome issues stopping the team from working (impediments) - ScrumMaster
  4. Handling team administration (KPIs etc) – ScrumMaster
  5. Technical expert - Team
  6. People management - ?

So in a new agile environment we can say with a fair amount of confidence that in most organisations the manager will still have responsibility for people management; the exact nature of this may change slightly but it is likely to remain with the manager.

There are some organisations that do give people management to the team so that it is completely self-managing, but I believe it depends on the organisation: most larger organisations are set up with HR departments/processes that have to be followed outside of the processes for work. An exception to this is somewhere like Google, where annual reviews have been done away with completely.

The Facilitator

People management is not the only responsibility the manager will have; within agile a manager's role is best described as a facilitator, helping the organisation, but more importantly the team, adopt and use agile processes and practices.

Although the manager may have had a lot of their previous responsibilities moved to other roles, the task of helping adopt agile is not a small one. Henrik Kniberg, in his presentation The Manager's Role in Scrum, says that the manager can be 'the best catalyst or the worst impediment', showing their importance in the adoption of agile practices.

As a facilitator the manager should be focused on helping the team get the work done, doing whatever is needed to allow them to complete the work. In my earlier post Scrum Pain Points & Resolution - Management I listed some of the tasks a manager may undertake as a facilitator, some of which I'm going to expand on below, plus a couple of extra behaviours.


In larger organisations where steering committees et al are the norm then the manager may be the person that will be communicating with these committees or simply senior management.

It is vitally important that a manager in this position is engaged and believes in the process of adopting agile practices; if the manager is not engaged it will be apparent to the people the manager talks to and can lead to difficulties for the team, as the agile practices may well be questioned.

The manager needs to be an evangelist, going to the rest of the organisation and telling them how good agile is and why they should be doing it. This isn't spin, this is just explaining how the system works and showing how the team is delivering as a result.


To work successfully in an agile process the manager needs to take a leaf out of the ScrumMaster's book and become a servant-leader to the team. This can be a very big hurdle for a lot of managers who have been used to Command & Control (C2), as suddenly they aren't the one making the decisions.

I believe, more than any other reason, that it is managers who have difficulty doing this that cause the most problems for a team trying to move to agile working.

This can of course be exacerbated by more senior management continuing to work in the way they always have, C2, leaving the manager stuck in the middle. A succinct example of this can be found in Manager 2.0: The Role of the Manager in Scrum, with the tale of Francis, a manager who tries to be agile but, under pressure from his manager Simon, reverts to C2 practices.


There are frequently problems in agile adoptions simply because a manager will not 'let go' and trust the team to do what is needed; depending upon the type of manager this can either be the hardest thing in the world or business as normal. Even in C2-type organisations you have managers that trust their team and leave them to be fairly autonomous, and managers that don't trust their team and micromanage them.

An agile team is built on trust: team members trust one another to deliver work to the standard expected, the team trusts the Product Owner to ensure the backlog is prioritised correctly, etc, and the manager needs to trust that the team is doing the right thing; this links back to taking up a servant-leader role.

If the trust is lost, for whatever reason, it can take a lot of time (and effort) to rebuild. The biggest problem with loss of trust is that it can be the start of a slippery slope back towards C2, and you will most likely see the morale of the team drop as you head down that slope.

Trust that the team will do what is necessary; yes, they will make mistakes, but through the retrospective process you should see them address the failures and come out with actions to prevent them in the future.

Sara Ford, a program manager at Microsoft, has a very good blog post around this subject: How I Learned to Program Manage an Agile Team after 6 years of Waterfall.


For a fledgling agile team it is crucial that the manager supports them; the team may have no experience with agile processes and practices and will therefore need the manager's help while they get to grips with what they need to do. If somebody questions the way work is now being done the manager must defend the team; if they don't, the agile adoption could be in jeopardy, as it is likely the team will be told how to do the work, most likely reverting to old non-agile practices (e.g. TDD is taking too long, so stop doing it and test once you've completed the development).

For a more experienced team that has been doing agile for a while and is comfortable with the practices and processes, the support will be different: it is then all about helping the ScrumMaster remove impediments, working with the Product Owner(s) on their backlogs, etc. This is still very important to the team as it frees them from resolving issues that take time away from their primary activity, which should be developing software.


Over the two posts I've outlined the differences between the traditional manager's role and the manager's role within an agile process, and specifically in this post I've elaborated on some of what I see as the key tasks and behaviours of a manager working with an agile team.

Hopefully you can see that a manager is very far from redundant in an agile process, but at the same time the role is completely different from a traditional management environment, and a manager needs to change their behaviour to be able to help the team succeed.


I've not managed to find many online articles specifically about managers in agile, but I recommend looking at The Manager's Role in Scrum by Henrik Kniberg and The Agile Manager by Roman Pichler, both of which are slide decks from presentations they have given on this subject. The one article I have found, referenced in many different places, is Manager 2.0: The Role of the Manager in Scrum by Pete Deemer; I mentioned this in the section on servant-leadership but linked to the InfoQ article rather than the PDF, as this second link does.

Wednesday, 15 September 2010

The manager's role within agile – part 1: along came an agile process

When an organisation attempts to adopt agile practices, with their focus on team responsibility and self-management, a question you normally hear is: so what does a manager do in the agile world?

Just recently this seems to have been a hot topic with many blog posts and articles published on the web about this very subject so I thought it worthwhile to put my thoughts down.

Before we had agile

It is worthwhile taking the time to look back to before we had agile concepts/processes and to remind ourselves what a manager has traditionally done so that we can better understand their role in today's agile environment.

So let's look at what a manager would be expected to have responsibility for:

  1. Interfacing between the team and the rest of the organisation
  2. Managing work available for the team
  3. Overcoming issues that stop the team from working (impediments)
  4. Handling team administration (KPIs, etc.)
  5. Technical expert
  6. People management

We can see that the manager's role is multi-faceted: one moment they are handling who does what work, then ensuring KPIs and statistics are up to date, then meeting with the business to discuss forthcoming work for the team, then finding the resolution to a technical problem the team has, and finally dealing with any people issues (poor performance, frustration at lack of career progress, etc.).

Another key point to remember is that in the traditional environment the focus is on the manager to deliver, not the team; it is the manager who will be put directly under pressure and, depending on the organisation, they may or may not be able to alter deadlines/milestones on a project, or what is to be delivered, if struggling to keep to a project schedule.

When you look at it like this, it is a lot for any one person to take on, especially if the manager hasn't been given any specific training in how to manage, not forgetting that they have most likely been promoted from a technical role that had nothing to do with management.

Given all of the above you can see why the Command & Control management style is popular: the person doing the managing needs to be able to tell the people they have responsibility for/authority over what needs to be done, so that they themselves can move on to the next task they need to accomplish.

Then came agile

Agile processes such as Scrum, XP, DSDM, etc. came about largely in reaction to the way that software development was traditionally managed.

The main difference between an agile process and a traditional one is the shift of focus from the manager to the team: it becomes the team that is responsible for delivering the work, but it is also the team that decides what work it is able to take on (pull-based working). This in itself is a huge leap forward, as the team is looking to create sustainable delivery to enable better planning and scheduling of work.

In addition to this, new roles appeared to do the work that had previously been handled only by the manager, spreading the responsibility amongst more people.  Some people at this point say 'but surely that would cause chaos, no one person is aware of everything that is happening', but in the agile world lots of people know what is happening: the team, the ScrumMaster, the Product Owner, and in fact anybody who wants to know what's going on should simply be able to look at the team board.

This reliance on a single person is one of the fundamental weaknesses of traditional management, in extreme cases creating a single point of failure where, if the manager is not around or available, the team may not be able to start or continue work because they need a decision from the manager first.

So what roles have been added? Taking Scrum as an example, we have 3 new roles:

  1. Customer – the person that ultimately wants the work
  2. Product Owner – the person that liaises with the Customer to determine what is required and then is responsible for working with the team to schedule the work to be done.
  3. ScrumMaster – the person that looks after the process for getting work done, handles process admin for the team such as KPIs like velocity.

These roles separate and specialise responsibilities always with the aim of supporting the team to ensure that the work is available for the team to pull in.

If we go back to our list of manager responsibilities, we can see which role within agile handles each particular responsibility:

  1. Interfacing between the team and the rest of the organisation – Team
  2. Managing work available for the team – Product Owner with Customer
  3. Overcoming issues that stop the team from working (impediments) - ScrumMaster
  4. Handling team administration (KPIs etc) – ScrumMaster
  5. Technical expert - Team
  6. People management - ?

These are fairly sweeping generalisations, but they illustrate that in agile the load has been spread across different people, making things more robust (no single point of failure) and allowing each person to focus on their own responsibility: helping to 'feed' the team with work and to ensure that the team has sufficient work to do.

From our list the only item we haven't attributed to a new role is the people management aspect of the manager's role, which for the most part agile processes haven't touched, as they are more concerned with getting the work done.

Next time…

In this post I’ve outlined what a manager has traditionally done and how agile processes have affected the role, in the next post I’ll go into what being a manager entails in an agile environment.

Tuesday, 31 August 2010

Speaker Training

Last Thursday I was at Microsoft Reading TVP attending a course on how to improve your skills as a speaker.  This was a free course sponsored by SQL Bits, run primarily so that speakers at the forthcoming SQL Bits 7 could learn how to prepare and deliver good-quality presentations.  As not all the available spaces were taken by SQL Bits presenters, the remaining spaces were open to anybody in the community, which is how I managed to get on the course.

The 'host' for the day was Guy Smith-Ferrier who, with 20 years' experience of giving presentations to audiences big and small, was ideally qualified to help us improve our skills. Guy was ably assisted by 4 'group leaders': Mike Taulty, Dave McMahon, Simon Sabin and Andrew Fryer, all very experienced presenters in their own right.

To help reinforce the information we were going to receive, we had to come prepared with a 5-minute presentation, which we would give first at the beginning of the day and then a second time at the end, to enable us to put some of what we had learnt into practice.

The day started with coffee at 9:00, which was greatly appreciated; it also gave time for people to arrive, as the traffic was really heavy and some attendees were delayed by it.

We started the presentations at 9:30 with 'How to explain absolutely anything', which not only covered how to decompose your subject but also structuring the presentation and the difference between demo and production code.  One of the things that I found especially useful was a tip from Mike Taulty about tailoring your content to the amount of time you have available – if you only have 5 minutes, what would you want to tell/show an audience? What if you had 10 minutes?

With the first presentation over we broke out into 5 groups of 4 people and gave our own presentations.  The intention was to give the 5-minute presentation and then have 5 minutes for the group to provide feedback. The time allotted for this session was very tight and unfortunately we were not always able to keep to the schedule: the time between presentations for people to set up their laptops (mine didn't want to behave and I must have burnt 5 minutes just trying to get it to play nicely) and feedback lasting longer than anticipated made my group in particular overrun.

After our session we were due to break for coffee before moving on to the next presentation, given by Guy, but as I mentioned my group overran and we ended up missing the coffee altogether, unfortunately delaying the start of the next presentation.

'Planning your Presentation' came next, telling us about the standard slides we should look to include in a presentation, the volume of text, how best to construct your slides, strategies for your slide deck and thinking about text vs pictures in your slides.  The tip that sticks with me from this presentation is 'tell 'em, tell 'em and tell 'em again', which several of the group leaders mentioned more than once in discussions.

We then moved straight onto 'How to give great demos', beginning with considering whether you should 'start at the end' to show what you'll create during the talk, then covering various strategies for the type of demo (live, using snippets, canned), what to do if you cannot perform the demo live, demo dos and don'ts, and finally the difference between understanding and remembering.

We then broke for lunch, which gave us a chance to talk to other people about what we had covered and some, such as myself, a chance to go through our presentations and change them before giving them again in the afternoon (breaking a cardinal rule of presenting – never change your presentation just before you are about to give it).

The afternoon kicked off with 'Preparing your laptop', which revolved around ensuring that your laptop display would be optimal for the people attending the presentation, covering subjects ranging from changing your resolution and DPI to the fonts and colours that you use.  The general advice here from the experienced speakers was to alter your laptop to the modified settings as soon as you know you are going to be doing a presentation, and get used to using it.

Again we moved straight into the next presentation, which was focused on 'Presenting your presentation': Guy took us through what to do 15 minutes before your presentation, introducing the presentation, handling questions (to and from the audience), the use of humour and ending the presentation.

This was the end of the formal presentations; we broke for coffee and then proceeded to give our own presentations for the second time. This time we broke into different groups under different group leaders, so that each group would be seeing the presentations 'fresh'.

After we had completed the presentations everybody reconvened to provide group feedback on what we had learned from giving and watching the presentations.

Then there were the normal eval forms to fill out, but quite unexpectedly there was swag as well.

During the day the only resource we were short of was time: there wasn't enough to do all the presentations that Guy has created on this subject, there was no specific time for changing our own presentations should we feel the need, and the time allowed in the session for us to give our 5-minute presentations seemed 'optimistic'; a bit more time would make life a little easier.  Guy has already tweeted about this and I believe it will all be taken into account if the day is run again.

So was the day worth it? Oh yes.  Not only did we have Guy sharing his knowledge of presenting, but the discussions involving the other group leaders yielded yet more tips borne of their experience, providing a wealth of additional knowledge.

It was a brilliant day, I’m sure it will be run again and it will be even bigger and better.

If you are at all interested in giving presentations and you get the opportunity to attend this in the future, I wouldn't hesitate to apply; you won't be disappointed and will most likely learn an awful lot.

Friday, 13 August 2010

Guathon London

Today I went to the Guathon event in London at the Odeon Covent Garden, which meant I had to get up at 4:30 this morning to catch a train to get me to the event on time – which wasn't nice.

I managed to get to the venue by 8:30 but it wasn't open, and the pavement got crowded as more and more devs turned up. The doors opened at 9 and just after 9:30 we were ready to go; what was nice was free wifi access, which made tweeting easier for everybody.

The day kicked off with a session on VS2010 and ASP.NET 4.0 web development. Scott first went through changes/enhancements to the Visual Studio IDE, which turned up some useful information (such as being able to switch IntelliSense into consume mode by pressing CTRL+ALT+SPACE, making TDD easier). Scott then went on to the changes to Web Forms in .NET 4.0, showing things such as the QueryExtender for LINQ data sources, which lets users enter free-text searches without you having to add code to change your LINQ provider, the SEO Toolkit and lots of other goodness.  You definitely got the impression that ASP.NET Web Forms is far from dead in Microsoft's eyes, contrary to a lot of what you hear in the blogosphere.

The first session overran by approx 30 minutes, which resulted in the second session, ASP.NET MVC 2, being split over the lunch break and some of the demos being dropped to enable Scott to get through as much material as possible.

During the lunch break the heavens decided to open, so more than a few people came back a little wet (I thought I was bad, but @plip probably wins as he was soaked).

The 3rd session was run by Mike Ormond, who talked about Windows Phone 7 development; it was very interesting and, judging by the Twitter comments, generated a lot of debate about the platform.  At the end of the day I caught up with Matt Lacey (he runs the WP7 user group) and asked him his thoughts on WP7; we had a brief, but interesting, conversation about the platform, what Microsoft are doing and what may happen in the near future.

By the time we got to the fourth session we were running late so we didn’t break for coffee but ploughed on with ‘First Look at Web Futures: ASP.NET MVC 3, SQL CE and IIS Express’ with Scott having about 45-50 minutes to fit a 90 minute presentation into, including building an app from scratch.

First Scott discussed IIS Express and SQL CE, both of which look very promising; I will be eagerly awaiting their release. Scott then went on to create an app showing the use of Code First and the Razor engine (I particularly liked the Razor syntax: it looks very clean and I will have to have a play with it when I get the chance).  Then, with time rapidly running out, Scott did dependency injection in MVC 3 in 8 minutes, showing how to use a DI framework (he used Ninject in the demo) to remove coupling and allow for good unit testing.

All in all a very good day was had by all; Scott was on form coding samples, answering questions and generally being knowledgeable about the development environment, and Mike did a very good presentation on WP7 – just a shame he couldn't give us all a phone to play with.

A very big thanks must also go to Phil Winstanley and Dave Sussman for helping to arrange the whole event, well done guys you did good.

Tuesday, 3 August 2010

Self managing team – myth or reality?

Recently I have been musing over the self managing team and whether such a thing really exists.

One of the pillars of agile is the self managing team that is always looking to improve what they are doing and committed to delivering working software at the end of every iteration.

Whilst in theory this all sounds fantastic, in practice things don't always work out like that – so what happens when the 'self-managing team' doesn't manage?

All the various literature on this describes a group of motivated, empowered individuals who want to succeed and want to ensure that the work is completed and delivered.  This group of individuals will work together to overcome issues that they experience and will 'manage' themselves, ensuring that if anybody in the team is not performing as expected they will either compensate or 'encourage' the person to perform as expected.

In reality a team may bear little resemblance to this, with some people not wanting to bear any responsibility for the work ("I did what you told me, if it's wrong it's not my fault"), others only doing work they associate with their role ("I'm not a tester – that's your job!"), some people simply wanting to work by themselves and be left alone, etc. It is with a team made up of these types of individual, the dysfunctional team, that the agile ideal of a self-managing team seems to fall flat, if not simply fail.

So does the self-managing team exist?

To be honest I think it depends on the environment that you are in, with larger companies having a better chance of creating and sustaining a self-managing team than a small company.  The reason comes down to resources: larger companies often have the resources to pump into an agile adoption that smaller companies don't.

If you read books such as Succeeding with Agile by Mike Cohn, self-managing teams seem to be the norm for agile/scrum environments, and in companies that have had the pleasure of Mike coming in and coaching them I'm sure that's correct, but smaller companies that don't have the resources to bring a coach in can struggle.

One of the key things is top-down adoption, with management understanding what agile adoption means for the company and supporting it (another thing mentioned in Mike's book); if this doesn't happen in an organisation, big or small, the self-managing team will not become a reality, and command & control and death-march projects will remain the norm.

So are self-managing teams a myth or a reality? I firmly believe that in larger organisations committed to agile they are a reality, but in large organisations not committed to agile, or smaller organisations with limited resources, I tend to doubt they exist.

What do you think?

Monday, 2 August 2010

Nu – package installation made easy

I heard about Nu today on The Morning Brew, which carried 2 links about it.

I followed the instructions in Bil Simser's article and, after some initial problems with running Ruby from the command prompt on Win 7 (not sure why, but opening a command prompt at a folder rather than from the start menu resulted in being told Ruby was not installed – go figure), I got it up and running.

I followed Bil's article down to the 'diving in' section and within a few moments I had installed NHibernate, NUnit, Rhino Mocks and NLog.  To say I was impressed would be an understatement: in just a few moments I'd got the latest versions of the OSS software I use, and all their dependencies, without having to download and run several installers.
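For anyone curious, the workflow from Bil's article went roughly like this (the commands are from memory, so treat the exact package names as approximate):

```
REM Nu itself is distributed as a Ruby gem, so you need Ruby + RubyGems first
gem install nu

REM Each OSS library, plus its dependencies, is then pulled into a local lib folder
nu install nhibernate
nu install nunit
nu install rhinomocks
```

The appeal is obvious: one short command per library instead of hunting down installers and chasing dependency versions by hand.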

Later that day I also saw this and got very excited, only to be disappointed to find out that it was only some mocked-up images (although I'm sure somebody will probably have built it by the time I post this), but I firmly believe that this is the way references should be added in the future.

I will be looking to use Nu myself from now on, but I wonder what this means for projects such as hornget, which I know Steve Strong tweeted about fairly recently, and the other package managers that have been created.

I do ask myself: is Nu the piece of software that changes the .Net landscape for package management, by showing those of us who use .Net how it should be done?

Wednesday, 14 July 2010

Glenn Block ‘The Way of MEF’ presentation

Today I’ve been at a half day presentation given by Glenn Block who was until recently ‘the face’ of the Managed Extensibility Framework – MEF.

Before I get into the content I want to say I found the day really worthwhile and I am glad that I attended; just as I tweeted at the end of the day, I have come away with the firm belief that in the future, if you are doing dev on the .Net platform, you may have to use MEF – I'll explain more on that in a bit.

The event was held at Microsoft’s Thames Valley campus and the provision of coffee, Danishes and biscuits first thing was greatly appreciated.

The day kicked off with Glenn introducing himself: who he was, the fact that he had worked outside of Microsoft creating commercial software, and that he was now working on the WCF team.

Glenn then launched into the presentation and, as a fair number of people in the audience hadn't done any MEF, he spent the first session running through the basics of imports, exports and composition.  Nothing very earth-shattering for anybody who has attended a user group meeting or web cast, read a blog, or had a play with MEF itself, and there were comments from other attendees that they found this session a little boring, though they could see it was necessary for people who hadn't seen MEF at all.
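For anyone who hasn't seen MEF at all, the basics Glenn covered look roughly like this. This is a minimal sketch using the System.ComponentModel.Composition types that ship with .NET 4; the interface and class names are my own illustration, not from Glenn's demos:

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IMessageSender
{
    void Send(string message);
}

// An export: MEF can supply this part wherever an IMessageSender is imported
[Export(typeof(IMessageSender))]
public class ConsoleSender : IMessageSender
{
    public void Send(string message)
    {
        Console.WriteLine(message);
    }
}

public class Notifier
{
    // An import: the container fills this property in during composition
    [Import]
    public IMessageSender Sender { get; set; }
}

class Program
{
    static void Main()
    {
        // Compose using the parts found in the current assembly
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);

        var notifier = new Notifier();
        container.ComposeParts(notifier); // satisfies the [Import]s
        notifier.Sender.Send("Hello from MEF");
    }
}
```

The key idea is that neither side knows about the other: the export and the import only meet when the container composes them, which is what makes MEF attractive for third-party extensibility.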

After a short coffee break we started the 2nd session, which covered the use of dynamic types, creation policies (shared, non-shared, any), CompositionInitializer in Silverlight, and a community tool, mefx, that helps find problems with composition by showing the imports and exports of selected assemblies.  Glenn kept reiterating that MEF was mainly designed for third-party extensibility, allowing authors other than the main software producer to extend existing or future products.

We broke again for more coffee, and came back for the third and final session.  As we were running a little late, Glenn asked us to hold all questions for a Q&A session at the end; by doing this he was able to accelerate what he was showing us, abandoning the slide deck to jump between various demo solutions to explain the concept he was talking about.  This session was very interesting, covering topics ranging from Silverlight deployment catalogs to composable behaviours in WCF (the latest thing Glenn is working on), with a stop-off at metadata and Lazy<T>.

With the demos over there was time for 30 minutes of Q&A, which was just as interesting, with people asking questions about security, preventing malicious code from an import, runtime metadata, export providers and more.  Each question was answered, even if the answer was 'it can't be done', often with suggestions for blogs to read that would provide in-depth answers to the questions being asked.

Some of the most interesting nuggets came about during the Q&A or during the wrap up such as:

  • MVC3 looking to use MEF
  • Additional work being done with hierarchical containers around scope of objects e.g. in a web app have objects scoped to the life of the app, others scoped to the life of the session and others scoped to life of a request.
  • Glenn mentioned that the ideal situation would be to build MEF ‘into’ the .Net framework and that it was being looked at.

What I’ve taken away from the day is the following:

  • MEF was originally designed to provide easy extensibility for 3rd parties
  • If you are building a new app, don't use an IoC container for dependency injection; use MEF and compose
  • MEF may well become a common foundation used in the majority of Microsoft technologies

It is for these reasons that I believe that, to dev on the .Net platform, you should be looking to get a good handle on MEF and work out how best to leverage it, not only when designing and coding your apps but also when interacting with base functionality that is likely to appear in the framework itself.

Glenn said that he was going to enlighten us in the way of MEF-fu, well I think I’ve taken the first steps on the path but it could take a while to master it.

Some additional resources:

Glenn’s Blog
MEF on codeplex
Kent Boogaart's blog, recommended by Glenn

Thursday, 1 July 2010

Scrum: Pain Points & Resolutions – summing it all up

In this series I’ve covered how to overcome problems in relation to Time, Management, Support, Planning & Scheduling and integration into the wider business.

This list is by no means definitive and your circumstances will most likely be different but hopefully you’ll have some ideas on how to tackle issues as they arise.

Another area where you run into problems is adoption of the various agile technical practices you can use, such as Test Driven Development, pair programming, continuous integration, etc., but there are many articles, blogs and books out there that will help you overcome these types of issue.

When it comes down to it, the main way to resolve the problems you run into is communication: be sure to talk to people. You can often resolve problems and issues by talking to the people involved; it is when the lines of communication break down that problems become insurmountable.

Additional resources

I've included here some resources I've found useful as I've progressed with Scrum and agile.