Monday, 2 August 2010

Nu – package installation made easy

I heard about Nu today on The Morning Brew which had the following 2 links:

I followed the instructions in Bil Simser's article, and after some initial problems running Ruby from the command prompt on Windows 7 (not sure why, but opening a command prompt at a folder rather than from the Start menu resulted in being told Ruby was not installed – go figure) I was up and running.

I followed Bil's article down to the 'diving in' section, and within a few moments I had installed NHibernate, NUnit, Rhino Mocks and NLog.  To say I was impressed would be a bit of an understatement: in just a few moments I'd got the latest versions of the OSS software I use, and all their dependencies, without having to download and run several installers.

Later that day I also saw this and got very excited, only to be disappointed to find out that it was just some mocked-up images (although I'm sure somebody will have built it by the time I post this).  I firmly believe that this is the way references should be added in the future.

I will be looking to use Nu myself from now on, but I wonder what this means for projects such as hornget, which I know Steve Strong tweeted about fairly recently, and for the other package managers that have been created.

I do ask myself: is Nu the piece of software that changes the .NET landscape for package management by showing those of us who use .NET how it should be done?

Wednesday, 14 July 2010

Glenn Block ‘The Way of MEF’ presentation

Today I’ve been at a half day presentation given by Glenn Block who was until recently ‘the face’ of the Managed Extensibility Framework – MEF.

Before I get into the content I want to say that I found the day really worthwhile and I am glad that I attended.  Just as I tweeted at the end of the day, I have come away with the firm belief that in the future, if you are doing dev on the .NET platform, you may have to use MEF – I'll explain more on that in a bit.

The event was held at Microsoft’s Thames Valley campus and the provision of coffee, Danishes and biscuits first thing was greatly appreciated.

The day kicked off with Glenn introducing himself: he had worked outside of Microsoft creating commercial software and was now working on the WCF team.

Glenn then launched into the presentation, and as a fair number of people in the audience hadn't done any MEF he spent the first session running through the basics of imports, exports and composition.  Nothing very earth-shattering for anybody who has attended a user group meeting or web cast, read a blog, or had a play with MEF itself, and there were some comments from other attendees that they found this session a little boring, though they could see it was necessary for people who hadn't seen MEF at all.
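For anyone who hasn't seen MEF at all, here's a very rough Python sketch of the import/export/composition idea – this is not MEF itself (MEF is a .NET framework with attribute-based exports), and the names `export` and `Container` are my own invention purely for illustration:

```python
# Rough Python analogy of MEF-style composition: parts declare what they
# export, and a container satisfies imports at composition time.
# The names below (export, Container) are invented for illustration.

_exports = {}

def export(contract):
    """Register a class as the provider of a named contract (like [Export])."""
    def decorator(cls):
        _exports.setdefault(contract, []).append(cls)
        return cls
    return decorator

class Container:
    """Very loosely plays the role of a composition container."""
    def get_export(self, contract):
        providers = _exports.get(contract, [])
        if len(providers) != 1:
            raise LookupError(f"expected exactly one export for {contract!r}")
        return providers[0]()

@export("ILogger")
class ConsoleLogger:
    def log(self, msg):
        print(f"LOG: {msg}")

class App:
    def __init__(self, container):
        # Analogous to an [Import] being satisfied during composition.
        self.logger = container.get_export("ILogger")

app = App(Container())
app.logger.log("composed")
```

The point of the pattern is that `App` never names `ConsoleLogger`; it only asks for the contract, which is what lets third parties drop in their own exports.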

After a short coffee break we started the second session, which covered use of dynamic types, creation policies (shared, non-shared, any), CompositionInitializer in Silverlight, and a community tool, mefx, that helps find problems with composition by showing the imports and exports of selected assemblies.  Glenn kept reiterating that MEF was mainly designed for third-party extensibility, allowing authors other than the main software producer to extend existing or future products.

We broke again for more coffee and came back for the third and final session.  As we were running a little late, Glenn asked us to hold all questions for a Q&A session at the end; by doing this he was able to accelerate what he was showing us, abandoning the slide deck to jump between various demo solutions to explain the concept he was talking about.  This session was very interesting, covering topics ranging from Silverlight deployment catalogs to composable behaviours in WCF (the latest thing Glenn is working on), with a stop off at metadata and Lazy&lt;T&gt;.

With the demos over there was time for 30 minutes of Q&A, which was just as interesting, with people asking questions about security, preventing malicious code from an import, runtime metadata, export providers and more.  Each question was answered, even if the answer was 'it can't be done', often with suggestions of blogs to read that would provide in-depth answers to the questions being asked.

Some of the most interesting nuggets came about during the Q&A or during the wrap up such as:

  • MVC3 looking to use MEF
  • Additional work being done with hierarchical containers around the scope of objects, e.g. in a web app having objects scoped to the life of the app, others to the life of the session, and others to the life of a request.
  • Glenn mentioned that the ideal situation would be to build MEF ‘into’ the .Net framework and that it was being looked at.

What I’ve taken away from the day is the following:

  • MEF was originally designed to provide easy extensibility for 3rd parties
  • If you are building a new app, don't use an IoC container for dependency injection – use MEF and compose
  • MEF may well become a common foundation used in the majority of Microsoft technologies

It is for these reasons that I believe that to dev on the .NET platform you should be looking to get a good handle on MEF and work out how best to leverage it, not only when designing and coding your apps but when interacting with base functionality that is likely to appear in the framework itself.

Glenn said that he was going to enlighten us in the way of MEF-fu, well I think I’ve taken the first steps on the path but it could take a while to master it.

Some additional resources:

Glenn’s Blog
MEF on codeplex
Kent Boogaart's blog, recommended by Glenn

Thursday, 1 July 2010

Scrum: Pain Points & Resolutions – summing it all up

In this series I’ve covered how to overcome problems in relation to Time, Management, Support, Planning & Scheduling and integration into the wider business.

This list is by no means definitive and your circumstances will most likely be different but hopefully you’ll have some ideas on how to tackle issues as they arise.

Another area where you run into problems is adoption of the various agile technical practices such as Test Driven Development, pair programming, continuous integration, etc, but there are many articles, blogs and books out there that will help you overcome these types of problems.

When it comes down to it, the main way to resolve the problems you run into is through communication – be sure to talk to people. You can often resolve problems and issues by talking to the people involved; it is when the lines of communication break down that problems become insurmountable.

Additional resources

I've included here some resources I've found useful as I've progressed with scrum and agile.

Books

Blogs

Articles/Presentations

Wednesday, 30 June 2010

Scrum: Pain Points & Resolutions – Integration into the wider business

So you’re using scrum or another agile method in the development team, possibly in more than one team, but at this point it is still seen very much as a development practice and nothing to do with the rest of the organisation.

If your scrum adoption is successful you will frequently find that its influence spreads out into the organisation, especially around the sprint cycle, with the business aligning itself with the heartbeat of an iteration. This doesn't mean that all parts of an organisation will adopt scrum practices, but they will at a minimum understand how the process works and what, if anything, is expected of them.

To get a wider adoption in an organisation there are often several hurdles to overcome; some of the most common are: resistance to change, scheduling work between teams, clients, and job security.

Resistance to change

This is perhaps the most difficult hurdle to overcome; to do so, the senior management within a company needs to recognise the value in scrum and sponsor its adoption within the organisation.

If you do not get this support from senior management you will run into problems: people will resist any changes that may be suggested by a scrum team to improve the business, or changes to their own processes to help the scrum team(s) – e.g. people accepting the product owner role, business prioritisation of the work to be done, etc.

If you don't already have 'buy in' from senior management, then the easiest way to resolve this is to get one or more members of senior management involved in the scrum process so that they can see how it works and the value they gain through it.  If they can see the process at work and understand the value gained, compared with how waterfall projects run, they can promote it to other members of management.

If you follow this approach then you must be confident that you will succeed in completing all the work committed to and deliver something of business value – you need to set yourself up to succeed. If you fail then management is likely to write the whole thing off, and whilst you may be able to continue with various agile practices and processes, you will have to wait a while before attempting to convince management to adopt a different process.

You also need to ensure you get the right person in management: somebody who is interested in trying something new, open to change, and willing to talk to other members of management and fight your cause.

Scheduling work between teams

If you have a project/product that spans teams, or teams still working waterfall, or you simply need to co-ordinate with teams/departments elsewhere in the business, you can run into problems scheduling when work will take place, or even just delivering completed work.

To get around this the scrum master needs to engage with the other teams' scrum masters/managers/product owners to co-ordinate when work will be done, delivered, ready for use, etc.

The goal is to ensure that the separate parts of the organisation are able to deliver what is expected when it is expected, a side effect of this is that other non-scrum teams tend to start adopting some scrum practices to try and ensure they are meeting their commitments.

If the other teams are also doing scrum then it is usually just a case of ensuring the product owners, if they are different people, understand cross-team dependencies so that stories are prioritised correctly and delivered in the right sprint.

If teams do not wish to engage in the process you may need to follow the ideas in my last post in relation to scheduling work where a business arbitrator decides what work is done and when to ensure that the business works in a co-ordinated way.

Clients

Frequently business people worry about how clients will perceive the company when it adopts scrum: the team works in sprints, so delivery dates generally need to fit in with the sprint schedule, and this can be seen as inflexible because the team won't accept changes to stories they are working on during a sprint.

In my experience the exact opposite is true. When you explain to clients that you are following the scrum process, that they will be intimately involved, that the team will deliver working software at the end of every sprint, and that they are able to change the exact details of what they want right up to the point where the story is included in a sprint, clients are generally very happy.

Scrum is no longer an unknown process, a lot of people in the business world understand what it is and if they don’t then take it as an opportunity to sell it to them, get them excited about the process and their involvement in it.

If the client is a little unsure, their doubts usually fade when the team starts delivering working software – it inspires confidence that the software will do what they expect. React to the changes the client wants so that at the end of the next sprint they can see their changes have been implemented, and their confidence will increase.

Job Security

When people first learn about scrum – the 'self managing team', continuous improvement, etc – they often worry about their role in the organisation; they fear their role will disappear and that they will lose their job.

It's only natural that this will happen, and the way to help people overcome it is to educate them about the process so that they can see that, although their roles may change, they won't lose their jobs.

People may not be happy with a change in their role, and this is again where senior management support is required: to help individuals through any transition period and to find them a suitable role/position within the organisation as it transitions to a new way of working.

Tuesday, 29 June 2010

Scrum: Pain Points & Resolutions – Planning & Scheduling

Very frequently when you are running with scrum or another agile process you’ll get objections like ‘you can’t plan with agile methods’, ‘how can we schedule what work to do?’ and ‘how can we commit to any deadlines?’

The myth with scrum and agile methods is that it is not possible to work out when functionality will be delivered, people usually fall back to waterfall and Gantt charts and say ‘look, if we do it this way we know when the work will be completed and can tell the client’

Unfortunately most people who have spent any time working on software projects know that this is pure fiction.  Having a chart that tells you when a piece of work will be delivered requires a complete suspension of disbelief: just because the chart says it will be so doesn't mean it will be.

So how can you schedule using scrum? By using the tools that we already have – backlogs, velocity and story pointing.

Planning

You start by ensuring you have a backlog with stories that have the correct level of detail to enable the team to perform story pointing; this tells you the size of the work to be done. You can then use the team's velocity to work out the number of sprints you are likely to need to complete the work.

What becomes particularly important is having a range for the team's velocity – e.g. minimum velocity achieved, average (mean) velocity, and maximum velocity achieved – which equate to what the team will be able to achieve, should be able to achieve, and may be able to achieve.
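As a quick hypothetical example (all the numbers here are invented for illustration), the velocity range turns a sized backlog into a forecast with simple division:

```python
# Sketch of forecasting sprints from total story points and a velocity
# range, as described above. Numbers are made up for illustration.
import math

backlog_points = 120  # total points on stories detailed enough to size
velocities = {"will": 15, "should": 20, "may": 25}  # min, mean, max velocity

# Round up: a partial sprint's worth of work still needs a whole sprint.
forecast = {label: math.ceil(backlog_points / v)
            for label, v in velocities.items()}

for label, sprints in forecast.items():
    print(f"the team {label} complete the backlog in {sprints} sprints")
```

Here the minimum velocity (15) gives the 'will be done' line at 8 sprints, while the maximum (25) gives the optimistic 'may be done' line at 5.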

[Image: backlog overlaid with velocity lines]

When these values are overlaid on the backlog it may prompt the product owner to reprioritise the work between the 'may be done' and 'should be done' lines to try to ensure that a particular story makes it into a sprint and is completed.

 

Estimate vs. Commitment

One question often asked is why use story points, why not provide estimates in hours or days?

One very good reason is that story points are a unit of relative bigness – in and of themselves they have no meaning. If you tell a product owner a story is estimated to take 5 days to complete, the product owner frequently hears 'that will take 5 days to complete', when what the developer really meant was 'if I am undisturbed, able to work on this and this alone, and it all goes to plan, I believe it will take 5 days'.

This is the trap of estimate vs. commitment, where development says one thing and the business hears something entirely different; it is often the cause of friction when work is not completed when the business expects.

This is where story points help because they aren’t tied to any particular time frame, number of days or hours, so the likelihood of an estimate being taken as a commitment is reduced.

Scheduling

Scheduling the work to be done may not be a pain point, it all depends on the environment and amount of ‘resources’ you have.

If you are on a normal-sized scrum team (7 people, plus or minus 2) that works for a single product owner on a single project/product, then once the product owner has decided what work should be in a sprint you are good to go – no issues with resourcing or scheduling.

However, if you are a team that has multiple product owners and works on multiple projects/products it can become a real issue trying to schedule what work will be done when especially if you have smaller teams.

Whatever happens, the team should not attempt to decide which product owner's work they will do; it is not up to the team to decide which work has the most business value – that is up to the business.

In a perfect world you could hold a meeting that all the product owners attended and they would form a self managing team and decide which work was of most value and attempt to work out a schedule between them of when to do each product/projects work.

Unfortunately we don't live in a perfect world, and more often than not each product owner will believe that theirs is the most important work and should be done first.  The easiest way to resolve this is to have an arbitrator for the business: somebody who can, and will, decide which work is of the most value to the business at that time and as such should have priority.

For this to work the arbitrator has to be a senior figure in the organisation – somebody the various product owners will listen to and whose decision they will abide by.  The arbitrator chairs the product owners' meeting and hears each product owner's case as to why their work is the most important and the actual or perceived value that will be gained by doing it; the product owners also state how many sprints they need to complete the work, based on planning done with the team.

Once everybody has stated their case the arbitrator will be able to decide what order the work should be done in, thereby scheduling the work for the team for the next sprint as a minimum, but quite possibly for the next 3–6 sprints.

To ensure that the business can react quickly to changes in its environment (market opportunities, competitors products, regulatory/legislation changes, etc) this prioritisation meeting should take place every 2-6 weeks allowing for the schedule of work to change but ensuring that the team isn’t disturbed in the work they are doing.

Deadlines

So we have covered how to avoid confusion over estimates and how to schedule work (if necessary); now the business wants to be able to commit to delivery, but they are still feeling nervous about whether the team will deliver on time.

So how do you allay these fears? You complete the work you commit to.

The team needs to ensure that the work taken into the sprint is completed, which means you have to be honest with yourselves about decomposing your stories into tasks and about how the work is progressing, and communicate with the team (including the product owner and customer) so that anything that crops up to affect the schedule is raised immediately.  By doing this the product owner and/or customer can make decisions as soon as possible on any stories that may need to be dropped from the sprint, changed or re-prioritised.

By doing this the business/customer is able to ensure that the stories with the most business value are completed.

Next pain point

The next pain point I’ll cover is in relation to integrating scrum into the wider business and problems that may be encountered.

Tuesday, 15 June 2010

Scrum: Pain Points & Resolutions - Support

If you are starting out with scrum as a team, or even new to scrum as a manager, the biggest hurdle to overcome can be handling support requests whilst attempting to complete the work the team has committed to.

The first question that is often asked by the team is - how can we commit to work in the sprint?

The easiest way to do this is, when planning the sprint, to work out the number of hours the team has available for work and then take a pot of hours off the top to accommodate support during the sprint.

Doing this allows the team to task out the stories and commit to the work they feel they are able to accomplish in the remaining hours of the sprint.
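As a quick sketch of the arithmetic (all the numbers here are invented for illustration):

```python
# The 'pot of hours off the top' calculation from sprint planning.
# Every figure below is made up for illustration.

team_size = 5
hours_per_day = 6      # focused hours per person, not the full working day
sprint_days = 10
support_pot = 60       # hours reserved off the top for support

total_hours = team_size * hours_per_day * sprint_days  # 300 hours
commit_hours = total_hours - support_pot               # hours left for stories

print(commit_hours)  # 240
```

The team then tasks out stories against the 240 commit hours, not the raw 300, so support work doesn't silently eat into the sprint commitment.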

The second question is usually - how will support affect the sprint?

With the hours taken off the top the sprint can continue as normal, but the time taken for any support work needs to be recorded to show the time being spent, and one of the best ways to do this is a burn-up chart.

[Image: burn-up chart with support time recorded]

A burn-up chart provides a visible way to record the amount of time being spent on a particular task, story, etc.

The chart has a horizontal line showing the maximum number of hours allotted (the blue line in the image), and each day the number of hours spent is recorded (the red line in the image).

 

The third question is – will support cause us to fail the sprint?

The time available in a sprint is a fixed resource – this is a key concept that the business needs to understand. Excessive time spent on support can, and will, cause a team to fail a sprint.

[Image: burn-up chart breaching the allotted support time]

If the burn-up shows the amount of time breaching the allotted time, as shown in the image, then simple mathematics tells us the team will have insufficient time to complete the work they have committed to.
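A rough sketch of that check, with made-up daily figures:

```python
# Burn-up check: cumulative support hours against the allotted pot.
# The daily figures are invented for illustration.

support_pot = 40                        # the horizontal 'blue line'
daily_support = [2, 5, 8, 6, 9, 7, 6]   # hours logged each day (the 'red line')

burn_up = []
total = 0
for hours in daily_support:
    total += hours
    burn_up.append(total)  # cumulative hours plotted day by day

breached = total > support_pot
print(burn_up[-1], breached)  # 43 True
```

Once `breached` is true the sprint maths no longer adds up, which is the trigger for the conversation with the business described below about dropping stories.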

At this point the team needs to talk to the business to decide what work will be dropped from the sprint, or whether the team should stop doing support.

It is most unlikely that any business will agree to stop doing support, so stories will have to be dropped from the sprint to give the team a chance to complete the remaining work; it is up to the business to decide which of the remaining stories provide the most business value.

The fourth and final question, usually asked by the business, is – if the team sets time aside for support, what happens if they don't use it?

If the team finds the amount of support less than anticipated, they should look to pull additional work into the sprint from the backlog.

As with everything to do with support, there is a balancing act between the number of hours you need to keep for support and the work you could pull in. For example, if you are halfway through a sprint and haven't used any of the time put aside for support, the team should feel confident they can halve the amount of time in the support pot. This doesn't mean you should pull in the same amount of work as the hours dropped, since most likely there is insufficient time left in the sprint.  The best course of action is for the Scrum Master to do what they would do in sprint planning: calculate the number of available hours, less the work already in the sprint, and let the team work out what can be pulled in.
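Here's that halfway-point recalculation as a hypothetical sketch (again, every number is invented for illustration):

```python
# Mid-sprint recalculation: halve the unused support pot, then work out
# how many hours of backlog work could safely be pulled in.
# All figures are made up for illustration.

support_pot = 40
support_used = 0              # none used by the halfway point
remaining_sprint_hours = 120  # team hours left in the second half
remaining_story_hours = 95    # tasked work still outstanding

new_pot = (support_pot - support_used) / 2  # keep half in reserve -> 20.0
available = remaining_sprint_hours - new_pot - remaining_story_hours
pull_in_hours = max(0, available)           # never pull in negative work

print(pull_in_hours)  # 5.0
```

Note how little room is actually left once the outstanding story work is subtracted – which is exactly the point above about not pulling in work equal to the hours dropped.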

Next Pain Point

Next I'll cover planning & scheduling work with scrum.

Friday, 11 June 2010

Scrum: Pain Points & Resolution – Management

After the issue of time you frequently encounter objections relating to the management of scrum teams.
One of the tenets of scrum is the self managing team but this frequently concerns management as they want to know:
  • Who controls what work is done?
  • Who takes responsibility for delivery of the work?
  • If things are not working out who steps in to sort it out?
These sorts of things are normally done by a team manager (not to be confused with a team leader – a whole different role), or in a bigger organisation may be shared between a manager and a project manager.

The simple answer to all of the above is - the team. 

The team being self managing means that they decide what work is done, but don't forget that a scrum team includes the product owner, who represents the business and as such decides what provides the most business value.  Since the team decides what work they do, they have to be the ones responsible for delivering it, and they need to understand that it is up to them; this understanding ideally comes from the Scrum Master but can come from a manager, as I'll explain shortly.

If a sprint is not going well, then again the team needs to resolve the matter to allow the work to be delivered, and a good Scrum Master should guide the team through this; it is only if the team is unable to resolve the situation that the manager needs to be involved.

So after all that, you frequently get asked – so what is a manager's role in scrum?

Within scrum a manager needs to move away from command-and-control (C2) style management and instead turn themselves into a facilitator – another servant-leader type role – who is there to:
  • Help the team improve their processes
  • Solve high-level impediments that the Scrum Master is unable to resolve
  • Act as a technical and product evangelist to other managers and members of staff in the organisation
  • Handle resourcing issues
  • Synchronise backlogs across teams
  • Buy snacks for the team
  • Even clean the office if it will help
In reality the list goes on and on, the bottom line is that a manager needs to do whatever is necessary to support the team to deliver business value.

Authority

So you have a self managing team but what if:
  • the team simply aren't self managing?
  • the team are just going through the process but aren't really committed to it?
  • the team simply are not delivering work?
If the scrum master/manager recognises that things aren't working, then they will try to resolve the situation with the rest of the team by attempting to coach them back on track.  If this isn't successful it will fall to the manager to resolve the situation.

At this point the manager is likely to revert to the behaviours exhibited by C2 managers and tell the team what they have to do to get the scrum process working. It should be the last thing the manager wants to do, but it may be the only course of action available.  What is crucial is that the manager is looking to fix the process rather than simply reverting back to 'the old ways'.

The article 'Agile - A Way of Life and Pragmatic Use of Authority' on InfoQ gives a very good overview of this situation and what can/could be done.

Final thoughts on managers


Paraphrasing from a presentation by Henrik Kniberg on the managers role in scrum:
The manager can be the best catalyst or the worst impediment
What this says to me is that managers are vitally important in the process and need to be engaged with it, otherwise you may find your attempt to adopt scrum is in trouble.

Next Pain Point

Next I'll cover support and how that can be handled within scrum.

Thursday, 10 June 2010

Scrum: Pain Points & Resolutions - Time

When you want to try scrum the biggest pain point is often the amount of time, real and perceived, that will be needed to implement the process.  This post attempts to outline the most frequently raised objections and potential resolutions.

Time required for all the meetings

The first objection frequently relates to the time required to hold the various meetings, which in turn reduces the time available to the developers for actually developing/supporting the software.

This can be a fairly easy objection to overcome as the majority of these activities were always going on in the organisation and scrum just makes this time visible to managers.  It is true that additional time will be taken up by daily stand-ups but as they are time boxed and should be no more than 15 minutes it is a small price to pay for ensuring everybody is up to date with the state of the work.

Time to perform related tasks

The next objection usually raised relates to the extra work people will have to do on top of their normal duties; it usually becomes more of an issue when you explain that you really want a dedicated person to take on the role of ScrumMaster, let alone the need for a person to handle the Product Owner duties.

This is a far harder objection to overcome with only a limited number of resolutions to the problem.

The easiest way to resolve this is by having the support of senior management who understand the need and can reorganise roles/work as necessary to allow people to undertake their new role/responsibilities.

If you don't have the explicit support of management to reorganise the work, but you do have their implicit support to try to implement scrum as long as "the work gets done", another possibility is sharing out some of the work done by the people who are to take on new roles/responsibilities, to give them the space to do what is required.

The hardest situation is where you are trying to implement scrum without management knowing, and more than one team have done this.  To be successful the people involved have to make time to be able to perform the work which they will need to do and you’ll need dedicated people that understand what you are trying to achieve.

New technical practices slowing development

The last time-related objection is where management believe that practices like Test Driven Development, Continuous Integration and Collective Code Ownership, let alone Pair Programming, will simply add time to development.

This is the easiest objection to overcome as there have been many studies done in and around these practices showing the advantages and benefits of each and you can lay out what the organisation should expect you to be able to deliver if you are allowed to implement them.

Start by picking the practices you feel most confident with so that you can show tangible benefits of what you are doing which will make the managers feel happier with what you are doing and more likely to allow you to attempt the practices you are less familiar with.

So what's next?

Hopefully when you've overcome these objections you will be able to start doing scrum, but be prepared: this will cause more problems and objections to surface. I'll try to cover those in the next few posts.

Wednesday, 9 June 2010

Agile – Making progress

I have been meaning to post this for a while but DDD SW got in the way, so this is a catch-up on the situation from my post Agile - Bringing it back on track. As promised, I will be posting my 'Pain Points & Resolutions'.

So we had successfully completed the sprint and started our meetings, the team was a little more confident in what they were doing this time.

The demo to the various product owners went well, the only niggle being a member of the team questioning whether we should release a piece of work we had done, as during the sprint the product owner had identified another story they wanted done.  A small discussion ensued, with the outcome being the understanding that we will always deploy work done in a sprint; we don't hold any work over until further work is done.

After the demo the retrospective went well, with the team selecting items they wanted to focus on in the next sprint. The only point of contention was the manager who looked after the team demanding that certain items be looked at and trying to change what the team had selected to focus on.  The team didn't accept this, and although uncomfortable for the manager, the discussion that followed showed the team starting to self-manage and decide their own fate, with the result being a small compromise: the team picked the items to focus on whilst acknowledging the manager's concerns.

The sprint planning went fairly smoothly: the hours were calculated, the stories were ready with enough detail (thanks to the Scrum Master's backlog grooming) and the tasking went well.  The only issue we ran into was the team attempting to do more than was on the story, but fortunately when this was pointed out the team stopped.

So it looks like we’re making progress, I’ll post on what's been happening to the team after my series of posts on ‘Pain Points & Resolutions’.

Tuesday, 8 June 2010

Presentation slides – So you want to try scrum

[Image: DDD South West badge]

 

Sorry for the delay (not used to making presentations available online), but you can now get to my slides here.

As promised I will be posting ‘Pain Points & Resolutions’ shortly.

Sunday, 6 June 2010

DDD SW

So yesterday was DDD SW and I along with about 29 other presenters tried to inform and entertain the people that attended our sessions.
The day went well and I think that everybody who attended would agree that it was a really good day.  I also went to the geek dinner afterwards which was great and I would suggest that if you attend a community event and get the opportunity to go to a geek dinner do go.
As I promised all of the people who attended my session I will be creating a series of articles following on from the presentation all about pain points experienced when attempting to use/adopt scrum and potential resolutions.
All I need to do now is work out what I could present next …

Friday, 4 June 2010

DDD SW presentation

I am doing a presentation at Developer! Developer! Developer! South West (DDDSW) tomorrow entitled ‘So you want to try Scrum?’ which is intended to cover the basics of Scrum and some common pain points that are frequently encountered by people trying to adopt scrum/agile processes.

I will be following up the presentation by posting the various pain points and some solutions on the blog as additional material, as well as making my slides from the day available.

Hope to see you tomorrow; if not, come back here to pick up some tips on resolving pain points in scrum.

Friday, 14 May 2010

Agile – bringing it back on track

So in my last post I outlined the situation I encountered when I joined my new company, now here’s what happened at the end of the first full sprint.

The scrum master worked with the product owners to groom their backlogs, which then came to the team to story point, allowing us to start calculating velocity for future planning.

The scrum master was out of the office so I stepped in and ran the review, retrospective and sprint planning.

In the review no product owners turned up, so to set expectations I got the members of the team that had worked on various projects/products to demo them to the rest of the team.  This served 2 purposes:

  1. Team gets used to demoing the work that they’ve completed
  2. Share knowledge within the team

The review went well with the team asking questions about the various projects to get a better understanding of what the different team members had been working on – it’s not a full understanding of the code but it’s better than nothing, plus it was good to see the team wanting to share knowledge.

We then moved onto the retrospective, where I got the team to do Start, Stop, Continue to identify what they’d like to change in the process.  After we went round the table getting feedback from the team, we all voted on the things to focus on.

After the retrospective we did the planning.  This was the first time in a long time that the team worked out the number of hours they’d have to work during the sprint (based on 6 hours a day and 9 days in the sprint, with the last day given over to the sprint meetings), then took time ‘off the top’ for meetings, training, etc.  Once we had the actual number of hours to work with, we took the stories newly groomed by the scrum master and tasked them out, continuing until we reached about 80% of the available hours, at which point we decided as a team to commit to the work.  We also tasked out some additional stories in case the team got through the work quicker than expected, just to have tasked stories in hand to simply pull in and work on.

Doing this was a different way of working and some members of the team were a little uncertain as to why we were doing it and what we would gain.  What was nice was that the members of the team who did understand explained it to the others, rather than me telling them and coming across as ‘dictating’ to them.

During the next sprint we worked through the stories on the board, the scrum master protected the team so the pink stickies on the board (unplanned work) were reduced, and the product owners attended the daily scrums.  We got hit with a massive support problem, but we had taken a substantial number of hours off the top to handle this so we didn’t have to drop any stories.

At the end of the sprint we had succeeded.  All committed work was completed, tested and ready to be deployed (we have to get approval for system deployment from a global change board, otherwise we would have deployed within the sprint) and we had managed to handle the support, including the massive problem that we got hit with.

It seemed that my actions were helping, but that was just one sprint; the next sprints would decide whether any lasting change had been made.

Next post will go into the next set of sprint meetings and what happened in the next sprint.

Thursday, 6 May 2010

When agile goes bad

Now I’ve not posted much about agile/Scrum/lean even though I helped introduce Scrum to my previous employer.

When I first started at my new job and saw the board I knew something had gone very wrong.  There was no sign of any stories, the ‘cards’ on the board represented tasks, there was no visible definition of done – I could go on.

I had joined mid sprint, so instead of trying to make any changes immediately I observed what was going on for the rest of the sprint.  The daily standup was taking place, with people being told by the scrum master to answer the 3 questions, but very quickly people started launching into discussions; the sprint review had no demo to any product owners or even the team; and the retrospective seemed to simply have people list what they thought went well and what went badly, with no actions to change anything.

Although the team thought that they were ‘doing agile’ it seemed very much that they were simply reacting to work that was being thrown at them and this included during the sprint.

The poor scrum master was pulling his hair out trying to protect the team from interruptions and work just being dropped on them, but management frequently bypassed him.

To try and help, the first thing I did was work with the scrum master to take everything back to basics.  We got stories defined for the next sprint, at sprint planning we worked out the hours that could be committed to, and then we decomposed the stories into tasks until we reached the number of hours that the team felt they could commit to.

During the sprint some more work got thrown in, but we used luminous pink stickies to show all unplanned work, to give managers a simple visual indication of what was happening.  We also got hit with a massive support problem, but because it ate into our hours the team, along with the scrum master, determined which stories should be dropped to compensate.

At the end of the sprint the team hadn’t completed all the work, but we had a better understanding of why and what we might be able to do about it.  In the review the work was demoed, and in the retrospective I did Start/Stop/Continue with actions to try to remove some of the problems.

I’ll cover what happened in another post.

Tuesday, 4 May 2010

Asp.Net Dynamic Data – should you use it?

I’ve wanted to write this post for a while but have held off until I had completed my posts on dynamic data (DD) display and validation attributes and my thoughts on Linq-to-Sql.

Disclaimer: this post is all about DD on .Net 3.5 not .Net 4.0, I’ve not had a chance to play with the .Net 4 version yet.

Now DD provides an excellent way to build CRUD applications against a database – in a matter of moments you can have a functioning application, and as it’s ASP.NET you can easily customise the application to add security, change its appearance, etc.  Through its metadata-driven buddy classes it becomes easy to alter what DD renders.

However, it is when you want to extend a DD application to do more than simple CRUD operations that you start to run into its limitations.  If you use Linq-to-Sql without extending the entities, then you are effectively forced to work in a Transaction Script style, which is fine for simple applications but very quickly becomes a struggle as complexity rises.  Plus, it is almost impossible to do TDD with the DD & Linq-to-Sql combination.

If you extend the entities in Linq-to-Sql, or use Entity Framework, then you can start to work in an Active Record style, but due to the intertwined nature of the logic and the data it’s still almost impossible to do TDD (an interesting aside is a post I found which mentions ‘the Active Record anti-pattern’ – the first time I’ve heard one of the main architectural patterns described as an anti-pattern).  By extending the entities you do gain the ability to add logic to them, making some tasks easier, but as the complexity continues to rise you again run into problems trying to make what should be easy work for you.

An example: in my last job, for the final piece of work the team worked on, we decided to use DD as it gave us the opportunity to get the site working in a short period of time.  Everything started fine – we got the site working quickly, styled it appropriately and it all looked good – then the changes began.  With each change the logic required increased in complexity, and if we hadn’t been using DD we could have had a nice suite of tests to help us test that logic, but we didn’t.  We could only perform functional testing, which meant that the boundary and corner cases were difficult to test and mistakes were made.

What I and my previous team learnt from this is that when using DD there is a tipping point where the functionality DD provides is outweighed by the complexity of extending it.

So should you use Dynamic Data? As with most things, it depends – if you only need CRUD actions, as in a simple data entry application, then it’s perfect, but if you know that you’ll need to extend it or implement complex logic I would suggest you use standard ASP.NET web forms or MVC, as it will ultimately be quicker and easier for you to develop and test.

I’d love to hear from anybody who has experience from using DD and how they got on.

Thursday, 22 April 2010

Linq-to-sql to use or not to use – my conclusion.

A while back I posted about Linq-to-sql (L2S) and wondering if I should/could use it in a layered fashion and at the time I fully intended to try and come up with a coded solution to show how it could be done.

After much searching I found this article on Code Project showing how to decouple L2S and use Unity to build the objects, and I thought I’d use that as a sort of template to see if I could reproduce the functionality myself.

However, after a brief experiment I gave up on attempting to recreate the code myself, and the reason is simple: it’s just too much trouble to do so.

As I was trying to create the code I re-examined why I was doing this in the first place: whilst I was looking to be able to use L2S in a layered fashion, I also wanted to do TDD without needing to create my own framework to do so.

It was after this bit of introspection that I stopped trying to create the necessary code.  The Code Project article demonstrated how much work would be required to decouple L2S in such a way as to enable me to do what I wanted, and although you should be able to wrap all the additional functionality in an assembly, you’d need to always use the extra code.

You may think ‘well, if it’s in an external assembly it’s no bad thing’, but why do this when there are other ORMs out there that will let you easily mock the data context, use POCO classes, do TDD, etc.?

I was also pointed towards PLINQO by Eric, which allows you to provide additional business logic in the partial class for each entity, and through its code generation it can do so very quickly; but this doesn’t allow you to mock the DataContext and break the layers apart.  It was because of this that I decided not to pursue PLINQO as a solution to my L2S conundrum.

So after all this what conclusion have I come to? I doubt I’ll be using L2S any time soon for anything other than trivial or prototype applications.

Of course you may have other thoughts and I’d love to hear them.

Monday, 19 April 2010

Presenting at DDD South West

Well, I've been lucky enough to be picked to give a presentation at this year's DDD South West, where I will be presenting a session called 'So you want to try scrum?'

The session will be, unsurprisingly, about scrum; the intention is to cover scrum briefly so that everybody knows what I'm talking about and then spend the majority of the time on 'pain points' that a lot of people experience when trying to implement and work with scrum.

In preparation for this I'm going to try and blog a bit about the pain points to provide some additional material for anybody who comes to watch me.


Thursday, 15 April 2010

Ok, so maybe I didn’t break VS 2010

So following up on my previous post I’ll admit I haven’t actually broken VS2010, but something still isn’t right.

One of my colleagues attempted to convert the project and it worked without any issues, the main difference being he’s running on XP 32 bit and I’m running on Win7 64 bit.

So first up I tried running VS 2010 as admin, but it didn’t seem to make any difference.  Then, since the build seemed to be stopping at CppCodeProvider and previous items listed in the build output mentioned ‘Just My Code’, I googled that and found this forum post, so I tried turning off the debugging option for ‘Just My Code’.  That got me further – when I ran the code the page displayed – but looking at the build output there were loads of errors.

So I tried running VS 2010 as administrator again and voila, it all worked.  I then turned the ‘Just My Code’ debugging back on, ran VS 2010 as administrator again, and it still worked.

So now I’m really confused, when I originally ran VS 2010 as administrator it didn’t work, now it does.

I don’t like having to run as administrator, so I’m still investigating what the issue is with the project; I believe it’s related to the fact that it’s a converted project.  Whatever I find, I’ll blog about it.

Tuesday, 13 April 2010

I’ve broken VS2010 already

So today VS 2010 was released (ok if you’re going to be picky it was released yesterday at 7pm BST) and this morning I eagerly downloaded and installed it and then got the rest of my team to do it as well.

I grabbed a .Net 2.0 project and opened it up in the new IDE, completed the conversion wizard which went smoothly and then tried to build the project and it failed. What?! I just built this in VS2005 and had no problems so what’s going on?

Well it turns out that a previous developer had referenced a .Net 3.5 assembly (System.Web.Extensions if you really want to know) and VS 2005 didn’t complain but VS 2010 would not allow that reference since I was targeting the .Net 2.0 framework.

To say I was disappointed was an understatement.

I know what you’re thinking – just change the target framework to .NET 3.5, or even better .NET 4.0, and everything will be fine.  Well, I can’t go to .NET 4 since it’s not rolled out yet, and since it’s been working up to now on .NET 2.0 what I really want is to be able to use the new IDE without a lot of deployment bother.  But in the spirit of ‘this software isn’t going to beat me’ I did try changing the target framework to .NET 3.5, and what happened? Nothing.  By which I mean it didn’t run.  It compiled happily, but when you ran it (ASP.NET web forms) the browser appeared, Application_OnStart was never reached, and any breakpoints I put in the code came up with ‘will never be hit as symbols have not been loaded’.

So I’ve posted on Stack Overflow and will continue to try and sort it tomorrow, if anybody has any ideas please let me know.

Wednesday, 31 March 2010

Asp.Net Dynamic Data – StringLengthAttribute

The name of this attribute says it all – it allows you to specify the maximum length of a string.  Its constructor takes the maximum length, with the error message set via named parameters just like the RequiredAttribute:
StringLength(Maximum Length, ErrorMessage = “Error Message”) – or use ErrorMessageResourceName and ErrorMessageResourceType for a resource-based message
The only issue is that when you set this attribute it sets the text box’s max length, which means it is impossible for the user to exceed the maximum you set, so any error message will never be displayed!

However, this is the same behaviour that Dynamic Data provides by default based on the table definition, so the attribute’s real use comes either when you want to restrict a string to be shorter than the database allows, or when, using Entity Framework, you create your own entity and want to limit the length of data entered using the standard dynamic data functionality.

As always, to use it simply decorate the property with the attribute:

[StringLength(15)]
public object City { get; set;}

The user won’t actually see anything they just won’t be able to enter any text longer than the length you’ve set.

Monday, 29 March 2010

Asp.Net Dynamic Data – RequiredAttribute

The RequiredAttribute overrides the model to allow you to specify that a nullable field has to be completed.

You can specify the error message you want displayed instead of the standard one via named parameters:

Required(ErrorMessage = “Error Message”) – or use ErrorMessageResourceName and ErrorMessageResourceType for a resource-based message

Just decorate the property with the attribute as per below:

[Required(ErrorMessage = "You must enter a phone number")]

public object Phone { get; set;}

And the user will then see:

(screenshot: the error message displayed next to the Phone field)

Simple.

Tuesday, 23 March 2010

MS Ajax Library under threat?

Just been reading a post by Stephen Walther about Scott Guthrie announcing that Microsoft is putting its weight behind jQuery as the de facto client-side technology for Microsoft products.

The post expands on what MS are doing, specifically putting resources into templating within jQuery.  Up to now the Ajax Library has been the way MS have ‘suggested’ you do client-side data binding, but with their adoption of jQuery is the Ajax Library under threat?

As the amount of functionality in jQuery increases, why should developers continue to use the MS Ajax Library? Especially when another technology with more widespread acceptance appears to be coming along to replace it?

Will the MS Ajax Library go the way of Linq-To-Sql where MS support it but developing it is not a priority?

I for one will not be spending my time looking at the MS Ajax Library unless I need to for an existing project, my own spare time will go towards increasing my jQuery knowledge.

Thursday, 18 March 2010

Asp.Net Dynamic Data – RegularExpressionAttribute

As you would expect, this attribute allows you to apply validation to a field in the form of a regular expression; if the data entered doesn’t match the expression it isn’t valid.

RegularExpression(“Pattern”, ErrorMessage = “Error Message”) – with ErrorMessageResourceName and ErrorMessageResourceType available for a resource-based message

The expression will be evaluated against the field value when the user attempts to insert or update the data and if it doesn’t match then the error message will be displayed.  Just like the Range attribute you are able to specify either specific text or a resource for an error message.

Simply decorate the field with the attribute as per usual:

[RegularExpression(@"^\d{1,7}$",

ErrorMessage = "Quantity must be a whole positive number.")]

public object Quantity { get; set; }

And then when a user tries to put text in the field they will see:

(screenshot: the regular expression error message displayed next to the Quantity field)

Wednesday, 17 March 2010

Asp.Net Dynamic Data – RangeAttribute

The range attribute allows you to specify that a specific field should only be able to accept a certain range of values.

By default the RangeAttribute works with Int32 or Double values, but it is possible to specify your own type and then define the max and min values as strings; this provides the ability to do things like date range checking.

For Int32 and Double the constructor looks like this:

Range(Minimum Value, Maximum Value)

For a custom type the constructor looks like this:

Range(Type, “Minimum Value”, “Maximum Value”)

As with the DataType attribute that I covered previously, if the field fails validation you can specify either a string or a resource for the error message to be displayed, via the ErrorMessage, ErrorMessageResourceName and ErrorMessageResourceType named parameters.

So to use it you simply decorate your meta class with the attribute on the field you want validated

[Range(1,100,ErrorMessage ="Value must be between 1 and 100")]

public object Quantity { get; set; }


Which then ends up being displayed on the site as:

(screenshot: the range error message displayed next to the Quantity field)
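The custom-type constructor works the same way; as a sketch – assuming a hypothetical DateTime OrderDate property on the metadata class, with the minimum and maximum supplied as strings – it might look like:

```csharp
// Hypothetical example: restrict OrderDate to dates within 2010 using
// the custom-type constructor; min and max are parsed from strings.
[Range(typeof(DateTime), "2010/01/01", "2010/12/31",
    ErrorMessage = "Order date must fall within 2010")]
public object OrderDate { get; set; }
```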

Tuesday, 16 March 2010

Asp.Net Dynamic Data – Validation Attributes

I’m leaving my musings on Linq-to-Sql for a moment and returning to dynamic data to cover off the validation attributes.

The available attributes are:

RangeAttribute – Specifies the numeric range constraints for the value of a data field in Dynamic Data.
RegularExpressionAttribute – Specifies that a data field value in ASP.NET Dynamic Data must match the specified regular expression.
RequiredAttribute – Specifies that a data field value is required.
StringLengthAttribute – Specifies the maximum number of characters that are allowed in a data field.

To use these attributes you apply them to a metadata class, just as you do for the display attributes.
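As a reminder of that pattern, here is a minimal sketch of a metadata (‘buddy’) class – the Product entity and its Quantity column are hypothetical, stand-ins for whatever your ORM generates:

```csharp
// Hypothetical sketch: the partial class links the generated entity to
// a metadata class whose properties carry the validation attributes.
[MetadataType(typeof(ProductMetadata))]
public partial class Product
{
}

public class ProductMetadata
{
    [Required(ErrorMessage = "Quantity is required")]
    [Range(1, 100, ErrorMessage = "Quantity must be between 1 and 100")]
    public object Quantity { get; set; }
}
```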

Over the next few posts I’m going to detail each of these attributes.

Monday, 15 March 2010

Linq-to-sql to use or not to use

Following on from my previous post, in which I mentioned the problems encountered trying to use an ORM for the first time – specifically Linq-to-Sql.

In the recent project where my team used Linq-to-Sql (L2S) we were using ASP.NET Dynamic Data, and we ended up with lots of data access in the code-behind.  I plan to cover our experience of using Dynamic Data in an application in a future post, but let me say now it’s not what I expected – and not in a good way.

So the team moved onto another project where we wanted to put a proper layered design in place, with presentation, business logic and data access layers, and we immediately hit a problem: creating an app that conformed to good design principles, e.g. the Dependency Inversion Principle, was going to be difficult.

Now I know that I’m way behind the curve on the use of L2S, as most of the blog posts and articles I’ve been reading are from back in 2007, but I have been having difficulty finding information about using L2S in a layered and tiered system.  This worries me, since it may be a case of “it doesn’t do it but you can make it do it”, which is a code smell to me.

So far I’ve found a few articles, and the one thing that strikes me is the amount of additional work required to make L2S work in a layered way.  I know that it doesn’t support POCO, but at the same time simply using it as an ORM seems to be more difficult than I would have hoped.

The following articles are a good summary of the information I’ve found and I’m still looking into this to see what I can find.

Using the IRepository pattern with Linq-to-SQL
Linq to Sql – How-to Separate the entities and the DataContext.
LINQ to SQL Queries out of a Business Layer
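To make the issue concrete, here is a minimal sketch of the kind of decoupling those articles describe – the interface, entity and DataContext names are my own illustration, not code taken from the articles:

```csharp
// Hypothetical sketch: hide the L2S DataContext behind a repository
// interface so the business layer depends on an abstraction it can mock.
public interface ICustomerRepository
{
    Customer GetById(string id);
    void Add(Customer customer);
    void Save();
}

public class LinqToSqlCustomerRepository : ICustomerRepository
{
    private readonly NorthwindDataContext context;

    public LinqToSqlCustomerRepository(NorthwindDataContext context)
    {
        this.context = context;
    }

    public Customer GetById(string id)
    {
        return context.Customers.Single(c => c.CustomerID == id);
    }

    public void Add(Customer customer)
    {
        context.Customers.InsertOnSubmit(customer);
    }

    public void Save()
    {
        context.SubmitChanges();
    }
}
```

The catch, as the articles point out, is that the entities themselves are still generated, attribute-mapped classes tied to the DataContext, so a repository like this only gets you part of the way towards POCO-style layering.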

Friday, 26 February 2010

ORM – peril of the first time users

My team has recently got to grips with Linq-to-Sql and on the whole it’s all gone smoothly, but there was one issue that the team had to overcome.

Our team is normally very good about data access; up to this point we had used stored procedures for all data access, calling the database only when needed.  However, with Linq-to-Sql that started to go out the window, with calls to the database being made all over the code.

One of the worst examples, which killed performance, was calling the database to retrieve data for each record being displayed in a grid – something the team would normally never have done.

We had been using linq-to-objects for a while and so were used to working with objects in memory with no performance problems.  Linq-to-Sql syntax looks like linq-to-objects, so you can believe you are simply calling a method on an object rather than the database, and it is for this reason I believe we had the problem: simply, we forgot we were calling across the network to the database.

So the cautionary tale is this: when using an ORM don’t forget all your previous database knowledge, and if you are not containing the data access in its own layer then be mindful of where you are using it.
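One way to avoid the per-row query pattern described above is to tell Linq-to-Sql up front which related data to fetch.  This sketch assumes hypothetical Order/Customer entities on a Northwind-style DataContext:

```csharp
// Hypothetical sketch: prefetch each Order's Customer in the same query
// instead of issuing one database call per row shown in the grid.
var options = new DataLoadOptions();
options.LoadWith<Order>(o => o.Customer);

using (var context = new NorthwindDataContext())
{
    context.LoadOptions = options;
    var orders = context.Orders.ToList(); // customers loaded alongside
}
```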

Sunday, 14 February 2010

Asp.Net Dynamic Data – Data Type Part 2

In my previous post I listed the different DataType’s that dynamic data supports through its DataType attribute.

Like some of the previous attributes the DataType attribute supports multiple arguments, the difference between this attribute and the others with multiple arguments is you want to be looking to use the named parameters.

The available arguments are:

ErrorMessage – Sets the message to display if the control fails validation.
ErrorMessageResourceName – Name of the resource to use for displaying an error message; often used to display a localised error message.
ErrorMessageResourceType – Used in conjunction with ErrorMessageResourceName to specify the type of the resource.

To use you simply specify the relevant named parameter and value to use:

[DataType(DataType.Date, ErrorMessage = ”You must enter a valid date.”)]

This should then display your error message when an invalid value is entered in the field – but unfortunately it does not work.

It appears that there is a bug in the current implementation of Dynamic Data where it does not pick up the error message, or resource, specified on the attribute.  It is possible to get Dynamic Data to use the attribute’s error message, but that involves further code where you set the CompareValidator’s error message explicitly; see this blog post by Rick Anderson for details.

Friday, 12 February 2010

Asp.Net Dynamic Data – Data Type Part 1

When dynamic data is building the columns to display it works out the field template to use from the data type of the field it is going to display.

The data type attribute allows you to override what dynamic data wants to use with a more specific data type.
The standard data types supported are:


DateTime – Represents an instant in time, expressed as a date and time of day.
Date – Represents a date value.
Time – Represents a time value.
Duration – Represents a continuous time during which an object exists.
PhoneNumber – Represents a phone number value.
Currency – Represents a currency value.
Text – Represents text that is displayed.
Html – Represents an HTML file.
MultilineText – Represents multi-line text.
EmailAddress – Represents an e-mail address.
Password – Represents a password value.
Url – Represents a URL value.

As well as those listed, a Custom data type is supported, allowing you to extend this functionality further yourself.

To use the DataType attribute you add the attribute to the display metadata:
[DataType(DataType.Date)]
public object OrderDate {get; set;}

This will then change the way the OrderDate field, a DateTime field, is rendered so that only the date is shown.


(screenshot)



The image shows the OrderDate field with the DataType attribute set and the RequiredDate field without it.

Thursday, 11 February 2010

Asp.Net Dynamic Data – UI Hint Part 2

Like the DisplayColumn attribute, the UIHint attribute supports multiple arguments, allowing you to pass additional data to the field template.

The signature for the attribute is:

UIHint(“Field template to use”, “Presentation layer”, Control parameters)

The arguments are:

Field template – covered in the last post

Presentation Layer – from what I can discover this is not currently used, but the suggested values from intellisense of HTML, Silverlight, WPF or WinForms indicate that maybe its role will change in the future.

Control Parameters – a params object array allowing you to pass as many additional parameters to the field template as you want, but you need to pass key/value pairs, e.g. “argument1”, 10.

If you build your own custom field template then the control parameters may provide a mechanism for passing the additional data you need.  The only problem is that if you are using the attribute on a class the values will be hardcoded, which doesn’t allow for much flexibility.
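Putting the three arguments together, a sketch might look like the following – the “TruncatedText” template name and “MaxLength” parameter are my own invention for illustration:

```csharp
// Hypothetical example: use a custom "TruncatedText" field template and
// pass it a key/value pair via the control parameters params array.
[UIHint("TruncatedText", null, "MaxLength", 25)]
public object Description { get; set; }
```

Inside a custom field template the pairs are exposed through the UIHintAttribute’s ControlParameters dictionary.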

Wednesday, 10 February 2010

Asp.Net Dynamic Data – UI Hint Part 1

The UIHint attribute allows you to tell Dynamic Data what field template you would like to use when working with the property.

Before I get ahead of myself, let’s briefly cover field templates.  A field template is what dynamic data uses to display the value of a data field.  Each field template is implemented as a normal ASP.NET user control containing the basic UI definition and validation.  You can easily create your own custom field templates to provide the display or functionality you need.

What the UIHint attribute allows you to do is to specify the field template that you wish to use for a particular field/property on the table.

By doing this you override the default behaviour of dynamic data which normally decides the field template to use based on the data type of the field.

[UIHint("DateTime")]
public object ShippedDate { get; set;}

Doing this ensures that when ShippedDate is displayed in a list, detail or edit it is the DateTime field template that will be used.

Tuesday, 9 February 2010

Asp.Net Dynamic Data – DisplayColumn Part 2

In my previous post I outlined how to use the DisplayColumn attribute to specify the column you want to use to display to represent the foreign key.
The attribute is smarter than that, though, in that it supports multiple arguments.
The signature for the attribute is:
DisplayColumn(“Column to display”, “Column to sort by”, Sort descending)


This means you can pick the column you want to show, the column to use when sorting, and whether the sort is descending.


Most times you set the display and sort columns to the same value but the flexibility is there to pick another column if you need to.
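As a concrete sketch – reusing the hypothetical Customer entity from the previous post – the full signature might be used like this:

```csharp
// Hypothetical example: display ContactName for the foreign key, but
// sort the list and dropdown by CompanyName in ascending order.
[DisplayColumn("ContactName", "CompanyName", false)]
partial class Customer
{
}
```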

Monday, 8 February 2010

Asp.Net Dynamic Data – DisplayColumn Part 1

This post is broken into 2 to cover the basic use of the attribute and then some of its options in the next post.

This attribute doesn’t appear on the meta information that controls display; rather, it is meta information on the partial class that matches the entity in the ORM.

When a table contains a foreign key to another table included in the ORM, dynamic data normally uses the first string/text column it finds in the related table to display to the user – as a link in both the list and detail views, and as a dropdown list in the edit view.

This attribute provides the ability for you to choose which column you want to use to display for the foreign key rather than the column dynamic data would choose.

To implement this attribute you first have to create the partial class that matches the entity in the ORM (see my post here about a gotcha with extending the data context) and then add the attribute:
[DisplayColumn("ContactName")]
partial class Customer
{
}


By doing this you will then see the Customer.ContactName value instead of the Customer.CompanyName value displayed for the link in the list and detail views and used to populate the drop down list in the edit view.

Check here for the MSDN definition of the attribute.

Sunday, 7 February 2010

Asp.Net Dynamic Data - Description

So I’m feeling foolish now – this attribute is as simple as the previous 2, so from now on assume all the attributes are simple unless I state otherwise.

The attribute allows you to enter text that will be displayed as a tooltip but only when the property is displayed in edit mode.

As per usual just decorate the property with the attribute and set its value:

[Description("Name of the company")]
public object CompanyName { get; set;}


Which in edit mode you then get:

(screenshot: the tooltip shown for the CompanyName field in edit mode)

Wednesday, 3 February 2010

Asp.Net Dynamic Data – Default Value

Ok, perhaps I was wrong in my last post; Display Name may have to vie for the title of ‘simplest hint’ with Default Value.

This attribute allows you to specify the default value of a field when performing an insert of a new record. To use it simply decorate the property in the meta data class with the attribute and the value you wish to use:

[DefaultValue("UK")]

public object Country { get; set;}


When entering a new record the screen now populates the country field with UK:

(screenshot: the Country field pre-populated with UK on the insert screen)

Monday, 1 February 2010

Asp.Net Dynamic Data – Display Name

This is perhaps the simplest hint to understand.

You put this in the meta data to tell dynamic data what you want displayed as the column header in the list/listdetail views and as the field name in the detail and edit views.

The attribute is as easy to use as it sounds – simply state the name you want to see:
[DisplayName("Company Name")]
public object CompanyName { get; set;}


And it appears in the UI:

(screenshot: the column header showing ‘Company Name’)

Wednesday, 21 October 2009

.Net 4 – Enable standard controls for dynamic data

Just seen a new blog post by Scott Hunter today about some of the data enhancements that are coming for .Net 4.0.

The thing that got me excited was at the very bottom of the post where he mentions changes to the way dynamic data can be used.

What you should be able to do is attach dynamic data functionality to an existing control simply by calling the extension method EnableDynamicData, so to add the functionality to a ListView you would just enter:

ListView1.EnableDynamicData(typeof(Product));

This would then provide all the normal dynamic data functionality for that control. Sweet!
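In context, the wiring would presumably go early in the page lifecycle. A minimal sketch, assuming the .NET 4 extension method described in Scott Hunter’s post (ListView1 and Product are the names used above):

```csharp
protected void Page_Init(object sender, EventArgs e)
{
    // Attach dynamic data behaviour (field templates, validation, etc.)
    // to an existing data-bound control.
    ListView1.EnableDynamicData(typeof(Product));
}
```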

Sunday, 18 October 2009

Silverlight 3 – dialog

Had one of those thick moments the other day whilst trying to debug a Silverlight 3 app.

I used the new SaveFileDialog and the debugger was throwing an error when the dialog attempted to execute its .Open method.

It took me a moment to realise that the problem was that the browser required the action to be user-initiated, and that the action being executed by the code represented a security risk.

Once I ran the code past the .Open call I could debug to my heart’s content.
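The usual pattern looks something like this (a sketch, not taken from the post; the dialog must be shown from a user-initiated event such as a button click or Silverlight throws a SecurityException, and breakpoints before that call can trip the same check):

```csharp
private void SaveButton_Click(object sender, RoutedEventArgs e)
{
    var dialog = new SaveFileDialog { Filter = "Text files|*.txt" };

    // ShowDialog returns bool? – true only if the user picked a file.
    if (dialog.ShowDialog() == true)
    {
        // Breakpoints are safe from here on; the security-sensitive part
        // (showing the dialog) has already happened.
        using (Stream stream = dialog.OpenFile())
        using (var writer = new StreamWriter(stream))
        {
            writer.Write("Hello from Silverlight");
        }
    }
}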

Unity configuration intellisense

I’m late to the whole IoC party, and since at work we use the Enterprise Library as our standard for the majority of functionality we may want (like logging, error handling, etc.), Unity is the first stop I’ll make for an IoC container.

Now the first thing I expected was to have intellisense to help me with the configuration, so you can imagine my surprise when, after I referenced the assembly, the Enterprise Library configuration tool didn’t offer me the ability to configure Unity, and when I dropped into the web.config to add the config manually there was no intellisense.

So I opened Google, had a look around and found Unity Community Contributions, which contains a set of extensions to Unity, one of which is an xsd that provides intellisense (it’s easiest to get it here as you can download it by itself outside of the main project). There is one drawback – you have to configure each project that you want to use it with; you can’t set it up so that it’s used by default each time.

Once you’ve downloaded it you need to place the xsd in <drive>:\Program Files\Microsoft Visual Studio 9.0\Xml\Schemas.

To be able to use it you need to configure the properties of the config file that will hold the unity config. To do this open the config file, then go to the properties pane where you will find a property called Schemas.

At the end of the Schemas property you’ll find the standard … button, which opens a dialog.

You should find the unity xsd listed; all you have to do is select ‘Use this schema’ and click OK, and then you’ll have intellisense.
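For reference, a minimal unity section looks something like this (a sketch from memory for the Unity 1.x schema of that era – the ILogger/FileLogger mapping is a made-up example, and type names may differ with your Unity version):

```xml
<configuration>
  <configSections>
    <section name="unity"
             type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
  </configSections>

  <unity>
    <containers>
      <container>
        <types>
          <!-- Map an interface to a concrete implementation -->
          <type type="MyApp.ILogger, MyApp" mapTo="MyApp.FileLogger, MyApp" />
        </types>
      </container>
    </containers>
  </unity>
</configuration>
```

With the xsd registered against the config file, you get completion for the elements and attributes above as you type.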

Friday, 2 October 2009

Web forms a dead end?

I read a blog post by Dino Esposito, The dead-end of Web Forms, in which Dino says:

When introduced, Web Forms was a cutting edge solution and it just engineered current best practices. But it was ten years ago. We could argue whether it was the right choice to engineer ASP practices ten years ago. There's not much more you can expect or achieve with Web Forms than you do today. OK, tomorrow, with version 4. This is the dead-end of Web Forms.

Does it mean that no more development will be done on web forms by Microsoft? I doubt it. Will there be a lot of new development? That depends.

Web forms is a mature product, which means that any changes to it will be incremental – amending and enhancing existing functionality to improve how it works rather than jumping forward with innovative new functionality.

If you look at the changes in .Net 4.0 we see some of the new functionality being introduced having already been developed for MVC, for example routing, and I think going forward you will see this trend increase, with new functionality appearing in MVC first and then being added to web forms afterwards.

If Microsoft themselves believe that web forms has hit a dead end then why not open it up to the community as open source, the same way as MVC? If this happened, at worst no changes would be made by the community, but at best you may see new innovation and the product revitalised.

In the end, there are most likely tens of thousands of web sites built using web forms and they are not going to be replaced overnight with MVC. Web forms may be ‘a dead end’ as Dino puts it, but a lot of people will be employed to maintain, extend and, yes, build sites with web forms for a good number of years yet.

Sunday, 27 September 2009

TDD, Unit Test or nothing?

I’ve been thinking about unit testing recently prompted by a talk at .Net Developer Network, a presentation recorded at Agile 2009 and a blog post by Joel Spolsky.

Now I try hard to remain open minded about different methodologies, but that doesn’t mean I won’t ask questions about them, and I often find that the way the people ‘championing’ a methodology answer those questions affects my attitude towards it.

In the recent talk at .Net Developer Network given by Stephen Oakman and Ronnie Barker of the Agile workshop, they focused on using TDD – in this case Test Driven Design – to create an application while being very disciplined (strict?) and following TDD to the letter:

  • Write a test
  • Watch the test fail
  • Write just enough code to pass the test, or refactor existing code
  • Watch the test pass
  • Check in to source control
  • Repeat
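The first red/green pass of that cycle might look something like this (a sketch using NUnit; the Calculator class and test names are made-up examples, not from the talk):

```csharp
using NUnit.Framework;

// Step 1: write a test. It fails first because Calculator doesn't exist yet.
[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        var calculator = new Calculator();
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}

// Step 3: write just enough code to make the test pass, then check in
// and repeat with the next test.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}
```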

Now I have no objection to that; I know I haven’t been doing pure TDD, which has prompted me to look into it further – a good thing.

What did occur to me though is the amount of time it takes to create the code. Although Steve & Ronnie say you get faster as you go along, I felt myself wanting to take bigger steps forward than they were doing, i.e. write a bit more actual code rather than lots of small tests.

My issue, if you like, is that the process seemed to take a very long time, and although the code would have been thoroughly tested it will still need a tester to go through it – remember, “unit tests prove the correctness of the code, testing proves the correctness of the functionality”. I also know what my manager would say about the increased time to finish any piece of work.

The next day Joel Spolsky blogged about ‘The Duct Tape Programmer’, which seemed to be the complete opposite of what Steve and Ronnie talked about, and I can understand the desire to simply ‘code it’. I have also recently watched a presentation by J.B. Rainsberger called ‘Integration tests are a scam’ which highlighted the fallacy of some of the testing we as developers do.

I think that as developers we need to use our own judgement as to what we test and whether the investment in writing tests will pay off. For example, should you bother writing tests for an application that consists of a single call to the database to record minimal information and has a half life of about a month?

I fully believe in unit testing. That doesn’t mean pure TDD, but it does mean writing tests. If you can utilise the VS ‘Create unit test’ functionality, or can use a tool like Pex to generate edge cases and the other tests that would take a long time to create, then use it – save yourself time and be happy that your code is tested.

Friday, 25 September 2009

Asp.Net Dynamic Data – Attributes

I’ve been doing some more dynamic data and trying to work with the meta data for a table to change the display.

Previously I’ve mentioned ordering columns and, although I didn’t explicitly mention it, the hiding of columns you don’t want by applying the [ScaffoldColumn] attribute – set it to true if you want DD to generate the column and false if you don’t want it displayed.

The attributes that I can find in relation to display of the data are:

  • ScaffoldColumn – indicates whether the column should be displayed or not
  • DisplayName – name to be displayed in the column header for the property
  • DefaultValue – default value to use when inserting new data
  • Description – text to show in a tooltip when the mouse hovers over the field
  • DataType – specifies the name of a type to associate with a data field to provide additional functionality around that field, e.g. EmailAddress
  • UIHint – specifies the FieldTemplate to use when rendering the column
  • DisplayColumn – specifies the column to display for filters or foreign keys; the default is to use the first string column found
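Applied together, several of these end up in one metadata class hung off the entity’s partial class. A sketch (CustomerMetadata and the property names are illustrative, not from a real schema):

```csharp
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;

// The metadata class is attached to the entity's partial class.
[MetadataType(typeof(CustomerMetadata))]
public partial class Customer
{
}

public class CustomerMetadata
{
    [DisplayName("Company Name")]
    [Description("Name of the company")]
    public object CompanyName { get; set; }

    [DefaultValue("UK")]
    public object Country { get; set; }

    [ScaffoldColumn(false)] // hide this column entirely
    public object InternalCode { get; set; }
}
```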

Over the next few posts I’ll be going into these attributes to show the source code and the results.
