I'm a bit of a fan of Eric Steven Raymond's work "The Cathedral and the Bazaar" and often find myself trying to implement some of its ideas when working.
Outside of programming though, how is the principle applied? Some of the issues are similar, some are not.
We must all be familiar with the following scenario.
1. A change in business practice is suggested and agreed by Senior Management.
2. Project brief is formed to implement the above change.
3. Committee/working party/steering group is formed to make sure all parties (departments/stakeholders) are involved in the project (or at least represented).
4. Because of 3, decision making takes longer than desired, since everyone needs to be involved before certain milestones can be passed. In particular, decisions get delayed until the next meeting, and the meetings themselves get delayed depending on the availability of staff.
5. Process moves on. Just prior to the change being implemented, all staff are emailed with a summary of what to expect.
6. Change is implemented.
7. Person X points out that the change is flawed as it fails to take into account something of vital importance.
The scale of the problem uncovered in step 7 can vary wildly, but we'll concentrate on the show-stoppers here. I've heard someone, quite late into a project, point out to the project team that what they were proposing was basically illegal. In another case, deep into the implementation of another scheme (when thousands of man hours had already been expended), someone noticed that something representing about 20 or 30% of total expenditure had been missed out entirely.
On the face of it, this scenario (which I have witnessed more than once) seems like a blunt rejection of the 'many eyeballs' principle. Lots of people are involved in a process and yet it still fails. Major things are overlooked, even though catching them should (in theory) be the approach's major strength.
So, on such occasions - what went wrong?
A lot of criticisms might be made of such a project, and I'm going to talk about four that I've heard.
1. The wrong people were involved in the project.
Certainly, in the example given above, the obvious solution would be for Person X to have been involved from the start. But is that it? More people involved at the group stage? Maybe. But won't that slow things down even more?
Beyond more people there is also the issue of which people were involved. I have seen a number of projects where the people involved were far too senior. There are good reasons why senior people are involved, but often it's the wrong reasons, which sees their time wasted on detail-heavy work in which they have no interest (or knowledge). In addition, little thought is given to how much overhead adding a "disproportionately" senior person will cost in terms of difficulty in scheduling meetings, and so on.
2. The project was poorly led, or there was no sense of accountability.
In many cases it is true that it is either not clear who is leading a particular project, or that their leadership is so weak as to be effectively absent. In addition, in the quasi-public sector I'm not sure it's particularly common for people to face any disciplinary measures, or even censure, for failing.
If we are talking about accountability for sections of a project then this can be a problem, but in the specific example we are considering - where something has been missed - it might not have been possible to make someone accountable for that side of the project. Sure, if we divide up the project so that all legal issues lie with a named individual, then perhaps that person is much more likely to make sure everything is covered than when collective responsibility reigns. I think that might depend on the individual in question, and on the organisation's attitude to making mistakes.
3. Too many people were involved.
Have you ever heard the adage "If Moses was a committee, the Israelites would still be stuck in Egypt"?
It's not the most common of attitudes now, although it persists in some people (men more than women, I find), and it applies to project failures in general and to slowness in decision making in particular. With information gaps like this it seems counter-intuitive that you could ever have too many people involved. I do think, however, that combined with an unclear decision-making process and a lack of accountability (see above) this can be an issue.
4. Project was rushed.
For anyone who has sat through dozens of hours of meetings, or waited weeks for action points to be fulfilled, it might seem incredible that the project was anything approaching "rushed". But in fact, from one point of view, it must have been rushed if glaringly obvious things were missed - unless we're saying that the staff involved would never have noticed, in which case perhaps we do need to consider whether we have the correct people in place.
There are other criticisms, but that's a reasonable summary of the main ones, keeping this as non-specific as possible. What I want to focus on is how we could have managed the process - how we could have harnessed "many eyeballs" as elegantly as possible to remedy issues like this.
Many Eyeballs (An Approach)
1. Eliminate De-Facto Secrecy
The type of work environment I am talking about in this blog is social housing. We are blessed (in my opinion) in that our work is unlikely to be financially confidential, and it is doubtful there will be any intellectual property issues. While the government is keen to encourage competition within the sector, hopefully people do not take this to its logical conclusion.
Anyway, if we presume the change above does not affect any individual person's privacy (e.g. it is not an HR change), then there is no reason why anything should be kept secret. And in fact, for most projects I've worked on, nothing was a secret. Indeed, we were keen to discuss things with as many people as possible. But in quite a few regards we had a kind of "de facto secrecy" in place. If I have a document on a shared drive, ten directories deep and named obscurely, then it may as well be password protected for all the people who are likely to look at it.
What does this mean - email the whole company with your project status notes every week? Almost certainly not (although this is what some people actually end up doing). Email is a pretty poor medium for such communication and, as I have discussed elsewhere, it simply imposes a cost on the whole organisation (most of whom do not care about your project).
The obvious way of running a project, then, is via an intranet-based system, a topic I'll come back to in later posts. For now, if you'd like, think of something that's a cross between Microsoft SharePoint and vBulletin's forum software.
2. Reevaluating Crunch Dates
On most of the projects I've seen operate that weren't particularly large (perhaps involving up to about seven or eight core project team members), emphasis is always given to deadlines throughout the project. In particular, extra emphasis is given to the final deadline: the implementation date, or D-Day, or whatever you choose to call it. I'm calling these "crunch dates".
The logic behind the emphasis on specific dates (or the end date) is fairly clear: they focus everyone's minds on what needs to be done. Indeed, they're so ubiquitous as to be an almost invisible part of the process; obviously we'll have a deadline.
But here's a question: do deadlines actually work?
By 'work' I don't mean whether we always meet our project objectives - obviously we don't, but that isn't down to using this or that methodology; it's just a fact of life. What I do mean by 'work' is whether they ensure the most efficient and effective use of our resources. I'm not sure they do.
Go to any university on, or the day before, a big essay or thesis deadline. I guarantee you will see libraries full to the brim with people desperately trying to print their work or get it bound, or worse still, people still writing or researching the damn thing. A month or so earlier, some of these same people could probably have been found propping up the bar.
The traditional analysis would be that, sure, that happens, but it's just because students are lazy bums. And no doubt they are. But are we saying that this behaviour is limited to students? Then why do I get emails from quantity surveyors in their 40s or 50s at 11pm on the day a project is due in? Why have I been asked by directors for assistance on a presentation they were due to deliver that afternoon?
I do not want to say that it's "human nature" to do everything at the last minute - clearly that's not true. But it does seem like behaviour common amongst certain types of people in certain circumstances. Is it a problem? From experience, yes. The projects I've done near the deadline (and by the lord there have been a few) have common problems: I'll often realise "at the last minute" that a resource I need for the project is not available, I won't have time to properly error-check the work, and it will generally be inferior to something which has had time to settle.
Now, of course, some deadlines are "natural". The government wants RSLs and local authorities to have achieved Decent Homes by December 31st 2010. Therefore, that is our deadline for carrying out the works. Working backwards, we can easily build up a coherent (and semi-realistic) schedule of how many properties we should have made decent each month from project start to the end of 2010. This is slightly different from many projects though - the actual difficulty with Decent Homes (aside from funding and consultation and programming and...) is going to be actually getting the works done. It's a "physical" problem.
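To make the backwards calculation concrete, here is a minimal sketch in Python. The stock count and start date are entirely made up for illustration; the real figures would come from stock condition data.

```python
from datetime import date

# Backwards scheduling from a fixed deadline, using made-up figures:
# assume 4,800 non-decent properties and a programme starting January 2006.
DEADLINE = date(2010, 12, 31)   # the Decent Homes deadline
START = date(2006, 1, 1)        # hypothetical programme start
NON_DECENT_STOCK = 4800         # hypothetical number of properties to bring up to standard

# Whole months available, counting both the start and deadline months.
months_available = (DEADLINE.year - START.year) * 12 + (DEADLINE.month - START.month) + 1
monthly_target = NON_DECENT_STOCK / months_available

print(f"{months_available} months available")                       # 60
print(f"roughly {monthly_target:.0f} properties per month needed")  # 80
```

A calculation like this only means anything because both the deadline and the volume of physical work are fixed.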
A lot of projects don't fit this framework. If your project is "develop a strategy to tackle anti-social behaviour" then you cannot simplistically Gantt-chart it out - the work will be heavily uneven.
So what's the alternative to "crunch dates"? I would suggest an approach that could broadly be termed 'incremental', wherever it's practical. Instead of working blindly in isolation with monthly reviews, we should be working collaboratively, with feedback on an hourly basis. I do not mean that everything is reviewed in full every sixty minutes, but that someone (a project manager, if you like) has a continuous "feel" for how things are going - the health of the project. To push the health analogy further: one could wait for a yearly physical to see how your health is doing, but it's generally a better idea to always know how you feel (physically) and to adjust your behaviour accordingly.
Another advantage of an incremental approach is that when there is "extra" work (e.g. someone notices we've missed something), there is much less incentive to ignore the problem for the sake of meeting a beloved deadline.
3. Release Early, Release Often...
OK, this one isn't mine, but it applies here. As I've suggested, it's better if we are continually updating as we go along. That way, the project team is always aware of how well (or not) they're doing. But more than this, I think (where practical) this should be an opportunity to release more globally.
An example of what I mean (and, more importantly, what I do NOT mean) is as follows. A new rent module is being installed in a housing management system. One would not change the way something behaved if it affected transactions (for instance), as an error would be incredibly costly (from all sorts of perspectives). Changes here would only be made after significant testing in an isolated environment, with much more traditional testing procedures in place.
But what if the change is simply to the way the screen operates (e.g. cosmetic changes)? Could it be possible to warn an arrears team (or a selection of the team) of any changes, and then change their screens on a daily (or more frequent) basis, in line with their feedback?
Clearly a lot of this depends on how systems operate, the type of data that could be affected, whether clients could realistically be updated for individual team members so quickly, and so on. Handled horribly, such changes could lead to system crashes, users giving out incorrect information to customers, or simply people being confused and suffering productivity falls as a result. But I do not think these sorts of issues will necessarily arise.
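The gating itself is not the hard part. Here is a minimal sketch of the idea, assuming a per-team flag; the team and layout names are invented, and no real housing management system is configured quite like this.

```python
# A sketch only: cosmetic screen changes are gated behind a per-team flag, so a
# pilot arrears team sees (and feeds back on) frequent tweaks while the rest of
# the organisation keeps the familiar layout.

PILOT_TEAMS = {"arrears_pilot"}  # teams who have agreed to try the new screens

def screen_layout(team: str) -> str:
    """Return which rent-screen layout a user's team should see."""
    if team in PILOT_TEAMS:
        return "rent_screen_v2"  # updated daily (or more often) from pilot feedback
    return "rent_screen_v1"      # unchanged for everyone else

print(screen_layout("arrears_pilot"))     # rent_screen_v2
print(screen_layout("housing_officers"))  # rent_screen_v1
```

The hard part is the organisational willingness to run two versions of a screen at once.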
Traditionally, any changes to a system (even if only to its cosmetic look and feel) would be handled along the lines I outlined at the beginning. The housing management system my organisation uses is basically a Unix application running in a window on Win9x/2k boxes. A transfer to an up-to-date Windows look-and-feel application (based on IBM's Genero) is scheduled to take place. Towards the end of 2008!
Such changes are ripe for gaps to arise. One user (sometimes the most senior in a department, or the most willing, or even the most helpful) will be selected and will test the new screen in depth. But one user can miss something. Most systems offer more than one way to do something, and it might be that half the department does something in a way this user is not familiar with (this is particularly true where systems have been in place for years and have legacy functions still in use). In addition, testing is an extraordinarily artificial experience for most people, and you will probably find most users get bored incredibly quickly. The only way something can truly be tested is through usage.
The truth of the above was amply shown when my organisation updated its main CRM system. An entire office which had never used the system before was being brought online, taking the potential number of users from 300 to 600 in one day. Perhaps not unexpectedly, the system crashed repeatedly over the first few days, and the problem was identified as simply one of load. Load testing is probably quite a difficult thing to model accurately when you take into account people in different offices using a system for different things, and so on - but still, didn't someone think it might be better to go live a bit at a time? Was it really necessary to have everyone start together?
The details of this particular example aren't very important, but in my opinion it was not. Decisions like this are made (in line with the crunch-date philosophy) because it's sometimes thought it will be too "confusing" if people come online at different times. But confusing to whom? Users? Well, no - as a user in team X you would simply be told that you'd be starting on the new system on June 1st. The fact that other people would be starting on another date is essentially irrelevant in most cases (there are exceptions, of course).
No. I think when people talk about systems being too complicated they are talking from a top-down perspective. This is a traditional problem when looking at economics. The global (or even national) economy can seem incredibly complex and complicated when viewed as a unit. It can be tempting to want to "simplify" things - perhaps by centralising production decisions to a single office/computer/individual, and other ideas of that ilk. But in most cases, from a user's (or individual's) perspective, economics is fairly simple. I go to work to earn an income. My decision to work where I work is based on a number of factors like salary, location, type of work, respect from co-workers and so on. I do not need my decision making to be "simplified" any further by being assigned a job for life. Back on our project: different users using different systems at different times, or certain users using different versions of different modules, is only complicated if one chooses to view it that way. So long as there is solid management of the specifics by those involved, there is no problem.
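A staggered go-live is also simple to express. The sketch below uses invented team names and dates purely to illustrate that each team only needs to know its own start date.

```python
# A sketch of a staggered go-live, with invented team names and dates.
# Each team only needs to know its own start date; the pattern only looks
# "complicated" when viewed from the top down.
GO_LIVE_DATES = {
    "contact_centre": "2008-06-01",
    "arrears_team":   "2008-06-15",
    "repairs_office": "2008-07-01",
}

def uses_new_system(team: str, today: str) -> bool:
    """True once a team's go-live date has arrived (ISO dates compare correctly as strings)."""
    return today >= GO_LIVE_DATES.get(team, "9999-12-31")

print(uses_new_system("contact_centre", "2008-06-10"))  # True
print(uses_new_system("repairs_office", "2008-06-10"))  # False
```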
Another final example: there is a form we ask all residents to fill out when they leave their property (a termination of tenancy form). We wish to make some amendments to increase the total amount of information collected (which will eventually go into our housing management system, which already has fields in place for recording this data). This is simply an addition to the form - we are still collecting everything we used to. We have an "Alpha Version" of the new form, which has been knocked up by three staff members. Traditionally, this would go to their manager, who would give his or her OK before passing it on to some decision-making body (e.g. the Senior Management Team meeting or the Board). Such a process could take two months. But why not simply put the form into use almost immediately after the first manager has seen it? I am presuming that he or she has the ability to spot potential problems (e.g. if any information requested could cause problems with internal audit or the diversity & equality group) and that the later approval is a rubber-stamping exercise. In these circumstances, I see no reason why you wouldn't put the form to use straight away. If we later find that there is more information we have to collect, then so what? Yes, some fields will be blank in the housing management system, but that is already the case now! It is like Richard Dawkins' exasperated response to creationists: "Half an eye is much better than no eye at all!" Similarly, in some cases (but not all), half a form is better than no form at all.
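As a sketch of why the blank fields really are harmless (the field names here are invented, and this is not how our housing management system is actually structured), consider a record type where the new questions are simply optional:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TerminationOfTenancy:
    tenancy_ref: str                           # collected on the old form
    termination_date: str                      # collected on the old form
    forwarding_address: Optional[str] = None   # new question on the "Alpha" form
    reason_for_leaving: Optional[str] = None   # new question on the "Alpha" form

# Old-form submissions still load cleanly; the new fields simply stay blank,
# exactly as they are blank in the system today.
old_style = TerminationOfTenancy("T1234", "2008-06-01")
new_style = TerminationOfTenancy("T5678", "2008-06-02", "12 Example Street", "Transfer")
```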
The advantage of putting the form into use straight away is that, aside from mere speed, we will get real feedback from the people filling the form in and from the front-line staff who assist them. From experience, this feedback is far more valuable than committees stuffed with the most learned of persons...
Summary
So, is that it? Not by a long shot, but that's a couple of things to think about. A project which works collaboratively, with immediate updates to information where possible, and which tests the work produced as regularly as possible, is half-way there. Without wishing to get all zen here, this "flow" of action, testing and refinement is almost always superior to the existing model of project teams, milestones and cataclysmic changes. In project terms, at least.
Before I finish, it should be noted that my remarks here should not be extended to areas where they do not belong. The above discussion could be misconstrued as a re-hash of the reformism versus revolution debates which socialists and other leftists have argued about for centuries. There are similarities, and some of the arguments I have made above do apply, but there is one key difference.
In my model I am assuming that everyone in the organisation is on the same side. Where it is presumed there is conflict (or genuine disagreement about the end goal), then it is quite possible that incremental changes are not at all desirable. To take a housing-related problem: anti-social behaviour. Let's imagine there's an estate which is plagued with low-level nuisance and related problems (graffiti, vandalism). This estate has a three-foot fence around its perimeter, which is not sufficient to keep the perpetrators of such behaviour out. It would be silly to say that we would want to increase the height of the fence by one inch per week - and even if that were practical, it would be unwise, because it is likely that the individuals concerned would continue their behaviour and adapt as the changes took place. If, however, overnight the fence went from 3ft to 7ft, then it's much more likely they might give up altogether. This is especially true if the fence was combined with other measures (e.g. a general clean-up of the area, the introduction of community schemes, a higher police presence and so on). In short, in conflict, blitzkrieg tactics may often be superior to a programme of continuous improvement.
In general: with all projects (whether they are minor or the transformation of human society), key importance needs to be placed on the effects of human psychology.