Sunday, 28 January 2007

Estimating Value : Labour Costs (Part 3)

In my last entry I discussed the different ways software companies could try to increase the revenue lifespan of their products. This interests me because, to go back to my original analogy, there is a labour value issue. I am likely to value a software subscription more if I feel there is a reasonable amount of work going into it.

But what about where that isn't the case? If I "subscribe" to the movie Casablanca then what realistically are the makers going to do with it (that wouldn't horribly degrade the original product)? Perhaps some sort of discussion group or trivia about the movie, but these are things we can already get for free.

Moving away from intellectual property from corporations for a moment - how about the fruits of individual labours?

The work I do today I am paid for in a month's time. Whether my work produces utility beyond this point is beside the point; the exchange is made. I may stay at my work for one month or for ten years - my pay will still be on a month-by-month basis. This is of course because I am a salaried employee.

For others, it is not the same. For others, work pays today and tomorrow. And beyond.

Recently, a campaign involving Cliff Richard and others asked the government to extend the period of copyright on musical performances. A BBC article outlines the issue :

Currently, performers in the UK can receive payments for 50 years, at which point their work goes out of copyright. But Sir Cliff says they should be given the same rights as songwriters, who get royalties for life plus 70 years.

"It seems to me we should ask for parity," he told BBC Radio 4's Today programme. "It doesn't seem just."

According to the singer, many musicians recording in the 1950s rely on their copyright payments as a pension.

"It seems terribly wrong that 50 years on they lose everything from it."


This to me is very interesting. Leaving aside the parity issue, what is suggested here is that people deserve remuneration for labours performed over fifty years ago.


But should this apply to other workers? Should the builders of a house receive a small sum every year for every house they built which remains standing? Should the designer of a bridge receive a penny for every hundred people who walk across it? If not, why not? Well, arguably :

1. Musicians are different from bricklayers because they are using some creative gift which morally should stand on a separate level from mere construction.
2. Musicians are different from bricklayers because the product they contribute towards retains its utility value (i.e. the enjoyment derived from the recording) indefinitely, whereas houses require continuous maintenance.
3. Musicians are different because they have arranged their affairs so as to rely on such royalty income in place of a pension.

Of these, I find argument three the least satisfying of all. I have no pension arranged, but I appreciate this is idiocy on my part and that the world owes me no favours because of my miscalculation. Presumably (if it was considered at all) most musicians didn't factor in living much past seventy, and as such they suffer from the same problems as the rest of Western Europe. Of course, if particular individuals should fall into hardship then the welfare state should be there to support them.

The first argument is stronger, but I'm not really sure that it can rationally be defended. Why should a man who works with his mind be significantly advantaged when compared to a man who works with his hands? Or alternatively, if I happen to be born with musical talent then why should my rewards continue for many decades beyond my labour, when a midwife receives no additional payments beyond her labour (no pun intended)? Should a surgeon receive a tithe from every patient he saves in perpetuity?

Which brings us to the second argument - a stronger restating of the first point. If the public continues to gain "value" from a musical work, then why should people not pay for this right? Well, while my views are fairly clear on this subject, the debate does seem to consistently divide people.

I would say, quite bluntly - no they deserve no special status. The world does not owe you a living. No-one demanded you become a musician, and if after all your efforts you can find no-one to pay you for them then why should you expect payment? Once again, I will re-emphasise that I would not see anyone fall into hardship and I would always encourage assistance to be given to those who require it. But why should we have special laws enacted to restrict the actions of others (i.e. copyright) simply because people made little provision for their futures?

My own view is quite simply that creative endeavours (however noble) should not be privileged in law. In defence of this view I should say the following.
- I am not here referring to contractual agreements between an artist and a company, or a worker and an employer. If your employer says he will pay you a certain sum every month for the next twenty years (instead of paying you up front for your work) then this is a private matter between yourselves. Similarly, arrangements between a software supplier and its customers are not more or less "wicked" if they involve regular maintenance costs. What I am opposing here is the idea that morally such labour should be privileged (and by extension, that such privilege should be defended by law and ultimately by violence).
- Creative acts are undertaken by all manner of salaried staff. The contributions I have made to my own company's systems will not last for decades, but would certainly contribute "value" should I leave tomorrow. Despite this, it does not seem that I should expect special payment for this, simply because my work was of a particular kind. If musicians should expect moneys for decades, then what of set designers, costume designers, make-up artists and others in motion pictures? Do they not contribute to the "value" of a movie? Again, I am not referring to contractual agreements but to what we should like to see enshrined in statute.

That copyright extension campaigns should arise now is no coincidence; they are linked to many other debates on the use of intellectual property in a digital age, but also to the general pensions crisis faced by most of Western Europe. People are living longer, which in turn is outstripping pensions and other savings. Musicians mainly recording in their twenties or thirties may not have considered that they would live past seventy or eighty, but this is increasingly a possibility. They're not alone. Many thousands of others will not have much (if anything) to supplement whatever state income they have at 65 and beyond. We shall all be in the same boat.

There are numerous policies out there supposed to remedy these problems, but it is hard to know whether they will ever be soluble while we persist with individual financial destinies and top-heavy demographic structures. Many of them (like raising the retirement age) seem to be either delaying the inevitable or simply unworkable.

If, however, the legal position of "content holders" retains (or gains) additional privileges and protections, then it would massively increase the drive towards self-employment (already a strong pull due to taxation). Conversely, for employers there would be large advantages in retaining salaried employees and in enthusiastically contributing towards FOSS projects. Or, in another sense : what will happen when the choice is between free amateur YouTube-esque video (the standards of which will improve enormously over the coming decades) and a Hollywood blockbuster for which you have to pay every time you watch it? I'm pretty sure I know what I'd pick. For those who think "amateur" efforts will never rival their professional counterparts, I would like you to examine newspaper circulation figures (some London papers at least actually reprint people's blogs!).

To finish off, if you'll indulge me the following quote:
What the bourgeoisie therefore produces, above all, are its own grave-diggers.
 - Karl Marx


Marx was referring here to the proletariat, whose final stages are arguably in development today with the great transformation of the Indian and Chinese economies. But could it not be considered here too? Perhaps not as the gravedigger of capitalism, but is not open content - produced directly as a result of the fruits of capitalism (i.e. the dissemination of PCs, cheap powerful camcorders, etc.) - the gravedigger of profitability?

Hyperbole? Sure. But it's worth thinking about and I think Dr Marx would certainly approve.

Saturday, 27 January 2007

Estimating Value : Software (Part 2)

So, following on from my comments about value I want to look at software in particular. Simplistically, a new product from a software supplier faces four main competitive threats.

1. Other companies who offer a similar product.
2. Other companies who offer a radically different product which could make their product obsolete.
3. Open source projects who offer similar products for free.
4. The last version of the company's own software.

To explore this, let's look at a fairly high profile example : MS Office, the great cash cow of the Microsoft Corporation.

Tens of billions of dollars have been generated by different incarnations of Word, Excel and the rest, and it's still one of the company's largest earners. Beyond mere financial impact it's fair to say the suite has had a cultural impact. Almost every business I have visited has used MS Office in some capacity, and the one government department I know which tried switching to Open Office actually switched back after staff complaints. And this is very much the point - it's so widely used that almost everyone knows how to use it (quirks and all), and this in turn means it is widely used. On a personal level I've been using Microsoft Word since 1991 and I'm embarrassed to say that it remains on at least one of my PCs at the moment, although I tend to use a text editor for most purposes these days.

So...could anything unseat Office? I'm certainly not aware of any commercial desktop office suite which could even remotely be said to be close to challenging Microsoft in this area. So of the threats above, we can eliminate #1 straight off.

#2 is a different story. For years people have predicted the end of large locally stored applications and the development of true web applications. And recently there's been some movement in that direction - the most high-profile recent addition to this field being Google Spreadsheets. Such products do indicate a new way of doing things in which you wouldn't have to have large client installations on your home or office PC. Instead (the theory goes) you would simply log in anywhere in the world and be able to interact with the same system. If that sort of future develops, Office as we know it could come unstuck. Of course, MS would be (and are already) offering their own alternative products.

However, I think anyone who has used something like Google Spreadsheets for any great length of time would concede they are not quite ready to replace programs like Excel - at least not in the corporate environment where the big bucks are earned. And indeed there are still PCs out there which do not have a persistent, reliable connection to the internet (although this figure must be falling rapidly as a proportion of total PCs). For these reasons and others I think it's unlikely that web apps will realistically challenge MS Office within the next three years.

Skipping to #4 for a moment, I would say the main competitor to MS Office 2007 is MS Office 2003 (followed by Office XP and even '97). The biggest threat to MS's revenue stream is not that other companies will produce an Office killer but that users will simply not bother upgrading. Office has for a while now been dangerously close to "good enough" for a great many users (even though it's still maddening to use at times). If a product is "good enough", even if it's not great, then why bother upgrading?

Yes, 2007 is better than its predecessors, but in the eyes of many users it's not good enough to justify a new licence. Using the terms from my last entry - there simply isn't enough perceived utility value in upgrading. Of course, one cannot explain such an attitude by labour value - the £200 to upgrade to MS Office 2007 is undoubtedly a tiny fraction of the development costs - but it would still represent poor value for some consumers.

Again, this is not to say Office is perfect. I am still mystified why, after fifteen years and many billions of pounds worth of investment, the automatic numbering system in Word is still so dreadful; in Office 2003, at least, the tabling options are non-intuitive at best. But are those problems enough to convince people to upgrade? Seems doubtful. (Especially given that numbering still seems terrible in Office 2007 from what I've seen.)

So, undoubtedly marketing hype will convince some to upgrade. Others may genuinely need the new features offered in the new edition. But is that sustainable indefinitely? What about Office 2009, or whatever that iteration is called?

Case in point. At my workplace we upgraded to Office 2003 recently. This was not because of extra functionality, nor because of user demand, but simply because MS stopped supporting our previous edition. Unless we wished to use unsupported products (given our risk-averse IT department, this is unlikely) we had to move up. Effectively, MS boosted the utility value of Office 2003 by deliberately reducing the utility of their older product.

Will that work indefinitely? It seems unlikely.

Which brings us finally to the point. Much like exhausted oil reserves, the prospect looms that revenues could dry up. Sure, it might not happen now, or even with Office 2020 or whatever we'll have by then. Either way, I think it's fair to say that the commercial desktop office application could eventually become an unviable business prospect (at least in the form it exists now). New business would offset some of this, but again, that can't go on forever - you might get to the stage where everyone in the world has a copy of Office, but could you get them to buy two? This isn't Coca-Cola we're talking about.

One solution which could offset a lot of these problems is a different model for selling software (and other content). Much like the datasets described previously, we'll pay a charge for the software which won't buy the licence outright but will instead be renewable on a monthly / annual basis. It won't be phrased this way, of course. Instead, you'll be paying for server maintenance, free upgrades and unlimited technical support. And a bit like World of Warcraft, if you stop paying your fee, your software simply won't work.

Is that a feasible model? Whether the numbers can be made to work is another issue, but from a consumer's perspective this would seem like a dreadful deal. Even if the cost was absurdly low (say, £1 for a year of Office) you would be in the position where the product could be withdrawn at any time and could undergo price changes in the future. You could also get interesting political consequences where software functionality could be removed centrally, affecting everyone simultaneously. This is even more worrying with regard to content - imagine watching Star Wars one morning and finding out that Greedo shot first? Or, more seriously, that Eurasia has always been at war with Oceania.

Paranoid fantasies aside, what about value? If we've got to the stage where the applications are (almost) good enough now, it's going to take some pretty compelling reasoning to persuade people to pay a monthly fee for the rest of their lives.

And here is where we return to the Open Source alternatives. Yes - Open Office is not as good as MS Office 2007 - I've said it. But when compared to software which (theoretically) could have a continuous charge levied on it, surely this advantage will shrink rapidly? Many users may simply keep old versions of MS Office, of course, but those who want access to XML document types or whatnot will very quickly see that Open Office is "good enough". And so unless Microsoft make their software incredibly easy to pirate (hardly unprecedented, but not easily feasible with online applications) their home-consumer market will slowly vanish. I suspect their business market will not be far behind it either.

Of course it might be that MS can develop enough new features in their products to keep up the utility value in a monthly subscription. At present, it's not clear this will be easy though.

And so the overall problem can be stated thus : how do businesses ensure that consumers continue giving them money after they've already supplied the product? If the product is good enough, the consumers may never need another one. Given that all businesses require a continuous stream of revenue to maintain their operations, how can this be resolved?

Obviously it's not just a Microsoft problem. Outside of the mainstream it's interesting to see the different approaches undertaken by different suppliers to this overall problem. Our asset management system cost something in the region of £60k when first bought. There is an additional support cost of something like £4k per year, which ostensibly pays for our access to support services but, perhaps more importantly, is a condition of the original licence. We cannot opt out - not if we do not wish to cease using the software. It's fairly obvious that in the first year or so we contacted their support desk several dozen times. The second year probably involved about fifteen calls. This year I will be surprised if we exceed five. Now, I am not complaining that we do not have many problems with the software, and our fee pays for additional updates, made intermittently. But the value of these updates (at least in my opinion) is steadily declining. The software is not doing anything particularly special, and there is no patented secret formula at its heart which couldn't be reproduced in a non-infringing manner.

And indeed, in time, the inevitable will come. Some young scamp will produce a FOSS alternative. Such a project will gain momentum and (fairly quickly) will approach the point where the additional utility provided by our software will not justify the annual expenditure. Only companies which enthusiastically innovate and develop their products (i.e. leading their markets) will have a chance at retaining profitability. The rest shall perish.

Wednesday, 24 January 2007

Estimating Value : Now and Tomorrow (Part 1)

When you get a cab home after a night out, do you ever get the feeling - maybe five minutes after you're home and comfortable - that perhaps the £20 you spent wasn't quite worth it? I mean, you could have got the night-bus for £2 or walked for free, but you didn't. You valued the comfort, warmth or security of a cab over twenty pounds. Or at least, you did half an hour ago. Once you're home such things don't necessarily seem as clear.

Or perhaps you've bought something ludicrously expensive on an impulse. Maybe a stereo, or some other shiny electrical goods. Seconds beforehand it all seemed like a good idea, but now...who knows? I suppose the question is basically : how much are things really worth?

These things have certainly happened to me. Trying to determine what something is worth is something we as individuals struggle with on a daily basis. Beyond the personal, we have to do the same in relationships and at work. And beyond the mundane and day-to-day, this topic is at the heart of the most challenging debates in economics and philosophy.

I do not claim to be able to shed much light on things as an economist or philosopher. I do however wish to consider things from the perspective of an individual, or an individual business, when an investment is being made.

Consider recruitment. In some places, recruitment is a long and arduous journey, fraught with perils, difficulties and barely comprehensible procedures. My workplace is no exception. At one point in this journey a question is asked - perhaps the most important of all : "How much does it pay?"

And so my department has just started recruiting for two new posts. As expected, the topic of salary inevitably arises. Now, the final rate the successful applicants end up getting will depend on their specifics, but obviously we need some indication now so we can advertise the job. As usual this will depend on input from management, HR and finance. And in this case it also needs input from yours truly.

Can't We Use A Random Number Generator?

To be quite honest with you, I'm not sure I know how to approach the question.

How much is this position "worth"? God knows. Of course, analytically I can think of a few approaches.

1. Market Analysis
We write the job description and person spec and then simply compare with the rest of the market. What do other organisations pay for similar roles? What do similar jobs in our own organisation pay?

But then, what does that mean in terms of individual variance? What price should we pay for the premium of the better candidate? And how do we know the market has got this right? Sure, if other companies are struggling to recruit or retain certain positions then we might assume they're not paying enough. But what if they're paying "too much"? How might this be determined? The overall impact on their cost structure might not be significant, and it's not like many businesses publish statistics on how many staff are "too good" for the role they do.

2. Utility Value
If we found that similar jobs were paying £100k then we would of course abandon recruitment. "It just wouldn't be worth it," we might say. But what do we mean by this?

Well, one way of looking at it would be that value generated by such a position would not justify the expense of hiring someone. But again, what do we mean by "value" here?

Sure, with some jobs it's easier - a salesman might be able to show how much revenue he or she has personally produced over their peers. What about where it's not so easy?

The only simple answer is that we make an educated guess. Are we likely to boost revenues by hiring person x? Cut other costs? Increase productivity? Meet statutory obligations? Reduce risk? Etc. etc.

3. Labour Value
One way to assess what an individual's work is worth is the cost involved in someone else replicating this work with freely available resources. Picasso's paintings are worth so much partially because it is impossible to properly duplicate them. Conversely, a bricklayer will find his work easily copied by most people with basic training. The "utility" produced by a staff member is less valuable if anyone else can do the same.

And so the value of a man's work will depend on how much other men will do it for, or what an outsourcing company will do it for, or even what a machine can do it for - this is his labour-value.

4. Morality
It is silly to assume that discussions of pay do not involve some notion of morality - that is not the case in any real-world example I have encountered. At least rhetorically the level of pay a person receives will depend on :

- How much they need to live on.
- The presumed difficulty of what they do.
- The presumed unpleasantness of what they do.
- What other people get (both doing similar jobs and beyond).
- What their pay could alternatively be providing.
- Their personal characteristics.

Some of these will be more common in some environments than in others but I think it suffices to say that "moral" arguments over pay will probably be more important in the public sector or indeed anywhere with a particularly high profile.

Of course, all of these arguments will bleed together. If I feel my pay is unfairly low then I will probably think this for a given reason - i.e. that my work is very important to the company, or that other people in the same sector earn much more for similar jobs, or that my pay is not enough to live on. Similarly, even if you generate a vast amount of utility-value for your employer, if there are many thousands of other employees willing to do your job for very low pay then it is likely you will not earn much.

How much is data worth?

So much for employees. But what about other things? In my last entry I discussed buying OS map data at work. I suppose my overall point was that I didn't feel it was worth it, in the form the data is currently supplied. This topic was discussed in a bit more detail in another blog, on which you can find some of my comments. See : http://giscussions.blogspot.com/2007/01/gi-is-worthless.html

But can the schema above help us to evaluate how much other investments like this are worth? How much is this mapping data really "worth"?

By definition, the market rate is the price we were quoted - £16k. For reasons discussed elsewhere, this is probably too high. For our purposes here we'll use a much lower £5k value.

Would this be worth it? Well, analysing the investment in terms of utility-value, we are back to the problems of measurement. We might assume (or hope) that costs can be lowered by GIS data - but how do we quantify this? The OS's marketing copy suggests we might expect to save £2 for every £1 spent in the first year. Which sounds pretty good but, as I've said, this is what everyone says. Realistically, can we prove it?

Probably not. However, given that our overall budget is many times larger than £5k, we would only need a modest reduction in costs for our investment to be repaid. If, for instance, we had a 0.05% reduction in maintenance costs across the board, then this would represent double our spend saved in one year. For our purposes here we'll say this is feasible, although I have some real concerns about where time saved "goes" after changes in processes and systems. That's a subject for another time, however.
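To make that "0.05% of the budget" claim concrete - a quick sketch, where the overall maintenance budget of £20m is my assumption purely for illustration (only the £5k data cost comes from the text above):

```python
# Hypothetical figures: only the £5k data cost is given in the text.
data_cost = 5_000                 # two-year licence for the map data (GBP)
maintenance_budget = 20_000_000   # assumed annual maintenance spend (GBP)
reduction = 0.0005                # a 0.05% across-the-board saving

saving = maintenance_budget * reduction
print(f"Annual saving: £{saving:,.0f}")            # £10,000 - double the £5k spend
print(f"Payback multiple: {saving / data_cost:.1f}x")
```

The point is only that against a large enough budget, even a saving too small to measure reliably would repay the data cost.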

What about labour value? Well, hopefully here the link with my previous entry should be clear. Costs quoted by a company for a dataset like this must be compared not only to commercial rivals and the cost saving expected but also the cost of internally producing the data (or doing a similar thing in a different way).

Indeed, almost any dataset like this could be evaluated by the total number of labour hours it would take to make a satisfactory reproduction of it (not everything is reproducible of course).

I will provide a very crude guess and say that mapping our housing stock to the quality we realistically require would take at least a thousand hours of someone's time (this number should not be taken as a serious analysis, I hasten to add). Which we might redefine as £20,000. And in this very simplistic example the £5k investment wins easily.
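That conversion from hours to pounds assumes a fully-loaded labour rate of £20 per hour - the rate is my inference from the figures, since the text only gives the thousand hours and the £20,000 total:

```python
hours_required = 1_000   # crude guess at mapping our stock in-house
hourly_rate = 20         # assumed fully-loaded cost per hour (GBP)
licence_cost = 5_000     # quoted external data cost (GBP)

in_house_cost = hours_required * hourly_rate
print(f"In-house: £{in_house_cost:,} vs licence: £{licence_cost:,}")
# On a one-off comparison, the £5k licence wins easily.
```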

The morality of the investment is perhaps not the same as with recruitment, but there are still concerns. I would personally suggest it is immoral to spend such a sum on data which should be in the public domain, or which has so many restrictions on it. These can likely be ignored here. Other moral concerns (as to whether we should "waste" money on maps at all) could easily be rebuffed if we could demonstrate likely savings as a result.

Ongoing Costs

And so it all seems straightforward. The £5k figure is, we're saying, a market-generated figure, and it's easily justified when looking at the labour-cost of the project or the likely utility we'll receive.

Unfortunately, things do not end here. We have considered barely half the issue. For a start, there will be implementation costs and the like, but we shall ignore those for now. What we have not considered is the ongoing cost of the data itself.

The £5k is not to buy the data. It is to licence it. Or, to put it another way - to borrow it. After two years we have to pay £5k again. And then again, two years later. This is not layaway - the data never becomes ours, and if we stop paying we lose all functionality immediately.

And so our cost-benefit analysis becomes more complex. The utility-value will remain largely unaffected while we keep paying, but the labour-value changes.

Essentially, we did not want to do it in-house because the costs were too high. Our own map would have cost £20k. Hugely uncompetitive. But, looking forward :

          In House    External Map Data
Year 0    £20k        £5k
Year 2    £21k        £10k
Year 4    £22k        £15k
Year 6    £23k        £20k
Year 8    £25k        £25k


...and so on. The specifics of the table are of course completely speculative, but one can see the point in any case.
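The break-even logic behind that table can be sketched in a few lines - a toy model, assuming a one-off £20k in-house survey plus roughly £500 a year of upkeep (my simplification of the table's speculative figures) against a £5k licence renewed every two years:

```python
def in_house_cost(year):
    # One-off £20k survey plus ~£500/year to keep the map current (assumed).
    return 20_000 + 500 * year

def licence_cost(year):
    # £5k paid up front and again every two years, forever.
    return 5_000 * (year // 2 + 1)

# First year at which the cumulative licence cost overtakes doing it ourselves.
breakeven = next(y for y in range(0, 21, 2) if licence_cost(y) >= in_house_cost(y))
print(breakeven)  # 8 - beyond this, licensing costs more than drawing our own map
```

Whatever the exact numbers, the shape is the same: a flat-rate licence with no end date must eventually cross any cost curve whose ongoing component is smaller.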

Simply put, the issue is that after the initial production of the dataset (drawing the map, in this case) the labour-cost drops. Maps do require investment to keep them up to date but, except for specialist maps, this investment will not be the same amount as it took to draw them in the first place. Places change, but not that fast. To repeat : most of the estates we own and manage have had the same street names in and on them since the 1960s and 1970s. Where there are changes in road names or the location of green areas these would almost always be :
- involving us anyway
- well publicised
- with a large amount of prior notice

And so for these types of amendments (surely 1-2% of the full dataset per year) we would be well placed to make changes ourselves.

To take another example : the Royal Mail address manager software is licensed at £1,250 for the first year and £500 for subsequent years. But how many of Royal Mail's postcodes change in a year? For our new properties there are new postcodes, certainly, but this is handled through a separate arrangement when they're built. So of the remainder, how many change on our estates? 5%? Less? I suspect the figure is closer to half a percent.

To look at it another way: if we buy the data to check our own database, we might find we make corrections to 10% of all records in the first year. In the second, barring some training or data integrity issues, we're unlikely to make corrections to more than half a percent of all our addresses (and there's no reason to assume we couldn't get it down to 0.01%). Paying £1 to correct each wrong address (as in the first year) is probably justifiable. Paying more than £20 per wrong address seems less so.
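The per-correction arithmetic runs as follows. The database size of 12,500 records is my assumption, chosen so that 10% of records at £1 each matches the £1,250 first-year fee; as the correction rate falls towards 0.01%, the cost per fix climbs well past the £20 mark:

```python
records = 12_500        # assumed database size (so year one works out at £1 per fix)
first_year_fee = 1_250  # Royal Mail licence, year one (GBP)
later_year_fee = 500    # subsequent years (GBP)

year1_fixes = int(records * 0.10)    # ~10% of addresses corrected in year one
year2_fixes = int(records * 0.005)   # ~0.5% corrected in year two

print(f"Year 1: £{first_year_fee / year1_fixes:.2f} per corrected address")  # £1.00
print(f"Year 2: £{later_year_fee / year2_fixes:.2f} per corrected address")  # £8.06
```

At a 0.01% correction rate (one or two fixes a year) the same £500 fee would be hundreds of pounds per corrected address.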

Whichever analysis you feel is more convincing (or relevant), you hopefully get the idea that products like these become less valuable as time goes on - in terms of both labour and utility-value. The cost also, thankfully, declines with the Royal Mail software, but nowhere near as quickly as the value does.

Subscription Models

The above examples might be thought of as rather silly, but I'm sure you can see the point. If payment for something reflects neither the value obtained from it nor the (labour) costs to the supplier, then irrespective of our personal morality the product will become less attractive to clients.

Now, if you are fortunate enough to be a monopoly or a state-backed entity, this might not be a concern. The BBC's value might have declined for some customers in recent years, but while you can still be imprisoned for not paying the licence fee this may not be as important to the BBC Governors as it otherwise might be.

Where you do not enjoy such state protection or where consumers enjoy choice things will be different. And by "choice" here, I do not necessarily just mean competition from other commercial entities offering a similar service or product. That is a common misunderstanding.

For example, suppose you make a widget which allows corked wine bottles to be opened easily without the use of a corkscrew. You enjoy revenues of £100m. This revenue is potentially threatened not just by competition but by choice in a range of areas :

- Consumers could buy another firm's widget which does the same thing.
- Consumers could buy another device which achieves the same thing (e.g. a corkscrew).
- Producers of wine could stop using corks in their bottles.
- Consumers could switch to beer instead of wine.
- Consumers could switch to Islam and forsake alcohol completely.
- Consumers could borrow their friend's widget and not buy their own.
- Consumers could make something in their own home which achieves the same thing as your widget.
...and so on.

And so our mapping dataset is not just threatened (as we've seen) by amateurish, slightly crazed DIY projects like mine. There are already projects producing public domain maps of varying quality. With a steep decline in the price of basic GPS equipment, combined with better free software for generating maps, user-produced maps are much more of a viable option. If an organisation were to seriously contribute to such an effort then mapping data for a given area could feasibly be collected in months. And as stated, such Open Source projects already exist.

The analogy here with Wikipedia is of course obvious. And Wikipedia has many detractors, but their complaints usually centre on how easy it is to make amendments and how (theoretically) anything could be wrong at any moment. I personally feel these concerns are misplaced, but if we so desired we could easily avoid them here. Wikipedia's "problem" is that it needs (or at least, permits) very fast updates to account for changes in current events, etc. With a map of a local authority there is little reason to think things will change very quickly at all, short of the Rapture.

And under such conditions we could restrict updates to certain persons, or require all updates first be approved by a given set of individuals (again, if we found this necessary). And speaking selfishly, for those of us who live in high population density areas the task goes from merely achievable to almost trivial. London boroughs (in which many thousands of people live and work, and whose local authorities have many millions to spend) usually only cover something like 20 to 60 square miles - which even given the patchwork of streets would be easy if even a few people from each area got involved.
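A rough feasibility sketch backs this up. Every figure below is an assumption I have picked purely for illustration (street density, walking pace, weekly volunteer hours), so treat the output as an order-of-magnitude estimate, nothing more.

```python
# Order-of-magnitude sketch: how long might volunteers take to survey
# the streets of one London borough with handheld GPS?
# Every constant here is an assumption for illustration only.
AREA_SQ_MILES = 40            # mid-range borough size from the text
ROAD_MILES_PER_SQ_MILE = 15   # assumed street density
SURVEY_MPH = 2.0              # assumed walking pace while logging
HOURS_PER_WEEK = 3            # assumed casual effort per volunteer

road_miles = AREA_SQ_MILES * ROAD_MILES_PER_SQ_MILE   # 600 road-miles
hours_total = road_miles / SURVEY_MPH                  # 300 survey-hours

for volunteers in (1, 5, 20):
    weeks = hours_total / (volunteers * HOURS_PER_WEEK)
    print(f"{volunteers:>2} volunteers: ~{weeks:.0f} weeks")
```

Under these (invented) assumptions a lone surveyor needs about two years, but twenty casual volunteers cover the borough in weeks - consistent with the claim above that a modest group makes the task almost trivial.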

Of course, you could argue that the map would never be as up-to-date as a professionally produced centrally authorised map. While this might be true, I would point out that the maps we use in my place of work were produced in the 1970s. And they suffice - we simply do not need (for day-to-day purposes) information that is razor accurate. Similarly, there might be a higher error rate in an open source map, but for any non-trivial query a surveyor, architect or planner would visit a site and make their own measurements. People do not build houses without visiting sites (or at least, they shouldn't) and we shouldn't over-emphasise how accurate such data needs to be.

And in this way open source projects can reduce the commercial value (to nothing) of certain products. Unless things change, I am unlikely to buy a commercial video file player for my home PC because the open source (and thus free) VideoLAN (VLC) is by far the best product I have tried. The utility value of such a player remains as high as ever, but the market value has collapsed - or to put it another way, the labour-value of installing a competitor has been reduced to zero.

Of course, with data (and software) this model will not necessarily work for everything. I would not want stock quotes that were possibly out-of-date or incorrectly edited. Now, I'm sure almost any project could work given the right set of people, but it will be made much harder depending on the level of accuracy required and the volatility of the dataset in question on one side, versus the ease of collecting the data and the number of likely reliable volunteers on the other.

Where the model does work, a small irony might be enjoyed. Open source projects (whose principles are variously described as either libertarian or communist) will help, over time, to seriously diminish the "profits" of government suppliers through competitive market forces.

Who'd have thought, hey?

In the next article I want to look at how these sort of arguments affect the procurement of software and also how musicians and other workers are paid.

Postscript : Additional Notes On Terminology
I use the term 'Open Source' here to refer to projects where anyone can make a contribution, rather than the more common software reference to visible source code. In most datasets there is no "source code" as such, we are only interested in raw data. I use Open Source here because it is a commonly understood term.

However, as Richard Stallman and others are quick to point out, "Open Source" is not the same as "Free Software" (which is about guaranteeing users' freedom to run, study, modify and share software). Stallman also emphasises the importance of terminology when discussing these issues. I would agree with this general point, but I think it suffices, when discussing things in a non-technical sense, to use any term so long as it is properly defined.

In general I am referring to (and saluting) projects which :
- do not charge for their end product (regardless of whether they receive money)
- where the project puts no substantial restrictions on the use of the datasets.
- are developed at least partially through contributions from users.

In the above by "substantial" I am not including restrictions placed by licences like the General Public Licence or some of the Creative Commons licences.

While I understand how important legally the licence issue is, I do not find it an intellectually stimulating issue. I feel in a sane society most licences over data would simply be unenforceable and as such meaningless.

Friday, 12 January 2007

The Tragedy of the Enclosed Lands

I want to talk a little bit about the use of mapping data, something which periodically piques my interest at work.

First though, I want to quote parts of an email I received a while back (in 2005), when enquiring how much it would be to licence some electronic maps for use internally at my organisation :
This email is to confirm that you have successfully saved estimate reference 6610 on the OS MasterMap Data Selector, as detailed below.

This estimate excludes VAT and is valid until the date shown below, after which it will expire; it will remain on the Data Selector for six months from the date saved.

Price: £16,966.02

Area selected: Pre-defined polygons - Single London Borough

Licensing information
Licence period (years): 2
Number of terminals: 1


So, just to clarify - a single-terminal licence to use mapping data for an individual London Borough, for two years, would cost almost seventeen thousand pounds.

I mention this because as I say, I'm interested in GIS technologies. I was actually going to apply for a job at the Housing Corporation a year ago to work in their GIS team, but I decided against it. More generally though, it's obvious how cool applications of mapping data can be.

In the field I work in, housing (asset management), the use of maps would be particularly beneficial. On top of this, I also help manage some patch-related data for our housing management team. Being able to link all this into some automapping system would make many tasks I perform so much easier. In addition, visual tools are staggeringly useful when persuading people or explaining things to a new audience.

Being able to show a map which outlined that 80% of our Decent Homes failures are in one half of our stock would be a lot more powerful when persuading our board to release extra funds, for instance. The human mind responds strongly to visual stimuli, and maps in particular plug straight into a particular part of the brain.

Of course, to do all this, we need a certain amount of geographical data. There's the maps themselves and then there's geocoding our properties. Which leads to enquiries like the above. No doubt we could afford that, but in good conscience can we really spend £16k on what is effectively a glorified A to Z? £16k after all could supply brand new kitchens to four families, or ensure 8 homes have brand new central heating systems which will cut fuel bills and keep people warm should the weather change. We have to consider the opportunity costs.

Don't get me wrong, the price listed above is probably way above what is required, and there are undoubtedly other options which would be more cost-effective. But cost is not the only problem. Last year I attended a demonstration by some companies looking to supply us with a GIS solution. I did not get to hear any costs at this point, but what maddened me somewhat was the level of restrictions the data suppliers wanted to put on any information they gave us.

These included :

- Insisting that if we put map data on our intranet we'd have to buy a licence for every potential user, i.e. every person who has access to our intranet. Considering this is over a thousand people now (and growing) this is fairly ridiculous.

- Advising us that we would only be able to print out maps (to include in publications to customers) if we got additional licences for this.

- If we decided not to renew our licence for the data, we'd have to destroy all maps produced/printed as well as the more obvious step of deleting all data we'd produced and uninstalling the software.

Now, it is probably the case that in any pre-sale negotiation we could get out of some of these clauses (and they might not even be enforceable legally speaking) - but the fact people selling these sorts of services believe they're reasonable in the first place speaks volumes.

Free Lunch?

It's well accepted that in most cases, there's no such thing as a free lunch. In this particular case, there is a certain cost incurred by those producing / collecting information, organising it and making sure it's accurate and so on. The price we are willing to pay will depend (to an extent) on the expected use we will make of such data weighed against an estimate of how much it would cost to produce the data ourselves (or from another source).

So, what is a fair price? To be honest, I've no idea. £16k could be a reasonable price, but remember that it is payable every two years. This type of "subscription model" (for want of a better term) makes sense with some types of data. Stock prices, for instance, change so often that one must keep up-to-date (sometimes up to the minute) with the latest change. So if I'm interested in that sort of information, I'll consider a subscription on an ongoing basis.

Not so with mapping data, or at least mapping data of this kind. Our properties (by and large) do not change locations, and where something major did occur (e.g. one of our blocks being demolished, or a new road cutting through one of our estates) we would certainly be aware of it - probably before any mapping agency. 80% of our stock has stayed the same for the last five years and probably will for the next five years too. So why would we want to be in a position of paying every two years for such information? Well, we wouldn't.

Beyond this, if we did decide to make a considerable outlay, would being able to use the data on one machine be adequate? Well, no doubt we could engineer our processes in such a fashion so this worked, but by and large for the information to be useful we'd want everyone to have access to it all the time - and this would include publications sent out to customers. This is fairly self-evident.

DIY?

And so what will we do? As is often the case, my answer is DIY : we'll collect the data ourselves. As ridiculous as it sounds, because of the restrictions on the data we'll be much better off simply collecting the information ourselves and using any one of a number of open source applications to generate the maps. Or so is my intention.

GPS equipment is now within the reach of the average citizen and a procedure for geocoding our properties could easily be included within our stock condition procedure or even included in with caretaker duties or a void routine.
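As a sketch of just how lightweight such a geocoding procedure could be, here's a hypothetical routine for logging a property reference against a GPS fix read off a handheld receiver. The file layout, field order and property reference format are entirely my own invention for illustration.

```python
import csv

def record_position(path: str, property_ref: str,
                    lat: float, lon: float) -> None:
    """Append one surveyed property's GPS fix to a simple CSV file.

    Hypothetical sketch: a caretaker or surveyor reads lat/lon off a
    handheld GPS unit during a routine visit and it gets logged here.
    """
    with open(path, "a", newline="") as f:
        # Six decimal places is roughly 10cm of precision - far more
        # than a consumer GPS unit actually delivers.
        csv.writer(f).writerow([property_ref, f"{lat:.6f}", f"{lon:.6f}"])

# Example: logging one (invented) property during a stock condition visit.
record_position("geocodes.csv", "BLOCK-017/FLAT-03", 51.5074, -0.1278)
```

The point is not the code but the triviality of it: the capture step adds seconds to a visit that is happening anyway, and the resulting file is ours to use without licence conditions.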

But...isn't this a bit ridiculous? Aren't we going against a sensible division of labour? Instead of information being collected by experts en masse, we're going to be taking a piecemeal amateur approach. Admittedly with modern GPS equipment it shouldn't be too taxing, but inevitably the quality of the end result will be significantly lower than a "professional" approach. But even given this, the DIY approach will still prove superior because we won't have to worry about any sort of legal nonsense when we have the dataset.

Let's imagine we're not the only ones doing this for our area. A local business might feel the same, and even the local authority might want to avoid paying the OS fees as well. So, in such a hypothetical situation there would be four parties (the OS, ourselves, the LA and a local business) collecting the same data. Totally unnecessary duplication of effort.

But in some instances, duplication is not really duplication at all. There are numerous companies making shoes, but each one does things slightly differently (either in terms of design of the shoe, or the production technique, or whatever). Such diversity is desirable because it increases the chance of innovation and means that "better" designs might predominate.

This can hardly be said to be the case with mapping data - if we're all using the same standards (which we are) then if everyone does things perfectly then theoretically we would end up with identical end "products" - i.e. exactly the same data. We are measuring objectively real conditions and as such there is no artistic or creative flair involved. As such, duplication is not desirable, it is merely waste.

Which leads me to my point. In the United States, the government is restricted from holding certain kinds of copyright and so their mapping data is largely in the public domain (not counting classified military data). Not so in this country. Maps created by government employees are withheld from citizens unless they spend money. As seen in my quote above, in some instances these are not insubstantial sums of money.

In my second example, the cost was not an issue (since I never found out what it was) but the restrictions seemed utterly absurd to me from an operational standpoint. Even if each individual licence was priced at £50 per user there would still be an additional administrative burden in checking we were compliant, and the threat that if we ever cancelled our licence we'd have to undertake a significant audit of internal data to delete everything we had once generated.

I believe these "quirks" are not the result of bad management decisions in the companies involved but rather are an inevitable outcome of the "private data" business model. The reason companies have to be careful about licensing is that if they're not, some wiseguy will simply buy one copy of their software and then put it on the web for everyone to use for all time. To maintain their viability they have to undertake measures which are deliberately annoying to end-users. These often include technical restrictions.

To use an example from a related field - take the Royal Mail postal database. You might think that Post Code data would (as a list of facts) be public domain information. Infuriatingly, you wouldn't be entirely correct. And so, if you want to check your post-code data is accurate you have to hand over a thousand quid (for one user) to the Royal Mail. In some senses this is worse than the mapping data since the Royal Mail are a monopoly with regard to the issuing of post-codes (and you have to pay to get a new road post-coded too!).

In any sensible system, there would simply be a gigantic .CSV or XML or whatever file of all UK addresses hosted in a number of locations free for all to download so people could do interesting or innovative things with the data. Instead, we have slightly rubbish software which deliberately makes plain text exports difficult to do.
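To illustrate just how usable such a file would be, here's a minimal sketch of querying it with nothing but the standard library. The column names and sample rows are invented for illustration; they are not the Royal Mail's actual schema.

```python
import csv
import io

# A tiny stand-in for the hypothetical freely downloadable address file.
# Columns and rows are invented examples, not real Royal Mail data.
sample = """postcode,thoroughfare,post_town
SW1A 1AA,The Mall,LONDON
SW1A 2AA,Downing Street,LONDON
"""

def postcodes_for_town(data: str, town: str) -> list:
    """Return every postcode whose post town matches, case-insensitively."""
    return [row["postcode"]
            for row in csv.DictReader(io.StringIO(data))
            if row["post_town"].lower() == town.lower()]

print(postcodes_for_town(sample, "London"))  # ['SW1A 1AA', 'SW1A 2AA']
```

A dozen lines, no licence audit, no export restrictions - which is precisely what the "slightly rubbish software" model is designed to prevent.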

Such things are prime examples of a sort of anti-"tragedy of the commons". If this data was owned collectively (that is to say, was not owned at all) and such basic factual documents were not seen as money-making opportunities, we would have so many advantages. Instead, we have a situation where hundreds of hours are being wasted simply because of outdated business models sadly adopted by our government. On top of this, such restrictions are stifling innovation. Google Maps may be able to afford to license the OS data but the average bedroom developer cannot, and so there is a less than optimal level of development in this area.

I actually believe that mapping data will be de-facto public domain within the next decade. Until then though, we have alternatives. I intend to submit all the data we collect to the Open Street Map project (http://wiki.openstreetmap.org/index.php/Main_Page), which is an excellent attempt to bypass some of the legal nonsense surrounding the copyrighted datasets. Collectively, we can tear down the enclosures. We can rebuild a commons which can help organisations of all sizes innovate with GIS technologies (surely something which can only grow with better mobile devices?)

I'll let you know how things go.

Friday, 22 December 2006

Harnessing "Many Eyeballs" : Part 2 - Utilising Spare Capacity

How much wood would a woodchuck chuck, if a woodchuck could chuck wood?

Traditional ideas of measuring productivity have tended to rely on some kind of "work rate". You know the sort of thing : You can make ten widgets per hour, you work for seven hours per day. Therefore 70 widgets a day, 350 a week, and so on.

Job performance is therefore based on how well you produce widgets compared to this figure. The righteous exceed targets every month while the slothful and the wicked fall behind.

As we've moved away from industrial production, work targets have obviously got a bit more sophisticated, but there is something quite seductive about this idea of a "work-rate" and so we still feel its influence. It's got a good pedigree after all, and it's nice and simple. I watch how long it takes you to do one letter; I can then work out how long it should take you to do the other 1,999.

There are problems with this sort of thinking though, as I'm sure you're aware.

1. The work people do is not necessarily as measurable as our "widgets per month". Even in call-centres (where calls per day is a simplistic target for anyone to work out) straight-forward targets are frowned upon for failing to measure quality and customer service.

2. Regardless whether the work is measurable, it's often not as straight-forward as the above example. I can type 50 words per minute, but I cannot write a 2000-word report in 40 minutes. The majority of the time will be taken up in research and in actually thinking about the issues. It's debatable whether you can measure either of those at all.

And so sometimes we have to abandon linear work rates (and with them, completion estimates). A problem I'm given may take a week to solve, but if Google is my friend then it might take minutes or even seconds. And until it's done, you won't know.

3. A third, slightly more controversial argument is that "work rates" ignore the fact that people get bored. I can stuff ten envelopes in perhaps 2-3 minutes and by the time I've reached 50 I've become an expert. By 100 I'll have probably passed my peak. By the time I get to 1,000 I will be in tears, almost incapable of continuing.

And it's not just dull repetitive work where this happens. I personally find I can only do certain kinds of work in short bursts (perhaps only ten minutes at a time). Beyond that I simply start to shut down.

It could be that this is because I have some sort of brain defect and my concentration span is simply too low. But even if this is the case, it seems universally true that humans find it difficult to concentrate on certain things for very long. There are mountains of psychological theories on this subject, but the consensus seems to be that there are limits to how long we can do a single activity. What this limit is depends on the individual (or the theory) and the type of activity: 18 minutes is one wall, 40 minutes is another, while 30 seconds is the top end of one theory. In any case, most agree that it's not very long - certainly not more than an hour.

Surveying the landscape...

To look at the problem in another way, let's look at modern offices. Or more specifically, the office worker. What we find is that (especially among those in junior positions) the amount of time wasting reaches epic proportions.

Now time wasting is not a new phenomenon - indeed the British institution of tea-drinking seems built to achieve maximum work stoppage without officially engaging in industrial action. But the modern world has given us different avenues for our time wasting.

Armed with web-accessible PCs, office workers are - along with perverts looking for pornography and teenage girls seeking attention - one of the primary forces that drive the internet. Visit forums, blogs and social networking sites, and a good proportion of the posts will come from the army of office workers and students, both of whom should technically be busy doing other things.

In their time wasting they are capable of reading and writing enormous amounts of information. I have been told by more than one person that my entire blog has been read in a series of time-consuming sittings. To put that into perspective, that's well over 200,000 words - easily the length of a paperback novel. I can assure you that this was not particularly compelling subject matter, which gives you an idea of the power of this force. The only excuse for reading such drivel is a kind of pathological, unstoppable boredom.

Indeed, one blogger noted that most office jobs these days could, if all time wasting was ceased, be completed in a single hour of total focus. They were probably correct.

Who dares to defy the company internet usage rules?

In some cases it is likely that some of these people are simply lazy, not doing their jobs and therefore engaged in gross misconduct. What proportion of people this is true for, I do not know. But I do not hear of that many people being fired for such activities, although it does happen.

What is interesting to me is that I know from first-hand experience that some of these people who spend hours reading blogs (or whatever else) are not bad employees. They are not shirking their jobs. Indeed, in many cases they are keeping well up to date with all of their work duties. Some I have known to rise through the ranks even while maintaining this extraordinary time-wasting habit.

The immediate reaction is to think that these people are simply under-utilised. If they can afford to waste time, they've obviously not been given enough to do. Or so the logic goes. If they usually complete 10 invoices in their 7-hour work day, and we know they waste at least two hours a day, then logically they should be able to do 14 invoices a day, right?

Wrong!

This sort of thinking assumes the old industrial / linear work-rates we discussed earlier. We cannot make assumptions about what someone could do in a given time, because they may not be able to concentrate on invoices or reports for seven hours a day, even with breaks. Without this time on other tasks they might simply go mad, or start stealing, or start making serious errors.

But all of this probably seems fairly unrelated to "many eyeballs". So what, our staff can't concentrate on their work and therefore they take unauthorised breaks. What does this mean?

Well, what I find interesting is that the things people do to "waste time" (or take a break) are not to get a breath of fresh air or stretch their legs. It's not even to rest their eyes from a PC screen. It's to view a website on the same PC screen and spend long periods reading often fairly dull information, or writing posts on forums.

I remember when I started my first full-time job, just after leaving university. I went from a 2-hour day of self-directed study to an eight-hour day (plus two hours commuting), talking on a telephone all day. And so for the first three months, almost every day I got home from work in the evening, walked into my bedroom, and fell fast asleep. I was simply not used to that type of regimented work day, and could not cope. I was exhausted.

Now, I suspect if you asked me how I felt at the end of most of my work days I'd probably still say "Tired.". Yet I rarely fall asleep when I get home. Instead, I spend hours on my PC, or hours talking to friends, and last year I worked for another four hours in the evening in a second job.

The point is that while I am tired of whatever I am doing, I am not tired generally - at least not compared to how I was. I can still function mentally, and carry out a range of work-like tasks.

I suspect the same applies to the vast legion of office workers who can't do any more work beyond 3:30pm and are simply waiting for 5pm to come. They are bored of what they are doing, but they still have mental energy left. If their work was more diverting, perhaps they would feel differently. Indeed, I would note behaviour from my own office. When 5pm comes, the people who staff our call-centre collectively rush from their floor to leave the building. However, two hours later it's not uncommon to see Senior Managers or Directors still at their desks.

A range of differences are at work here (career-mindedness, varying domestic obligations, dedication, salary differences, autonomy, etc.) but I would suggest some of it could be put down to the type of work each undertakes. Senior managers generally have a varied working week : different locations, different tasks, sometimes on the road, sometimes heads in figures, sometimes recruiting, and so on. For the call-centre staff there is almost no variation. Certainly, the calls might be different, but one is basically sitting at the same desk, performing the same basic task over and over again, seven hours a day, almost every day. Is it surprising the latter are in a rush to leave the office?

Now, while we may not want people to stay past 5pm, it does seem rather wasteful that their mental energies are not at least being utilised from 3:30pm onwards.

So what's the solution? I would suggest this is a perfect opportunity for an overview role covering other parts of the business. There have been times where I have read documents entirely unrelated to my area of work. On occasion this has been noticed and managers have remarked something along the lines of : "Don't waste your time on things like that".

But these people don't understand. They are still in the old industrial work-rate mindset. My *time* (in terms of time at my office) is not particularly scarce. That's why I waste some of it every single day. What is scarce is my concentration, or rather, my work-role concentration.

So instead of scolding the reading of other departments' procedures, we should be encouraging it. Procedures, policies and projects should all be open and structured precisely so that anyone can view them, understand them and, most importantly, comment on them.

Why shouldn't the accountant, while he rests his mind from the monotony of invoices, comment on the building surveyor's project? We all realise that a fresh pair of eyes can sometimes spot a mistake, or that someone from outside a discipline can bring a fresh approach. So why don't we use it?

Now there is a training issue here. Not everyone would feel confident commenting on an area of work they are (basically) totally ignorant of, and some might find it rude to look at other teams' work. But these are irrational fears for the most part, and things we can overcome. Moreover, external scrutiny might help encourage people to properly document their projects (all but the most technical of projects should be easily comprehensible to an educated layman with a brief explanation) and avoid sloppy work.

How such a mass cross-collaboration exercise might be encouraged in staff is another matter. It would certainly be possible to make participation in such exercises mandatory, and even have set time periods where people do this. I feel this misses the point however, and indeed puts us back into a position where people will feel that this new "work" is identical to their normal duties. If however people could do this under their own steam (with it understood that people who were helpful across the organisation would tend to do well in the long run) I feel the benefits would be enormous.

Postscript
Finally, I would add that one of the reasons time-based deadlines are so unreliable is the issues discussed here. Saying a report will take 2 weeks to complete is of course nonsense - it will take about 2-3 hours, but when and how these hours will be put in is not clear. By avoiding the larger deadline I feel we'd avoid the "leaving it until the last possible minute" strategy so many of us engage in time after time.

Monday, 18 December 2006

Harnessing "Many Eyeballs" : Part 1 - Why do we miss the obvious?

I'm a bit of a fan of Eric Steven Raymond's essay "The Cathedral and the Bazaar" and often find myself trying to implement some of its ideas when working.

Outside of programming though, how is the principle applied? Some of the issues are similar, some are not.

We must all be familiar with the following scenario.

1. A change in business practice is suggested and agreed by Senior Management.

2. Project brief is formed to implement above change.

3. Committee/working party/steering group is formed to make sure all parties (/departments/stakeholders) are involved in project (or at least represented).

4. Because of 3, decision making is perhaps slower than desired, since everyone needs to be involved before certain milestones can be passed. In particular, decisions get delayed until the next meeting, and meetings themselves get delayed depending on the availability of staff.

5. Process moves on. Just prior to change being implemented, all staff are emailed with a summary of what to expect.

6. Change is implemented.

7. Person X points out that the change is flawed as it fails to take into account something of vital importance.

The scale of the problem uncovered in 7 can vary wildly, but we'll concentrate on the show-stoppers here. I've heard someone, quite late into a project, point out to the project team that what they were proposing was basically illegal. In another case, deep into implementation of a scheme (when thousands of man-hours had already been expended), someone noticed that something representing about 20 or 30% of total expenditure had been missed out entirely.

On the face of it, this scenario (which I have witnessed more than once) seems like a blunt rejection of the 'many eyeball' principle. Lots of people are involved in a process and yet it still fails. Major things are overlooked, when spotting them should (in theory) be the approach's major strength.

So, on such occasions - what went wrong?

A lot of criticisms of such projects might be made, and I'm going to talk about four that I've heard.

1. The wrong people were involved in the project.

Certainly, in the example given above, the obvious solution would be for Person X to have been involved from the start. But is that it? More people involved at the group stage? Maybe. But won't that slow things down even more?

Beyond more people, there is also the issue of which people were involved. I have seen a number of projects where the people involved were far too senior. There are good reasons why senior people are involved, but often it's the wrong reasons, which sees their time wasted in detail-heavy work in which they have no interest (or knowledge). In addition, it's not considered how much overhead adding a "disproportionately" senior person will cost in terms of difficulty in scheduling meetings, etc.

2. The project was poorly led, or there was no sense of accountability.

In many cases it is true that it is either not clear who is leading a particular project, or their leadership is so weak as to be effectively absent. In addition, in the quasi-public sector I'm not sure it's particularly common for people to face any disciplinary measures, or even censure, for failing.

If we are talking about accountability for sections of a project then this can be a problem, but in the specific example we are considering - where something has been missed - it might not have been possible to make someone accountable for that side of the project. Sure, if we divide up the project so that all legal issues lie with a named individual, then perhaps that person is much more likely to make sure everything is covered than when collective responsibility reigns. But I think that might depend on the individual in question, and the organisation's attitude to making mistakes.

3. Too many people were involved.

Have you ever heard the adage "If Moses was a committee, the Israelites would still be stuck in Egypt"?

It's not the most common of attitudes now, although it persists in some people (men more than women, I find), and it applies to project failures generally and to slowness in decision making in particular. With information gaps like the one above it seems counter-intuitive that you could ever have too many people involved. I do think, however, that combined with an unclear decision-making process and a lack of accountability (see above) this can be an issue.

4. Project was rushed.

For anyone who has sat through dozens of hours of meetings, or waited weeks for action points to be fulfilled, it might seem incredible that the project was anything approaching "rushed". But in fact, from one point of view, it must have been rushed if glaringly obvious things were missed - unless we're saying that the staff involved would never have noticed, in which case perhaps we do need to consider whether we have the correct people in place.

There are other criticisms, but that's a reasonable summary of the main ones, keeping this as non-specific as possible. But I want to focus on how we could have managed the process, how we could have harnessed "many eyeballs" as elegantly as possible to try and remedy issues like this.

Many Eyeballs (An Approach)

1. Eliminate De-Facto Secrecy

The type of work environment I am talking about in this blog is social housing. We are blessed (in my opinion) in that our work is unlikely to be financially confidential and it is doubtful there will be any intellectual property issues. While the government is keen to encourage competition within the sector, hopefully people do not take this to its logical conclusion.

Anyway, if we presume the change above does not affect any individual person's privacy (e.g. it is not an HR change) then there is no reason why anything should be kept secret. And in fact, for most projects I've worked on, nothing was a secret. Indeed, we were keen to discuss with as many people as possible. But in quite a few regards we had a kind of "de-facto secrecy" in place. If I have a document on a shared drive, ten directories deep and named obscurely, then it may as well be password protected for all the people who are likely to look at it.

What does this mean - email the whole company with your project status notes every week? Almost certainly not! (Although this is what you will find some people actually end up doing.) Email is a pretty poor medium for such communication and, as I have discussed elsewhere, simply imposes a cost on the whole organisation (most of whom do not care about your project).

The obvious way of running a project, then, is via an intranet-based system, a topic I'll come back to in later posts. For now, if you'd like, think of something that's a cross between Microsoft Sharepoint and vBulletin's forum software.

2. Reevaluating Crunch Dates

On most of the projects I've seen operate that weren't particularly large in size (perhaps involving up to about 7 or 8 core project team members), emphasis is always given to deadlines throughout the project. In particular, extra emphasis is given to the final deadline, the implementation date or D-Day or whatever you choose to call it. I'm calling these "crunch dates".

The logic behind the emphasis on specific dates (or the end date) is fairly clear; they focus everyone's minds on what needs to be done. Indeed, they're so ubiquitous as to be an almost invisible part of the process; obviously we'll have a deadline.

But here's a question : Do deadlines actually work?

By 'work' I don't mean whether we always meet our project objectives - obviously we don't but that isn't down to using x or y methodology - it's just a fact of life. What I do mean by work is whether they ensure the most efficient and effective use of our resources. I'm not sure they do.

Go to any university on, or the day before, a big essay or thesis deadline. I guarantee you will see libraries full to the brim with people desperately trying to print their work or get it bound, or worse still people still writing or researching the damn thing. A month or so before, some of these same people could probably be found propping up the bar.

The traditional analysis would be that sure, that happens, but that's just because students are lazy bums. And no doubt they are. But are we saying that this behaviour is limited to students? Then why do I get emails from quantity surveyors in their 40s or 50s at 11pm the day a project is due in? Why have I been asked by directors for assistance on a presentation they were due to deliver that afternoon?

I do not want to say that it's "human nature" to do everything at the last minute - clearly that's not true. But it does seem like behaviour common amongst certain types of people in certain circumstances. Is it a problem? From experience, yes. The projects I've done near the deadline (and Lord knows there have been a few) have common problems: I'll often realise "at the last minute" that a resource I need for the project is not available, I won't have time to properly error-check the work, and it will generally be inferior to something which has had time to settle.

Now, of course, some deadlines are "natural". The government want RSLs and local authorities to have achieved Decent Homes by December 31st 2010. Therefore, that is our deadline for carrying out the works. Working backwards we can easily build up a coherent (and semi-realistic) schedule of how many properties we should have made decent each month from project start to the end of 2010. This is slightly different from many projects though - the actual difficulty with Decent Homes (aside from funding and consultation and programming and....) is going to be actually getting the works done. It's a "physical" problem.

A lot of projects don't fit this framework. If your project is "develop a strategy to tackle anti-social behaviour" then you cannot simplistically Gantt-chart it out - the work will be heavily uneven.

So what's the alternative to "crunch dates"? I would suggest an approach that could broadly be termed 'incremental', wherever it's practical. Instead of working blindly in isolation with monthly reviews, we should be working collaboratively with feedback on an hourly basis. I do not mean that everything is reviewed in full every sixty minutes, but that someone (a project manager, if you like) has a continuous "feel" for how things are going - the health of the project. Using the health analogy further: one could wait for a yearly physical to see how your health was doing, but generally it's a better idea to always know how one feels (physically) and adjust your behaviour accordingly.

Another advantage of an incremental approach is that when there is "extra" work (e.g. someone notices we've missed something) there is much less incentive to ignore the problem for want of meeting a beloved deadline.

3. Release Early, Release Often...

OK, this one isn't mine, but it applies here. As I've suggested, it's better if we are continually updating as we go along. That way, the project team is continually aware of how well (or not) they're doing. But more than this, I think (where practical) this should be an opportunity to release more widely.

An example of what I mean, (and more importantly, what I do NOT mean) is as follows. A new rent module is being installed in a housing management system. One would not change the way something behaved if it affected transactions (for instance) as an error would be incredibly costly (from all sorts of perspectives). Changes here would only be made after significant testing in an isolated environment, with much more traditional testing procedures in place.

But what if the change is simply to the way the screen operates (e.g. cosmetic changes)? Could it be possible to select an arrears team (or part of the team), warn them of any changes, and then change their screens on a daily (or more frequent) basis, in line with their feedback?

Clearly a lot of this depends on how systems operate, the type of data that could be affected, whether clients could realistically be updated for individual team members so quickly, and so on. Handled horribly, such changes could lead to system crashes, users giving out incorrect information to customers, or simply people being confused and suffering productivity falls as a result. But I do not think these sorts of issues will necessarily arise.

Traditionally, any changes (even if only to the cosmetic look and feel) of a system would be handled along the lines I outlined at the beginning. The housing management system my organisation uses is basically a Unix application running in a window on Win9x/2k boxes. A transfer to an up-to-date Windows look-and-feel application (based on IBM's Genero) is scheduled to take place. Towards the end of 2008!

Such changes are ripe for gaps to arise. One user (sometimes the most senior in a department, or the most willing or even most helpful) will be selected and will test the new screen in depth. But one user can miss something. Most systems offer more than one way to do something, and it might be that half the department does something in a way this user is not familiar with (this is particularly true where systems have been in place for years and have legacy functions in place). In addition to this, testing is an extraordinarily artificial experience for most people, and you will probably find most users get bored incredibly quickly. The only way something can truly be tested is through usage.

The truth of the above was amply shown when my organisation updated its main CRM system. An entire office was being brought on-line which had never used the system before, taking the potential number of users from 300 to 600 in one day. Perhaps not unexpectedly, the system crashed repeatedly in the first few days and the problem was identified as simply one of load. Load testing is probably quite a difficult thing to model accurately when you take into account people in different offices using a system for different things and so on - but still, didn't someone think it would be better to go live a bit at a time? Was it really necessary to have everyone start together?

The details of this particular example aren't very important, but in my opinion it was not necessary. Decisions like this are made (in line with the crunch-date philosophy) because it's sometimes thought it will be too "confusing" if people come online at different times. But confusing to whom? Users? Well, no - as a user in team X you would simply be told that you'd be starting on the new system on June 1st. The fact that other people would be starting on another date is essentially irrelevant in most cases (there are exceptions, of course).

No. I think when people talk about systems being too complicated they are talking from a top-down perspective. This is a traditional problem when looking at economics. The global (or even national) economy can seem incredibly complex and complicated when viewed as a unit. It can be tempting to want to "simplify" things - perhaps centralising production decisions to a single office/computer/individual, and other ideas of that ilk. But in most cases, from a user (or individual) perspective, economics is fairly simple. I go to work to earn an income. My decision to work where I work is based on a number of factors like salary, location, type of work, respect from co-workers and so on. I do not need my decision making to be "simplified" any further by being assigned a job for life.

Back on our project: different users using different systems at different times, or certain users using different versions of different modules, is only complicated if one chooses to view it from the top down. So long as there is solid management of the specifics by those involved, there is no problem.

One final example: there is a form we ask all residents to fill out when they leave their property (a termination of tenancy form). We wish to make some amendments to increase the total amount of information collected (which will eventually go into our housing management system, which already has fields in place for recording this data). This is simply an addition to the form - we are still collecting everything we used to. We have an "alpha version" of the new form which has been knocked up by three staff members.

Traditionally, this would go to their manager, who would give his or her OK before passing it on to some decision-making body (e.g. a Senior Management Team meeting or the Board). Such a process could take two months. But why not simply put the form into use almost immediately after the first manager has seen it? I am presuming that he or she has the ability to see potential problems (e.g. if any information requested could cause problems with internal audit or the diversity & equality group) and that the later approval is a rubber-stamping exercise. In these circumstances, I see no reason why you wouldn't put the form to use straight away. If later we find that there is more information we have to collect, then so what? Yes, some fields will be blank in the housing management system - but that is already the case now! It is like Richard Dawkins' exasperated response to creationists: "Half an eye is much better than no eye at all!" Similarly, in some cases (but not all) half a form is better than no form at all.

The advantage of this approach is that, aside from mere speed, we will get real feedback from the people filling the form in and the front-line staff who assist them. From experience, this feedback is far more valuable than that of committees stuffed with the most learned of persons...

Summary

So, is that it? Not by a long shot, but that's a couple of things to think about. A project which works collaboratively, with immediate updates to information where possible, and which tests the work produced as regularly as possible, is one which is half-way there. Without wishing to get all zen here, this "flow" of action, testing and refinement is almost always superior to the existing model of project teams, milestones and cataclysmic changes. In project terms at least.

Before I finish it should be noted that my remarks here should not be extended to areas where they do not belong. The above discussion could be misconstrued as a re-hash of the reformism vs revolution debates which socialists and other leftists have argued about for centuries. There are similarities and some of the arguments I have made above apply but there is one key difference.

In my model I am assuming that everyone in the organisation is on the same side. Where it is presumed there is conflict (or genuine disagreement in end goal) then it is quite possible incremental changes are not at all desirable. To take a housing-related problem: anti-social behaviour. Let's imagine there's an estate which is plagued with low-level nuisance and related problems (graffiti, vandalism). This estate has a three-foot fence around its perimeter which is not sufficient to keep the perpetrators of such behaviour out. It would be silly to say that we should increase the size of the fence by one inch per week - and even if that were practical, it would be unwise because the individuals concerned would likely continue their behaviour and adapt as the changes took place. If, however, the fence went from 3ft to 7ft overnight then it's much more likely they might give up altogether. This is especially true if the fence was combined with other measures (e.g. a general clean-up of the area, the introduction of community schemes, a higher police presence and so on). In short, in conflict, blitzkrieg tactics may often be superior to a programme of continuous improvement.

In general: with all projects (whether they are minor or the transformation of human society), key importance needs to be placed on the effects of human psychology.

Friday, 15 December 2006

Tabula Rasa

My last entry was a mix of the negative and the positive, I think.

A lot of the details ended up being depressing, but it's tinged with some kind of hope - we're imagining we can somehow turn this round and get somewhere better. Or, more simply, to use Gramsci's words: "pessimism of the intellect, optimism of the will".

I often feel like that. While I'm incredibly hopeful about the prospects for improvement, it is difficult, when trawling through a set of old records, to avoid feeling like you're dealing with the accumulated mistakes of all hitherto existing generations.

This is not just an IT issue, of course. Take the following scenario. A heating system in a new-build property (built in the last three years). The system was not flushed during installation, nor since (i.e. it was not installed properly). The system fails and will need almost total replacement. I've no idea who will pay for the mistake, but it will cost over a thousand pounds.

As he exited our building for the evening I heard one of my colleagues mumble some of the calculations involved if the other units in the block were similarly affected. It didn't sound too promising.

Such incidents are frustrating not just because they could be avoided (we all make mistakes, as they say, and the vast majority are avoidable) but because they are a recent mistake. Something which was done incorrectly thirty or forty years ago is almost amusing. The implication, I think, is something like:

"Ah, look at the poor fools all those years ago. How could they dither through life not knowing everything we know now?"

Now, I think this attitude is somewhat misplaced, but it's understandable. There's a chance at least that we've learnt some valuable things since the '60s. It's less likely we've learnt anything dramatic in the last 18 months. So recent mistakes are frightening. If we did it wrong 18 months ago, are we doing it wrong today?

This in turn means I am always very conscious, when beginning a project (e.g. for a new process) that it goes relatively well. That it's reasonably well thought out. That I cover all the angles. That nothing is missed.

Of course, this doesn't always happen and when I come to continue my 'Worst Practice' notes I'll discuss some of the reasons why. But one which contributes to the problem is that I am very rarely "starting" a project at all. I am almost always continuing a project. There are very few new stories, there is a selection of old favourites, with lots and lots of post-scripts.

The following analogy may help.

The Confused Customer

A customer walks into a restaurant. He is already holding a sandwich of some sort. A waiter asks the man what he would like to eat. The customer passes the waiter the sandwich and says
'This is what I am currently eating.'

'Ah', says the waiter. 'So you would like another sandwich.'

'Well...' said the customer 'Not exactly. You see, I do not like how this one tastes.'

'Ah, well I can ask the cook to prepare you something else if you like.'

'That could work. What do you have?'

'Well... we have lots of things. The Bolognese is very nice.'

'Oh, no.' said the customer. 'It has to be a sandwich. And it has to have the same basic ingredients as that sandwich there.'

'Right.' says the waiter, slowly. 'So what were the ingredients?'

'I don't know.' says the customer, slightly embarrassed. 'The label came off the packet before I came in, and I don't know who made it either.'

So the waiter takes the sandwich to the chef and outlines the problem. The chef examines the sandwich and after some thought prepares another which the waiter takes out, on a silver tray.

The customer tastes the sandwich and is disgusted. "This doesn't have one of the ingredients we had before! Take it back."

So the cook rejigs the sandwich and it is re-served.

"Well, this has all the ingredients as before, but it has one of those problems: the taste of the chicken doesn't go at all well with the taste of the chocolate."

'But you are saying both chocolate and chicken were in your original sandwich, sir.'

'That's right.'

'So what's the issue?'

'I don't like the taste of them together!' said the customer, getting slightly exasperated.

'So why have them in a sandwich together in the first place?!' says the waiter, who is also beginning to lose his cool.

'Because that's what I was ALREADY EATING!' says the customer, angrily.

And so again...there is a rejig. The cook does his best to mask the taste clash by adding another ingredient - peanut butter. And so once more, the waiter takes the sandwich out to the customer who has one more taste.

Surprisingly, he is delighted.

'Aha, perfect - the sandwich is just right. You and your cook are geniuses.'

'It is our pleasure sir!' says the waiter, happily.

And so the customer pays and leaves the shop, happy and content. The waiter and chef both feel satisfaction at a job well done.

A few hours later the customer is rushed to hospital because he failed to tell the waiter he was allergic to peanut butter.
  -The End

The above is a rough outline of most of the processes where I am involved.

Most of the time I am joining a project mid-point, and can only really work with what I have. I am dealing with "ingredients" which were selected by people who will never eat the food (the government, in this case), or which were chosen years ago in a different context. Meals are not ordered in advance and lovingly consumed, but rushed, poorly prepared and eaten a bite at a time (with alterations at each mouthful).

Now, this should not be taken as a complaint; I actually prefer quick, incremental, many-revision projects as they seem, on average, to produce the most results.

But variety is the spice of life and it's nice occasionally to have an opportunity to start afresh. Which brings me to today's task.

One of our contracts is due for renewal early next year. Because the contract will be valued at well over £144k and is classed as a service, it will need to go through EU procurement (and be advertised in a pan-European journal). So it's going to be a long, bureaucratic tender process.

What it does mean though is that I get to write into either the tender or commencement agreement some form of data guidance notes / requirements.

At the moment, this contract is run well, operationally speaking (reasonable costs, customer satisfaction is OK), but with dreadful IT protocols. Our IT department's involvement was minimal in the drafting of the original processes, and as a result there are recurring problems: inaccuracies in data, difficulty reporting, huge duplication of data entry work and so on. It is not an exaggeration to say that an entire staff member's time is wasted on doing something which would not be required at all if the process was run differently.

Of course, it is silly to blame the contractor for these problems - they are simply following the instructions we have given them.

But we now have the opportunity to tighten up the process, and at the same time try to make some kind of 'Best Practice' guide for anyone we wish to give work to. Perhaps not using my powers for good, but certainly for mediocrity. Which is still better than evil.

The service we are advertising for is relatively specialist, and while there are a few companies who could do the work, it does mean that making demands like 'You must only use Free or Open Source products in your entire business' is likely to yield either:
i) no-one at all, or
ii) someone critically weak in another area of the business.

So there's an element of pragmatism here. Here are some ideas I've had. Very much a first draft.

Proposed Data Guidelines
1. All data collected and produced through the course of the service/project remains our property.

2. As such, while data may reside on your systems, it must be ready to transfer in full to ourselves within a reasonable notice period (e.g. 24 hours) on demand. As part of running the contract, such a transfer will probably take place regularly anyway.

3. Any transfer of information, or report produced, will need to be (where possible) in a non-proprietary file format, human-readable (without additional software) and/or in an industry-standard format.

In general our requirements will be: plain-text comma-separated values (CSV) files for basic reports on multiple properties, plain-text XML for more complex files, PDF for reports suitable for printing, and occasionally Microsoft Excel or Word files on request.

If files can only be stored or sent in other formats which do not meet these basic criteria, specific justification will need to be given.

4. The exact nature of what we will require from you (in terms of reporting) will be established at commencement of contract, but you should expect to send us, at least :
- Electronic copy of individual property survey (suitable for printing)
- Paper copy of individual survey to pass directly to resident.
- Electronic report detailing survey items for multiple properties (i.e. 1 Row per Property). The structure of this report will need to be agreed with us in detail. We will ask for this on a weekly basis but you should also be capable of producing it on demand.

5. We will pass you the key details of the properties within our stock. We will expect you to process this information so that you can confirm the details of the survey for each request you receive.

6. All our property information is co-ordinated through our UPRN (Unique Property Reference Number). We will need you to use this at all times as our reference. It will also need to appear on everything you send us if we are to match it with our other datasets.

7. We will strongly prefer the data transfers discussed above to take place without human intervention (i.e. as part of a cron job / scheduled task). We can accept files emailed or sent via FTP. Details of either will be established at commencement of contract.

8. We will be providing you with data relating to our customers. You will of course have to respect their confidentiality and privacy, as well as your legal obligations under data protection legislation.

9. If, for any reason, software development takes place (either jointly or individually) which is solely or mainly for our agreement, we will expect all code to be released under the GNU General Public Licence Version 2 unless you have prior written consent from ourselves.
---
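To make guidelines 3, 4 and 6 a little more concrete, here is a minimal sketch (in Python, with entirely hypothetical field names - the real report structure would be agreed at commencement of contract, as guideline 4 says) of the kind of one-row-per-property file intended: plain-text CSV, with the UPRN first on every row so the report can be matched against our other datasets.

```python
import csv
import io

# Hypothetical survey results keyed by UPRN (guideline 6). The field
# names are illustrative only - the actual structure would be agreed
# with the contractor at commencement (guideline 4).
surveys = [
    {"uprn": "100012345678", "survey_date": "2006-12-01", "boiler_age_years": 12},
    {"uprn": "100012345679", "survey_date": "2006-12-04", "boiler_age_years": 3},
]

def build_weekly_report(rows):
    """Produce the one-row-per-property report (guideline 4) as a
    plain-text CSV string (guideline 3), with the UPRN on every row."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["uprn", "survey_date", "boiler_age_years"])
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()

print(build_weekly_report(surveys))
```

The point of insisting on something this plain is that any system at either end - including a scheduled job of the kind mentioned in guideline 7 - can produce or consume it without special software.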
That's it so far. I shudder at my use of 'property' in relation to information in #1, but I can think of no better way of putting it. One of our sister companies had an experience where a contractor simply did not give them any of the data collected during the course of a works project because they "fell out". Incredibly, it was not stated in the original contract who the information belonged to.

This, among other things, is the sort of nonsense we wish to avoid.

Wednesday, 13 December 2006

Let Us Gather Round And Share Worst Practice


As always, my to-do list stretches out somewhere into the middle distance at present, and would perhaps overwhelm me if I ever gave it some thought. My strategy for coping is to focus on one key task for the morning and one for the afternoon. I cannot pretend this actually increases productivity in any sense, but it does mean I have a clear idea of what I've failed to do when a day is done.

Yesterday afternoon my primary focus was simple:
To design an online form which would allow customers to submit repair requests online.
Now, this is fairly representative of the work I undertake on a day-to-day basis. It is something that almost every housing provider will have to do. It is a task which requires input from a range of people (stakeholders, if you will). Most depressingly, it is a task which someone in my organisation has almost certainly done before. Maybe more than once.

What does this last point show us, aside from the futility of human endeavour?

That my time is potentially being wasted here. To explain: we used to have an old website, and I seem to remember a form existing which allowed customers to submit repair requests through it. So, in theory, I don't need to start from scratch. Or so it seems. Things are rarely this straightforward, however.

But why? Well, what I'm going to come on to talk about are my organisation's worst practices. From what I've identified elsewhere, I doubt we're alone, so this is not to be seen as overly self-critical - simply an admission of where something is not working as well as it might.

So, why didn't I simply use the old form? Well, to begin with, it is not clear where the old form is. It was hosted externally a long time ago. Someone in the organisation might have it, but it's just as likely that no-one has it, or that the person who had it has left, or deleted it. I know the form would take about two hours or so to develop. I had no idea how long it would take to find the old one (if it could be found at all).

Our first piece of bad practice is thus :
1) Information is poorly organised in general. It is not properly appreciated that not being able to find information (sometimes quickly) means we might as well not have the information in the first place.

There's a point reached for all processes which you suspect you or someone else has done before. It's the point at which the time taken to find the old version, learn about it and evaluate it overtakes the number of seconds, minutes, hours or days it would take you to do the work from scratch. If your information is in a poor enough state, you'll often find it quicker to start something again.
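That break-even point can be put as a simple expected-value calculation. A sketch follows; the numbers, the function name and the "find probability" are all invented for the example, but it captures why poorly organised information pushes you towards rebuilding:

```python
def quicker_to_rebuild(rebuild_mins, find_probability, search_mins, evaluate_mins):
    """True if starting from scratch beats hunting for the old version.

    Expected cost of reuse = guaranteed search time, plus evaluation
    time if the old version turns up, plus a full rebuild anyway if
    it doesn't.
    """
    expected_reuse = (search_mins
                      + find_probability * evaluate_mins
                      + (1 - find_probability) * rebuild_mins)
    return rebuild_mins < expected_reuse

# The repairs form: roughly 2 hours to rebuild from scratch, versus
# perhaps an hour of searching with only a 50/50 chance of finding
# the old one, then half an hour to check and understand it.
print(quicker_to_rebuild(rebuild_mins=120, find_probability=0.5,
                         search_mins=60, evaluate_mins=30))  # True: just rebuild it
```

Well-organised information shifts both numbers at once - the search time drops and the find probability rises - which is exactly why reuse so rarely pays off in a badly organised filing system.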

But why wouldn't someone have saved the form? Why would someone have deleted it? Why wouldn't someone have emailed it to everyone in the organisation to make sure it didn't get lost? Point two of bad practice.

2) Information is often seen as worthless or, worse, a burden. And so it is treated as such: reduced, removed, devalued, degraded and ignored. Systems, processes and habits all actively encourage this attitude.

People will often be told to have a "clean up" of their files or emails. Absolutely - but what should be understood is that cleaning up does not mean destroying data. Nothing of even slight value should be deleted if it cannot be easily reassembled. Deleting the only copy of a report run from a database is fine; the report can be run again. Deleting the only copy of a survey which a surveyor spent five hours in the field collecting is NOT fine.

Similarly, most of the 'email best practice' guides I have read plead with people not to send unnecessary emails. One of the largest, most frequent complaints of staff (especially at manager level) at my organisation is that they get too many emails, and that many are irrelevant to them. Information can directly be a burden.

In other cases, a person may be trying to hoard information for themselves. Their motives will almost always be good, but they may have become the sole custodians of massive chunks of information. They will sometimes have no method (or desire) to distribute this information to others. Why?

Well, Thomas Jefferson once said :
If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.
This peculiarity of ideas, and of information generally, is still not properly grasped by some, and this, I believe, is the source of many errors in this field.

And so, bad practice item number three.

3) Where information is valued, its nature is sometimes misunderstood. Some see it as a personal treasure they must amass, and not let others gaze on for fear of corrupting or losing it. Information is not correctly identified as an organisational asset.

The above occurs, I believe, partly because people have become familiar with physical (paper) files through their working careers. With paper files, there are obvious restrictions in place. Usually:
a) Only one person can easily view the file at the same time.
b) It is relatively easy for a file to become damaged or lost.
c) A file, if lost or damaged, cannot easily be recovered, and pre-emptive backup or copying is not always feasible.

There are more, but because of the above three in particular it will often be necessary to have rigid controls in place for where files are kept (so everyone knows where to find them), who can use the file (to minimise damage / loss) and so on. Therefore, it is common for individuals to become protective of "their files" (a phrase I hear commonly). An understandable response, given the characteristics of paper.

Computer files are, to some extent, built on the physical file / filing system model. They sometimes share characteristics with physical files :
Many computer files can only be viewed by one person at a time, and there is certainly the need to keep files in a particular directory structure (the equivalent of a physical place) to ensure they can be found again. Once again, when people refer to documents they have created, it is common to hear them talk about "my files".

And while the file may be theirs, the data within is ours. It belongs to the organisation. This is best understood by looking at databases. Regardless of how often a user inputs into a database, it is rare to hear them talk of "their data" or "their recordset" (this preciousness will usually be limited to the designer or administrator of such a system).

Related to this "personalising" of information is another issue: the idea of person-specific knowledge, or what I term "human knowledge".

4) We rely in the wrong way on our "human-knowledge" and do not do enough to distribute, capture or evaluate it.

"Human knowledge" is information held only by human beings (as opposed to paper or electronic files). Almost all organisations rely on this asset to some extent, but in the wrong way. We depend on people being around, and knowing things, for basic enquiries to be answered. We do not ask people to record things properly, either in general or when they leave. And finally, perhaps most challengingly, we do not evaluate it. What if what we know is wrong?

There is movement in all of these areas, which I will talk about at another time.

For now, it's only necessary to see that information is not always shared evenly. Working backwards : people may know the answer to a question, but this knowledge is only in their heads, and thus useless once they leave or are unavailable.

People may think they should keep personal hold of information, fearing it will be damaged somehow if others see it. In addition, they will not want to burden colleagues by passing on non-relevant information. This is where people speak of "information overload". The issue is not an excess of information, but that it is poorly organised and poorly distributed.

So, to return to my dilemma. I do not know who has the old form (if anyone, if it even existed) or where it is. Which leads to my next problem.

5) Our information is not organised with searching in mind. Our filing system assumes familiarity. Guests are not expected, nor welcome.

We have tens of thousands of documents on dozens of servers. Each server will have multiple drives. These shared drives are not necessarily poorly laid out. They form, to an extent, a certain logical flow, although each section will have its own style (e.g. by sub-team, by area, by category, etc.). But the filing system assumes you know at least the title and location of the document you're looking for. If you only have imperfect information about the file (e.g. perhaps some of its contents) then you are in difficulty. Searching (an increasing area of focus for the IT industry generally) is an option, but not feasible with present technologies for some tasks.

But enough of that. Let's imagine I did find the old form. Would I use it without hesitation? Well, actually, no - I wouldn't. Why? Quite simply : I do not know if it was ever any good.

I remember the website not being very good generally, and this form formed part of that website. Do I want to propose such junk as part of a new system? Clearly, improvement is the goal here.

And so we find our final lesson for today.

6) While measurements and KPIs abound for the organisation's overall performance, it is rare that individual forms or processes will have been evaluated properly. Most of the time, records or metrics do not exist to tell us how certain things are doing.

So, how do we work out what to do then?

On a day-to-day basis, the people applying a certain process, if they are competent, will know how things are going and therefore what decisions to take. They will be able to tell you what is working and what isn't, what could change and what should stay. These opinions may not be "objective" - in the sense of being statistically demonstrable - but that isn't necessarily the problem here. It may also be that these opinions are not listened to, but that is a management problem, not what I am talking about here.

In the longer term, a problem develops with this informal approach. For one, staff leave. My organisation's turnover is something like 12%. 12% in one year. My team has over ten people, so I've got to work on the assumption that at least one will be gone by next year. In fact, I know that at least two are going in the next six months, and another will be reducing their hours.

Even if staff are around, their memories, like all of ours, will be imperfect. If you ask someone what is wrong with a system they use every day, they'll probably be able to give you a list of specific problems they've encountered. Ask them in three months and their criticisms may be more vague.

In two years all they may remember is something like "It never seemed to work. Everyone hated it." Which might be enough to review, but is it reliable?

So with my repair form - did it work well? I cannot find minutes of any meeting which discussed it. Was it used? I have no idea how many repairs were raised using the old system. Who created the form? I have no idea; we've had a large staff turnover in IT, and I doubt anyone would know.

So... what do we find at the end of this? I can't find the form, I don't know who was involved, and even if I could find it, I don't know enough about it to go forward without further evaluation. It's reached the point I discussed earlier : it was quicker to start from scratch.

I will continue this discussion at some point in the future.