Friday, May 20, 2011

The Merit of Software Experiments

To contrast the ideas presented in my last post [On Doing Things the Right Way], I felt it necessary to say something about the other type of software development project - known as "experiments". About half of all the projects I have worked on were experimental in nature.

I personally have worked on several experiments: emulators, data spike detectors, performance and load testing utilities, object-oriented version control, parallelized software build systems, etc. None of these are in current use, but some have been re-written and deployed as "real" software projects by others.
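To give a flavor of how small such an experiment can be, here is a minimal sketch of what a data spike detector might look like. This is purely illustrative: the rolling window, the 3-sigma threshold, and the function name are my own assumptions, not a description of the original project.

    import statistics

    def find_spikes(values, window=20, threshold=3.0):
        """Flag points deviating from the trailing window's mean by more
        than `threshold` standard deviations. Returns a list of indices.
        (Hypothetical example, not the original experiment's code.)"""
        spikes = []
        for i in range(window, len(values)):
            trailing = values[i - window:i]
            mean = statistics.fmean(trailing)
            stdev = statistics.pstdev(trailing)
            if stdev > 0 and abs(values[i] - mean) > threshold * stdev:
                spikes.append(i)
        return spikes

    # A mostly flat signal with one obvious spike at index 30.
    data = [10.0 + 0.1 * (i % 5) for i in range(60)]
    data[30] = 25.0
    print(find_spikes(data))  # -> [30]

Twenty lines, no plan, no spec - and yet exactly the kind of thing that teaches you what the real requirements would have to be.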

Experimental software projects are something of an enigma. They are less planned and more ad hoc. They are essentially a license for the programmer to operate with impunity in designing and steering the behavior of the program. Stakeholders rarely define more than a few requirements for an experiment; the lead developer determines most aspects of the design.

Conceptually, experiments have business value, but the need they fill is often secondary and non-critical. The experiment is almost always disposable, and lacks commitment from developers and management to succeed beyond its first incarnation.

The failure rate of experiments is extremely high. On the first iteration, with a single developer, they are almost guaranteed to fail as deployable entities. Depending on their degree of criticality to the business, they may or may not be re-written and permanently deployed, but rarely by the same people who wrote them initially.

In that way, experiments are not real software projects, but more loosely-defined forays into the unknown. Their hidden purpose is to discover facts and measure the desirability of a service or program. Unlike prototypes, they are not marketing props but tools of investigation. There is always something to be learned from their construction about business process, technical needs, and obstacles that might be encountered in the dark.

An unsuccessful experiment is one where the business did not gain much for the time and effort invested in it: little or no advantage or knowledge was gained relative to the cost.

It can be hard to distinguish between an unsuccessful experiment and an unsuccessful "real" software project. The main distinction is that experiments are acknowledged as experiments from the beginning, even if they are deployed for a time. Real software projects have enough organizational support and commitment to drive things to production. Experiments generally lack that organizational backing.

A successful experiment is one in which the time and effort spent in development gave insight into business needs and helped to define requirements or new architectural approaches for future projects - a kind of initial marker-stone to navigate future development efforts.

Successful experiments can become successful software projects if the organization commits to re-developing and maintaining the software. What is re-usable in a successful experiment is not the code, but the ideas behind it, which are adopted by the company and built upon to gain a product advantage.

On Doing Things the Right Way

An old school friend of mine recently approached me asking for some programming help on an institutional project he was working on. After a few e-mails, I had to come to terms with the fact that I could not help him with the thing he wanted most - someone to program a web interface front-end to a database he had created.

Well, specifically, it's not that I couldn't help, just that I have a thing about promising stuff I am not sure I can deliver. The old newspaper editing adage "when in doubt, leave it out" applies to the list of things you can or cannot do to satisfy requests from potential customers. In other words, if someone asks you to do something and you are not fully confident you can do it, don't agree to do it.

This is different if you are writing code for your own project. You would obviously attempt to use unfamiliar technologies, take a lot of risks, do a lot of hacking because you could forgive yourself for failure. Promises to yourself can be broken without consequence.

But when pressed by a customer with "can you do this for me?", and "Not sure, but I can try" (aka "maybe") is not a firm enough ("yes" or "no" type) answer, you must say "no." The reason is the potential for failure, and the possibility of having to deal with the accusation that you promised something you could not deliver.

That being said, another issue crops up, and it's about who manages the process of development. If the customer is in the driver's seat about what he wants, that's fine, as long as he understands and accepts some basic prerequisites of development. The stakeholder may be the primary domain expert, but he is usually not a development expert. If that is the case, he must be educated on the basics before anything is agreed to.

The basics? Software is a planned, deliberate thing. You intend to produce something. You develop plans. You write code, test against the plans and deploy. There is no ambiguity. The intent is to build a deployable, functioning item that people can use.

Developing software among two or more consenting adults is an act of intent; one has no choice but to consider the state of the deployed article at the time of conception. If you want people to like what you do, you have to plan for it.

This means, before any code is written:

a.) The developer must understand the conceptual domain of the customer.

b.) The developer must build a Requirements Document with the customer.

c.) The developer must build a detailed Functional Specification describing the behavior of all parts of the product.

Even if no code ever gets written, and the project is called off at this point, at least you did not waste money building something you never wanted, or something that could never have been built in the first place.

Even if the current developer is not the person who will ultimately write the code, at least you have working plans to hand to the next developer, which is a huge advantage.

Even if your plans change, you only have to change documents, and will not have to change code too.

If the customer can agree to these things, then the project can move forward. If not, or if the customer waffles, the project cannot move forward.

For example, the first thing a customer might do is trivialize the work needed to complete the project: "All I want is something that just does this or that". If the project is sufficiently trivialized, there would seem to be no need to waste time with engineering prerequisites such as planning, management, documentation and expense. Code could just be "whipped up" without a care.

Oftentimes, this comes in the form of a half-baked request for a complete but easy-to-make product - a kind of mythical animal that is half-lion, half-mouse. The request can usually be described as a contradiction in terms: a "working prototype", which doesn't exist in the software business.

Prototypes are to programs as mannequins are to robots. Prototypes are not real programs. They may resemble programs, but are only designed to emulate or "fake" some kind of behavior of the real thing. Prototypes are for display purposes only. Prototypes never get retro-fitted with code and developed into functioning, deployed software.

In the software world, the difference between developing a prototype and developing a working product is one of intent. You either intend to develop a prototype or you intend to develop a software product.

Often, people who do not know what they are doing will attempt to save money by cajoling developers into building a "working prototype", promising it will only be used for proof-of-concept purposes. Then, at a later date, they renege on that promise and treat the prototype as a working product, telling developers to "tweak it" for production.

This leads to disaster: economic loss, credibility loss and a lot of wasted time. This is the Working Prototype Trap. It looks like a good idea, and with a few misconceived plans, it becomes a real nightmare.

The problem is, some customers don't know that we just don't want to go there. They need to be educated, and have their expectations controlled.

This is not to say that there isn't someone out there with the talent, experience and enterprise to "whip up" a quick website that satisfies all the requirements. In fact, if the project is small enough, the expertise of the implementer is great enough, and the knowledge of the domain space is easy enough to acquire, it can be done. It's just a question of locating the right person.

But this person often cannot be easily recruited, and he or she must also be the one to take the project to the next level, since no documentation exists to support other, less-than-optimal participants.