Note: This was taken from a feed reader and is linked to an article that is no longer available. This wasn’t written by me and I’m not sure who the author is, but I think it’s really insightful and enough of a pain in the behind to access that it’s worth a repost. If you know who the author is, please let me know so I can properly attribute the work, relink, or pull it if so desired by the original author.
Three Ages of the Developer
There’s a story from Greek mythology that I first read as a child, having found it in my grandparents’ red, leather-bound family encyclopedia. I always liked Volume 14 of that encyclopedia, which was devoted to fables and folklore, and was illustrated with beautiful black-and-white plates. The picture that accompanied the story of Oedipus and the Sphinx has always stayed with me as a mnemonic of the tale.
The story begins as Oedipus is traveling to Thebes. As he makes his way along a circuitous mountain path, he is suddenly assailed by a vicious monster called a Sphinx (no relation to the Sphinx of Egyptian mythology). The Sphinx descends from a precipice high above the path and lands directly in front of Oedipus, blocking his way. Before allowing him to pass, the Sphinx demands that Oedipus answer a riddle. If he answers incorrectly, the Sphinx will eat him alive. The Sphinx then poses this question: “What creature walks on four legs in the morning, two at noon, and three in the evening?” Oedipus thinks for a moment and then, being the wise man that he is, replies “A man … who crawls on all fours as a baby, walks upright on two legs as an adult, and then leans upon a cane in his dotage.” The Sphinx is so enraged that Oedipus has answered its riddle correctly that it throws itself from the cliff, plummeting to its death.
Oedipus’s characterization of man’s personal journey from cradle to grave as a three-stage process has numerous occupational counterparts. For instance, a tradesman is considered to progress from apprentice, to practitioner, to master. A medieval warrior was first a page, then a squire, then a knight. Of course, any occupation in which there is an advancement in capability and skill lends itself to such a characterization, including software development.
I believe such metaphors reached their vomit-inducing zenith with Alistair Cockburn’s “Shu-Ha-Ri” model: a sad attempt to cast a software developer’s advancement as something akin to the ostensible spiritual advancement of a martial arts practitioner. The Karate Kid wannabes really loved that one. Personally, if an analogy is to be used at all, I would prefer it have a more prosaic flavor.
So it was with some surprise, after a recent discussion with a colleague about “the getting of wisdom” as it applies to our work, that I realized my own sense of professional priority had progressed over the years through three more-or-less distinct phases. I’m not sure if this progression is indicative of the acquisition of knowledge or a spiraling descent into all-consuming cynicism. Perhaps it’s a little of both.
So with much reservation, and with tongue planted firmly in cheek, I would like to offer the following three-stage model of professional growth for a software developer, in so far as I’ve experienced it. The model identifies the following phases – The Age of Coding, The Age of Design and The Age of Requirements.
The Age of Coding
Fresh into the workforce, the young software developer begins their professional life with a focus set almost exclusively upon coding. There is often little recognition or awareness that software development consists of any activity other than programming. This myopia is often reinforced by the sort of work that junior developers are typically given: small coding tasks such as writing utilities to help the “real” developers do their jobs. Their work product is solely code, so they naturally come to think of their occupation as one dedicated only to the production of code.
Those looking to broaden their skill base will likely seek to learn new programming languages, new APIs and the standards they may embody. Hence they come to consider the “worth” of a professional developer to be a function of the number of languages in which they are fluent and the quantity of low-level technical arcana they have internalized.
The environment in which the nascent developer works often subtly retards their professional development. So many software shops work at a low level of maturity, and the impressionable junior quickly acquires the manifest bad habits of their colleagues. When their work mates brag of marathon coding sessions and all-night hack fests, they observe the admiration and reward that accompanies the production of large quantities of code in a short time, and so come to think of Lines Of Code as a macho metric of achievement.
When they see their colleagues racing around to fix a production problem of their own creation, they see the self-congratulation that occurs when the problem is fixed, but remain as unaware as their work mates that more mature development practices could’ve prevented the problem from occurring to begin with. So they come to think of reactive fire fighting as something to be proud of, rather than a symptom of undisciplined work habits.
If they look outwards to the wider activities of the industry they are unlikely to get a healthier perspective, for they will find a community that is obsessed with novel syntax, constantly engaged in language wars, and saturated with the marketing of developer tools promising quick fixes and amazing cure-alls.
Observably, some young coders become middle-aged coders, and some middle-aged coders will become old coders, without ever leaving behind their code-centric view of the software world. But thankfully, a significant number will start to question whether all this coding and fixing is really getting them anywhere.
The challenges to a code-dominant view often arrive in the form of failed projects and other corporate misadventures. Sooner or later, the young coder finds themselves on a project staffed by bright programmers with great technical chops who produce mountains of code every week, yet never seem to make significant progress towards the achievement of concrete goals. They may see that everyone is writing code, but no one seems to have a cohesive notion of how everybody’s individual contributions are going to cooperate to achieve the system’s intended functionality. They may observe that each programmer on the project seems to be focused on their little bit of the project to the exclusion of everything else. It’s almost as if the code base is divided into discrete fiefdoms, each one ruled by a feudal lord jealously guarding his territory and its technical perimeter against intruders.
On such projects, when integration cannot be delayed any longer, individual systems are bound together with duct tape and an architecture somehow “emerges” from their union that is not quite fit for any particular purpose, and which has qualities nobody anticipated. Performance is typically lacking, there is duplication of functionality, incompatible assumptions and impedance mismatches all over. The team, geniuses when working alone, have together created a programmatic Frankenstein too horrible for any of them to look upon without shame and embarrassment.
After the initial disappointment fades, some reflection by our young coder brings them to the realization that the most brilliant programming in the world is of no use if it’s not organized into a coherent whole by some overarching design and architecture.
By now they may also have had their first experience dealing with a code base so large that it becomes cognitively impossible to maintain a detailed understanding of it all at once, thus necessitating the use of abstraction to cope with such a volume of information. Indeed, if our coder aspires to be responsible for these larger units of work, they come to understand that they will need to become comfortable dealing with such abstractions.
Thus is born an appreciation for the utility of design and modeling. And so it comes to pass that, motivated by experiences of code-centric failure and a desire to tackle ever larger problems, the coder gradually realizes that programming, though essential, is not the most important part of software development. They come instead to think of design as being the dominant factor in project success, and the area whose mastery they should next pursue.
The Age of Design
Once they take a step back from the coal face of programming, it quickly becomes obvious to the burgeoning designer that what’s been missing from their development efforts to date is the guidance and direction provided by a solid architectural definition. Those who have spent any time “hands on” in the OO domain will know that the promise of code reuse that helped spur OO to prominence has mysteriously failed to appear. But one kind of reuse we can definitely lay claim to is the reuse of common solutions to design problems as captured in design patterns.
Design patterns are seductive. They lure in the naive and enthusiastic with the possibility of cataloguing, in the abstract, all the knowledge about the structuring of software that the experts have spent years acquiring the hard way. “Perhaps”, thinks the designer, “I can bypass all that hard work and become an expert myself just by learning and applying these patterns.” So begins the tendency to view software design as the process of aggregating pre-existing design patterns. Those with such a view are sometimes referred to as being “pattern happy.”
An infatuation with abstraction can become a huge productivity sink. Writing “Hello World” takes the careful orchestration of dozens of classes – builders, factories, abstract factories, strategies and the like – all working in concert to achieve the magnificent generation of a two-word string.
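To see what being “pattern happy” looks like in code, consider the following deliberately over-engineered sketch in Java. Every class name here is invented purely for illustration; the joke, of course, is that a single print statement would do the job.

```java
// A tongue-in-cheek illustration of "pattern happiness": a factory, a builder
// and a strategy conspiring to print two words. All names are invented for
// this sketch; no real framework is implied.
public class HelloWorldEnterpriseEdition {

    // Strategy: encapsulates how a greeting should be rendered.
    interface GreetingStrategy {
        String render(String subject);
    }

    static class PlainGreetingStrategy implements GreetingStrategy {
        @Override
        public String render(String subject) {
            return "Hello " + subject;
        }
    }

    // Builder: accumulates the configuration needed to construct a greeting.
    static class GreetingBuilder {
        private GreetingStrategy strategy = new PlainGreetingStrategy();
        private String subject = "World";

        GreetingBuilder withStrategy(GreetingStrategy strategy) {
            this.strategy = strategy;
            return this;
        }

        GreetingBuilder withSubject(String subject) {
            this.subject = subject;
            return this;
        }

        String build() {
            return strategy.render(subject);
        }
    }

    // Factory: hides the builder behind yet another layer of indirection.
    static class GreetingFactory {
        static GreetingBuilder newGreeting() {
            return new GreetingBuilder();
        }
    }

    public static void main(String[] args) {
        // Several classes later, the magnificent two-word string appears.
        System.out.println(GreetingFactory.newGreeting()
                .withStrategy(new PlainGreetingStrategy())
                .withSubject("World")
                .build());
    }
}
```

None of this machinery buys anything that `System.out.println("Hello World")` would not; that gap between ceremony and value is exactly the productivity sink described above.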
This same predilection, applied at the system level, can lead the unwary to become so-called “Architecture Astronauts.” These senior developers are compelled to so complicate and over-generalize the architecture of any system they develop, that even the most mundane of efforts is presumed to demand a distributed, component-based, highly scalable, transactional approach capable of handling a massive amount of throughput.
Graduation from the Age of Design, as with graduation from the Age of Coding, is generally the result of experiencing multiple project failures. Presented with a project that, even though appropriately designed and well coded, was still considered a failure by its user base, the erstwhile designer cannot help but wonder, “Where did it go wrong? How can a technically excellent piece of software that so delights those who built it not make its users equally happy?” Of course, the answer is “Because it doesn’t do what they need it to do”. And so, in what is a rude shock to any technical person, comes the unsettling realization that the very best of designs and the most well executed code are all for nothing if sufficient effort has not been invested in finding out what the users really want of their software.
The Age of Requirements
Once they begin to focus upon the gathering of quality requirements, it tends to strike the budding analyst how little attention the subject is generally afforded by the broader community. In particular, it becomes evident how often the subject is deliberately ignored – even by the very organizations within which software development is occurring. The most fundamental of questions – “Should we be developing this software at all?” – frequently goes unaddressed, perhaps because answering it honestly would require confronting some uncomfortable truths about the organization’s capabilities (or lack thereof). Also, techies keen to get started playing with new technical toys are hesitant to encourage any discussion that might jeopardize their opportunity to do so.
Not all applications are actually worth developing. Just as discretion is the better part of valor, sometimes it is just as important to know what development should not be undertaken as it is to know how to perform the development task well. For example, just because it is technically possible to automate some part of a workflow with software does not mean that the development effort required to do so is justified. Even if it is worthwhile writing such software, it is not always the organization performing the workflow that should be writing it. One needs to consider the prevailing skills within the organization and the ongoing cost of maintaining the code base.
Requirements elicitation is the very foundation of the software development process. When mistakes are made here, the cost of subsequently misdirected design, code and testing efforts can be enormous. The multiplicative effect of requirements errors makes it imperative that they be gathered carefully and verified through constant referral back to the users themselves, or faithful representatives thereof.
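To put a rough number on that multiplicative effect, here is a small sketch. The multipliers are assumed, round figures in the spirit of the cost-escalation data commonly attributed to Boehm and colleagues, not measured values; the real ratios vary widely between studies and projects.

```java
// Illustrative arithmetic only: the cost-to-fix multipliers below are assumed
// round numbers, not measurements. The point is the shape of the curve, not
// the exact values.
public class DefectCostSketch {

    public static void main(String[] args) {
        String[] phases      = {"requirements", "design", "coding", "testing", "production"};
        double[] multipliers = {1, 5, 10, 20, 100}; // assumed relative cost of fixing one requirements defect

        for (int i = 0; i < phases.length; i++) {
            System.out.printf("Requirements defect caught in %-12s : ~%3.0fx the cost of catching it up front%n",
                    phases[i], multipliers[i]);
        }
    }
}
```

Whatever the exact figures, the same defect gets steadily more expensive the longer it survives, which is why the effort spent checking requirements with real users pays for itself many times over.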
A common pattern of failure with respect to requirements elicitation is to let managers, on either the development or user side, dictate the requirements that they think the user base has. This is problematic because management often has an idealized or superficial impression of what the users actually do, but you want your application to facilitate their real workflow rather than the idealized one. Additionally, managers are likely to be of a different personality type and different mindset from the user base, making their ideas on usability quite different from those of the application’s target demographic.
Above all, one should be wary of letting programmers themselves dictate the function of the software. Not only do they approach the subject with a strong technical bias, but they are likely to favor whatever is easiest for them to implement rather than what’s easiest for the user to employ.
Conclusion
From the perspective of one in the Age of Requirements, there are two main things to be learnt from all of this:
1. The earlier in the software development life cycle an activity is, the greater are the consequences of doing it poorly.
2. The amount of attention the industry pays to the various aspects of software development is in inverse proportion to their true significance.
The first point is an old one, known empirically thanks to Boehm and others. The second point is rather alarming, but observably true. Consider – what news items cause a fuss in our industry, other than the arrival of new technologies and new programming languages?
Such announcements often carry the veiled promise of being the solution to all your development woes. But they aren’t, obviously, because they are concerned principally with the least important part of software development – the coding.
Less frequently we see some attention given to design concerns, but here too the focus is often upon selling rather than encouraging good practice – witness the recent enthusiasm of some for SOAs (Service Oriented Architectures) even though SOA proponents are still struggling to define exactly what an SOA is.
But when was the last time you saw any coverage of issues relating to requirements elicitation? How much coverage do we give the most important topic of all – working out exactly what it is we’re meant to be building, and whether we should even be building it at all?
Approximately none.
Clearly, there is something wrong here, and I think it speaks to the general malaise in which so much of our industry finds itself that we collectively appear to have our priorities completely upside down.
If you’re interested in learning more about requirements gathering, let me recommend the following books as a starting point:
- “Are Your Lights On? – How to Figure Out What the Problem REALLY Is” – D. Gause, G. Weinberg, Dorset House, 1990
- “Exploring Requirements – Quality Before Design” – D. Gause, G. Weinberg, Dorset House, 1989
- “Applying Use Cases – A Practical Guide” – G. Schneider, J. Winters, Addison-Wesley, 1998
- “Software Requirements – 2nd Edition” – K. Wiegers, Microsoft Press, 2003
End note: This wasn’t written by the author of this blog. If you know who wrote this posting or if you wrote it yourself and would like it pulled or relinked, please let me know.