“Knowledge workers are the key to the future, not factory workers.
The question is, what is the best way to organize and optimize teams of knowledge workers?”
– Glass, Robert (2006), Software Creativity 2.0

In 1995, Robert Glass published Software Creativity as a multifaceted look at the role of creativity in software development. The book slowly gained a following over time, particularly with the rise of the agile process movement. Demand rose such that reprints were required, eventually leading Glass to publish a new edition in 2006.

A Fundamental Divide

There was a large push, beginning in the 1980s and peaking in the early 1990s, to have software development emulate the factory model. After all, so the thinking went, history shows that most industries start out as small, individualistic efforts, then mature and become commoditized with efficient factory production. With proper process and industry maturity, the expensive engineers producing software could be replaced by much cheaper labor using CASE (computer-aided software engineering) tools with visual drag-and-drop modeling. The days of the software craftsman were numbered: like the blacksmiths and cobblers of the pre-industrial age, he was destined to be replaced by lower-skilled workers using efficient automation.

As time went by, it became clear the shift to a factory model wasn’t unfolding as expected, and the promised productivity breakthroughs weren’t occurring. In fact, the problem got worse as software grew larger and more complex. The factory analogy drew more critics, and the exploration of other approaches gained momentum. In some ways, the agile movement that took hold in the late 1990s can be seen, in part, as a counter-reaction to the process-heavy approach that had held sway in the prior decade. Still, history tells us software development of any size can’t simply be an ad-hoc effort. There appears to be a dichotomy in development: neither rigid nor ad-hoc processes suffice, and while software is not an assembly line, neither is it solely individual craftsmanship.

Glass’s Software Creativity 2.0 explores that dichotomy by discussing the merits and failures of each side, backed by a wealth of references to industry studies and academic research.

Discipline vs. Flexibility


First, the concept of discipline needs some elaboration. As a whole, engineers don’t need much individualized discipline as an encouragement to work. They’re drawn to the industry precisely because they enjoy solving problems and crafting solutions. They are intrinsically motivated. For our purposes, discipline refers to the rigor of the processes and group dynamics that facilitate creating the product by keeping everyone moving towards a common goal.

In looking to history for an example of discipline in an emerging industry, software process researchers latched onto the previous great shift that occurred with industrialization, often with Henry Ford as an archetype. If we could be that disciplined, breaking software development into small single-purpose steps, we could crank out software like Ford did with the Model T. The steps to enable this discipline were tantalizingly clear in concept:

  • Domain experts would use formal methods to define the requirements in detail.
  • There would be purpose-built tools so that, with limited training, an analyst could connect boxes in a GUI to move the product from requirements to blueprint.
  • The code would be generated automatically from the blueprint.
  • A rigid process would guide the workers at each step to ensure a proper handoff down the software factory line.

Of course, this vision was not remotely universal, but it held significant advocates. After all, if achieved, the gains would be incredible!


The fundamental problem with the factory view is that factories produce a single thing repeatedly, while each software project is essentially unique, since it addresses a previously unmet need. Complicating matters, software projects vary widely in both complexity and effort. A factory-line approach isn’t terribly off the mark for a simple problem: imagine needing to create 100 new reports as part of a system upgrade. The work may be large, but it’s probably not complex. Conversely, a 3D diagramming tool requires heavy mathematics, a sophisticated user interface, and demanding processor/memory constraints, and each work element is both difficult and very little like the others. The key to addressing both types of projects is a flexible application of process. The approach to processes, tools, and personnel must align with the software to be built.

So while discipline steers towards a rigorous and repeatable process, flexibility is about adapting to the problem to be solved. Given that most software projects tend towards the complex end of the spectrum, finding that balance is one of the key challenges in the industry and one of the motivators behind iterative development processes, which are intended to be adaptive.

Formal Methods vs. Heuristics

Formal Methods

As hinted in the description of discipline, formal methods have long been thought to be a route to better software. And, indeed, there are cases where the ability to prove software correct is incredibly valuable. Unfortunately, formal methods require extensive training and can demand large investments even in small software projects. They can still be of great value in highly critical portions of a project, but their application is limited.


On the opposite side of the coin are heuristics: trial and error guided by rules of thumb drawn from experience. Heuristic solutions are often inferior to those found by formal methods, but the heuristic approach is more feasible for complex problems and can reach a solution with significantly less investment. Some, such as Nobel Prize and Turing Award winner Herbert Simon, note that as software complexity keeps growing, heuristics may turn out to be the only methods capable of finding a solution at all.
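To make the trade-off concrete, here is a minimal sketch (mine, not from the book) comparing an exhaustive search against a greedy rule of thumb on a toy knapsack problem; the item values and weights are invented for illustration:

```python
from itertools import combinations

# Toy 0/1 knapsack: items are (value, weight) pairs; capacity limits total weight.
items = [(60, 10), (100, 20), (120, 30)]
capacity = 50

def optimal(items, capacity):
    """Exhaustive search: guaranteed best answer, but cost grows as 2^n."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w in combo) <= capacity:
                best = max(best, sum(v for v, _ in combo))
    return best

def greedy(items, capacity):
    """Heuristic: grab items by value density; fast, but can miss the optimum."""
    total = 0
    for v, w in sorted(items, key=lambda i: i[0] / i[1], reverse=True):
        if w <= capacity:
            capacity -= w
            total += v
    return total

print(optimal(items, capacity))  # 220 (takes the second and third items)
print(greedy(items, capacity))   # 160 -- inferior, but found at a fraction of the cost
```

With three items the exhaustive search is trivial; at a few dozen items it becomes infeasible, while the heuristic stays cheap. That asymmetry is Simon’s point.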

Optimizing vs. Satisficing

There are multiple variants of the saying “better is the enemy of good enough.” Software is complex enough that there are typically multiple solutions, and the challenge is deciding among them. Unfortunately, it’s prohibitively expensive to reach a point where multiple solutions can be conclusively compared, so it’s all but impossible to determine the “best” solution before one must be chosen. The more practical approach is to find a satisficing solution, one that is “good enough.” What is good enough? The answer is project dependent and requires a process with substantial flexibility, because good enough is often subjective and may require customer feedback as an input into further development. Good enough can also shift over time as the market itself shifts. Even when it’s not clear ahead of time what good enough is, it’s important to identify the areas where feedback is required or where the criteria aren’t stable (often the case in emerging markets) as a way to manage risk and prioritization.
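The difference between optimizing and satisficing can be sketched as two search strategies; the candidate “designs” and scores below are invented, standing in for evaluations that would be expensive in practice:

```python
def optimize(candidates, score):
    """Find the true best candidate -- always pays for every evaluation."""
    return max(candidates, key=score), len(candidates)

def satisfice(candidates, score, good_enough):
    """Stop at the first candidate that clears the bar, tracking evaluations spent."""
    evals = 0
    for c in candidates:
        evals += 1
        if score(c) >= good_enough:
            return c, evals
    return None, evals

# Toy search space: designs scored 0-100 (imagine each score() call is costly).
designs = [40, 55, 72, 68, 91, 83, 99, 60]
best, cost_all = optimize(designs, lambda d: d)
ok, cost_some = satisfice(designs, lambda d: d, good_enough=70)
print(best, cost_all)   # 99 after 8 evaluations
print(ok, cost_some)    # 72 after 3 evaluations -- good enough, much sooner
```

The hard part, as the text notes, isn’t the stopping rule; it’s knowing what value of `good_enough` the project actually requires, which is exactly where customer feedback enters.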

Quantitative vs. Qualitative


We like data, though as an industry we frequently don’t do a good job of understanding what data to gather or how. For example, we rarely estimate in more than an ad-hoc fashion and rarely track our results or re-evaluate based on knowledge gained, yet we frequently manage from those estimates as if they were hard facts rather than fuzzy guesses. Metrics are also often chosen based on what’s easily available rather than what’s needed. Metrics proponents argue our data-gathering strategy should be the reverse, frequently referencing the GQM (Goal, Question, Metric) model: first decide what the data needs to tell us (the goal), then pose the questions the data must answer, and finally decide what measurements answer those questions (the metrics).
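A GQM breakdown is just a small top-down structure; here is a hypothetical one for a release-readiness goal (the goal, questions, and metrics are mine, chosen to echo the bug-count discussion that follows, not taken from Glass):

```python
# Hypothetical GQM breakdown: metrics exist only because a question needs
# answering, and questions exist only because of the goal.
gqm = {
    "goal": "Assess whether the product is stable enough to ship",
    "questions": {
        "Are critical defects still being found?": [
            "critical bugs opened per week",
            "ratio of reopened to closed bugs",
        ],
        "Is the remaining fix effort shrinking?": [
            "mean time to resolve a critical bug",
            "age distribution of open critical bugs",
        ],
    },
}

# Reading the structure top-down makes the derivation explicit.
for question, metrics in gqm["questions"].items():
    print(question)
    for metric in metrics:
        print("  -", metric)
```

Working downward this way means no metric is gathered merely because it is easy; each one can be traced back to a question and a goal.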


Data is great, but once it’s gathered there is the question of how it is used. Metrics are proxies: simple representations of a complex property. For example, for the sake of planning and reporting, most managers want some insight into the work remaining to address the bugs in a “feature complete” product. Determining the work required to fix bugs turns out to be rather difficult, since bugs are almost by definition unexpected work and notoriously variable in difficulty, so typically some variant of a bug count metric is used. The danger is in forgetting the bug count is just a proxy. Sometimes there is a temptation to treat the metric as the truth rather than a rough estimation of a more nebulous reality. A zero bug count does not mean zero bugs.

There is also a related temptation to morph the proxy from a metric into a goal, mistakenly thinking that by influencing the metric the reality will follow. Building on our bug count example, here is a scenario I’m sure many have seen take place. A manager one day states, “We have 100 open critical bugs. If we reach zero by the end of the month, there will be a bonus!” The open bug count, which was a proxy for real work remaining, has now been turned into a goal. While the manager’s intent might have been reasonable (e.g. facilitating a deadline), the metric has been turned from information into an incentive. People now have a motivation to game the metric by not filing a bug or hiding it. There will be arguments about what constitutes a “critical” bug since a bonus is now at stake. The bug count has lost its value as a proxy and has instead become a political leverage point.

It’s good to gather data, but consider carefully not only what data is gathered but also what purposes the data is used for, lest the data lose its value completely.

Process vs. Product

The holy grail of the factory analogy was the idea of a high quality, predictable product. The theory was that if the process was good enough, the outcome would follow. Though the analogy didn’t end up as a good fit for software, we know ad-hoc development doesn’t work either. Process is more appropriately seen as an enabler. Software teams need discipline and cohesion and the work needs to flow all the way from idea to product. A good process helps ease the flow of work and provides a framework for team coordination. The important thing to remember, something that can get lost in the desire to follow the process, is that the product is the goal.

Good process focuses on the product. Iterative processes encourage a product focus by having each iteration address product functionality. Contrast that with the waterfall model, where “iterations” correspond to process phases. A good process is not necessarily “heavy” or “light.” The Rational Unified Process (RUP) is often considered heavyweight, but calls itself “use case driven” because the main point of the process is a product that fulfills the use cases. Agile processes follow much the same pattern, but typically have shorter iterations and rely on frequent customer communication to flesh out abbreviated user stories on an as-needed basis.

Finally, a word about checklists. It can be easy and tempting to fall into the habit of thinking of the completed checklist as the goal of a review or meeting. Checklists can be fantastic tools to help ensure important items aren’t overlooked, but the checklist is essentially a proxy for those things that were deemed desirable. When a conflict arises between the product and the checklist, think back to the dichotomy of discipline vs. flexibility. Take a moment and ensure the discipline of satisfying a contentious checklist item is still appropriate for the product at hand.

Intellectual vs. Clerical

Although Glass comments on creativity’s role in each of the previous sections, in this final section Glass delves deepest into the question of creativity in software development. One of the great assumptions in the factory model is that with sufficient work up front, the construction of software, like the construction on a factory line, is largely clerical (or manual) rather than intellectual. Only with that assumption can automation reap large rewards. If a step truly involves intellectual work, or creativity as Glass puts it, then it can’t be automated.

To determine just how much of software development is intellectual, studies were conducted in which developers were filmed as they worked. Each activity was judged to be intellectual, clerical, or indeterminate. Intellectual activities dominated, accounting for roughly 80% of the time spent. Other variations of the study were done, but each produced similar numbers, confirming that development is largely an intellectual task. The reason all those hopes for automation never panned out is that there simply isn’t much clerical work in software development to automate. These findings are eerily reminiscent of Frederick Brooks’ discussion of essence and accident and why there is No Silver Bullet.

That’s not to say improved tools aren’t of substantial value. There are consistent, incremental productivity gains to be had with tools that help in various types of intellectual work: debuggers, code analyzers, syntax awareness, domain modeling visualizers, etc.


All of the above topics are the focus of the first two thirds of Software Creativity 2.0. Having given fairly overwhelming evidence that creativity is a critical part of software development, Glass devotes the back third of his book to discussing ways to foster creativity and examining how creativity is handled in other industries. However, that is a topic for another time…

“What’s fun about programming is also what’s awful about it.
It’s weaving a thousand tiny, intricate details into a functioning tapestry, an executable work of art.”
