These eight myths about modeling tools and modeling languages might sound manifestly ridiculous given what we now know about developing software in ways that deliver business benefit and minimize the scope for defects to go undetected. Yet a decade or so ago, belief in these ideas drove a global market in heavyweight software modeling tools. At the time, the software industry was awash with well-funded projects all drawing from a relatively small pool of skilled programmers, who took the opportunity to command strong hourly rates. High programmer rates in turn made it easy for modeling tool vendors to enchant project managers with stories of programmer requirements being slashed through the use of sophisticated modeling tools. Remember that at the top of the price range it was typical to be charged USD 10,000 per developer for the tool, plus the same again for training.
Myth 1. Programmers spend a long time typing.
Generating skeleton code will therefore save a lot of expensive programmer time, as there will be a lot less typing to do. Skeleton code is source code without executable statements: the class declarations, fields and method signatures, with stubbed-out method bodies. Generation of skeleton code is the centerpiece of the forward engineering feature set, the idea being that the business-specific design work is best performed by an analyst/designer pictorially, using a modeling tool. The skeleton code can then be fleshed out by programmers who need only focus on the detail of one method at a time, without having to worry about the larger model as a whole.
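To make this concrete, here is a minimal sketch of what tool-generated skeleton code might look like in Java. The Order class and its members are hypothetical, invented for illustration rather than taken from any particular tool:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Hypothetical supporting types, stubbed so the example compiles.
class Customer { }
class OrderLine { }

// The kind of skeleton a forward engineering tool might emit:
// declarations and signatures only, with the method bodies stubbed out.
public class Order {
    private Customer customer;                          // from an association in the model
    private List<OrderLine> lines = new ArrayList<>();  // from a composition in the model

    public BigDecimal calculateTotal() {
        // Body to be fleshed out by a programmer.
        throw new UnsupportedOperationException("Not yet implemented");
    }

    public void addLine(OrderLine line) {
        // Body to be fleshed out by a programmer.
        throw new UnsupportedOperationException("Not yet implemented");
    }
}
```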
Why is this a myth?
While the skeleton can account for, say, 20% of the final source code, it would be naive to think that this equates to 20% of the whole coding effort. The actual saving in development cost from generating skeleton code is likely to have been negligible, for the following reasons:
1. The slowest single-fingered programmer spends far more time thinking about algorithms, looking up examples of how to use third-party libraries, choosing descriptive variable names, writing tests and fixing defects than typing. In other words, even if all the time spent typing could be eliminated, programming would still take about the same length of time.
2. Even if typing were the rate-determining step, the names of all the classes, fields and methods still have to be typed into the modeling tool, so the programmer's typing effort has simply been shifted elsewhere.
Myth 2. Programmers spend a long time deciphering hand-drawn design diagrams due to the absence of a single unified standard meaning for the symbols.
Having arrived via separate routes, leading authors had prescribed parallel object-oriented modeling languages, each comprising symbols to represent constructs like class, inheritance, composition and information hiding. Standardizing the nomenclature saves the time spent switching between competing notations, and a modeling tool can then enforce the standard.
Why is this a myth?
People work with non-unified language all the time because there is a trade-off between uniformity and freedom of expression. Here is an example to illustrate the point. If a visitor to your office asks for directions to the train station, you might respond, "Wait a second, I will draw you a map." But which pictorial language should you adopt, and will the visitor understand it? The map you draw will likely contain a clear start and finish point, with direction arrows, left and right turns, road junctions and significant landmarks. Would the communication be helped by the strict application of a modeling language in this instance? Or would, as is far more likely, a language created on the fly with unimpeded expression be fast, expressive and comprehensible? Obviously there are differences between specifying object-oriented models and a short journey across a few streets, but there are also many similarities. The relevance of the information displayed matters more to the communication than strict adherence to an agreed language. Think how much more of a pain an application for drawing ad-hoc street-direction maps would be than a quick sketch on a piece of scrap paper.
Myth 3. After training, customers and analyst/programmers can sit down together around a table to understand and validate software design diagrams. The diagrams can work as a middle ground that both customers and implementers understand.
Using a requirements-capture method, the functions of the new system can be described, first in English, and then at progressively greater levels of abstraction detailing the static and dynamic behavior of the model.
Why is this a myth?
In practice customers relate well to use cases, but usually get lost somewhere en route to state-transition diagrams and class diagrams. Even some developers fail to grasp the difference between composition and aggregation, as these are notions which exist only in the model and not in the code. These days we show working software to customers, not diagrams of what the software will be.
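To see why the distinction is invisible in the code, consider this minimal Java sketch (the class names are hypothetical). A UML diagram draws composition and aggregation with different diamond symbols, yet both compile down to an ordinary field:

```java
import java.util.ArrayList;
import java.util.List;

class Engine { }
class Passenger { }

// In a UML class diagram, Car *composes* Engine (filled diamond: the
// engine's lifetime is bound to the car) but merely *aggregates*
// Passengers (hollow diamond: passengers exist independently). In the
// code, both relationships reduce to a plain field; nothing marks one
// as composition and the other as aggregation.
class Car {
    private Engine engine = new Engine();                    // "composition"
    private List<Passenger> passengers = new ArrayList<>();  // "aggregation"
}
```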
Myth 4. Left to their own devices, programmers write software without caring whether the finished product will be used. Giving programmers a blueprint of the required system means that the goals of the system as a whole are guaranteed to be met.
A detailed up-front design adds predictability to the software creation process and assists planning the implementation effort. The more detail in the design, the greater the certainty that the end result will meet the requirements.
Why is this a myth?
Only a minority of programmers would churn out code without caring if the finished product is of use to the customer. Arguably, constraining programmers to work on small functions one at a time creates detachment from the bigger picture. Instead of the goal being to satisfy the customer, the goal is to implement a given number of methods within a given timescale. It should also be noted that this approach assumes that the design is "correct". The design may be incorrect, or there may be no such thing as a correct design if there are conceptual flaws in the requirements as stated by the customer.
Myth 5. Expensive modeling tools pay for themselves in the long term.
Software programmers are expensive, so it stands to reason that reducing the need for programmer time is going to save a lot of money across the duration of the project lifecycle.
Why is this a myth?
Bearing in mind that, as stated above, this could stretch to USD 10,000 a seat plus nearly as much again for training, this is quite a claim. Admittedly, the cost of the tool and training buys no more than a few weeks of programmer time, but the tool would be unlikely to save even that much, because the claim rests on the premise that the project contains a high proportion of repetitive programming work which can be condensed into a simpler modeling task. In reality, programmers are adept at avoiding repetition, using domain-specific languages and other code generators, aspects, templates, inheritance and frameworks like Spring and Hibernate, all to keep down the volume of verbose, hand-written code.
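As a rough illustration of that point (the class and method names here are hypothetical), a single generic class can replace what a naive model-to-code translation might render as a family of near-identical classes:

```java
import java.util.HashMap;
import java.util.Map;

// One generic repository stands in for what a model might specify as
// CustomerRepository, OrderRepository, ProductRepository and so on,
// each with an identical body.
class InMemoryRepository<K, V> {
    private final Map<K, V> store = new HashMap<>();

    void save(K id, V entity) { store.put(id, entity); }
    V findById(K id) { return store.get(id); }
}

class Demo {
    public static void main(String[] args) {
        InMemoryRepository<Long, String> customers = new InMemoryRepository<>();
        customers.save(1L, "Acme Ltd");
        System.out.println(customers.findById(1L)); // prints: Acme Ltd
    }
}
```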
Myth 6. Project teams often change their mind about which programming language to use multiple times throughout the implementation phase of the project lifecycle.
Model Driven Architecture (MDA) makes the target programming language swappable. Unlike skeleton-code generation, MDA translates the model into an implementation (programming) language to produce a complete executable system.
Why is this a myth?
The effectiveness of MDA as an overall development approach is not the question here; the question is whether flexibility in the target implementation language is that useful an option. It boils down to: have you ever got partway through a project and wished you'd picked an alternative programming language, and furthermore wished you could distance yourself from the details of that language? This is unlikely for two reasons. Firstly, you pick a programming language before you start development based on the availability of libraries, tools, workforce and community knowledge. Secondly, when bugs need fixing, you need to focus in on the action of a single line of code, where platform and runtime version suddenly start to matter, not step back from the detail.
Myth 7. It is a good thing to keep the code separate from the documentation.
The code has the primary purpose of running correctly. If it fails in this purpose, the customer will not pay for the product. Usually, the code contains some work-arounds, fudges, fixes and boilerplate, all of which are ugly and obscure the picture of the business object model that project stakeholders want to see. A clean model of the software, separate from the detail of the implementation, can provide a more meaningful view, and it is worth keeping the model up to date as the code is developed.
Why is this a myth?
It is not a myth that a model is useful. It is a myth that maintaining the model is worth the effort, because that effort is considerable and has knock-on effects. Programmers move the codebase forward rapidly, but in ways which respond to the discovery of limitations in the design and exploit better implementations as they are found. For example, a programmer may find that the design contains some repetition which makes sense at the business-object level but would be a wasteful pattern to follow in the construction. This causes a headache at the model level: should the model reflect the requirements as captured, or be updated to reflect the implementation as optimized? And what happens when it transpires that the design was inadequate? That is the subject of the final myth.
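As a hypothetical illustration of that headache, suppose the model specifies separate HomeAddress and WorkAddress classes with identical attributes. A programmer would naturally collapse them into one class, at which point the code and the model start telling different stories:

```java
// The model specifies HomeAddress and WorkAddress as separate classes
// with identical attributes. The implementation collapses them into one
// Address class distinguished by a role, so the code no longer mirrors
// the model diagram.
enum AddressRole { HOME, WORK }

class Address {
    private final AddressRole role;
    private final String street;
    private final String city;

    Address(AddressRole role, String street, String city) {
        this.role = role;
        this.street = street;
        this.city = city;
    }
}
```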
Myth 8. The design can be got right before the programming starts.
This is the most dangerous of the myths because it sounds the most plausible. The construction industry, for example, executes projects with a detailed up-front plan every time. The design is modeled, visualized and reviewed thoroughly before construction work commences. It therefore does not seem unreasonable that software can also be designed and planned to fit together seamlessly before the programming starts, making the programming something of a formality with a predictable timeline. Therefore anything which assists the production of clear, unambiguous design takes software development nearer to that goal of assured, timely delivery of a product fit for purpose. Software modeling tools offer the promise of a complete, detailed design, with a traceable history from requirements capture through to each facet of the final design.
Why is this a myth?
It is a myth because the software industry and the construction industry are different. Here are the differences:
1. The purpose of a construction project is normally obvious. While the details of a bridge or a building require a lot of thought, the overall function can be summarized in a few sentences. This is not so with most software, which has to satisfy a purpose that is hard to specify.
2. The materials and components of construction change slowly. This is not to say that new materials never come into use in construction, but in software the platforms, tools, libraries and frameworks change far more rapidly, making it harder to specify up-front how easy the implementation will be and how well the finished product will perform.
3. The complexity of the requirements is much higher for software. There are simply more conditions, rules, exceptions and special cases in the definition of a software system.
4. Software can be adapted and changed, and it makes sense to exploit that flexibility.
Closing thoughts
The software industry has come a long way in the past ten years, but the vestigial influence of modeling tools remains to this day in a couple of key places. Software tool catalogs, and award categories in competitions in particular, still adhere to this antiquated classification of development tools, without acknowledging that the industry has long since grown skeptical of wild claims about programmer productivity from modeling tools and moved on to better approaches more solidly grounded in measurable productivity.