Metaphors and analogies can be valuable tools for learning new software concepts and tapping subject matter expertise during embedded software development.
Several years ago, I attended an "Object-Oriented Analysis and Design" training course. The instructor began by brushing up on the fundamental OO concepts. When explaining inheritance, she spontaneously compared inheriting from a class to passing traits from parents to offspring in a family. At first, this "family tree" metaphor seemed to make a lot of sense, and most attendees nodded approvingly. However, when the instructor discussed inheritance further, she mentioned that inheritance establishes the "is a" (is-a-kind-of) relationship between the child and the parent classes, and that the child class has all the attributes and behaviors of the parent class but is typically more specialized. At this point, one attendee objected that he didn't have all of his father's attributes and behaviors and didn't feel like a more specialized version of his father. For example, he had no coronary disease "attribute", didn't support the smoking "operation", and didn't feel fully substitutable for his father (or his grandmother, for that matter). Consequently, the Liskov Substitution Principle didn't seem to apply to a family tree the way it should to a class hierarchy.
Apparently, the "family tree" metaphor wasn't working very well, although it evidently inspired much of the terminology used in OOP (parent, child, ancestor, sibling, class family, etc.). I don't recall exactly how the instructor helped to resolve the confusion, but I do remember that she didn't offer any better analogy for class inheritance. Most probably, the instructor made this analogy up "on the fly" without paying much attention to the facts, but the other possibility is that the inaccurate analogy reflected her way of thinking about inheritance. Both possibilities offer interesting insights. First, the story indicates that analogies and metaphors appear in software quite frequently and often spontaneously, but their role is usually underestimated. This is unfortunate, because analogies and metaphors, when chosen correctly, may help students climb the learning curve more quickly. Conversely, severe difficulties arise when the analogies chosen are incomplete, inaccurate, or irrelevant. The second insight from the story is that analogies and metaphors offer a unique way to quickly "look" into somebody's mind—an ability that is critical in the software business.
Coming back to the unfortunate "family tree" metaphor, I joined a small group of attendees who discussed the matter during a break. In a short brainstorming session, we came up with a much better metaphor, based on the classification of living organisms in biology. This "biological classification" metaphor had no problem correctly explaining the "is a" relationship between classes, and what it means that a subclass is a more specialized version of the superclass. For instance, a class RedRose is a kind of Flower, and a Flower is a kind of Plant. RedRose has all the attributes of a Flower as well as of a Plant, but is more specialized. When you have a collection of Flowers, some of them may very well be RedRoses, because it always makes sense to substitute a RedRose for a Flower. The "biological classification" metaphor provides countless more examples that illustrate virtually all the nuances and concepts encountered in class inheritance, including polymorphism. Such a metaphor is more valuable than you might initially think, because it establishes the correct pattern of thinking about this fundamental software concept.
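To make the metaphor concrete, here is a minimal C++ sketch of the RedRose/Flower/Plant hierarchy described above (the specific member functions are illustrative additions of mine, not something the metaphor prescribes):

```cpp
#include <iostream>
#include <vector>

// The "is a" hierarchy from the metaphor: a RedRose is a kind of Flower,
// and a Flower is a kind of Plant.
class Plant {
public:
    virtual ~Plant() = default;
    void grow() const { std::cout << "growing\n"; }
};

class Flower : public Plant {
public:
    virtual void bloom() const { std::cout << "blooming\n"; }
};

class RedRose : public Flower {
public:
    // More specialized, yet it still has everything a Flower (and a Plant) has.
    void bloom() const override { std::cout << "blooming in red\n"; }
};

int main() {
    RedRose rose;
    Flower  tulip;

    // A collection of Flowers may very well contain RedRoses,
    // because a RedRose is always substitutable for a Flower.
    std::vector<const Flower*> bouquet{&rose, &tulip};
    for (const Flower* f : bouquet) {
        f->bloom();   // polymorphism: each kind of Flower blooms in its own way
    }
    rose.grow();      // behavior inherited all the way from Plant
}
```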
System metaphors and conceptual integrity
Metaphors and analogies have been widely used in software for decades. It is not an exaggeration to say that the venerable desktop metaphor, the spreadsheet metaphor, or, more recently, the shopping-cart metaphor have literally changed civilization. All of these are examples of system metaphors, that is, metaphors that provide a general idea of how a whole system works. The system metaphor has been the subject of many studies, books, and papers.1 Recently, eXtreme Programming (XP) has brought the system metaphor once again to the attention of developers by elevating it to a key programming practice.2
Contrary to widespread belief, the primary intention of the desktop, spreadsheet, or shopping-cart metaphor is not to exactly imitate office furniture, accounting pads, or physical aisles in a supermarket. Indeed, users don't expect the computer screen to be exactly as hard to erase or change as real paper, or the virtual aisles of an online store to be exactly as cluttered and hard to navigate as real ones. The function of a system metaphor is not a strict mapping between the source and target domains. Rather, the critically important role of a system metaphor lies in establishing a consistent conceptual model for the user, which represents what the user is likely to think and how the user is likely to respond.
As observed in XP, however, a consistent conceptual model is helpful not just for end users; it is invaluable for software developers because it provides the most difficult part of the design—a coherent architecture. With such a model in hand, designers don't need to invent potentially inconsistent policies and behaviors—they can simply consult the real-life model to see how it solves various problems. The system metaphor thus serves as an objective arbiter in resolving design conflicts and ultimately guards the conceptual integrity of the design.
Concept metaphors and learning
While the system metaphor can vastly benefit software developers, that benefit is mostly a side effect of conceptual integrity, which is then reflected in a better software structure. Other types of metaphors and analogies, specifically aimed at guiding the learning process and improving the understanding of new concepts, can benefit software developers much more directly.
To appreciate how metaphors influence learning new software concepts, it helps to consider how humans organize and modify their knowledge. When people learn new things, they automatically try to map new concepts to familiar ones in a spontaneous process of "making analogies". A problem occurs when these spontaneous analogies are incorrect. The new knowledge then interferes with the old knowledge (learning interference), and the learning process becomes more difficult than it would be if the learner did not have the conflicting knowledge in the first place.
For example, a study on retraining procedure-oriented developers in object-oriented programming clearly shows that software developers are more likely to recall, rather than forget, their existing knowledge.3 This research reveals that attempts to map familiar procedural concepts onto the new paradigm often resulted in inaccurate analogies, which increased the time it took the learners to make effective use of the object-oriented development model. The study further emphasized the role of the instructor in helping learners make correct analogies. Instructors who are aware of the incorrect analogies their students make can guide the students toward understanding which aspects of those analogies are correct and which are not. It's much better for the instructor to offer correct analogies explicitly than to rely on the spontaneous analogies made up (consciously or subconsciously) by the students. It is in this spirit that I offered the mapping between procedural and object-oriented programming in the sidebar "Object-Oriented Programming (OOP) in C" to my article "State Machines for Event-Driven Systems". Similarly, in "Introduction to Hierarchical State Machines (HSMs)", I showed that many familiar OOP concepts have their counterparts in hierarchical state machines.
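The sidebar itself is not reproduced here, but a small sketch can convey the flavor of such an explicit mapping. Assuming a trivial, hypothetical Counter example of my own, the same data and operations appear first in procedural style and then as a class, so a learner can see that the explicit "me" pointer simply becomes the implicit "this" pointer:

```cpp
// Procedural style: a data structure plus free functions that operate on it,
// passing the data explicitly through a pointer.
struct CounterData {
    int count;
};
void counter_init(CounterData* me)      { me->count = 0;     }
void counter_increment(CounterData* me) { ++me->count;       }
int  counter_get(const CounterData* me) { return me->count;  }

// Object-oriented style: the same data and operations packaged as a class.
// The attribute is encapsulated, and the explicit "me" argument disappears
// into the implicit "this" pointer.
class Counter {
public:
    Counter() : count_(0) {}
    void increment()  { ++count_; }
    int  get() const  { return count_; }
private:
    int count_;
};
```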
The benefits of such "concept metaphors" extend beyond learning entirely new concepts. Metaphors and analogies have a unique capacity to shed new light on already familiar ideas and thus can help you discover new aspects of concepts you thought you knew. For example, the "assertions as fuses" analogy (discussed in "Design by Contract (DbC) for Embedded Software") offers a novel view of assertions as a controlled-damage protection mechanism, which complements the "assertions as contracts" metaphor that underlies the design by contract methodology.
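As a rough illustration of the "fuse" view (the handler and macro names below are hypothetical, not taken from the referenced article), an embedded-style assertion does not try to limp along after a contract violation; it "blows" and routes control to one central damage-control point:

```cpp
#include <cstdlib>

// Central "fuse box": every blown assertion ends up here. On a real embedded
// target this handler might disable actuators, record the file/line in
// nonvolatile memory, and reset the MCU; this sketch simply aborts.
[[noreturn]] void on_assert_failed(const char* file, int line) {
    (void)file;
    (void)line;
    std::abort();
}

// The "fuse" itself: cheap to check, drastic when it blows.
#define REQUIRE(cond_) \
    ((cond_) ? (void)0 : on_assert_failed(__FILE__, __LINE__))

int divide(int num, int den) {
    REQUIRE(den != 0);   // precondition: part of the contract with the caller
    return num / den;
}
```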
Mining concept metaphors
In response to the original publication of "Introduction to Hierarchical State Machines (HSMs)", a reader wrote to me: "while [traditional] state machines supply an abstract representation of a system in the real world, the concept of states within states does not correspond to any reality." At first this statement struck me as exactly backwards, because I always thought that the only "real" reactive systems correspond to hierarchical state machines, of which the flat (non-hierarchical) versions are merely special, somewhat degenerate corner cases. After a while, I realized that the reader's mental model of a state machine and mine were probably based on different metaphors. Metaphors it must be, since only through analogies and metaphors can abstract software concepts correspond to any "reality" and thus gain meaning for us. The whole discussion of what it means to "understand" a software concept borders on philosophy. However, given that analogies and metaphors can be so beneficial for understanding and mastering new concepts, the pragmatic question is: what is a good metaphor for a state machine?
Just as in the case of the "biological classification" metaphor for class inheritance, a good strategy for mining concept metaphors seems to be to look for examples of the concept successfully applied in existing systems (e.g., biological classification as an example of class taxonomy). The source domain for an abstract concept metaphor, such as the state machine, is most likely to be science, rather than the familiar physical objects or everyday activities that serve the much less abstract system metaphor. Therefore, the problem of finding a good metaphor for the state machine concept boils down to finding a good example of stateful behavior in other branches of science. Such behavior is fundamental and pervasive in the microscopic world of elementary particles, atoms, and molecules, as explained by the laws of quantum mechanics.4
A possible critique of the "quantum metaphor" for state machines might be that most programmers are not familiar enough with the source domain—contemporary physics in this case. However, the physics background necessary to benefit from the metaphor is really at the level of high-school textbooks and popular science articles. I believe that most programmers have heard about electrons, atoms, quantum numbers, quantum states, or quantum leaps.
If you make the effort to understand the basic principles governing the microscopic world, the "quantum metaphor" might turn out to be very valuable to you, because it has a very rich structure and correctly explains many specific aspects of state machines (all of these are criteria that Kim Halskov Madsen proposed for choosing successful metaphors).5 Perhaps most interesting is the observation that virtually all quantum states are hierarchically decomposed ("degenerate" in physics language). Degeneracy is always an indication of some symmetry of the system. For example, the degeneracy of the angular-momentum substates of an atom comes from the spherical symmetry of the atom. Simple experiments, such as subjecting atoms to a magnetic field, lower the symmetry and partially remove the degeneracy, allowing us to "see" the state hierarchy (the Zeeman effect). The concept of nested states thus corresponds to physical reality and is not just theoretical.
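State nesting plays the analogous role in software: behavior common to a group of substates lives once in their shared superstate. The following stripped-down sketch (my own illustration, not any particular HSM framework) shows two substates of an "on" superstate deferring the events they don't handle up the hierarchy:

```cpp
#include <iostream>

// Two substates, "playing" and "paused", nested inside a common "on"
// superstate. Events a substate does not handle itself are deferred upward.
enum class Event { PlayPause, PowerOff };

class Player {
public:
    void dispatch(Event e) { (this->*state_)(e); }

private:
    using StateHandler = void (Player::*)(Event);

    // Superstate "on": behavior shared by all of its nested substates.
    void on(Event e) {
        if (e == Event::PowerOff) { std::cout << "powering off\n"; }
    }
    // Substate "playing" (nested inside "on").
    void playing(Event e) {
        if (e == Event::PlayPause) { state_ = &Player::paused; std::cout << "paused\n"; }
        else                       { on(e); }   // defer to the superstate
    }
    // Substate "paused" (nested inside "on").
    void paused(Event e) {
        if (e == Event::PlayPause) { state_ = &Player::playing; std::cout << "playing\n"; }
        else                       { on(e); }   // defer to the superstate
    }

    StateHandler state_ = &Player::paused;
};

int main() {
    Player p;
    p.dispatch(Event::PlayPause);  // handled by the "paused" substate
    p.dispatch(Event::PowerOff);   // not handled there; "inherited" from "on"
}
```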
I found the quantum metaphor very valuable, and in fact, it has shaped my understanding of state machines. As always with a good analogy, the quantum metaphor explains the fundamental character of state nesting and its function in capturing and representing the symmetry of the system. This observation alone (i.e., the link between state nesting and the symmetry of a system) is already a sufficient payoff from the metaphor. Yet, there is so much more to be learned from the quantum analogy. For instance, the mechanism of quantum leaps provides excellent insights into the true role of the run-to-completion (RTC) processing model in state machines. Likewise, the mechanism of virtual particle exchange provides a very good model of communication among concurrent state machines (active object computing model).
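To give a rough idea of what these two mechanisms look like in code (this is my own minimal sketch of the general idea, not the framework described in the referenced article), an active object owns a private event queue served by a single thread, and each event is processed to completion before the next one is taken:

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// Minimal active object: a private event queue plus one thread that runs
// each event to completion (RTC) before dequeuing the next one.
class ActiveObject {
public:
    ActiveObject() : thread_(&ActiveObject::eventLoop, this) {}
    ~ActiveObject() { post(-1); thread_.join(); }   // -1 is the "stop" event here

    // Other objects communicate only by posting events to the queue.
    void post(int event) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(event);
        cv_.notify_one();
    }

private:
    void eventLoop() {
        for (;;) {
            int event;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !queue_.empty(); });
                event = queue_.front();
                queue_.pop();
            }
            if (event == -1) { return; }
            // Run-to-completion step: no other event is processed until this returns.
            std::cout << "handled event " << event << "\n";
        }
    }

    std::queue<int>         queue_;
    std::mutex              mutex_;
    std::condition_variable cv_;
    std::thread             thread_;
};

int main() {
    ActiveObject ao;
    ao.post(1);
    ao.post(2);
}
```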
The main point here is that, in contrast to system metaphors, "concept metaphors" involve more abstract thinking and typically draw from science. In other words, don't write off all the stuff you once learned in math, biology, chemistry, or physics, because it can come in handy in your software work. The results might surprise you.
Naive psychology and social intelligence
The beneficial impact of metaphors on improving communication within software teams has already been observed for the system metaphor. As a natural byproduct, the source domain of such a metaphor provides a common system of names for objects and the relationships among them. This can become jargon in the best sense: a powerful, specialized, shorthand vocabulary for all stakeholders of a software project.
Actual case studies indicate, however, that while metaphorical representations indeed dominate software discussions, system metaphors are not the ones most commonly applied. The most frequently used are anthropomorphic comparisons known as naive psychology.6 Developers often speak in terms of what a software component "needs", "knows", "is trying to do", or "thinks" is happening. These anthropomorphic metaphors (identified in the vast majority of all spoken representations in Herbsleb's study) strike many as sloppy and imprecise. Dijkstra, for example, went so far as to propose that computer science faculty implement a system of fines to stamp out such sloppiness among students.
However, newer research suggests that instead of fighting the "sloppy" metaphorical representations, software practitioners and teachers could use them to their advantage. Naive psychology seems to be a special kind of metaphor, in that it exploits a specific cognitive capability that evolved to handle the increasingly complex social interactions in early human societies.7 These natural skills, which develop without any special instruction in all intact humans, match many tasks encountered in software development. In designing or understanding a complex, multi-component software system, it is extremely important to be able to keep track of what state each component is in, what it is "trying" to do, what it "knows" about the rest of the system, how it is likely to behave, how it is likely to fail, and so on. The skills of naive psychology seem to allow a software developer to keep track of this sort of information with little effort, because of a built-in capability known as social intelligence.
The pervasive use of naive psychology in virtually all software discussions, although certainly not part of any standard software engineering curriculum, indicates how critical a role social intelligence plays in collaborative software development. Perhaps nowhere is this more evident than in the pair programming advocated by XP. In pair programming, software is created in small incremental steps, as ideas move back and forth between more stable media, such as a whiteboard or a computer screen, and the transitory verbal medium of conversation between the two developers. The point is that most of the critical creative work occurs during these periods of discussion, when the primary representation is just the spoken language. If you've ever tried to pair-program, you must have noticed how much of the discussion revolves around anthropomorphic representations and role-playing. By insisting on informal, high-bandwidth, face-to-face communication between pairs of developers, XP makes much better use of social intelligence than the "heavyweight" methodologies do. Indeed, by engaging social intelligence, a pair of programmers typically creates more code, and better code, than the same two developers working separately.
Another way of tapping the potential of social intelligence is to employ programming paradigms and techniques that offer more opportunities to use naive psychology. For example, the object-oriented method encourages partitioning the problem into a specialized, encapsulated "society" of objects that collaborate on a common task. Some visual representations, such as UML sequence diagrams, lend themselves to naive psychological discussions. More specific to embedded software, the active object computing model8 makes very good use of social intelligence, because applications based on autonomous, stateful active objects (actors) closely resemble human organizations.9 There is growing evidence that such programming models and techniques reduce the need for clarification in the discussion of design ideas.
An adjunct to visualization and design patterns
Most software engineering tasks are sufficiently complex that many kinds of methods and many specific cognitive abilities of the brain must be used in turn or, better yet, simultaneously. For example, the particular value of visual languages (such as UML) lies in tapping the potential of high-bandwidth "spatial intelligence", as opposed to the "lexical intelligence" used with textual information. While clearly not all visual representations are useful for all purposes, when visualization matches a task well, the visual processing capabilities of the brain allow a tremendous amount of information to be conveyed in compact form. Moreover, this effect is not limited to faster information transfer; the quality of the thinking itself improves.10
In this context, metaphorical representations based on naive psychology are a very important adjunct to visualization techniques, because they make explicit use of another powerful cognitive capability of the human brain: social intelligence. In this sense, the old saying "a picture is worth a thousand words" can be amended to "a metaphor is worth a thousand words".
Other types of metaphors (concept metaphors and, to some degree, system metaphors) seem to complement the design-pattern approach to software development. Design patterns excel at capturing solutions to recurring software problems, but they haven't been able to capture the patterns of thinking that experts use. Metaphors and analogies offer a unique view into experts' minds and convey the particular way those experts see and understand a problem.
A word of caution
The special cognitive abilities of the human brain, such as "social intelligence" (or "spatial intelligence", for that matter), didn't evolve to solve software problems. Analogies and metaphors (or visualization techniques) are just a trick to harness these powerful "hardware accelerators" of the brain for problems different from those they were designed for. The catch is that the trick might not work for all people, or might work differently in different people. If the "quantum metaphor" for state machines doesn't do the trick for you, move on and try to understand hierarchical state machines some other way. One excellent explanation of how state hierarchy arises naturally in every non-trivial reactive system appeared as a point/counterpoint debate between Steve Mellor and Bran Selic in Embedded Systems Programming magazine.11,12
End around check
I'm sure you have used system metaphors, concept metaphors, or naive psychology before, but perhaps without fully realizing it. I hope this article makes you more aware of the different roles metaphors and analogies can play in software development, and shows that there is more to them than just the venerable system metaphor. I hope you start paying more attention to how you and others use metaphors, with the goal of making better use of the natural abilities built into your brain. I hope you start thinking more about metaphors, learn their limits, and remember their critical aspects. Perhaps you will even start collecting metaphors?
In doing my online research for this article, I was amazed at the number and quality of publications related to metaphors in software. It seems that a slow but noticeable groundswell is developing in favor of metaphorical thinking in software. In fact, metaphors and analogies have the potential to one day become as hot a topic as design patterns are today.
I'd like to close with a quotation from Stefan Banach, the Polish mathematician who founded modern functional analysis and made major contributions to the theory of topological vector spaces. He wrote: "Good mathematicians see analogies between theorems and theories. The very best ones see analogies between analogies."
Endnotes
1. Madsen, Kim Halskov. "A Guide to Metaphorical Design". Communications of the ACM, December 1994.
2. Beck, Kent. eXtreme Programming Explained. Addison-Wesley, 2000.
3. Manns, Mary L., and H. James Nelson. "Retraining Procedure-Oriented Developers: An Issue of Skill Transfer". Journal of Object-Oriented Programming, Vol. 9, No. 7, 1996.
4. Samek, Miro. "Quantum Programming for Embedded Systems: Toward a Hassle-Free Multithreading". C/C++ Users Journal, March 2003.
5. Madsen, Kim Halskov. "A Guide to Metaphorical Design". Communications of the ACM, December 1994.
6. Herbsleb, James D. "Metaphorical Representation in Collaborative Software Engineering".
7. Humphrey, Nicholas. The Inner Eye: Social Intelligence in Evolution. Oxford Paperbacks, 2002.
8. Samek, Miro. "Quantum Programming for Embedded Systems: Toward a Hassle-Free Multithreading". C/C++ Users Journal, March 2003.
9. Allen, Arthur. "Actor-Based Computing: Vision Forestalled, Vision Fulfilled".
10. Harel, David. "Biting the Silver Bullet: Toward a Brighter Future for System Development". Computer, January 1992.
11. Mellor, Stephen J. "UML Point/Counterpoint: Modeling Complex Behavior Simply". Embedded Systems Programming, March 2000.
12. Selic, Bran. "UML Point/Counterpoint: How to Simplify Complexity". Embedded Systems Programming, March 2000.